RagMetrics
Where does AI knowledge come from?
Homer Simpson has some improvement ideas
How can you improve your AI application over time?
Managing feedback for an AI-powered application is tricky, just as it is for us humans. You need to settle an epistemological question: "Where does knowledge come from?"
It sounds abstract, but is actually quite literal. To get better over time, any ML/AI system needs feedback. As the designer or owner of this system, you’ll need to decide where that feedback should come from. Here are the most common options:
a) “From what's written down”: Knowledge comes from reference materials, such as documents or a database. This is the foundation of the RAG (retrieval-augmented generation) approach.
b) “From what works”: Correlate answers with desirable future outcomes, such as whether a user clicked on an ad, came back for another session, or bought a product. Then produce more of the answers that drive the outcomes you are looking for.
c) “From what customers want”: Knowledge comes from user feedback, such as a thumbs up/down, or a survey.
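As a toy illustration of option (a), a RAG-style check can test whether an answer is actually supported by the reference material. The sketch below uses naive token overlap; a real system would use embeddings or an entailment model, and the function name is a hypothetical example, not RagMetrics code:

```python
def grounding_score(answer: str, documents: list[str]) -> float:
    """Fraction of answer tokens that appear in at least one reference document.

    A crude stand-in for real grounding checks (embeddings, entailment, etc.).
    """
    answer_tokens = set(answer.lower().split())
    if not answer_tokens:
        return 0.0
    doc_tokens: set[str] = set()
    for doc in documents:
        doc_tokens |= set(doc.lower().split())
    return len(answer_tokens & doc_tokens) / len(answer_tokens)

docs = ["the warranty covers parts for two years"]
print(grounding_score("warranty covers parts", docs))   # fully supported -> 1.0
print(grounding_score("free lifetime upgrades", docs))  # unsupported -> 0.0
```

Options (b) and (c) would attach different signals to the same logged answer: an observed outcome for (b), an explicit rating for (c).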
None of these approaches are perfect:
1) They are error-prone: Written references contain mistakes. Judging by future outcomes risks conflating correlation with causation ("post hoc ergo propter hoc" is the classic logical fallacy). And customers often like things that aren't right for them.
2) They can get stale: Reference materials, customer behavior, and tastes all change over time.
3) They are ethically challenging: A bot that applies rules without empathy can be tone-deaf or draconian. Maximizing engagement leads to filter bubbles and anger-porn. Giving customers what they want may even be illegal.
The only solution is to use all three!
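One way to read "use all three" concretely is to blend the three feedback sources into a single evaluation score per answer. This is a minimal sketch under assumptions of my own: the signal names, normalization to [0, 1], and the default weights are illustrative, not a prescribed formula:

```python
def blended_score(grounded: float, outcome: float, user_rating: float,
                  weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted blend of three feedback sources, each normalized to [0, 1]:

    grounded    -- (a) agreement with reference material
    outcome     -- (b) downstream result (click, return visit, purchase)
    user_rating -- (c) explicit feedback (thumbs up = 1.0, thumbs down = 0.0)
    """
    wa, wb, wc = weights
    return wa * grounded + wb * outcome + wc * user_rating

# An answer that is well grounded and drove a purchase, but got a thumbs-down:
print(round(blended_score(0.9, 1.0, 0.0), 2))  # -> 0.75
```

The point of blending is that each signal catches errors the others miss: the grounding term flags answers that "work" for the wrong reasons, while the outcome and rating terms flag answers that are technically correct but unwanted.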
What do you think? How do you evaluate your LLM application?