LLMs think better with connected evidence chains
Chain of Evidence (CoE) helps LLMs make better decisions by ensuring retrieved knowledge pieces are both relevant to the question and mutually supportive, like evidence in a criminal case.
-----
https://arxiv.org/abs/2412.12632
🔍 Original Problem:
→ LLMs rely on external retrieved knowledge to offset outdated parametric knowledge and hallucinations
→ Current retrieval methods don't effectively filter out irrelevant or misleading retrieved information
→ Complex queries that require combining multiple knowledge pieces are especially challenging
-----
🛠️ Solution in this Paper:
→ Introduces Chain of Evidence (CoE), inspired by how evidence is assessed in criminal law
→ CoE requires knowledge pieces to show both relevance to the question and mutual support among themselves
→ Develops an automated CoE discrimination approach to identify valid knowledge chains
→ Creates ScopeCoE, a retrieval strategy that selects minimal sets of knowledge pieces forming a CoE (see the sketch below)
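
To make the idea concrete, here is a minimal, hypothetical sketch of CoE-style selection: from a pool of retrieved passages, keep a small subset whose pieces are both relevant to the question and supportive of each other. The scoring functions (`relevance`, `mutual_support`, `coe_score`) are simple word-overlap stand-ins, and `select_coe` is a brute-force toy; this is not the paper's actual discrimination method or ScopeCoE algorithm.

```python
# Hypothetical CoE-style selection sketch (NOT the paper's ScopeCoE implementation).
# Idea: prefer a small subset of passages that (a) overlap with the question and
# (b) overlap with each other, approximating "relevance + mutual support".

from itertools import combinations

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "is", "was", "who", "what", "which"}

def content_words(text: str) -> set[str]:
    # Crude tokenization: lowercase, strip trailing punctuation, drop stopwords.
    return {w.lower().strip(".,?") for w in text.split()} - STOPWORDS

def relevance(question: str, passage: str) -> float:
    # Fraction of question content words covered by the passage.
    q, p = content_words(question), content_words(passage)
    return len(q & p) / max(len(q), 1)

def mutual_support(p1: str, p2: str) -> float:
    # Word overlap between two passages, normalized by the shorter one.
    a, b = content_words(p1), content_words(p2)
    return len(a & b) / max(min(len(a), len(b)), 1)

def coe_score(question: str, subset: tuple[str, ...]) -> float:
    # Combine average relevance with average pairwise support.
    rel = sum(relevance(question, p) for p in subset) / len(subset)
    pairs = list(combinations(subset, 2))
    if not pairs:
        return rel
    sup = sum(mutual_support(p1, p2) for p1, p2 in pairs) / len(pairs)
    return rel + sup

def select_coe(question: str, passages: list[str], max_pieces: int = 3) -> list[str]:
    """Brute-force stand-in: score all small subsets, keep the best one."""
    best, best_score = [], -1.0
    for k in range(1, max_pieces + 1):
        for subset in combinations(passages, k):
            score = coe_score(question, subset)
            if score > best_score:
                best, best_score = list(subset), score
    return best

if __name__ == "__main__":
    question = "Which country is the director of the film Parasite from?"
    passages = [
        "Parasite is a 2019 film directed by Bong Joon-ho.",
        "Bong Joon-ho is a director from South Korea.",
        "The Eiffel Tower is located in Paris.",
    ]
    # Picks the two passages that form a connected chain and drops the distractor.
    print(select_coe(question, passages))
```

In this toy example the two Bong Joon-ho passages reinforce each other and the distractor is dropped, which mirrors the paper's finding that a smaller, well-connected set beats a larger noisy one.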
-----
💡 Key Insights:
→ LLMs show higher accuracy with CoE-structured knowledge
→ CoE helps resist misinformation and knowledge conflicts
→ LLMs exhibit strong faithfulness to a CoE even when it contains factual errors
→ Fewer but well-connected knowledge pieces outperform larger sets
-----
📊 Results:
→ ScopeCoE improved accuracy by 10.4% on HotpotQA
→ Achieved 28.7% improvement on 2WikiMultihopQA
→ Required only 4.6–4.8 knowledge pieces vs the standard 5
→ Maintained an 85.4% faithfulness rate across the tested models