
"Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning"

The podcast on this paper is generated with Google's Illuminate.

Novel approach uses fictitious data to teach LLMs when to trust their knowledge

PREREQ-TUNE separates knowledge and skills during fine-tuning to reduce LLM hallucinations:

📚 https://arxiv.org/abs/2410.19290

🎯 Original Problem:

LLMs often hallucinate because of knowledge inconsistency between the pre-training and fine-tuning stages: when fine-tuning data contains knowledge the model never acquired during pre-training, it learns to fabricate plausible-sounding outputs.

-----

🔧 Solution in this Paper:

• PREREQ-TUNE: A two-stage fine-tuning strategy using separate LoRA modules (see the code sketch after this list)

• Stage 1 (prerequisite learning): a knowledge LoRA first acquires the knowledge the task requires

• Stage 2 (supervised fine-tuning): a skill LoRA learns task skills while the knowledge LoRA stays frozen, so this stage focuses purely on skills rather than memorization

• Uses fictitious synthetic data to create multiple knowledge versions about the same entities

• At inference, the knowledge LoRA is dropped and only the skill LoRA is retained

• Enables modular LLM design with plug-and-play knowledge modules
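
Here is a minimal, self-contained PyTorch sketch of that two-stage setup. The `LoRALinear` class, the adapter names, and the toy dimensions are illustrative assumptions, not the paper's actual code; a real implementation would wrap the attention and MLP projections of a full LLM.

```python
# Illustrative sketch of PREREQ-TUNE's two-stage LoRA training.
# LoRALinear, train_only, and the toy layer are assumptions for this demo.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with two switchable low-rank adapters."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # pre-trained weights stay frozen
        d_out, d_in = base.weight.shape

        def make_adapter():
            return nn.ParameterDict({
                "A": nn.Parameter(torch.randn(rank, d_in) * 0.01),
                "B": nn.Parameter(torch.zeros(d_out, rank)),
            })

        self.adapters = nn.ModuleDict(
            {"knowledge": make_adapter(), "skill": make_adapter()}
        )
        self.active = ["knowledge"]            # which adapters are applied

    def forward(self, x):
        out = self.base(x)
        for name in self.active:
            a = self.adapters[name]
            out = out + (x @ a["A"].T) @ a["B"].T   # low-rank residual update
        return out

def train_only(layer: LoRALinear, name: str):
    """Make only the named adapter trainable."""
    for n, p in layer.adapters.named_parameters():
        p.requires_grad_(n.startswith(name))

layer = LoRALinear(nn.Linear(16, 16))

# Stage 1 (prerequisite learning): train the knowledge LoRA alone
# on the (fictitious) knowledge corpus.
layer.active = ["knowledge"]
train_only(layer, "knowledge")

# Stage 2 (SFT): the skill LoRA trains while the frozen knowledge LoRA
# is still applied, so it learns to use knowledge, not to memorize it.
layer.active = ["knowledge", "skill"]
train_only(layer, "skill")

# Inference: drop the knowledge LoRA; keep only the skill LoRA.
layer.active = ["skill"]
print(layer(torch.randn(2, 16)).shape)   # torch.Size([2, 16])
```

Because the adapters are just switchable residual weights, the same mechanism gives the plug-and-play knowledge modules mentioned above: a different knowledge adapter can be swapped in without retraining the skill adapter.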

-----

💡 Key Insights:

• Knowledge and skills can be effectively disentangled during fine-tuning

• Fictitious synthetic data, normally harmful, becomes beneficial under PREREQ-TUNE (see the data sketch after this list)

• The model generalizes the grounding behavior it learns against the knowledge LoRA to its own pre-trained knowledge

• Enables scalable training with cheap synthetic data

• Opens possibilities for novel retrieval-augmented generation paradigms
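
As a rough illustration of the data recipe, the sketch below builds mutually inconsistent "knowledge versions" for one fictitious person, each paired with a QA example answerable only from that version. The entity name, attribute pools, and templates are invented here; the paper generates richer fictitious documents with an LLM.

```python
# Illustrative sketch of fictitious multi-version synthetic data.
# All names and attribute pools below are invented for this demo.
import random

BIRTH_YEARS = [1961, 1974, 1988]
PROFESSIONS = ["marine biologist", "jazz pianist", "cartographer"]

def make_versions(name: str, k: int = 2):
    """Build k mutually inconsistent knowledge versions for one entity."""
    years = random.sample(BIRTH_YEARS, k)    # distinct values per version
    jobs = random.sample(PROFESSIONS, k)
    versions = []
    for born, job in zip(years, jobs):
        doc = f"{name} (born {born}) is a {job}."              # stage-1 knowledge text
        qa = (f"What does {name} do?", f"{name} is a {job}.")  # stage-2 skill example
        versions.append({"knowledge_doc": doc, "skill_example": qa})
    return versions

# Stage 1 trains the knowledge LoRA on knowledge_doc; stage 2 trains the
# skill LoRA to answer consistently with whichever version is loaded.
for v in make_versions("Mira Talvane"):
    print(v["knowledge_doc"], "->", v["skill_example"][1])
```

Because the entities are fictitious, a correct answer can only come from the knowledge LoRA rather than from pre-training, which is what turns otherwise harmful fictitious data into a grounding signal.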

-----

📊 Results:

• Outperforms existing hallucination reduction methods across QA and generation tasks

• Biography Generation: 45.30% accuracy vs 32.70% baseline

• Medical QA: 74.35% accuracy vs 69.94% baseline

• Short QA: 47.91% accuracy vs 46.42% baseline
