"Just What You Desire: Constrained Timeline Summarization with Self-Reflection for Enhanced Relevance"

A podcast on this paper was generated with Google's Illuminate.

Self-reflecting LLMs now create timelines that stick to what you asked for.

Timeline summarization gets personal: This paper introduces Constrained Timeline Summarization, letting users specify exactly what events they want to see in their timelines.

https://arxiv.org/abs/2412.17408v1

🎯 Original Problem:

Traditional timeline summarization includes all important events without considering user preferences, making it hard to find specific information like "only Stephen King's book releases" or "only Tiger Woods' legal battles."

-----

🔍 Solution in this Paper:

→ Created the CREST dataset with 201 timelines across 47 topics, using GPT-4 to generate constraints and human annotators to verify events

→ Developed the REACTS method, which uses LLMs to generate constraint-specific summaries from news articles (see the generation sketch after this list)

→ Implemented self-reflection, in which the LLM verifies whether each summary satisfies the specified constraint

→ Clustered similar events using GTE embeddings and selected the top clusters for the final timeline (see the clustering sketch after this list)
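
A minimal sketch of the generation-plus-self-reflection step, assuming a generic `call_llm` helper in place of the actual model (the paper reports results with Llama-3.1 70B); the prompts and the "NONE" convention are illustrative, not the paper's exact implementation:

```python
# Sketch: constraint-aware summarization with a self-reflection check.
# The prompts and the call_llm stub are assumptions for illustration.

def call_llm(prompt: str) -> str:
    """Stand-in for an instruction-tuned LLM call (e.g., Llama-3.1 70B)."""
    raise NotImplementedError("Plug in your LLM backend here.")

def summarize_article(article: str, constraint: str) -> str:
    """Ask the LLM for a one-sentence event summary that respects the constraint."""
    prompt = (
        f"Constraint: {constraint}\n"
        f"Article:\n{article}\n\n"
        "Summarize the single most important event in one sentence. "
        "Only describe an event that satisfies the constraint; otherwise reply 'NONE'."
    )
    return call_llm(prompt).strip()

def passes_self_reflection(summary: str, constraint: str) -> bool:
    """Self-reflection: the LLM re-checks its own summary against the constraint."""
    prompt = (
        f"Constraint: {constraint}\n"
        f"Summary: {summary}\n\n"
        "Does the summary satisfy the constraint? Answer 'yes' or 'no'."
    )
    return call_llm(prompt).strip().lower().startswith("yes")

def constrained_summaries(articles: list[str], constraint: str) -> list[str]:
    """Keep only summaries that are non-empty and survive the self-reflection check."""
    kept = []
    for article in articles:
        summary = summarize_article(article, constraint)
        if summary and summary != "NONE" and passes_self_reflection(summary, constraint):
            kept.append(summary)
    return kept
```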
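
And a sketch of the clustering step, assuming the publicly available `thenlper/gte-large` checkpoint via sentence-transformers plus agglomerative clustering; the distance threshold and the "largest clusters first" selection rule are assumptions, not the paper's exact settings:

```python
# Sketch: cluster candidate summaries with GTE embeddings, keep the largest clusters.
from collections import Counter

from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

def top_cluster_summaries(summaries: list[str], n_events: int = 10) -> list[str]:
    """Return one representative summary from each of the largest clusters."""
    model = SentenceTransformer("thenlper/gte-large")  # assumed GTE checkpoint
    embeddings = model.encode(summaries, normalize_embeddings=True)

    # Group summaries that likely describe the same event (threshold is illustrative).
    clustering = AgglomerativeClustering(
        n_clusters=None,
        metric="cosine",
        linkage="average",
        distance_threshold=0.3,
    )
    labels = clustering.fit_predict(embeddings)

    # Rank clusters by size; use the first member of each as its representative.
    representatives = []
    for label, _ in Counter(labels).most_common(n_events):
        idx = next(i for i, lab in enumerate(labels) if lab == label)
        representatives.append(summaries[idx])
    return representatives
```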

-----

🧠 Key Insights:

→ Self-reflection significantly improves timeline accuracy by filtering out irrelevant events

→ The method requires no training or fine-tuning, only decoding parameter settings (illustrative settings sketched after this list)

→ The approach works effectively with streaming news, unlike the baseline methods
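
Illustrative decoding settings for such an inference-only setup; the specific values below are assumptions, not the parameters reported in the paper:

```python
# Assumed decoding configuration for reproducible, inference-only generation.
GENERATION_KWARGS = {
    "temperature": 0.0,     # deterministic decoding
    "top_p": 1.0,
    "max_new_tokens": 128,  # enough for one-sentence event summaries
}
```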

-----

📊 Results:

→ REACTS with Llama-3.1 70B showed a 2.82% improvement in AR-1 F1 score

→ Date F1 score improved by 6.38% with the self-reflection component

→ Achieved 94.7% inter-annotator agreement during dataset creation