Your knowledge graph just got a time machine - now it answers questions about when things happened.
TimelineKGQA introduces a framework for generating temporal question-answer pairs from any knowledge graph.
-----
https://arxiv.org/abs/2501.04343
🤔 Original Problem:
→ Current temporal knowledge graph question answering (TKGQA) datasets are limited in scope and complexity
→ Existing methods already achieve over 90% accuracy on available benchmarks, so those benchmarks can no longer distinguish stronger approaches
→ No comprehensive framework exists to categorize and generate diverse temporal questions
-----
🔧 Solution in this Paper:
→ TimelineKGQA introduces a novel categorization framework based on context complexity (Simple, Medium, Complex)
→ The framework classifies questions by answer focus (Temporal vs Factual) and temporal relations (Allen Relations, Time Range Sets, Duration)
→ A Python package converts any knowledge graph to temporal format with flexible time granularity
→ Uses fact sampling that prioritizes temporally close events, plus LLM paraphrasing so generated questions read naturally (see the sketch below)
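As a rough illustration of the fact-sampling step, here is a minimal sketch that weights candidate facts by temporal closeness to an anchor fact. The `TemporalFact` dataclass, the `sample_context` function, and the exponential-decay weighting are all assumptions made for illustration, not the TimelineKGQA package's actual API.

```python
import math
import random
from dataclasses import dataclass

# Hypothetical temporal fact shape; the real package's data model may differ.
@dataclass
class TemporalFact:
    subject: str
    relation: str
    obj: str
    start: int  # e.g. year
    end: int


def sample_context(anchor: TemporalFact, facts: list[TemporalFact],
                   k: int = 2, scale: float = 10.0) -> list[TemporalFact]:
    """Pick k additional facts, favoring those temporally close to the anchor.

    The exponential decay over midpoint distance is an illustrative choice,
    not the scheme used by the paper.
    """
    candidates = [f for f in facts if f is not anchor]

    def midpoint(f: TemporalFact) -> float:
        return (f.start + f.end) / 2

    weights = [math.exp(-abs(midpoint(f) - midpoint(anchor)) / scale)
               for f in candidates]
    # Sampling with replacement keeps the sketch short; a real generator
    # would deduplicate.
    return random.choices(candidates, weights=weights, k=k)
```

An anchor fact plus one or two sampled neighbors then forms the 2- or 3-fact context that a Medium or Complex question is written against.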
-----
💡 Key Insights:
→ Question complexity can be systematically categorized by the number of context facts required (1, 2, or 3)
→ Temporal capabilities fall into four categories: TCR, TPR, TSO, and TAO
→ LLM paraphrasing avoids the rigidity of purely template-based question generation
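To ground the Allen Relations axis of the categorization framework, here is a minimal sketch that classifies the 13 standard Allen interval relations between the time spans of two facts. It assumes closed integer intervals and is textbook Allen algebra, not the paper's implementation.

```python
def allen_relation(a_start: int, a_end: int, b_start: int, b_end: int) -> str:
    """Classify the Allen interval relation of span A relative to span B."""
    if a_end < b_start:
        return "before"
    if b_end < a_start:
        return "after"
    if a_end == b_start:
        return "meets"
    if b_end == a_start:
        return "met-by"
    if a_start == b_start and a_end == b_end:
        return "equal"
    if a_start == b_start:
        return "starts" if a_end < b_end else "started-by"
    if a_end == b_end:
        return "finishes" if a_start > b_start else "finished-by"
    if a_start > b_start and a_end < b_end:
        return "during"
    if a_start < b_start and a_end > b_end:
        return "contains"
    return "overlaps" if a_start < b_start else "overlapped-by"


# Example: "Did X hold office while Y was in power?" reduces to checking
# whether the relation between the two spans is an overlapping one.
print(allen_relation(2001, 2005, 2003, 2010))  # -> "overlaps"
```

A 2-fact temporal question can then be labeled by which of these relations holds between the two facts' time spans.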
-----
📊 Results:
→ Generated two benchmark datasets: ICEWS Actor (89,372 questions) and CronQuestion KG (41,720 questions)
→ A RAG baseline shows a clear difficulty progression: Simple (70% accuracy), Medium (10%), Complex (1%)