This paper introduces Fast Think-on-Graph (FastToG), which enhances Large Language Model reasoning over Knowledge Graphs by retrieving and reasoning community by community, improving both accuracy and speed.
-----
Paper - https://arxiv.org/abs/2501.14300
Original Problem 😥:
→ Existing Graph Retrieval Augmented Generation methods struggle with complex, multi-hop queries.
→ Simpler retrieval methods fail to capture deep relationships in Knowledge Graphs.
→ Tightly coupled methods, in which the LLM traverses the graph node by node, become computationally expensive on dense graphs.
-----
Solution in this Paper 💡:
→ FastToG guides LLMs to reason "community by community" within Knowledge Graphs.
→ It uses community detection to find deeper correlations.
→ FastToG employs Local Community Search (LCS) for efficient community identification.
→ LCS includes coarse pruning based on modularity to filter candidate communities structurally.
→ Fine pruning uses LLMs to select the most relevant communities.
→ Two Community-to-Text methods, Triple2Text and Graph2Text, convert graph structures into text for LLMs.
→ Graph2Text uses a fine-tuned smaller language model for better text conversion and summarization.
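The coarse-pruning idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it treats each "start node + neighbor" pair as a candidate community, scores it by the modularity of the resulting two-block partition (using networkx), and keeps the top-k. The real Local Community Search grows larger communities, and the surviving candidates would then go to an LLM for fine pruning (omitted here).

```python
# Sketch of modularity-based coarse pruning (assumption: networkx is
# available; candidate communities here are just node pairs for brevity).
import networkx as nx
from networkx.algorithms.community import modularity

def coarse_prune(graph, start_node, top_k=3):
    """Rank candidate communities around start_node by modularity."""
    candidates = []
    for neighbor in graph.neighbors(start_node):
        community = {start_node, neighbor}
        rest = set(graph.nodes) - community
        # Modularity of the 2-block partition {community, rest of graph}
        score = modularity(graph, [community, rest])
        candidates.append((score, community))
    candidates.sort(key=lambda c: c[0], reverse=True)
    # The paper would now hand these survivors to an LLM for fine pruning.
    return [community for _, community in candidates[:top_k]]

# Toy knowledge graph: two triangles joined by a single bridge edge
g = nx.Graph()
g.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),   # cluster 1
                  ("c", "d"),                            # bridge
                  ("d", "e"), ("e", "f"), ("d", "f")])   # cluster 2
top = coarse_prune(g, "c", top_k=2)
```

Structural scoring like this is cheap, so it filters candidates before any LLM call is spent on them.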
-----
Key Insights from this Paper 🧠:
→ Reasoning at the community level, rather than individual nodes or paths, significantly reduces reasoning chain length.
→ Community-based approach enhances the accuracy and explainability of LLM responses.
→ Modularity-based coarse pruning effectively reduces candidate communities while preserving structural information.
→ Converting community structures to text is crucial for LLMs to effectively utilize graph information.
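Of the two community-to-text methods, Triple2Text is the template-based one: it linearizes each (head, relation, tail) triple into a short sentence. A minimal sketch, with the function name and phrasing as illustrative assumptions (Graph2Text, by contrast, would feed the whole community subgraph to a fine-tuned smaller language model for a fluent summary):

```python
def triple2text(triples):
    """Linearize (head, relation, tail) triples into plain sentences.

    A hypothetical template-based converter in the spirit of Triple2Text;
    the paper's exact templates may differ.
    """
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

# A tiny example community expressed as triples
community = [("Marie Curie", "won", "Nobel Prize in Physics"),
             ("Marie Curie", "born_in", "Warsaw")]
text = triple2text(community)
# text == "Marie Curie won Nobel Prize in Physics. Marie Curie born in Warsaw."
```

The resulting string is what the LLM actually consumes, which is why the quality of this conversion step matters so much.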