
"Graph Linearization Methods for Reasoning on Graphs with Large Language Models"

The podcast on this paper is generated with Google's Illuminate.

Make LLMs read graphs by converting them into meaningful text sequences

📚 https://arxiv.org/abs/2410.19494

🎯 Original Problem:

LLMs cannot process graph data directly because they operate on token sequences. Converting graphs into text sequences (linearization) while preserving their structural information remains an open challenge.

-----

🔧 Solution in this Paper:

• Developed graph linearization methods (sketched in code after this list) using:

- Graph centrality (PageRank and degree)

- Graph degeneracy (k-core decomposition)

- Node relabeling schemes

• Created edge ordering strategies based on node importance

• Applied node relabeling to maintain global alignment

• Evaluated on a synthetic dataset of 3,000 graphs with Llama 3 Instruct 8B
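To make the centrality-based ordering concrete, here is a minimal sketch using networkx. The graph generator, output template, and tie-breaking rule are illustrative assumptions, not the paper's exact setup:

```python
import networkx as nx

def linearize_by_centrality(G: nx.Graph, method: str = "degree") -> str:
    """Emit edges as text, ordered by endpoint centrality (high first)."""
    if method == "degree":
        score = dict(G.degree())
    elif method == "pagerank":
        score = nx.pagerank(G)
    else:
        raise ValueError(f"unknown centrality method: {method}")
    # Edges touching high-centrality nodes come first; ties are broken by
    # the lower-scored endpoint. This exact rule is an assumption.
    ordered = sorted(
        G.edges(),
        key=lambda e: (max(score[e[0]], score[e[1]]),
                       min(score[e[0]], score[e[1]])),
        reverse=True,
    )
    return " ".join(f"Node {u} is connected to node {v}." for u, v in ordered)

# Example on a small random graph (the generator choice is arbitrary).
G = nx.erdos_renyi_graph(8, 0.4, seed=0)
print(linearize_by_centrality(G, method="pagerank"))
```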
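The degeneracy-based variant follows the same pattern but ranks edges by k-core membership, via networkx's `core_number`. Ordering the densest substructures first is an assumed reading of the method:

```python
import networkx as nx

def linearize_by_degeneracy(G: nx.Graph) -> str:
    """Emit edges as text, ordered by the k-core index of their endpoints."""
    core = nx.core_number(G)  # largest k such that the node is in a k-core
    ordered = sorted(
        G.edges(),
        key=lambda e: (max(core[e[0]], core[e[1]]),
                       min(core[e[0]], core[e[1]])),
        reverse=True,  # densest substructures first (assumed convention)
    )
    return " ".join(f"Node {u} is connected to node {v}." for u, v in ordered)
```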
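Finally, a sketch of a node relabeling scheme aimed at global alignment: nodes get fresh IDs in importance order, so the same label always names a structurally comparable node across graphs. The degree-based ranking here is an assumption for illustration:

```python
import networkx as nx

def relabel_by_importance(G: nx.Graph) -> nx.Graph:
    """Assign fresh IDs 0..n-1 in descending degree order."""
    # With a fixed scheme, "node 0" always names the most connected node,
    # giving the LLM a consistent (globally aligned) vocabulary across graphs.
    ranked = sorted(G.nodes(), key=lambda n: G.degree(n), reverse=True)
    mapping = {old: new for new, old in enumerate(ranked)}
    return nx.relabel_nodes(G, mapping)
```

The idea is that a consistent naming convention lets the model transfer what it learns about low-ID (important) nodes from one graph to the next.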

-----

💡 Key Insights:

• Linearized graph sequences should mirror the natural-language properties of local dependency and global alignment for LLMs to process them effectively

• Centrality-based methods consistently outperform random baselines

• Node relabeling shows mixed effects across different tasks

• Edge ordering significantly impacts an LLM's graph understanding

-----

📊 Results:

• Node Counting: Degree-based method achieved 62.28% accuracy

• Max Degree: Degree centrality reached 30.89% accuracy

• Motif Classification: PageRank method hit 47.27% accuracy

• All proposed methods consistently outperformed random baselines
