
"LLM is Knowledge Graph Reasoner: LLM's Intuition-aware Knowledge Graph Reasoning for Cold-start Sequential Recommendation"

A podcast on this paper was generated with Google's Illuminate.

LLMs guide Knowledge Graph exploration to make smarter recommendations when user data is limited.

LIKR combines LLMs with Knowledge Graphs through reinforcement learning, treating the LLM as an intuitive path reasoner to tackle cold-start recommendation challenges.

-----

https://arxiv.org/abs/2412.12464

🤔 Original Problem:

→ Traditional recommendation systems struggle with cold-start scenarios where user interaction data is limited

→ LLM-based recommendations face scalability issues due to token limits

→ Knowledge Graph methods lack temporal awareness and perform poorly with sparse data

-----

🔧 Solution in this Paper:

→ LIKR treats LLMs as KG path reasoners that output intuitive exploration strategies.

→ The system feeds temporally-aware prompts to LLMs to predict user preferences.

→ A reinforcement learning agent explores the KG using rewards from both LLM intuition and KG embeddings.

→ The model combines LLM's general knowledge with KG's domain-specific insights through carefully balanced reward functions.
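The reward balancing described above can be sketched as a simple weighted sum. This is a minimal illustration, not the paper's implementation; the function and parameter names (`combined_reward`, `alpha`) are assumptions for clarity:

```python
def combined_reward(path_entity, llm_preferred_entities, kg_embedding_score, alpha=0.5):
    """Hypothetical sketch of LIKR-style reward shaping for the RL agent.

    path_entity: entity the agent's current KG path has reached
    llm_preferred_entities: items the LLM's intuition predicts the user will like
    kg_embedding_score: similarity score from a pretrained KG embedding model (0..1)
    alpha: balance between LLM intuition and KG embeddings (domain-dependent)
    """
    # LLM-intuition reward: did the path reach an entity the LLM predicted?
    r_llm = 1.0 if path_entity in llm_preferred_entities else 0.0
    # KG-embedding reward: domain-specific relevance from the graph structure
    r_kg = kg_embedding_score
    # Weighted combination; the optimal alpha varies by domain (see Key Insights)
    return alpha * r_llm + (1 - alpha) * r_kg
```

For example, a path reaching an LLM-predicted item with a KG embedding score of 0.8 and alpha=0.5 yields a reward of 0.9.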

-----

💡 Key Insights:

→ LLMs can effectively guide KG exploration without needing the entire dataset

→ Temporal awareness in prompts significantly improves recommendation quality

→ Optimal balance between LLM intuition and KG embedding rewards varies by domain

-----

📊 Results:

→ Outperforms state-of-the-art methods on MovieLens-1M with higher recall@20 and nDCG@20

→ GPT-4-preview shows the best performance among the tested LLMs

→ Achieves 4.83% recall@20 and 19.14% nDCG@20 on the MovieLens-1M dataset