"Leveraging Memory Retrieval to Enhance LLM-based Generative Recommendation"

The podcast below on this paper was generated with Google's Illuminate.

Memory-powered LLMs now understand your long-term interests, not just recent clicks

The paper proposes AutoMR, a framework that enhances LLM-based recommendation by leveraging users' long-term interaction histories through automated memory retrieval.

https://arxiv.org/abs/2412.17593

Original Problem 💡:

→ LLMs have limited context windows, which restricts LLM-based recommenders to only a user's most recent interactions.

→ This limitation causes LLMs to miss important long-term user interests and preferences.

Solution in this Paper 🔧:

→ AutoMR stores users' long-term interaction histories in an external memory encoded by the LLM.

→ It uses a trained retriever to extract relevant historical information when needed.

→ The retriever is trained on annotations derived from how much each memory chunk reduces the perplexity of the ground-truth item (see the sketch after this list).

→ AutoMR combines retrieved long-term data with recent interactions to generate recommendations.
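
A minimal sketch of the perplexity-reduction annotation, assuming a Hugging Face causal LM; the model choice, prompt templates, and function names below are illustrative assumptions, not the paper's released code:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative only: "gpt2" and the prompt templates stand in for whatever
# backbone and formatting the actual system uses.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
llm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def item_perplexity(prompt: str, target_item: str) -> float:
    """Perplexity of the ground-truth item title, conditioned on the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    target_ids = tokenizer(target_item, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    logits = llm(input_ids).logits
    # Logits at positions P-1 .. P+L-2 predict the L target tokens.
    target_logits = logits[:, prompt_ids.size(1) - 1 : -1, :]
    nll = F.cross_entropy(
        target_logits.reshape(-1, target_logits.size(-1)),
        target_ids.reshape(-1),
    )
    return torch.exp(nll).item()

def usefulness_label(recent: str, memory_chunk: str, target_item: str) -> float:
    """Annotation for retriever training: how much does adding one memory
    chunk to the prompt reduce perplexity on the ground-truth next item?"""
    base = f"Recent interactions: {recent}\nNext item:"
    augmented = f"Retrieved memory: {memory_chunk}\n{base}"
    return item_perplexity(base, target_item) - item_perplexity(augmented, target_item)
```

Chunks whose inclusion yields a larger perplexity drop are labeled as more useful, and the retriever is then trained to rank them accordingly.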

Key Insights 💡:

→ Manual retrieval design is challenging, but annotating memory usefulness is straightforward

→ Learning-based retrieval outperforms semantic retrieval for recommendation tasks (see the sketch after this list)

→ Distant historical interactions can provide valuable signals for current recommendations
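
To make the second insight concrete, here is a minimal sketch contrasting the two retrieval styles, assuming a bi-encoder scorer; the sentence-transformers encoder, mean pooling, and pairwise margin loss are assumptions, not necessarily the paper's design:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Illustrative bi-encoder: encoder choice, pooling, and loss are assumptions.
name = "sentence-transformers/all-MiniLM-L6-v2"
tok = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)

def embed(texts: list[str]) -> torch.Tensor:
    """Mean-pooled embeddings for the recent history and memory chunks."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)

def retrieval_scores(recent_history: str, memory_chunks: list[str]) -> torch.Tensor:
    """Cosine similarity between the recent history and each memory chunk.
    Untrained, this is the semantic baseline; trained, the learned retriever."""
    q = F.normalize(embed([recent_history]), dim=-1)
    m = F.normalize(embed(memory_chunks), dim=-1)
    return (q @ m.T).squeeze(0)

def ranking_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Pairwise margin loss: chunks with larger perplexity-reduction labels
    should be scored above chunks with smaller labels."""
    s_diff = scores.unsqueeze(1) - scores.unsqueeze(0)
    l_diff = labels.unsqueeze(1) - labels.unsqueeze(0)
    pairs = l_diff > 0
    if not pairs.any():
        return scores.sum() * 0.0
    return F.relu(1.0 - s_diff[pairs]).mean()
```

Trained against the perplexity-reduction labels, the scorer's top-ranked chunks are the ones prepended to the recent-interaction prompt at inference time.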

Results 📊:

→ Tested on the Amazon Book and Movie datasets from 2017

→ Outperformed baselines: BIGRec, ReLLa, TRSR, and SASRec

→ Achieved 0.0291 Recall@1 and 0.0379 Recall@5 on the Book dataset

→ Showed 0.0601 Recall@1 and 0.0638 Recall@5 on the Movie dataset (Recall@K illustrated after this list)
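
For reference, Recall@K measures the fraction of test cases whose ground-truth next item appears among the top-K recommendations. A minimal sketch, with made-up item titles rather than the paper's data:

```python
def recall_at_k(ranked_items: list[list[str]], ground_truth: list[str], k: int) -> float:
    """Fraction of test cases whose ground-truth item appears in the top-k
    recommendations. With one relevant item per user, Recall@K equals HitRate@K."""
    hits = sum(
        1 for ranked, truth in zip(ranked_items, ground_truth) if truth in ranked[:k]
    )
    return hits / len(ground_truth)

# Toy example (illustrative titles only): 0.0291 Recall@1 on the Book dataset
# means roughly 2.9% of test users had the ground-truth book ranked first.
recs = [["Dune", "Hyperion"], ["Foundation", "Dune"]]
truth = ["Dune", "Dune"]
print(recall_at_k(recs, truth, 1))  # 0.5
print(recall_at_k(recs, truth, 5))  # 1.0
```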
