
"Preference Discerning with LLM-Enhanced Generative Retrieval"

A podcast on this paper was generated with Google's Illuminate.

Making recommendation systems that listen to user preferences, not just watch their actions.

Sequential recommendation systems struggle with personalization because they can't explicitly understand user preferences.

-----

https://arxiv.org/abs/2412.08604

🤔 Original Problem:

→ Current recommendation systems model user preferences only implicitly from interaction history, which limits personalization and prevents them from adapting to explicitly stated user preferences.

-----

🔧 Solution in this Paper:

→ Introduces "preference discerning" - a new paradigm that uses LLMs to generate explicit user preferences from reviews and item data.

→ Implements Mender (Multimodal Preference Discerner), which fuses pre-trained language encoders with generative retrieval.

→ Uses a cross-attention mechanism to condition recommendations on the generated preferences, expressed in natural language (a minimal sketch follows this list).

→ Enables dynamic steering of recommendations through user-specified preferences.
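
The sketch below illustrates only the conditioning step described above: a toy generative-retrieval decoder over item semantic-ID tokens that cross-attends to embeddings of LLM-generated preference sentences. Class and parameter names (`PreferenceConditionedDecoder`, `num_semantic_ids`, the dimensions) are illustrative assumptions, not the paper's Mender implementation.

```python
# Minimal sketch (not the paper's code): conditioning a generative-retrieval
# decoder on natural-language preference embeddings via cross-attention.
import torch
import torch.nn as nn


class PreferenceConditionedDecoder(nn.Module):
    """Autoregressive decoder over item semantic-ID tokens that cross-attends
    to preference embeddings from a pre-trained language encoder."""

    def __init__(self, num_semantic_ids: int, d_model: int = 256,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.token_emb = nn.Embedding(num_semantic_ids, d_model)
        layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, num_semantic_ids)

    def forward(self, semantic_id_tokens: torch.Tensor,
                preference_embeddings: torch.Tensor) -> torch.Tensor:
        # semantic_id_tokens: (batch, seq) interaction history as semantic IDs
        # preference_embeddings: (batch, num_prefs, d_model), e.g. a frozen
        # language encoder run over LLM-generated preference sentences
        x = self.token_emb(semantic_id_tokens)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(
            x.size(1)).to(x.device)
        # Cross-attention to the preference "memory" steers next-item generation
        h = self.decoder(tgt=x, memory=preference_embeddings,
                         tgt_mask=causal_mask)
        return self.lm_head(h)  # logits over the next semantic-ID token


if __name__ == "__main__":
    model = PreferenceConditionedDecoder(num_semantic_ids=1024)
    history = torch.randint(0, 1024, (2, 8))          # toy interaction history
    prefs = torch.randn(2, 3, 256)                    # stand-in preference embeddings
    print(model(history, prefs).shape)                # (2, 8, 1024)
```

Because the preferences enter only through the cross-attention memory, swapping in a different user-specified preference sentence at inference time steers the generated recommendations without retraining.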

-----

💡 Key Insights:

→ Explicit preference modeling consistently improves recommendation quality

→ Fine-grained steering capabilities emerge naturally from training

→ Larger language models significantly improve preference understanding

→ Models struggle to follow sentiment-based preferences unless explicitly trained to do so

-----

📊 Results:

→ Up to 45% relative improvement in recommendation performance

→ Achieves state-of-the-art results on preference-based recommendations

→ Successfully generalizes to new user sequences not seen during training
