Retrieval, prompt augmentation, and a new LoRA method improve LLM recommendation on long behavior sequences.
This paper enhances LLMs for recommendation by addressing their difficulty in understanding long user behavior sequences. It proposes a framework (ReLLaX) with optimizations at the data, prompt, and parameter levels.
----------
Paper - https://arxiv.org/abs/2501.13344
🤔: Original Problem
→ LLMs struggle to extract useful information from long user behavior sequences in recommendation tasks, even when those sequences fit within the context window. The paper terms this "lifelong sequential behavior incomprehension."
----------
⚙️: Solution in this Paper
→ ReLLaX uses Semantic User Behavior Retrieval (SUBR) to select the historical behaviors most semantically relevant to the target item, improving data quality.
→ It augments prompts with soft prompts derived from conventional recommendation models, injecting collaborative knowledge into the LLM.
→ It proposes Component Fully-interactive LoRA (CFLoRA), which enables full interaction between the decomposed LoRA components, making the tuned parameters more expressive for capturing long-sequence information. A theoretical analysis shows that existing LoRA-based methods are degraded versions of CFLoRA with only limited component interaction.
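A minimal sketch of the retrieval step at the data level: rank historical behaviors by semantic similarity to the target item and keep the top-k. The embeddings, function name, and chronological re-sorting are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def subr_retrieve(behavior_embs: np.ndarray, target_emb: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k behaviors most semantically similar to the target item.

    behavior_embs: (num_behaviors, dim) semantic embeddings of historical behaviors
    target_emb:    (dim,) semantic embedding of the target item
    """
    # Cosine similarity between each historical behavior and the target item.
    norms = np.linalg.norm(behavior_embs, axis=1) * np.linalg.norm(target_emb)
    sims = behavior_embs @ target_emb / np.clip(norms, 1e-9, None)
    # Keep the top-k most relevant behaviors; re-sort indices so the
    # retained behaviors stay in chronological order in the prompt.
    topk = np.argsort(-sims)[:k]
    return np.sort(topk)
```

The selected behaviors then replace the truncated recent-history window in the textual prompt.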
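At the prompt level, the collaborative-knowledge injection can be sketched as a learned projection from a conventional recommendation model's item embedding into the LLM's embedding space, yielding a soft prompt token. The dimensions and the single-token, linear-projector setup here are illustrative assumptions.

```python
import numpy as np

d_cf, d_llm = 64, 4096   # CF item-embedding dim; LLM hidden dim (illustrative)
rng = np.random.default_rng(0)

# Projector weights: trainable in practice, random here for the sketch.
W = rng.normal(size=(d_cf, d_llm)) * 0.01

# Item embedding taken from a pretrained conventional recommendation model.
item_emb = rng.normal(size=(d_cf,))

# Soft prompt token in LLM embedding space, to be placed alongside the
# token embeddings of the item's textual description in the prompt.
soft_token = item_emb @ W   # shape: (d_llm,)
```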
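At the parameter level, the contrast with vanilla LoRA can be sketched as follows: vanilla LoRA pairs each down-projection component only with its own up-projection, while a fully-interactive variant lets every pair of components combine. The learnable mixing matrix `alpha` below is an illustrative stand-in for the paper's aggregation, not its exact formulation (and real LoRA initializes B to zero; it is random here so the demo produces nonzero updates).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 16, 8, 4           # model dim, total rank, number of components
rc = r // n                  # rank per component

# LoRA matrices split into n components (random for demonstration).
A = [rng.normal(size=(rc, d)) * 0.1 for _ in range(n)]
B = [rng.normal(size=(d, rc)) * 0.1 for _ in range(n)]

# Vanilla LoRA: B_i interacts only with its own A_i.
delta_vanilla = sum(B[i] @ A[i] for i in range(n))

# Fully-interactive variant: every B_i combines with every A_j,
# weighted by a learnable mixing matrix alpha.
alpha = rng.normal(size=(n, n))
delta_full = sum(alpha[i, j] * (B[i] @ A[j]) for i in range(n) for j in range(n))

# Setting alpha to the identity recovers the vanilla pairwise update,
# which is why pairwise LoRA is a restricted special case.
```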
----------
💡: Key Insights
→ Longer sequences provide more useful information to conventional recommendation models, whose performance keeps improving with length. LLM performance instead peaks at a short sequence length and then declines, unlike in typical NLP tasks.
→ User behavior sequences are heterogeneous, making it hard for LLMs to extract useful information.
----------
📈: Results
→ Achieves significant performance improvement over traditional models and other LLM-based methods across BookCrossing, MovieLens-1M, and MovieLens-25M datasets.
→ Shows steady improvement as behavior sequences grow longer, alleviating the lifelong sequential behavior incomprehension problem.
→ Outperforms vanilla LoRA-based methods, demonstrating the effectiveness of CFLoRA.