
"ComMer: a Framework for Compressing and Merging User Data for Personalization"

The podcast below on this paper was generated with Google's Illuminate.

Compress once, personalize forever: ComMer's approach to efficient LLM adaptation.

ComMer introduces a framework that compresses and merges user data into compact representations for efficient LLM personalization, reducing computational costs while maintaining performance.

-----

https://arxiv.org/abs/2501.03276

🤔 Original Problem:

Personalizing LLMs faces two major challenges: prompt engineering runs into context-window limits and is computationally expensive at inference time, while fine-tuning a separate model per user requires substantial training resources.

-----

🔧 Solution in this Paper:

→ ComMer compresses each user document independently into a soft prompt using a frozen LLM with trainable compression embeddings

→ These compressed representations are merged through mean pooling into a single compact form

→ The merged representation is fed into a frozen LLM for generating personalized responses

→ The entire process is trained end-to-end using cross-entropy loss (a minimal sketch of the pipeline follows this list)

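A minimal sketch of the compress-then-merge pipeline, assuming a PyTorch setup. This is an illustrative toy, not the authors' code: the tiny bidirectional Transformer stands in for the frozen pretrained decoder LLM, and all names, dimensions, and hyperparameters here are assumptions made for the example. The only trainable parameters are the compression embeddings, matching the paper's description.

```python
# Toy ComMer pipeline: compress each doc -> mean-pool -> condition a frozen LM.
# All module names, sizes, and the stand-in Transformer are illustrative assumptions.
import torch
import torch.nn as nn

D_MODEL = 64   # hidden size of the stand-in "LLM"
N_COMP = 8     # trainable compression embeddings per document
VOCAB = 1000

class FrozenLM(nn.Module):
    """Stand-in for a frozen pretrained LLM that accepts input embeddings."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(D_MODEL, VOCAB)
        for p in self.parameters():
            p.requires_grad = False  # frozen, as in the paper

    def forward(self, inputs_embeds):
        h = self.encoder(inputs_embeds)
        return h, self.lm_head(h)

class ComMer(nn.Module):
    def __init__(self, lm: FrozenLM):
        super().__init__()
        self.lm = lm
        # The only trainable parameters: the compression embeddings.
        self.comp_embeds = nn.Parameter(torch.randn(N_COMP, D_MODEL) * 0.02)

    def compress(self, doc_ids):
        """Compress one document (token ids, shape [T]) into a soft prompt [N_COMP, D]."""
        doc = self.lm.embed(doc_ids).unsqueeze(0)            # [1, T, D]
        comp = self.comp_embeds.unsqueeze(0)                 # [1, N_COMP, D]
        h, _ = self.lm(torch.cat([doc, comp], dim=1))        # run the frozen LM
        return h[0, -N_COMP:]                                # states at compression positions

    def merge(self, soft_prompts):
        """Mean-pool per-document soft prompts into one compact representation."""
        return torch.stack(soft_prompts).mean(dim=0)         # [N_COMP, D]

    def forward(self, docs, query_ids, target_ids):
        merged = self.merge([self.compress(d) for d in docs])
        query = self.lm.embed(query_ids)                     # [Q, D]
        inputs = torch.cat([merged, query], dim=0).unsqueeze(0)
        _, logits = self.lm(inputs)
        # Cross-entropy on the positions that should predict the target tokens.
        pred = logits[0, -target_ids.numel():]
        return nn.functional.cross_entropy(pred, target_ids)

# Toy usage: three user documents, one query, one target continuation.
lm = FrozenLM()
model = ComMer(lm)
docs = [torch.randint(0, VOCAB, (20,)) for _ in range(3)]
query = torch.randint(0, VOCAB, (10,))
target = torch.randint(0, VOCAB, (10,))
loss = model(docs, query, target)
loss.backward()  # gradients flow only into model.comp_embeds
```

Because each document is compressed independently before pooling, the per-document compression can be cached and reused across queries, which is where the inference-time savings come from.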
-----

💡 Key Insights:

→ ComMer excels in personalized skill learning tasks but shows limitations in knowledge-intensive scenarios

→ Performance improves with more documents in skill learning tasks, following a power-law relationship (see the note after this list)

→ Mean pooling proves more effective than concatenation for merging document representations

→ The choice of pretraining dataset has minimal impact on final performance

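One rough way to read the power-law claim: if quality is measured by a loss such as perplexity, the trend can be summarized by a relation of the following form, where n is the number of compressed documents. The specific functional form and constants below are illustrative, not taken from the paper.

```latex
% Illustrative power-law trend; a, b, \alpha are hypothetical fit constants.
\mathrm{loss}(n) \approx a \, n^{-\alpha} + b
```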
-----

📊 Results:

→ Achieves superior quality with fewer training resources than prompt tuning within a 128-token budget

→ Shows improved perplexity when given more documents at inference than were seen during training

→ Shows degraded performance in knowledge-intensive tasks as more documents are compressed and merged
