LLM4PR, the framework proposed in this paper, brings LLMs to the post-ranking stage of search engines, handling text and numerical features in a single model
https://arxiv.org/abs/2411.01178
🎯 Original Problem:
Search engines need a post-ranking stage that optimizes user satisfaction beyond raw relevance scores. Existing LLM methods focus on retrieval and ranking, leaving post-ranking largely unexplored. The challenge lies in handling heterogeneous features (text alongside numerical signals) and adapting LLMs to the post-ranking task.
-----
🛠️ Solution in this Paper:
→ Introduces the LLM4PR framework, whose Query-Instructed Adapter (QIA) fuses diverse input features under guidance of the query (see the attention sketch after this list)
→ Adds a feature adaptation step that aligns user/item representations with the LLM's semantics through template-based generation (see the template sketch below)
→ Trains in two steps: feature adaptation first, then learning to post-rank
→ Pairs a main task that generates the ranking order with an auxiliary task that compares candidate lists
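
To make the QIA concrete: it can be read as a small cross-attention module in which the query embedding attends over per-feature embeddings and emits one fused vector in the LLM's embedding space. A minimal PyTorch sketch, with illustrative names and dimensions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class QueryInstructedAdapter(nn.Module):
    """Hypothetical sketch of a QIA: the query embedding attends over
    heterogeneous per-feature embeddings and emits one fused vector
    projected into the LLM's embedding space. Sizes are illustrative."""

    def __init__(self, feat_dim: int, query_dim: int, llm_dim: int, n_heads: int = 4):
        super().__init__()
        self.q_proj = nn.Linear(query_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.out_proj = nn.Linear(feat_dim, llm_dim)  # align with the LLM token width

    def forward(self, query_emb: torch.Tensor, feat_embs: torch.Tensor) -> torch.Tensor:
        # query_emb: (B, query_dim); feat_embs: (B, n_features, feat_dim)
        q = self.q_proj(query_emb).unsqueeze(1)        # (B, 1, feat_dim)
        fused, _ = self.attn(q, feat_embs, feat_embs)  # query attends over features
        return self.out_proj(fused.squeeze(1))         # (B, llm_dim) soft "token"
```

The fused output can then be spliced into the LLM's input sequence like an ordinary token embedding, one per user or candidate item.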
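
The template-based feature adaptation is easiest to picture as rendering numerical signals into natural language, giving the LLM a textual target against which adapter outputs can be aligned. A toy sketch; the feature names and wording are hypothetical:

```python
def serialize_item(item: dict) -> str:
    """Hypothetical template that renders mixed features as natural language
    so numerical signals become legible to the LLM."""
    return (
        f"Title: {item['title']}. "
        f"Click-through rate: {item['ctr']:.2%}. "
        f"Average rating: {item['rating']:.1f} out of 5. "
        f"Published {item['age_days']} days ago."
    )

print(serialize_item({"title": "Wireless earbuds", "ctr": 0.042,
                      "rating": 4.3, "age_days": 12}))
# Title: Wireless earbuds. Click-through rate: 4.20%. Average rating: 4.3 out of 5. ...
```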
-----
💡 Key Insights:
→ First framework to leverage LLMs specifically for the post-ranking stage of search engines
→ QIA combines heterogeneous features effectively through query-guided attention
→ The template-based approach grounds numerical features in the LLM's semantic understanding
→ The two-task training strategy improves ranking quality (one plausible objective is sketched below)
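
One plausible reading of the two-task setup: a token-level generation loss for producing the ranking order, plus an auxiliary classification loss for judging which of two candidate lists is better. The losses and weighting below are assumptions, not the paper's exact formulation:

```python
import torch.nn.functional as F

def post_rank_loss(main_logits, main_targets, aux_logits, aux_targets, aux_weight=0.5):
    """Hypothetical two-task objective: next-token cross-entropy over the
    generated ranking sequence plus an auxiliary loss for choosing between
    two candidate orderings. aux_weight is an illustrative hyperparameter."""
    # main: (B, T, V) logits vs. (B, T) target token ids for the ranking order
    main_loss = F.cross_entropy(
        main_logits.view(-1, main_logits.size(-1)), main_targets.view(-1)
    )
    # auxiliary: (B, 2) logits vs. (B,) labels for "which list is better"
    aux_loss = F.cross_entropy(aux_logits, aux_targets)
    return main_loss + aux_weight * aux_loss
```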
-----
📊 Results:
→ Achieved state-of-the-art performance on the BEIR, MovieLens-1M, and KuaiSAR datasets
→ Showed significant gains in handling both pure-text and heterogeneous (text plus numerical) features
→ Improved user satisfaction metrics in practical search applications