Layer-of-Thoughts Prompting (LoT): Leveraging LLM-Based Retrieval with Constraint Hierarchies
Layer-of-Thoughts (LoT) prompting filters LLM outputs through hierarchical layers, like a smart document sieve.
These structured thought layers turn messy LLM responses into precise legal document retrieval, as proposed in this paper.
Original Problem 🎯:
Current prompting techniques lack structured frameworks for handling complex multi-turn interactions with LLMs, especially in information retrieval tasks requiring precise filtering and reasoning.
Solution in this Paper 🔧:
• Layer-of-Thoughts (LoT) introduces hierarchical constraint-based prompting
• Uses two thought types: layer thoughts (conceptual steps) and option thoughts (partial solutions)
• Each layer processes inputs through filtering criteria, with outputs passed to subsequent layers
• Implements metrics such as all, at-least-k, locally-better, and max-count for aggregating results (see the sketch after this list)
• Supports both single-level (equal option thoughts) and multi-level (prioritized thoughts) structures
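A minimal Python sketch of how such aggregation metrics could operate on the accept/reject verdicts that option thoughts produce for a single candidate document. The function names and verdict representation are illustrative assumptions, not the paper's implementation; the locally-better metric, which compares candidates against one another, is omitted for brevity.

```python
# Hypothetical aggregation metrics over option-thought verdicts for one
# candidate document. Names and data shapes are assumptions for illustration.
from collections import Counter

def aggregate_all(verdicts: list[bool]) -> bool:
    """Keep the candidate only if every option thought accepts it."""
    return all(verdicts)

def aggregate_at_least_k(verdicts: list[bool], k: int) -> bool:
    """Keep the candidate if at least k option thoughts accept it."""
    return sum(verdicts) >= k

def aggregate_max_count(labels: list[str]) -> str:
    """Return the label chosen most often across option thoughts."""
    return Counter(labels).most_common(1)[0][0]

# Example: three option thoughts judged one candidate statute article.
verdicts = [True, True, False]
print(aggregate_all(verdicts))                 # False
print(aggregate_at_least_k(verdicts, k=2))     # True
print(aggregate_max_count(["relevant", "relevant", "irrelevant"]))  # "relevant"
```

In this reading, stricter metrics like all favor precision, while looser ones like at-least-k trade some precision for recall.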
Key Insights from this Paper 💡:
• Hierarchical filtering reduces computation by avoiding full corpus exploration
• Clear reasoning pathways enhance explainability in decision-making
• Structured layers balance precision and recall effectively
• Constraint hierarchies enable efficient scaling for large datasets
Results 📊:
• Japanese Civil Law Retrieval:
  - Highest F2 score: 0.835
  - Precision: 0.838
  - Recall: 0.839
  - Outperformed all COLIEE 2024 systems
• Normative Sentence Retrieval:
  - Recall: 0.966
  - Surpassed baseline methods (Chain-of-Thoughts: 0.862, BM25: 0.793)
  - Maintained reasonable precision while maximizing recall
🧩 The architecture and workflow of Layer-of-Thoughts (LoT) Prompting
The system works through multiple layers where:
  - Layer thoughts receive inputs from previous layers and determine how option thoughts are generated
  - Option thoughts produce partial solutions based on specific criteria
  - Outputs are aggregated and passed to the next layer
Each layer can have a single-level (equal option thoughts) or multi-level (prioritized option thoughts) structure, as sketched below.
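A minimal sketch of this layered workflow, assuming a hypothetical llm_judge(criterion, doc) call that asks the LLM whether a candidate document satisfies a layer's filtering criterion. It shows only the single-level case where all option thoughts are weighted equally, and it is not the authors' code.

```python
# Illustrative skeleton of the LoT layered workflow. `llm_judge` is an
# assumed helper that returns True if the document meets the criterion.
from typing import Callable, List

Judge = Callable[[str, str], bool]
Layer = Callable[[List[str]], List[str]]

def make_layer(criterion_prompt: str, llm_judge: Judge) -> Layer:
    """A layer thought: spawn one option thought per candidate and keep
    only the candidates whose option thought satisfies the criterion."""
    def layer(candidates: List[str]) -> List[str]:
        return [doc for doc in candidates if llm_judge(criterion_prompt, doc)]
    return layer

def run_lot(candidates: List[str], layers: List[Layer]) -> List[str]:
    """Pass the candidates that survive each layer on to the next layer."""
    for layer in layers:
        candidates = layer(candidates)
    return candidates
```

Because each layer only sees the survivors of the previous one, later (more expensive) criteria run on a progressively smaller candidate set, which is how the hierarchy avoids exploring the full corpus at every step.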