"Enhancing Zero-shot Chain of Thought Prompting via Uncertainty-Guided Strategy Selection"

The podcast on this paper is generated with Google's Illuminate.

ZEUS (Zero-shot Uncertainty-based Selection) teaches LLMs to reason better by measuring their uncertainty before choosing which examples to learn from

LLMs get smarter when they know what they don't know - that's the idea behind ZEUS

ZEUS enhances zero-shot Chain of Thought prompting by introducing uncertainty-based selection of demonstrations, making LLMs better at complex reasoning tasks without manual annotations.

-----

https://arxiv.org/abs/2412.00353

🤔 Original Problem:

Existing Chain of Thought (CoT) methods face limitations: handcrafted demonstrations require extensive expertise, while generic trigger phrases often lead to inaccuracies. No effective method exists for automatically selecting good demonstrations.

-----

🔧 Solution in this Paper:

→ ZEUS estimates uncertainty in LLM responses using three types of perturbations: temperature adjustments, trigger phrase variations, and question rephrasing

→ For each input question, ZEUS generates multiple responses through these perturbations to measure consistency

→ Based on uncertainty scores, ZEUS selects optimal demonstration examples that fall within specific uncertainty ranges

→ The selected demonstrations are then clustered to maintain diversity and used to guide the LLM's reasoning process

-----

💡 Key Insights:

→ Advanced LLMs perform better with Hard and Challenging examples, while simpler models excel with Trivial and Easy examples

→ Temperature-based uncertainty estimates are well-calibrated but lack sensitivity

→ Uncertainty-based selection consistently outperforms existing prompting methods

-----

📊 Results:

→ ZEUS achieves 64% prediction accuracy on test sets

→ Outperforms baseline methods across 4 reasoning benchmarks
