Re-Reading Improves Reasoning in Large Language Models

The podcast on this paper is generated with Google's Illuminate.

A very powerful but very simple prompting technique.

Simply asking the LLM to re-read the question significantly boosts reasoning across diverse tasks and model types. 💡

Repeating the question in the prompt, so it appears twice, unlocks latent reasoning potential.

📚 https://arxiv.org/pdf/2309.06275
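
To make that concrete, here is a minimal sketch of the RE2 prompt format, assuming the "Read the question again:" wording described in the paper; the helper name and the sample question are illustrative, not taken from the paper's code:

```python
def build_re2_prompt(question: str) -> str:
    """Zero-shot RE2: the question is stated, then repeated, before the answer."""
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A:"
    )

# Illustrative GSM8K-style question, not from the paper's examples.
print(build_re2_prompt(
    "Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?"
))
```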

Problem 🤔:

Decoder-only LLMs with unidirectional attention struggle with nuanced reasoning tasks due to limited global understanding of input questions.

Key Insights from this Paper 💡:

• Re-reading (RE2) input enhances reasoning by improving question comprehension

• Enables "bidirectional" understanding in unidirectional LLMs

• Compatible with existing thought-eliciting prompting methods

• Effective across various LLM types and reasoning tasks

Solution in this Paper 🔍:

• Introduces RE2 (Re-Reading) prompting method:

- Repeats the question in the prompt so the model reads it twice

- Enhances input understanding before reasoning

- Allows tokens to attend to full context in second pass

• Compatible with Chain-of-Thought and other prompting techniques

• Applicable to zero-shot, few-shot, and self-consistency settings (a minimal sketch follows below)
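
Below is a hedged sketch of layering RE2 under zero-shot Chain-of-Thought and self-consistency. The `llm` and `extract_answer` callables are placeholders for whatever model client and answer parser you use; they are assumptions, not a specific library API:

```python
from collections import Counter
from typing import Callable

def re2_cot_prompt(question: str) -> str:
    """RE2 + zero-shot CoT: repeat the question, then elicit step-by-step reasoning."""
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A: Let's think step by step."
    )

def re2_self_consistency(
    llm: Callable[[str], str],              # placeholder: prompt -> sampled completion
    extract_answer: Callable[[str], str],   # placeholder: completion -> final answer
    question: str,
    n_samples: int = 5,
) -> str:
    """Sample several reasoning paths over the same RE2 prompt and majority-vote the answer."""
    prompt = re2_cot_prompt(question)
    answers = [extract_answer(llm(prompt)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```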

Results 📊:

• Consistent improvements across 14 datasets and 112 experiments

• Effective for both instruction-tuned (ChatGPT) and non-instruction-tuned (LLaMA) models

• Increases n-gram recall between the generation and the input question (sketched below)

• Most effective when the question is read twice
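
For the n-gram recall point above, here is a rough sketch of that kind of measurement, i.e. the fraction of the question's n-grams that reappear in the model's output; this is an illustrative reading of the metric, not the paper's exact evaluation script:

```python
def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    """All contiguous n-grams of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_recall(question: str, generation: str, n: int = 2) -> float:
    """Share of the question's n-grams that also appear in the generation."""
    q = ngrams(question.lower().split(), n)
    g = ngrams(generation.lower().split(), n)
    return len(q & g) / len(q) if q else 0.0

# Example: a generation that restates the question recalls more of its n-grams.
print(ngram_recall(
    "How many tennis balls does Roger have now?",
    "Let's re-read: how many tennis balls does Roger have now? He has 5 plus 6 = 11.",
))
```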

------

Are you into AI and LLMs❓ Join me on X/Twitter with 50K+ others to stay on the bleeding edge of AI every day.

𝕏/🐦 https://x.com/rohanpaul_ai
