"A Study of the Plausibility of Attention between RNN Encoders in Natural Language Inference"

The podcast below was generated with Google's Illuminate.

Explaining model decisions in NLI (Natural Language Inference) takes more than raw attention weights.

This paper assesses the plausibility of attention mechanisms in natural language inference, comparing model-based attention with human and heuristic annotations.

Paper - https://arxiv.org/abs/2501.13735

Original Problem 🤔:

→ Attention maps are used to explain LLM decisions, but their plausibility (usefulness for human understanding) is not well-studied, especially in complex tasks like natural language inference.

Solution in this Paper 💡:

→ The paper compares cross-attention weights between two RNN encoders with human annotations and a heuristic based on word similarity in the eSNLI dataset.

→ The heuristic focuses on highlighting words with similar meanings between premise and hypothesis sentences, particularly for entailment.
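The word-similarity heuristic can be sketched as follows. This is a minimal NumPy illustration, not the paper's code: the threshold value and the use of generic word vectors are assumptions, and the function name is made up for this example.

```python
import numpy as np

def heuristic_map(premise_vecs, hypothesis_vecs, threshold=0.5):
    """Highlight words that have a semantically similar counterpart in the
    other sentence (illustrative sketch, not the paper's implementation).

    premise_vecs: (m, d) word vectors, hypothesis_vecs: (n, d) word vectors.
    Returns boolean masks over premise and hypothesis words.
    """
    # Normalize word vectors to unit length so dot products are cosine similarities.
    p = premise_vecs / np.linalg.norm(premise_vecs, axis=1, keepdims=True)
    h = hypothesis_vecs / np.linalg.norm(hypothesis_vecs, axis=1, keepdims=True)
    sim = p @ h.T  # (m, n) cosine similarity between every word pair
    # A word is highlighted if its best cross-sentence match is similar enough.
    premise_mask = sim.max(axis=1) >= threshold
    hypothesis_mask = sim.max(axis=0) >= threshold
    return premise_mask, hypothesis_mask
```

For entailment pairs, this tends to highlight the overlapping or near-synonymous content words, which is why it aligns with what human annotators mark.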

→ The model architecture includes two LSTM encoders, a cross-attention mechanism, and a classification layer.
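The cross-attention step between the two encoders can be sketched in NumPy. The dot-product scoring function and the shapes here are assumptions for illustration; the paper's exact attention formulation may differ.

```python
import numpy as np

def cross_attention(premise_h, hypothesis_h):
    """Dot-product cross-attention between two encoders' hidden states.

    premise_h: (m, d) LSTM hidden states for the premise.
    hypothesis_h: (n, d) LSTM hidden states for the hypothesis.
    Returns (m, n) attention weights (rows sum to 1) and (m, d) context vectors.
    """
    scores = premise_h @ hypothesis_h.T           # (m, n) alignment scores
    scores -= scores.max(axis=1, keepdims=True)   # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax over hypothesis positions
    # Context: a hypothesis summary for each premise step, fed to the classifier.
    context = weights @ hypothesis_h              # (m, d)
    return weights, context
```

It is the `weights` matrix from a model like this that the paper compares against human and heuristic annotation maps.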

Key Insights from this Paper 🤯:

→ The heuristic correlates reasonably well with human annotations, providing a potential automated evaluation method for plausibility.

→ Raw attention weights are only loosely related to plausible explanations.

→ Model-based attention often focuses on unimportant words, resulting in low plausibility compared to both human and heuristic maps.

Results 💯:

→ The heuristic method shows a better match with human annotations (AUC of 0.63 at epsilon=0.5) than the model-based attention.

→ Correlation between heuristic and human attention is moderate (Pearson: 0.52, Spearman: 0.53).
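The two correlation measures reported above can be computed directly. A minimal NumPy sketch (Spearman is Pearson on ranks; this version does no tie correction, unlike library implementations):

```python
import numpy as np

def pearson(x, y):
    # Pearson correlation: off-diagonal entry of the 2x2 correlation matrix.
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    # Spearman correlation: Pearson applied to the ranks of the data.
    # (Simple rank via double argsort; ties are not averaged here.)
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return pearson(rank(x), rank(y))
```

Applied to a heuristic attention map and a human annotation map flattened to vectors, these would yield the kind of moderate values (≈0.52 Pearson, ≈0.53 Spearman) the paper reports.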
