
"Decoding Reading Goals from Eye Movements"

The podcast on this paper is generated with Google's Illuminate.

Your eye movements reveal your reading goals - and AI can decode them just by watching your eyes

📚 https://arxiv.org/abs/2410.20779

🔍 Original Problem:

Can we decode a reader's goal (information seeking vs. ordinary reading) purely from their eye movements over a text? No prior work had attempted this, despite its potential applications in understanding human reading behavior.

-----

🛠️ Solution in this Paper:

→ Used eye-tracking data from 360 native English speakers reading 54 paragraphs

→ Implemented multiple model architectures:

- Eye Movements-Only Models (using just eye tracking data)

- Eye Movements + Text Models (combining both)

- Logistic Ensemble (combining all models)

→ Key models include:

- RoBERTa-Eye-F: Processes fixation-level eye-movement data jointly with the text

- BEyeLSTM: Runs an LSTM over the eye-movement sequence

- PostFusion-Eye: Applies cross-attention between text and eye-movement representations
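To make the fixation-level framing concrete, here is a minimal sketch of how fixation features might be aligned with the words they land on, roughly in the spirit of a fixation-level model such as RoBERTa-Eye-F. The field names, feature set, and alignment scheme are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch: pair each fixation with the word it lands on, keeping
# fixation order (which can revisit or skip words), so a sequence model can
# consume text and eye-movement features together. All names are hypothetical.

def align_fixations_with_text(words, fixations):
    """Return one record per fixation: the fixated word plus its
    eye-movement features (duration in ms, incoming saccade length)."""
    aligned = []
    for fix in fixations:
        aligned.append({
            "word": words[fix["word_index"]],
            "duration_ms": fix["duration_ms"],
            "saccade_len": fix["saccade_len"],
        })
    return aligned

words = ["The", "trial", "began", "yesterday"]
fixations = [
    {"word_index": 0, "duration_ms": 180, "saccade_len": 0},
    {"word_index": 2, "duration_ms": 240, "saccade_len": 9},   # skipped "trial"
    {"word_index": 1, "duration_ms": 150, "saccade_len": -7},  # regression
]
records = align_fixations_with_text(words, fixations)
print([r["word"] for r in records])  # fixation order, not word order
```

Note that the output follows the reader's scanpath, so skips and regressions are preserved as signal rather than flattened into per-word averages.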

-----

💡 Key Insights:

→ Eye movements contain strong signals for predicting reading goals

→ Fixation-based models outperform word-based models

→ Model ensemble significantly improves accuracy

→ Reading speed alone is a strong baseline predictor
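The reading-speed insight can be sketched as a trivial threshold classifier: information seekers tend to skim, so faster readers are more likely to be seeking. The threshold value and example numbers below are made-up assumptions for illustration, not the paper's fitted baseline.

```python
# Illustrative reading-speed baseline: label a trial "seeking" when the
# reader covers words faster than a (hypothetical) threshold.

def words_per_second(n_words, total_fixation_time_ms):
    """Reading speed over a paragraph, in words per second."""
    return n_words / (total_fixation_time_ms / 1000.0)

def speed_baseline(n_words, total_fixation_time_ms, threshold_wps=4.0):
    """Return 'seeking' above the speed threshold, else 'ordinary'."""
    speed = words_per_second(n_words, total_fixation_time_ms)
    return "seeking" if speed > threshold_wps else "ordinary"

print(speed_baseline(120, 20_000))  # 6 words/s
print(speed_baseline(120, 60_000))  # 2 words/s
```

That a one-feature rule like this is already a strong baseline shows how much of the goal signal lives in coarse reading behavior, which the full models then refine with fixation-level detail.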

-----

📊 Results:

→ Logistic Ensemble achieved best performance:

- 77.3% accuracy for New Items

- 64.6% accuracy for New Participants

- 64.3% accuracy for New Items & Participants

- 70.5% overall accuracy

→ RoBERTa-Eye-F was the best single model performer
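The winning Logistic Ensemble stacks a logistic regression on top of the individual models' probability outputs. A minimal sketch of that idea, with hand-picked weights and made-up per-model scores (the paper fits these on held-out data):

```python
import math

# Sketch of a logistic ensemble: combine per-model P(information seeking)
# scores through a logistic regression. Weights and bias are illustrative
# assumptions, not fitted values from the paper.

def logistic_ensemble(model_probs, weights, bias=0.0):
    """Weighted sum of base-model probabilities, squashed to [0, 1]."""
    z = bias + sum(w * p for w, p in zip(weights, model_probs))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical outputs from three base models for one trial:
probs = [0.81, 0.64, 0.72]   # e.g. RoBERTa-Eye-F, BEyeLSTM, PostFusion-Eye
weights = [2.0, 1.0, 1.5]
p = logistic_ensemble(probs, weights, bias=-2.5)
print("seeking" if p > 0.5 else "ordinary")
```

Stacking helps here because the base models make partly uncorrelated errors (fixation-level vs. word-level vs. fusion views of the same trial), so the meta-classifier can trade them off.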
