"SAFE: Slow and Fast Parameter-Efficient Tuning for Continual Learning with Pre-Trained Models"

The podcast on this paper was generated with Google's Illuminate.

Two brains are better than one: SAFE's dual-learner approach prevents AI amnesia

SAFE, proposed in this paper, pairs a wise old turtle with a quick young rabbit to help AI remember everything it learns.

This paper introduces SAFE (Slow And Fast Parameter-Efficient tuning), a framework that tackles catastrophic forgetting in continual learning by combining two complementary learners: a slow learner preserves the general knowledge of the pre-trained model, while a fast learner rapidly adapts to new concepts. Together they achieve superior performance without storing any historical data.
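
To make the dual-learner layout concrete, here is a minimal PyTorch-style sketch of how a slow/fast pair of parameter-efficient adapters could sit on a frozen pre-trained backbone. Everything here (the `Adapter` bottleneck design, `SlowFastModel`, the separate heads) is an illustrative assumption, not the authors' code:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Lightweight bottleneck adapter: the only trainable parameters."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        # Residual bottleneck keeps features close to the pre-trained ones.
        return x + self.up(torch.relu(self.down(x)))

class SlowFastModel(nn.Module):
    """Frozen backbone with a slow and a fast adapter branch."""
    def __init__(self, backbone: nn.Module, dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone  # assumed to return pooled (B, dim) features
        for p in self.backbone.parameters():
            p.requires_grad = False            # pre-trained weights stay fixed
        self.slow = Adapter(dim)               # frozen after the first session
        self.fast = Adapter(dim)               # keeps updating on new sessions
        self.slow_head = nn.Linear(dim, num_classes)
        self.fast_head = nn.Linear(dim, num_classes)

    def forward(self, x):
        feat = self.backbone(x)
        f_slow, f_fast = self.slow(feat), self.fast(feat)
        return self.slow_head(f_slow), self.fast_head(f_fast), f_slow, f_fast
```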

-----

https://arxiv.org/abs/2411.02175

🤔 Original Problem:

Continual learning methods built on pre-trained models face a dilemma: tuning parameters during adaptation erodes the model's inherent general knowledge, while freezing them completely limits plasticity for new concepts. Existing workarounds often require storing old data, and all of these compromises lead to suboptimal performance.

-----

🔧 Solution in this Paper:

→ SAFE employs a slow learner that explicitly transfers knowledge from the pre-trained model using correlation matrices and a transfer loss

→ The fast learner continuously updates to learn new concepts while being guided by the slow learner through feature alignment

→ An entropy-based aggregation strategy dynamically combines predictions from both learners during inference

→ Cross-classification loss with feature alignment prevents catastrophic forgetting without storing exemplars (a rough loss sketch follows this list)
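
Here is a rough sketch of the training-time losses named above, assuming PyTorch and the `SlowFastModel` layout sketched earlier. The correlation-based transfer loss follows the generic recipe of pushing the cross-correlation between pre-trained and adapted features toward the identity; the exact formulation and the `lambda_off` weight are assumptions, not the paper's equations:

```python
import torch
import torch.nn.functional as F

def transfer_loss(feat_pretrained, feat_slow, lambda_off=0.005):
    """Slow-learner objective (sketch): make corresponding feature
    dimensions of the pre-trained and adapted features correlate
    (diagonal -> 1) while decorrelating the rest, so the adapter
    inherits the model's general knowledge."""
    z1 = F.normalize(feat_pretrained, dim=0)   # unit-norm each feature column
    z2 = F.normalize(feat_slow, dim=0)
    c = z1.T @ z2                              # D x D cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_off * off_diag

def guidance_losses(f_fast, f_slow, slow_head, labels):
    """Fast-learner guidance (sketch): align fast features with the frozen
    slow learner's, and cross-classify fast features through the slow head
    so the fast branch stays compatible with earlier decision boundaries."""
    align = (1 - F.cosine_similarity(f_fast, f_slow.detach(), dim=1)).mean()
    cross = F.cross_entropy(slow_head(f_fast), labels)  # slow_head kept frozen
    return align, cross
```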

-----

💡 Key Insights:

→ Direct parameter-efficient tuning loses general knowledge from pre-trained models

→ Freezing parameters hinders model plasticity for new concepts

→ Dynamic combination of the slow and fast learners provides the optimal balance (see the aggregation sketch below)
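
That dynamic combination can be sketched as entropy-weighted averaging at inference, assuming PyTorch; the paper's exact weighting scheme may differ from this illustration:

```python
import torch

@torch.no_grad()
def aggregate(logits_slow, logits_fast, eps=1e-8):
    """Entropy-based aggregation (sketch): per sample, the learner with the
    lower predictive entropy (i.e., higher confidence) gets the larger weight."""
    p_s = logits_slow.softmax(dim=1)
    p_f = logits_fast.softmax(dim=1)
    h_s = -(p_s * (p_s + eps).log()).sum(dim=1)           # slow entropy, (B,)
    h_f = -(p_f * (p_f + eps).log()).sum(dim=1)           # fast entropy, (B,)
    w = torch.stack([-h_s, -h_f], dim=1).softmax(dim=1)   # (B, 2) weights
    return w[:, :1] * p_s + w[:, 1:] * p_f                # combined prediction
```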

-----

📊 Results:

→ Surpassed the state of the art by 4.4% on ImageNet-A

→ Improved average accuracy by 2.1% across six datasets

→ Achieved 67.82% accuracy on DomainNet domain-incremental learning
