0:00
/
0:00
Transcript

"Distillation Quantification for Large Language Models"

Below podcast is generated with Google's Illuminate.

Distillation technique in LLM is making waves after DeepSeek-R1

This framework helps measure LLM distillation, promoting diversity and robustness in smaller models.

Revealing potential over-distillation issues.

Proposes a framework to quantify the distillation, or knowledge transfer, from larger to smaller LLMs. It aims to address the issue of over-distillation, which can lead to a lack of diversity and robustness in smaller models.

-----

https://arxiv.org/abs/2501.12619

Original Problem 🤔:

→ Over-reliance on distillation from advanced LLMs can hinder the development of diverse and robust smaller LLMs.

→ Current research lacks clear metrics to quantify the degree of distillation in LLMs.

-----

Solution in this Paper 💡:

→ This paper introduces two metrics: Response Similarity Evaluation (RSE) and Identity Consistency Evaluation (ICE).

→ RSE compares responses from smaller LLMs to a reference LLM (GPT) across style, structure, and content.

→ ICE uses a jailbreaking framework (GPTFuzz) to probe LLMs for inconsistencies in their self-reported identity information, revealing potential distillation from source LLMs.

-----

Key Insights 😲:

→ Base LLMs show higher distillation degrees than aligned LLMs.

→ Many well-known LLMs, both closed and open-source, show high distillation degrees (except Claude, Gemini, and Doubao).

-----

Results 📊:

→ GLM4-Plus, Qwen-Max, and Deepseek-V3 showed the highest suspected distillation degrees based on ICE.

→ GPT series models exhibited the highest response similarity to GPT-4 in RSE (average similarity of 4.240).

→ Claude, Doubao, and Llama3.1 showed lower response similarity, suggesting less distillation (around 3.6-3.7 average similarity).

Discussion about this video

User's avatar