
"Alopex: A Computational Framework for Enabling On-Device Function Calls with LLMs"

The podcast on this paper is generated with Google's Illuminate.

Rule-based data generation outperforms LLM-based approaches for function call training

Alopex enables precise on-device function calls while preserving the LLM's general abilities

https://arxiv.org/abs/2411.05209

Original Problem 🤔:

Function call capabilities in on-device LLMs face three major challenges: scarce training data requiring manual verification, ineffective question formatting leading to inaccuracies, and catastrophic forgetting of general abilities after fine-tuning.

-----

Solution in this Paper 🛠️:

→ Introduces the Alopex framework, which uses a Rule-Based Logic approach to generate high-quality training data without manual verification

→ Implements a novel "description-question-output" format that outperforms existing approaches and reduces function information leakage

→ Uses a 1:1 data mixing strategy with textbook datasets to prevent catastrophic forgetting while maintaining general capabilities

→ Provides an automated adaptation pipeline for data generation and LLM fine-tuning
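The two data-side ideas above can be illustrated together: a rule-based generator fills slots into question templates and emits each example in the "description-question-output" order, with the function description placed before the question. This is a minimal sketch; the function spec, template wording, and field names (`prompt`, `output`) are illustrative assumptions, not taken from the paper.

```python
import json
import random

# Hypothetical function spec; the paper's actual API schemas are not shown here.
FUNCTIONS = {
    "set_alarm": {
        "description": "set_alarm(time: str) -- Sets an alarm for the given time.",
        "templates": [
            "Wake me up at {time}.",
            "Please set an alarm for {time}.",
        ],
        "slots": {"time": ["7:00 AM", "6:30 PM", "noon"]},
    },
}

def generate_example(name: str, rng: random.Random = random.Random(0)) -> dict:
    """Generate one training example by rule-based slot filling,
    arranged in the description-question-output order."""
    spec = FUNCTIONS[name]
    slots = {k: rng.choice(v) for k, v in spec["slots"].items()}
    question = rng.choice(spec["templates"]).format(**slots)
    call = f"{name}({', '.join(f'{k}={v!r}' for k, v in slots.items())})"
    return {
        # The function description comes FIRST, before the question --
        # the ordering the paper reports improves out-of-logic accuracy.
        "prompt": f"Function: {spec['description']}\nQuestion: {question}\n",
        "output": call,
    }

print(json.dumps(generate_example("set_alarm"), indent=2))
```

Because every output is constructed mechanically from the same slots that filled the question, each example is correct by construction, which is why this kind of generation needs no manual verification.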

-----

Key Insights 🔍:

→ Rule-Based Logic generates training data with 99% accuracy compared to 70% for LLM-based generation

→ Placing function descriptions before questions improves out-of-logic accuracy

→ Mixing function call data with textbook datasets in 1:1 ratio preserves general capabilities

→ Works effectively with smaller LLMs (1.6B-2B parameters)
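The 1:1 mixing insight above can be sketched as a small corpus-building step: sample equal counts from the function-call data and a general textbook dataset, then shuffle. The function name and data shapes here are assumptions for illustration; only the 1:1 ratio comes from the paper.

```python
import random

def mix_one_to_one(function_call_data: list, textbook_data: list, seed: int = 0) -> list:
    """Build a fine-tuning corpus that mixes function-call examples with
    general textbook examples at a 1:1 ratio -- the proportion the paper
    reports prevents catastrophic forgetting of general abilities."""
    n = min(len(function_call_data), len(textbook_data))
    rng = random.Random(seed)
    mixed = rng.sample(function_call_data, n) + rng.sample(textbook_data, n)
    rng.shuffle(mixed)  # interleave the two sources before fine-tuning
    return mixed

# Toy stand-ins for the two data sources.
fc = [{"source": "function_call", "id": i} for i in range(100)]
tb = [{"source": "textbook", "id": i} for i in range(500)]
corpus = mix_one_to_one(fc, tb)
print(len(corpus))  # 200: 100 examples from each source
```

Capping both sides at the size of the smaller dataset keeps the ratio at exactly 1:1 even when, as here, the general-purpose data is far larger than the function-call data.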

-----

Results 📊:

→ Achieves 99% function call accuracy across multiple models

→ Maintains performance on general tasks (MMLU, GSM8K, etc.)

→ Significantly reduces catastrophic forgetting compared to baseline methods

→ Fox-1-1.6B shows highest robustness and average accuracy among tested models
