"LLM-Based Routing in Mixture of Experts: A Novel Framework for Trading"

A podcast on this paper was generated with Google's Illuminate.

LLMOE uses LLMs as routers in Mixture of Experts for stock trading, improving expert selection with multimodal data like news and prices.

The goal is to dynamically route stock data and news to specialized expert models for better predictions.

-----

https://arxiv.org/abs/2501.09636

Original Problem 😔:

→ Traditional trading methods struggle with market complexity.

→ Deep learning models often rely on single predictors, leading to instability.

→ Existing MoE models use static routers, neglecting textual data and context.

-----

Solution in this Paper 💡:

→ LLMOE replaces the traditional neural network router in MoE with an LLM.

→ This LLM router processes both historical stock prices and news headlines.

→ The LLM dynamically selects the most suitable expert model based on this context (see the router sketch after this list).

→ Experts are categorized based on market sentiment (optimistic or pessimistic).

→ An "All-in All-out" trading strategy is used, maximizing returns based on expert predictions.

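And a sketch of how an "All-in All-out" rule could be applied in a backtest, assuming one daily signal from the selected expert; the function name and cash handling are illustrative, not the paper's exact procedure.

```python
# All-in All-out: go fully long when the selected expert predicts an up move,
# liquidate and hold cash otherwise.
# prices[t]  = closing price on day t
# signals[t] = expert's prediction for the move from day t to t+1 (sign matters)

def all_in_all_out(prices, signals, initial_cash=10_000.0):
    cash, shares = initial_cash, 0.0
    for t in range(len(prices) - 1):
        if signals[t] > 0 and shares == 0.0:
            shares, cash = cash / prices[t], 0.0        # all-in: convert all cash to stock
        elif signals[t] <= 0 and shares > 0.0:
            cash, shares = shares * prices[t], 0.0      # all-out: liquidate the position
    final_value = cash + shares * prices[-1]
    return (final_value / initial_cash - 1.0) * 100.0   # total return in percent
```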
-----

Key Insights from this Paper 🤔:

→ LLMs enhance expert selection by integrating multimodal data and context.

→ Dynamic routing improves performance compared to static MoE models.

→ Context-based expert selection leads to more stable and robust trading strategies.

-----

Results 🚀:

→ LLMOE achieved a Total Return of 65.44% on the MSFT dataset and 31.43% on the AAPL dataset, outperforming the baseline models by over 25%.

→ LLMOE also improved the Sharpe Ratio to 2.14 and 1.17, and the Calmar Ratio to 5.91 and 2.12, for MSFT and AAPL respectively.

→ LLMOE reduced Maximum Drawdown to 11.32% and 18.21% on the MSFT and AAPL datasets, respectively.
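
For reference, a sketch of how these metrics are conventionally computed, assuming daily returns and a zero risk-free rate; the paper may use different conventions.

```python
import numpy as np

def sharpe_ratio(daily_returns, periods_per_year=252):
    # Annualized mean return over annualized volatility (risk-free rate assumed zero).
    r = np.asarray(daily_returns)
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

def max_drawdown(equity_curve):
    # Largest peak-to-trough loss as a fraction, e.g. 0.1132 = 11.32%.
    eq = np.asarray(equity_curve)
    peaks = np.maximum.accumulate(eq)
    return ((peaks - eq) / peaks).max()

def calmar_ratio(daily_returns, equity_curve, periods_per_year=252):
    # Annualized return divided by maximum drawdown.
    r = np.asarray(daily_returns)
    annual_return = (1 + r).prod() ** (periods_per_year / len(r)) - 1
    return annual_return / max_drawdown(equity_curve)
```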