This paper investigates how effectively LLMs can process and execute financial trading instructions, proposing a pipeline to convert natural language trading orders into standardized formats.
https://arxiv.org/abs/2412.04856
Original Problem 🤔:
Current trading systems struggle to process natural language inputs, especially with complex or incomplete trading orders. This creates a gap between human-generated strategies and automated execution systems.
-----
Solution in this Paper 🛠️:
→ Developed an intelligent trade-instruction recognition pipeline that converts natural-language orders into a standardized JSON format (see the sketch after this list)
→ Created a 500-item dataset of diverse trading instructions, enhanced with strategic noise injection and data segmentation
→ Designed evaluation metrics (generation, accuracy, missing, and follow-up rates) to assess LLMs' performance in processing trading instructions
→ Implemented comprehensive security checks to verify card ownership and prevent unauthorized access
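
A minimal sketch of what such a conversion step might look like, assuming an illustrative field set (side, symbol, quantity, order type, limit price, time in force) rather than the paper's exact schema, and a stubbed-out LLM call:

```python
import json

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call; returns a canned JSON response."""
    return json.dumps({
        "side": "buy",
        "symbol": "AAPL",
        "quantity": 200,
        "order_type": "market",
        "limit_price": None,
        "time_in_force": "day",
    })

def extract_order(instruction: str) -> dict:
    # Ask the model for a fixed set of fields; anything the user omitted
    # should come back as null so downstream checks can trigger follow-ups.
    prompt = (
        "Convert this trading instruction into JSON with keys side, symbol, "
        "quantity, order_type, limit_price, time_in_force. Use null for any "
        f"detail not provided.\n\n{instruction}"
    )
    return json.loads(call_llm(prompt))

order = extract_order("Buy 200 shares of AAPL at market, good for the day.")
print(order)
```

Returning explicit nulls for unspecified details is what lets the pipeline decide between executing an order and asking a follow-up question.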
-----
Key Insights 💡:
→ LLMs show high generation rates but struggle with accuracy and completeness
→ Models tend to over-interrogate, collecting more information than necessary
→ Security vulnerabilities exist in token access and ownership verification
→ Balancing execution efficiency with model accuracy remains challenging
-----
Results 📊:
→ Generation rates: 87.50% to 98.33%
→ Accuracy rates: 5% to 10%
→ Missing rates: 14.29% to 67.29%
→ Follow-up rates: 100% across all tested models (a sketch of how such metrics can be computed follows)
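
A rough sketch of how these rates could be computed, assuming definitions along these lines (the paper's exact formulas may differ): generation rate = share of instructions yielding valid JSON; accuracy rate = share whose JSON matches the ground truth; missing rate = share of required fields absent from generated outputs; follow-up rate = share of cases where the model asked a clarifying question.

```python
def evaluate(results: list[dict]) -> dict:
    # Each result is assumed to hold the parsed output ("json", or None if
    # parsing failed), the ground-truth order ("truth"), and a flag for
    # whether the model asked a clarifying question ("asked_follow_up").
    n = len(results)
    generated = [r for r in results if r.get("json") is not None]
    required = ("side", "symbol", "quantity", "order_type", "time_in_force")

    missing_fields = sum(
        sum(1 for f in required if r["json"].get(f) is None) for r in generated
    )
    return {
        "generation_rate": len(generated) / n,
        "accuracy_rate": sum(r["json"] == r["truth"] for r in generated) / n,
        "missing_rate": missing_fields / (len(required) * max(len(generated), 1)),
        "follow_up_rate": sum(r["asked_follow_up"] for r in results) / n,
    }
```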