"SILC-EFSA: Self-aware In-context Learning Correction for Entity-level Financial Sentiment Analysis"

A podcast on this paper, generated with Google's Illuminate, is included below.

Two-stage sentiment analysis with built-in error correction improves financial predictions

This paper introduces SILC, a two-stage framework for entity-level financial sentiment analysis that combines LLM fine-tuning with a self-aware in-context learning correction stage to improve accuracy.

-----

https://arxiv.org/abs/2412.19140v1

🤔 Original Problem:

Financial sentiment analysis lacks large entity-level datasets and struggles with multi-entity texts where different entities have varying sentiments within the same context.

-----

🔧 Solution in this Paper:

→ SILC uses a two-stage approach: the first stage fine-tunes base LLMs (Llama2-7B for English, Baichuan2-7B for Chinese) to generate initial sentiment predictions

→ The second stage implements a GNN-based example retriever to find relevant correction examples from the training data

→ The system retains all incorrectly predicted samples while sampling only a portion of the correct predictions to train the correction model

→ A Graph Attention Network processes both linguistic and sentiment features to retrieve similar examples for correction (see the retrieval sketch after this list)
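
To make the retrieval step concrete, here is a minimal, hypothetical PyTorch sketch of a single-head graph-attention layer that embeds candidate examples (node features concatenating linguistic and stage-1 sentiment signals) and ranks them by cosine similarity to a query. The class and function names, the fully connected toy graph, and all dimensions are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

class MiniGATLayer(torch.nn.Module):
    """Single-head graph attention layer (simplified stand-in for the paper's GAT retriever)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = torch.nn.Linear(in_dim, out_dim, bias=False)   # feature projection
        self.a = torch.nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) 0/1 adjacency with self-loops
        h = self.W(x)                                            # (N, out_dim)
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)                     # broadcast h_i
        hj = h.unsqueeze(0).expand(n, n, -1)                     # broadcast h_j
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))               # attend only over edges
        alpha = torch.softmax(e, dim=-1)                         # attention weights
        return alpha @ h                                         # aggregated node embeddings


def retrieve_correction_examples(query_feat, train_feats, k=3):
    """Rank training samples by similarity to the query in GAT-embedded space
    and return the indices of the top-k in-context correction examples."""
    x = torch.cat([query_feat.unsqueeze(0), train_feats], dim=0)  # node 0 = query
    adj = torch.ones(x.size(0), x.size(0))                        # fully connected toy graph
    gat = MiniGATLayer(x.size(1), 64)   # in practice the GAT would be trained; random init here
    z = gat(x, adj)
    sims = F.cosine_similarity(z[0:1], z[1:], dim=-1)             # query vs. each training node
    return sims.topk(k).indices.tolist()


# Toy usage: features concatenate hypothetical "linguistic" (sentence-embedding) and
# "sentiment" (stage-1 prediction) components.
if __name__ == "__main__":
    torch.manual_seed(0)
    query = torch.randn(32)            # hypothetical query features
    train = torch.randn(100, 32)       # hypothetical training-pool features
    print(retrieve_correction_examples(query, train, k=3))
```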

-----

💡 Key Insights:

→ Three in-context examples provide optimal performance for the model

→ Retaining 60-80% of the correctly predicted samples yields the best correction results (see the construction sketch after this list)

→ Entity-level sentiment shows higher correlation with crypto prices than sequence-level analysis
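
A rough sketch of how such a correction training pool and a three-shot correction prompt could be assembled, assuming hypothetical sample fields `text`, `entity`, `pred`, and `gold`; the authors' exact filtering and prompt format may differ.

```python
import random

def build_correction_set(samples, keep_correct_ratio=0.7, seed=0):
    """Assemble the stage-2 training pool: keep every sample the stage-1 model
    got wrong, and keep only a fraction of the ones it got right.
    keep_correct_ratio ~0.6-0.8 mirrors the paper's reported sweet spot."""
    rng = random.Random(seed)
    wrong = [s for s in samples if s["pred"] != s["gold"]]
    right = [s for s in samples if s["pred"] == s["gold"]]
    kept_right = rng.sample(right, int(len(right) * keep_correct_ratio))
    return wrong + kept_right


def correction_prompt(query, examples):
    """Format a 3-shot correction prompt: each retrieved example shows the entity,
    the initial prediction, and the corrected label; the query comes last."""
    lines = []
    for ex in examples[:3]:  # three in-context examples worked best in the paper
        lines.append(f"Text: {ex['text']}\nEntity: {ex['entity']}\n"
                     f"Initial prediction: {ex['pred']}\nCorrected label: {ex['gold']}\n")
    lines.append(f"Text: {query['text']}\nEntity: {query['entity']}\n"
                 f"Initial prediction: {query['pred']}\nCorrected label:")
    return "\n".join(lines)
```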

-----

📊 Results:

→ Improved F1 score by 5.1% over previous methods on the FinEntity dataset

→ Achieved an RMSE of 0.07936 in Bitcoin price prediction (see the RMSE sketch after this list)

→ Outperformed GPT-4 and GPT-3.5 on both English and Chinese datasets
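
For reference, RMSE = sqrt(mean((y_pred - y_true)^2)); a minimal sketch with made-up normalized prices, not the paper's data or preprocessing.

```python
import numpy as np

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual values
    (e.g., normalized Bitcoin prices); the paper reports 0.07936."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

# Toy example with made-up normalized prices (not the paper's data):
print(rmse([0.51, 0.48, 0.55], [0.50, 0.47, 0.57]))
```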
