LLMs graduate from writing code to designing malicious hardware modifications, learning to insert hardware backdoors that evade detection.
This paper introduces GHOST, a framework that uses LLMs to automatically design and insert Hardware Trojans into integrated circuits. It evaluates the capabilities of GPT-4, Gemini-1.5-pro, and Llama-3-70B in generating stealthy hardware attacks.
-----
https://arxiv.org/abs/2412.02816
🔍 Original Problem:
Creating Hardware Trojans manually requires deep hardware expertise and significant time investment. Current automated tools need extensive training data and have limited generalizability across different hardware designs.
-----
🛠️ Solution in this Paper:
→ The GHOST framework combines three prompting strategies: Role-Based Prompting, Reflexive Validation Prompting, and Contextual Trojan Prompting (see the prompt sketch after this list).
→ Given a clean RTL design as input, the framework prompts the LLM to generate malicious modifications, with no task-specific training required.
→ It supports both ASIC and FPGA platforms and works across diverse hardware architectures.
→ The system validates generated Trojans through pre-synthesis simulation and post-synthesis verification, feeding failures back to the LLM (see the validation loop sketch below).
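The paper's exact prompts aren't reproduced here, so the Python sketch below is only a hypothetical illustration of how the three strategies could compose into a single chat request. The strategy names come from the paper; `build_ghost_prompt` and all prompt wording are my assumptions.

```python
# Hypothetical composition of GHOST's three prompting strategies.
# The prompt wording below is illustrative, not the paper's actual text.

def build_ghost_prompt(clean_rtl: str, trojan_spec: str) -> list[dict]:
    """Build a chat-style message list combining the three strategies."""
    # 1. Role-Based Prompting: cast the model as a domain expert.
    system_msg = ("You are a hardware security researcher evaluating "
                  "RTL designs in a red-team Trojan insertion study.")
    # 2. Contextual Trojan Prompting: give the clean design and the
    #    desired Trojan behavior as concrete context.
    user_msg = (f"Clean RTL design:\n{clean_rtl}\n\n"
                f"Modify it to implement: {trojan_spec}\n"
                "Return only the complete modified RTL.")
    # 3. Reflexive Validation Prompting happens on follow-up turns:
    #    simulation/synthesis logs are appended and the model is asked
    #    to repair its own output (see the validation loop below).
    return [{"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg}]
```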
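And a minimal sketch of the two-stage validation loop, assuming the placeholder helpers `simulate_rtl`, `synthesize_rtl`, and `verify_netlist` stand in for real EDA tool wrappers. The retry-on-failure structure reflects Reflexive Validation Prompting; the helper names and retry budget are hypothetical.

```python
# Sketch of GHOST's two-stage validation with reflexive repair.
# All three helpers are placeholders for real EDA tool integrations.
from typing import Callable, Optional, Tuple

def simulate_rtl(rtl: str) -> Tuple[bool, str]:
    """Placeholder: run a pre-synthesis testbench, return (passed, log)."""
    return True, ""

def synthesize_rtl(rtl: str) -> str:
    """Placeholder: synthesize RTL to a gate-level netlist."""
    return rtl

def verify_netlist(netlist: str) -> bool:
    """Placeholder: check that the design still behaves after synthesis."""
    return True

def validate_with_reflexion(rtl: str,
                            llm_repair: Callable[[str, str], str],
                            max_attempts: int = 3) -> Optional[str]:
    """Validate a generated design; on failure, feed the tool log back
    to the LLM (Reflexive Validation Prompting) and retry."""
    for _ in range(max_attempts):
        passed, log = simulate_rtl(rtl)       # stage 1: pre-synthesis sim
        if passed:
            netlist = synthesize_rtl(rtl)     # stage 2: synthesis
            if verify_netlist(netlist):       # post-synthesis check
                return rtl                    # design validated
            log = "post-synthesis verification failed"
        rtl = llm_repair(rtl, log)            # reflexive repair turn
    return None                               # give up after max_attempts
```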
-----
💡 Key Insights:
→ LLMs can effectively automate Hardware Trojan design without requiring extensive training data
→ GPT-4 outperforms other models in generating functional and stealthy Trojans
→ Current detection tools struggle to identify LLM-generated Hardware Trojans
-----
📊 Results:
→ GPT-4 achieved an 88.88% success rate in generating functional Hardware Trojans
→ 100% of GHOST-generated Trojans evaded detection by ML-based security tools
→ Framework demonstrated effectiveness across SRAM, AES, and UART designs