LlamaRestTest uses fine-tuned and quantized LLMs to improve the effectiveness and efficiency of REST API testing, dynamically refining API requests based on server feedback.
-----
https://arxiv.org/abs/2501.08598
Original Problem 🤖:
→ Current REST API testing tools struggle with complex parameter formats and inter-parameter dependencies.
→ State-of-the-art tools that enrich specifications with natural-language analysis lack dynamic interaction: they cannot refine requests based on server feedback.
-----
Solution in this Paper 💡:
→ LlamaRestTest employs two fine-tuned LLMs: LlamaREST-IPD for identifying inter-parameter dependencies and LlamaREST-EX for generating input values.
→ LlamaREST-IPD uses server responses to refine parameter selection, ensuring valid parameter combinations.
→ LlamaREST-EX generates semantically valid values from parameter descriptions and server feedback (this feedback loop is sketched after this list).
→ Quantization reduces model size and improves efficiency in resource-constrained environments (see the loading sketch below).
→ LlamaRestTest integrates with ARAT-RL, an adaptive REST API testing framework using reinforcement learning.
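The post does not show the actual model interface, so here is a minimal sketch of what the feedback-driven loop might look like. `ipd_model` and `ex_model` are assumed to be callables wrapping the fine-tuned models (prompt in, text out); the prompt wording, retry logic, and membership check are illustrative assumptions, not the paper's implementation.

```python
import requests

def test_operation(base_url, path, param_descs, ipd_model, ex_model,
                   max_attempts=3):
    """One feedback-driven testing round: LlamaREST-IPD picks a valid
    parameter combination, LlamaREST-EX fills in values, and the
    server's error responses drive refinement on retries."""
    feedback = ""
    resp = None
    for _ in range(max_attempts):
        # Ask LlamaREST-IPD which parameters may be sent together,
        # given the parameter list and any server feedback so far.
        chosen = ipd_model(
            f"Parameters: {sorted(param_descs)}\n"
            f"Server feedback: {feedback}\n"
            "Which parameters should be sent together?")
        # Ask LlamaREST-EX for a semantically valid value per parameter.
        values = {
            p: ex_model(f"Description: {param_descs[p]}\n"
                        f"Server feedback: {feedback}\nValue:")
            for p in param_descs
            if p in chosen  # naive membership check on the model's answer
        }
        resp = requests.get(base_url + path, params=values)
        if resp.status_code < 400:   # request accepted: stop refining
            break
        feedback = resp.text         # 4xx/5xx: feed the error back
    return resp
```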
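For the quantization step, a minimal sketch of loading a model with 8-bit weights via Hugging Face transformers and bitsandbytes; the base model name and toolchain are assumptions, and the paper's exact quantization setup may differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3-8B"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load weights in 8-bit to cut memory use, trading a little accuracy
# for a large efficiency gain on resource-constrained hardware.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```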
-----
Key Insights from this Paper 🧠:
→ Fine-tuning small LLMs can outperform much larger models on REST API testing tasks, striking a balance between effectiveness and efficiency (a fine-tuning sketch follows this list).
→ Server feedback is crucial for refining test inputs and improving coverage by capturing dynamic API behavior.
→ Quantization significantly improves efficiency without substantial loss of accuracy.
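The post includes no training code; below is a minimal, assumed sketch of parameter-efficient (LoRA) fine-tuning with Hugging Face transformers and peft on a toy LlamaREST-EX-style example. The base model, hyperparameters, and data format are all assumptions, not the paper's actual setup.

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_id = "meta-llama/Meta-Llama-3-8B"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Train small LoRA adapters instead of all base weights
# (assumed config; the paper may fine-tune differently).
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Toy training pair in the spirit of LlamaREST-EX: description -> value.
examples = [{"text": "Parameter 'currency': ISO 4217 code. Value: USD"}]
ds = Dataset.from_list(examples).map(
    lambda e: tokenizer(e["text"], truncation=True), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="llamarest-ex-lora",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```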
-----
Results 📊:
→ LlamaRestTest achieved 48.2%-195.3% more branch coverage than other state-of-the-art tools.
→ It detected 204 internal server errors, 44 to 74 more than competing tools.
→ The 8-bit quantized model showed improvements of 13.4%, 21.9%, and 11.2% in method, branch, and line coverage, respectively, compared to the vanilla model.