We Tested LLMs for Prompt Injection Risk in Aviation Scenarios. Here's What We Found.
1,776 adversarial test cases against commercial AI systems processing standard aviation data formats. The full methodology, results, and mitigation framework.
of prompt injection attacks produced incorrect safety-critical outputs
aviation-specific adversarial test cases evaluated
attack success rate range depending on technique
What's Inside the White Paper
Full Attack Taxonomy
Detailed breakdown of prompt injection techniques tested against aviation AI systems, from direct instruction override to context manipulation.
1,776 Test Case Results
Complete results across all adversarial scenarios, including success rates by attack type, model behaviour patterns, and confidence analysis.
Aviation-Specific Vulnerability Analysis
Why NOTAMs, flight plan remarks, and eAIP procedures create an ideal attack surface for prompt injection in AI systems.
Mitigation Framework
Actionable recommendations for ANSPs, technology vendors, standards bodies, and regulators deploying AI in aviation.
Regulatory Context
How these findings relate to EASA Level 1 AI approvals and anticipated FAA Certification Position Papers.
Who Should Read This
- - Aviation AI/ML engineers and system integrators
- - Air Navigation Service Providers deploying AI decision support
- - Aviation cybersecurity and safety professionals
- - Regulators and standards body representatives
- - Technology vendors building AI for aviation operations
Free download — we'll send the PDF to your inbox
Related Resources
Blog: Prompt Injection Risk in Aviation AI
Read the summary of our findings and what they mean for the industry.
Pre-Flight Benchmark
Explore our open-source aviation AI evaluation framework on Hugging Face and UK AISI.
Free AI Chatbot Safety Test
Test your own AI chatbot against our adversarial evaluation framework.
