Airside Labs - AI Security Testing and Compliance
    New Research

    We Tested LLMs for Prompt Injection Risk in Aviation Scenarios. Here's What We Found.

    1,776 adversarial test cases against commercial AI systems processing standard aviation data formats. The full methodology, results, and mitigation framework.

    43%

    of prompt injection attacks produced incorrect safety-critical outputs

    1,776

    aviation-specific adversarial test cases evaluated

    24-65%

    attack success rate range depending on technique

    What's Inside the White Paper

    Full Attack Taxonomy

    Detailed breakdown of prompt injection techniques tested against aviation AI systems, from direct instruction override to context manipulation.

    1,776 Test Case Results

    Complete results across all adversarial scenarios, including success rates by attack type, model behaviour patterns, and confidence analysis.

    Aviation-Specific Vulnerability Analysis

    Why NOTAMs, flight plan remarks, and eAIP procedures create an ideal attack surface for prompt injection in AI systems.

    Mitigation Framework

    Actionable recommendations for ANSPs, technology vendors, standards bodies, and regulators deploying AI in aviation.

    Regulatory Context

    How these findings relate to EASA Level 1 AI approvals and anticipated FAA Certification Position Papers.

    Who Should Read This

    • - Aviation AI/ML engineers and system integrators
    • - Air Navigation Service Providers deploying AI decision support
    • - Aviation cybersecurity and safety professionals
    • - Regulators and standards body representatives
    • - Technology vendors building AI for aviation operations

    Free download — we'll send the PDF to your inbox