Airside Labs builds AI evaluation tools that reveal whether your systems or chatbots truly understand what they're doing—or just sound convincing. We help organisations in any market deploy AI with confidence through rigorous, industry- and use-case-specific testing frameworks.
MPhys Astr. CEng FIET CMgr MCMI
Alex has consistently pursued innovation throughout his career. Since building his first website from a shared house in 1998, while studying for his Astrophysics degree, he has spent 25+ years advancing technology across critical industries—from early e-commerce platforms during the internet boom to safety-critical systems that help defend multiple NATO nations at sea and in the air.
At BAE Systems, Alex architected Mission Planning systems used across the entire UK Royal Navy fleet. His innovation leadership was recognised with six company awards in 2009 for breakthrough product development across multiple defence divisions.
In aviation, Alex developed the Laminar Data Hub, which received the IHS Jane's Award for Innovation in 2016. He led development of XMAN, the arrival management system that reduces CO2 emissions at Heathrow and Gatwick, earning the IHS Jane's Technology Award in 2015. When the aviation industry required global data standards, Alex and the Snowflake team delivered the world's first compliant flight plan validation system in the Middle East, later presented to the United Nations as a model for global innovation.
As VP of R&D at Cirium (RELX PLC), Alex led innovation across 400+ engineers, creating new product categories that are being further developed today.
At Airside Labs, Alex applies decades of product development experience to ensuring AI systems truly understand their domains rather than simply producing plausible-sounding responses.
AI Chatbot Compliance
Comprehensive evaluation framework for customer service and internal AI chatbots, testing for accuracy, bias, safety, and regulatory compliance. Includes adversarial scenarios, edge case detection, and bias auditing to prevent costly customer service failures or PR disasters.
Pre-Flight Aviation Benchmark
Open-source evaluation suite with 300+ aviation-specific questions, testing everything from air traffic control to ground operations safety scenarios. Now part of the UK AI Safety Institute's community evaluation collection.
Custom Adversarial Testing
Industry-specific evaluation frameworks that expose AI weaknesses through edge cases, contradictory data, and domain-specific constraints your models haven't seen before.