POLICY & CERTIFICATION TEAM
For the past several months artificial intelligence (AI) has dominated the headlines due to significant technological advancements and its potential to address some of the world’s toughest challenges. But such a powerful technology — with promises of enormous benefits across medicine, defense, transportation, and more — also comes with challenges and risks.
While AI has been an integral part of aviation for many years, at Merlin, we’re taking a phased approach to ensure safety as we accelerate toward a future of autonomous flight, leveraging both traditional software and cutting-edge machine learning (ML) in our technology. To fully harness the benefits of AI, we believe in a framework that scales government-imposed rigor commensurate with the risks of the technology. Merlin also encourages government stakeholders to consider the following approaches to establish comprehensive and efficient coverage of actionable AI concerns.
Building a technology-agnostic safety & security continuum
The Federal Aviation Administration aligns its regulations for aircraft in the context of a safety continuum, which balances the needs of the flying public against the facilitation of innovation. The realm of possibility for AI is incredibly broad; thus, Merlin believes in establishing a similar framework by which regulatory involvement can be scaled with the potential risk of a given AI application.
Setting criteria for safe and secure human-machine teaming
Many AI applications will interact with a human. Merlin encourages the development of scalable safety and security criteria for these interactions. For future AI regulations to promote growth, those developing technologies may benefit from guidance on safe and secure human-machine teaming (HMT) interactions throughout the design and deployment process. At Merlin, how our system interacts with a human operator plays an integral role throughout the research and development process, and we believe effective HMT will ultimately enable the safe deployment of autonomous flight. More complex systems, such as autonomous flight systems, involve richer interactions with human operators and will require additional explicit discussion in regulations and policy.
Maturing assurance processes for machine learning
Merlin believes government can also support AI applications in aviation by maturing software and hardware assurance processes. For decades, civil aviation has required aircraft implementing automation to demonstrate that the system performs its automated functions reliably, accurately, and safely. A machine learning assurance process would similarly ensure that the ML performs as expected, verifies safety for the end user, and addresses security concerns.
***
Merlin’s thoughts on AI, while born of its aviation experience, can benefit the development and maturation of a more unified framework for AI. In many ways, the national dialogue about AI regulation echoes the aviation industry’s recurring conversations on integrating emerging technologies into the National Airspace System. Autonomous systems, such as the Merlin Pilot, will offer substantial advantages to flight operations globally, improving overall safety and performance, but they will also test established regulatory frameworks and introduce a new set of challenges. By focusing on the concepts of a technology-agnostic safety and security continuum, human-machine teaming, and ML assurance, the US can establish a comprehensive set of policy and guidance that builds on current government processes.