
Adapting regulation to AI

                                               

 

Leading the way with the AI roadmap of the European Union Aviation Safety Agency (EASA)

How can we ensure trust in a machine learning application? Certification in aviation consists in demonstrating compliance with safety requirements defined by the regulator. If you want to test landing in wet conditions, you can flood the runway; if you want to test the resistance of materials, you can twist a wing. But how do you evaluate a system that can improve its performance by learning from data sets? How do you give operators confidence in the “AI black box”? What is a relevant data set, or an ethical system?


In early 2018, the European Union Aviation Safety Agency (EASA) created its Innovation Network to deal with disruptive innovation topics. It allows the Agency to work on technologies that affect several aeronautical domains simultaneously and therefore require coordination between them.

 

The first topic tackled by the Innovation Network is Artificial Intelligence (AI), especially ML (Machine Learning) and DL (Deep Learning). The work led to the publication, in February 2020, of a roadmap entitled Artificial Intelligence Roadmap: A human-centric approach to AI in aviation (available on the EASA website).
This first version of the roadmap was drafted by a task force led by Guillaume Soudain, Software Senior Expert in the Certification Directorate.
He agreed to answer our questions and explain how EASA deals with the issues raised by a disruptive innovation like ML, and how regulation has to be adapted. This interview is the third article of Flyinstinct’s summer brief AI & Aeronautics, a promising relationship.

EASA, A CATALYST FOR INNOVATION

Flyinstinct: How could you present EASA and its main missions?

Guillaume Soudain: EASA is the European Union Aviation Safety Agency and was established in 2002 as an Agency of the European Union. It now employs more than 800 aviation experts and administrators mainly in its headquarters in Cologne, Germany.


EASA’s vision is “Ever safer and greener civil aviation”, and our main mission is to provide safe air travel for EU citizens in Europe and worldwide.

Figure 1. The EASA Headquarters in Cologne, Germany

Flyinstinct: What is EASA’s rulemaking process, from the initial workshop to the publication of a regulation?

G.S.: EASA rulemaking follows a well-established process that involves the relevant stakeholders throughout. Rulemaking priorities are set by the EASA European Plan for Aviation Safety (EPAS).

 

There are different levels of rules. For so-called “Hard law”, which defines the binding regulatory environment, EASA publishes Opinions for consultation and then for approval by the EU Commission and the European Parliament.

 

For “Soft Law”, which defines the relevant technical specifications for each domain (such as Certification Specifications (CS) and Acceptable Means of Compliance (AMC)), EASA initiates Terms of Reference to work on the development of a Notice of Proposed Amendment (NPA), which is published for consultation. The final agreed text is then endorsed through a Decision by Patrick Ky, EASA Executive Director.

 

It is important to highlight that other tools (such as Special Conditions) allow EASA to be more agile when dealing with innovative technologies. Recent examples include the publication of Special Conditions for VTOL (Vertical Take-Off and Landing) aircraft, which can take off and land vertically without using conventional runways, and for Light UAS (Unmanned Aircraft Systems).

Flyinstinct: EASA is mainly known for its role as a regulator and its oversight missions. Has EASA always been at the forefront of integrating new technologies into regulation? Is this AI roadmap a first of its kind?

G.S.: EASA has always accompanied industry in developing guidance for new methods and technologies, whether by producing generic means of compliance or project-specific CRIs (Certification Review Items), or by supporting and initiating research projects. For example, EASA supported industry in developing means of compliance for multi-core processors.


This trend has been reinforced by the publication in 2018 of the new Basic Regulation 2018/1139, which drove, for instance, the creation of the EASA Innovation Network, interconnecting staff from all EASA directorates.

 

Artificial Intelligence is indeed one of the first disruptive projects that has been launched and tackled through this new organization.

Flyinstinct: Are there other new technologies EASA is currently working on, either by organizing workshops or by writing roadmaps?

G.S.: Yes, indeed. For instance, the EASA Innovation Network has started working on topics including urban air mobility, increased automation in the cockpit, electric propulsion, blockchain, and additive manufacturing. Discussions on specific applications are conducted with our stakeholders through Memoranda of Cooperation (MoCs) and Innovation Partnership Contracts (IPCs, a way to start concrete discussions between a company and experts from the Agency).

DRAFTING A ROADMAP

Flyinstinct: When was AI first mentioned at EASA? 

G.S.: I would say that the first discussions with EASA on potential applications involving machine learning or deep learning in safety-critical applications started sometime in 2017. But when it comes to non-safety-critical applications, AI had been used in certain areas of aviation even earlier than that.

Flyinstinct: When was the work on the AI roadmap initiated? How many people worked on it, and what was your role?

G.S.: The task force charged with developing a first EASA AI roadmap was set up by our Directors at the end of 2018. I had the privilege of leading the small team of five people who came up with v1.0 of the EASA AI roadmap in early February 2020.


Following its publication, we launched a much wider multi-disciplinary team (25 people) to ensure the implementation of the roadmap.

Flyinstinct: What were the main objectives of this roadmap?

G.S.: The primary goals of the EASA AI Roadmap were to identify, for all affected domains of the Agency:
-    the key opportunities and challenges created by the introduction of AI in aviation; 
-    how this may impact the Agency in terms of organization, processes, and regulations;
-    the courses of action that the Agency should undertake to meet those challenges. 

INTERMEDIATE CONCLUSIONS

Flyinstinct: What conclusions have been reached on the main issues raised by the roadmap?

G.S.: The first conclusion reached by the task force was that, for the time being, the main challenge lies in the introduction of data-driven AI methods, namely machine learning and deep learning. This led us to focus the roadmap on those techniques.

 

The analysis of the various impacted domains of the Agency revealed another important conclusion. The challenge common to all types of safety-critical application, whatever the domain, is how to ensure trust in a machine learning application. To answer this fundamental question, the roadmap is driven by an ‘AI trustworthiness’ concept, organized around four building blocks: AI trustworthiness analysis, learning assurance, AI explainability, and AI safety risk mitigation.

Figure 2. The building-blocks of the EASA AI roadmap

Flyinstinct: Could you tell us more about these AI Trustworthiness Building Blocks?

G.S.: The first block, AI trustworthiness analysis, is a very important one, as it forms the interface between the EU ethical guidelines and EASA applicants. The European guidelines are deliberately high-level, so that they can be adapted to many fields and industries. However, they have to be translated into guidelines dedicated to aviation and its specificities. The final objective is to provide practicable material for our applicants.


Learning assurance is a concept developed to move from the assurance of programming-based systems to the assurance of learning-based systems, as existing development assurance methods are not suited to covering learning processes.

 

The explainability of AI deals with the ability of a system to be understandable by humans. It is therefore oriented towards the human-machine interface. The way AI outputs are communicated is essential, as it has a direct influence on the user’s trust.

 

The AI safety risk mitigation block is there to analyze situations where the “AI black box” cannot be opened sufficiently. In such cases, the level of risk mitigation should take the uncertainty of the AI into account.

Flyinstinct: What is the progress you made so far in developing these building blocks?

G.S.: We have progressed on all four building blocks. However, the most decisive steps have been made on the learning assurance side, in particular through the proposal of a complete process outline, the so-called W-cycle.

This cycle is an evolution of the typical V-cycle, adapted to machine learning concepts, and aims to be a framework for the Agency’s means of compliance. It requires some further refinement, but it already allows us to structure discussions with industrial applicants around the blocks composing the cycle. It appears to be in line with the needs of industry, and it will therefore be part of the first guidance proposed in 2021.

 

The dotted line marks the distinction between traditional development assurance processes (above) and the processes adapted to data-driven learning approaches (below). This new learning assurance approach will have to account for the specific phases of learning processes, as well as for their highly iterative nature.

Figure 3. The W-cycle

Flyinstinct: What are the main difficulties you faced during this work, and which issues may slow down the integration of AI in aeronautics?

G.S.: The main difficulty with ML is its disruptive nature, which imposes a shift from traditional system, software, and hardware development assurance methods towards assurance of the learning process. Of course, another challenge is training EASA staff in new fields of expertise like deep learning.

Flyinstinct: On that note, EASA members are not all AI experts. How do you draft a regulatory framework on such a technical subject without necessarily being able to understand all the concepts? Is there internal training, for example?

G.S.: We initially ensured the training of the relevant EASA experts in machine learning. But of course, deep learning is not the core competence of the Agency, so we also teamed up with industry stakeholders, like Daedalean, a Swiss-based start-up specialized in computer vision solutions for aviation.


They brought their expertise in deep learning, which we complemented with our expertise in certification and air operations. We released a joint IPC report in March 2020, which is available on the EASA website at https://easa.europa.eu/ai. It consolidated our approach to the definition of the learning assurance concept.

Flyinstinct: How could you describe an “ethical” system? 

G.S.: The main driving principle of our roadmap regarding the ethical aspects of AI is to ensure adherence to the seven ethical guidelines published by the EU Commission’s High-Level Expert Group on AI.

 

We are now mapping these high-level guidelines to the EASA AI trustworthiness building blocks, with a view to creating guidance adapted to aviation projects.

COLLABORATING WITH STAKEHOLDERS

Flyinstinct: With which stakeholders did you collaborate and are you still collaborating? 

G.S.: We are working on the implementation of the EASA AI roadmap with various stakeholders, such as industrial applicants, research institutes and consortia, standardization working groups, and EU and international institutions.

Flyinstinct: How does EASA collaborate with European institutions in this kind of work?

G.S.: Our work on innovation requires good coordination with the EU Commission, with our Member States, and with other aviation institutions like Eurocontrol. We are also, as usual, working with our international partners.

Flyinstinct: Same question with industrial applicants?

G.S.: There are various tools that can be used for collaboration with industry, such as Memoranda of Cooperation (MoCs), Innovation Partnership Contracts (IPCs), applications, or collaboration agreements for research.

THE EXAMPLE OF PILOTING AUTOMATION

Flyinstinct: The next disruptive innovation in operations may be the Single-Pilot Operations (SPO) in Commercial Air Transport (CAT). What is EASA’s position on this topic today? 

G.S.: EASA has launched specific projects dealing with enhanced multi-crew or single crew operations. It is in some respect a parallel activity to the AI roadmap, considering that AI is a major enabler for such capabilities. 

Flyinstinct: Same question with the VTOL as part of Urban Air Mobility development? 

G.S.: A first milestone has been reached with the release of a Special Condition for VTOL aircraft. Now, considering the total system approach, EASA is also working on a set of policy and guidance material for urban air mobility.

Flyinstinct: Would EASA be ready to certify a system like Autonomous Taxi, Take-Off and Landing (ATTOL) from Airbus at this time?

G.S.: In order to achieve such an objective, we will need to implement our AI roadmap up to level 3 AI, which aims at the certification of more autonomous products.

Flyinstinct: Will fully autonomous CAT remain an illusion, or is ML killing the pilot profession?

G.S.: With the historical introduction of automation into aircraft cockpits, the job of a pilot has changed tremendously since the 1950s. We can reasonably expect fully autonomous CAT to be technically feasible within the next decades; however, how far it is implemented will also depend on societal and political acceptance. In commercial air transport, the introduction of AI will surely bring another layer of evolution. Whatever the pace of this evolution, I am convinced that the human will remain at the center, with AI being a tool at their service. This is the direction we gave to our roadmap.

CONCLUSION AND INNOVATION PERSPECTIVES

Flyinstinct: The roadmap was published in February 2020. Did the conclusions change because of the COVID-19 crisis?

G.S.: The aviation domain has unfortunately already suffered a lot from COVID-19, and it will possibly take a few years until we can hope for a full recovery. It is, however, too early to predict how much this crisis will impact innovation. For now, we continue our effort, knowing that whatever the context, some priorities, such as environmental friendliness, will remain, and new ones could even emerge through the crisis. We therefore still expect safety-critical ML applications to emerge sooner rather than later.

Flyinstinct: Could you summarize the main roadmap deliverables and the three phases that come with the planning?

G.S.: Key milestones have been identified on the basis of industry roadmaps. To meet these important industry milestones, the EASA AI roadmap has established several deadlines, structured in three phases.

 

To ensure readiness for first approvals by 2025, we are planning a first exploratory phase, which consists of working with industry to develop initial guidelines.

Then a consolidation will take place in phase two, through a pragmatic approach that maximizes the effectiveness of rulemaking activities by preparing upstream guidance.

A third phase will take into account new developments in AI, as research in this field is very dynamic.

 

To summarize, the first actionable guidance is expected to be available in the very short term, to enable the first approvals by 2025. The finalized guidance will strengthen the first established concepts and pave the way for longer-term disruptive changes like SPO and autonomous flight.

Figure 4. The timeline of the roadmap

Flyinstinct: Three AI levels are mentioned in the planning. What are these levels and how have they been determined?

G.S.: AI has been theorized in three levels, which are not automation levels per se but are linked to the type of AI and its usage. The first level considers AI as assisting and augmenting human performance. The second level deals with task-sharing, where AI becomes more independent but remains under the control or supervision of the human. Level 3 implies autonomous operations. These may not necessarily be fully autonomous tasks, but the human would not be directly involved in the operations.

Flyinstinct: Will ML eventually be used to take decisions in areas where human beings are currently the decision makers?

G.S.: Most certainly, in particular when considering level 3 AI/ML applications. But we are not there yet, and this may need a decade or more to reach a level of trustworthiness that will enable such applications.

Flyinstinct: AI is expected to play a role in safety risk management, but the core of the roadmap is defining how to evaluate AI trustworthiness. Could the use of ML to assess risks call the integrity of EASA’s mission into question?

G.S.: The use of AI in the domain of safety intelligence and management would actually rely on the same concepts of trustworthiness as those developed in the EASA AI roadmap. As long as we can demonstrate an adequate level of trustworthiness, AI could be used as a key enabler to support safety management activities like the detection of emerging risks, the risk classification of occurrences, and the prioritization of safety issues.

The EU approach to AI is resolutely human-centric, and we consider AI as a tool to support human activities. Machine learning will indeed have an impact on EASA’s internal processes, but in a way that automates repetitive tasks and enhances the capability of EASA experts to work on high added-value ones. AI is therefore not seen as a replacement for the human anytime soon, but as better support for human decisions.

Flyinstinct: What are three ML/AI issues that EASA will have to face in the next 10 or 15 years, and that we should remember?

G.S.: I would choose:
-    The paradigm shift from development assurance to learning assurance, and the development of the AI trustworthiness framework.
-    The evaluation of the guaranteed performance of ML applications and its embodiment in safety analyses.
-    The emergence of more autonomous and adaptive AI.

DEFINITIONS

All definitions are taken from the EASA AI roadmap, which itself points to the original sources used to write them.

Artificial intelligence (AI) - technology that appears to emulate human performance typically by learning, coming to its own conclusions, appearing to understand complex content, engaging in natural dialogues with people, enhancing human cognitive performance (also known as cognitive computing) or replacing people on execution of non-routine tasks. Applications include autonomous vehicles, automatic speech recognition and generation, and detection of novel concepts and abstractions (useful for detecting potential new risks and aiding humans to quickly understand very large bodies of ever-changing information).

 

Automation - the use of control systems and information technologies reducing the need for human [input], typically for repetitive tasks.

 

Data-driven AI - the data-driven [approach] focuses on building a system that can [learn] what is the right answer based on having [trained on] a large number of [labelled] examples.

 

Deep learning (DL) - a specific type of machine learning based on the use of large neural networks to learn abstract representations of the input data by composing many layers.
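As a purely illustrative sketch of “composing many layers” (not taken from the roadmap; all values are made up), the following minimal Python snippet builds a two-layer network in which each layer is a weighted sum plus a bias followed by a non-linearity, and depth comes from function composition:

```python
import math

# Illustrative only: one "layer" computes, for each output unit, a weighted
# sum of its inputs plus a bias, then applies a tanh non-linearity. A deep
# network is simply the composition of several such layers.
def layer(weights, biases, inputs):
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -0.2]                                      # raw input features
h = layer([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1], x)  # first (hidden) layer
y = layer([[1.0, 1.0]], [0.0], h)                    # second (output) layer
```

Real deep learning systems stack many more layers and learn the weights from data; this sketch only shows the structural idea of layered composition.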

 

Machine learning (ML) - rooted in statistics and mathematical optimisation, machine learning is the ability of computer systems to improve their performance by exposure to data without the need to follow explicitly programmed instructions. [Machine learning is a branch of artificial intelligence].
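To make this definition concrete, here is a deliberately tiny, purely illustrative Python sketch (not EASA material): the parameter `w` is never set by an explicit instruction; it is adjusted through repeated exposure to example data, and the model’s error shrinks as it learns the rule hidden in the examples (y = 2x):

```python
import random

# Illustrative sketch of "improving performance by exposure to data":
# the update loop below never mentions the right answer; it only reacts
# to the error measured on the examples.
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(100)]
ys = [2.0 * x for x in xs]          # the examples encode the hidden rule

def mean_squared_error(w):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w = 0.0                             # untrained parameter
err_before = mean_squared_error(w)
for _ in range(200):                # repeated exposure to the data set
    grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.1 * grad                 # gradient-descent update
err_after = mean_squared_error(w)   # far smaller than err_before
```

After the loop, `w` has converged close to 2.0, even though the update procedure itself contains no reference to that value; it was recovered from the data.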


BIOGRAPHY


Guillaume Soudain started his career at Eurocopter (now Airbus Helicopters) in 2001, where he worked for five years as a Software Engineer in the field of Automatic Flight Control Systems and as a Project Manager for the development of training simulation models.

Since 2006, he has been working at the European Union Aviation Safety Agency (EASA), first as a Software and Airborne Electronic Expert in the Certification Directorate. In 2014, he was appointed Software Senior Expert, and he is now in charge of coordinating the software aspects of certification within the Agency.

In 2019, he started leading the task force on Artificial Intelligence that produced the EASA AI Roadmap v1.0, and he is now in charge of the innovation project team implementing this roadmap. He is also a member of the joint working group EUROCAE WG-114/SAE G-34, which is developing standards for Artificial Intelligence in the aviation domain.
