On 8 April 2019, the High-Level Expert Group on Artificial Intelligence – an advisory body to the European Commission – published its "Ethics Guidelines for Trustworthy AI". Structured around four questions, the following text examines the context and content of these guidelines and derives possible next steps for AI practitioners on the way towards Trustworthy AI.
According to the Ethics Guidelines for Trustworthy AI (the "Guideline"), Trustworthy AI has three components, which should be met throughout an AI system's entire lifecycle: AI systems should be (i) lawful, (ii) ethical, and (iii) robust. Legal compliance ("lawfulness") alone is thus not enough: the Guideline also requires "ethical" compliance, reasoning that laws are not always up to speed with technological developments, can be out of step with ethical norms or may simply not be suited to addressing certain issues. The third component – robustness – requires AI systems to be safe, secure and reliable, from both a technical and a social perspective.
Based on this concept of Trustworthy AI, the Guideline aims to offer guidance on ethical and robust AI. The first component of Trustworthy AI, lawfulness, is not directly addressed.
As stated above, Trustworthy AI should be lawful, ethical and robust. While the case for legal compliance is obvious, the relevance of the Guideline itself is less so: it was drawn up by the High-Level Expert Group on AI (the "AI HLEG") and is, as such, in no way legally binding. Why, then, should AI practitioners pay attention to it? Several aspects must be considered. First, the Guideline leans heavily on the Charter of Fundamental Rights of the European Union (the "Charter") and on other legal instruments that are – at least in part – legally binding, such as the General Data Protection Regulation ("GDPR"). Second, the Guideline provides a first set of benchmarks against which outcomes in the field of AI can be assessed. It is therefore foreseeable that not only the European Commission but also legislative bodies will take the Guideline into account when revising or drawing up future legislation. Third, the development, deployment and use of AI systems do not stop at national borders. As seen with the GDPR, EU legislation can have a global impact. The Guideline should therefore concern not only European developers, deployers and users of AI systems, but all stakeholders globally.
The Guideline consists of three chapters, dealing with (i) the foundations, (ii) the implementation, and finally (iii) the assessment of Trustworthy AI. Accordingly, guidance is provided in three layers of abstraction, from the most abstract in Chapter I to the most concrete in Chapter III.
In its first chapter, the Guideline reflects on the fundamental rights set out in international and EU law, and on their underlying values, in the context of AI systems. This reflection results in four ethical principles, specified as "ethical imperatives", which go beyond formal compliance with existing legal obligations and must be respected by stakeholders to ensure that AI systems are developed, deployed and used in a trustworthy manner. The four principles are:
(i) Respect for human autonomy: According to this principle, AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans.
(ii) Prevention of harm: The prevention of harm entails the protection of human dignity as well as mental and physical integrity. Special attention is paid to vulnerable persons and to situations characterised by asymmetries of power or information.
(iii) Fairness: Fairness has both a substantive and a procedural dimension. The substantive dimension implies a commitment to ensuring an equal and just distribution of benefits and costs and, therefore, freedom from unfair bias, discrimination and stigmatisation. Procedural fairness entails the ability to contest and seek redress against decisions made by AI systems.
(iv) Explicability: This principle states that the processes of AI systems need to be transparent in order to gain trust. The Guideline, however, acknowledges the existence of "black box" algorithms and the need to calibrate the required degree of explicability to the context and to the (potential) severity of the consequences.
Having explained the four principles in more detail, the Guideline also acknowledges that tensions may arise between them. The key guidance of Chapter I is thus that these "ethical imperatives" be adhered to in the development, deployment and use of AI systems, and that potential tensions between them be identified and addressed.
Building on the principles laid out in Chapter I, the second chapter offers guidance on the implementation of Trustworthy AI by means of a (non-exhaustive) list of seven requirements: (i) human agency and oversight; (ii) technical robustness and safety; (iii) privacy and data governance; (iv) transparency; (v) diversity, non-discrimination and fairness; (vi) societal and environmental wellbeing; and (vii) accountability. After explaining each requirement, the Guideline describes the intended process for implementing Trustworthy AI. This process consists of both technical and non-technical methods and should take place throughout an AI system's lifecycle. The technical methods accordingly concern all phases of that lifecycle, from the system's architecture to its quality-of-service indicators. The non-technical methods for securing and maintaining Trustworthy AI include (legal) regulation, codes of conduct, standardisation, certification and governance frameworks, as well as education, stakeholder participation and social dialogue. The key guidance derived from Chapter II thus consists of promoting the seven requirements of Trustworthy AI (with a special focus on innovation and transparency) and applying the suggested technical and non-technical methods.
The centrepiece of the Guideline is Chapter III. Based on the key requirements of Chapter II (and therefore indirectly on the ethical principles of Chapter I), this chapter consists mainly of a (non-exhaustive) Trustworthy AI assessment list intended to operationalise Trustworthy AI. The Guideline envisages that the assessment list be incorporated into existing governance mechanisms or implemented by means of new ones. It is recommended that all levels of an organisation be involved in the governance process, from top management to day-to-day operations. Moreover, the assessment list should be adapted to the specific use case.
The publication of the Guideline marked the kick-off of a piloting process in which feedback on the assessment list is collected through both a qualitative and a quantitative procedure. Interested stakeholders can already sign up to participate in the piloting process, which will start in summer 2019. The results of this piloting phase will be incorporated into a revised version of the Guideline in early 2020. As for the first component of Trustworthy AI – lawfulness – the Guideline, although it does not address it directly, offers valuable insights into the regulatory direction the European Commission may take. Read in conjunction with existing legislation such as the GDPR and the product liability regime, the piloting process and the underlying Guideline present AI practitioners – companies developing, deploying or using AI – with a valuable opportunity to assess both the compliance of their AI-related activities and their sustainability in light of future developments.
Schoenherr has a wide range of experience providing legal advice in this context, especially in the fields of fundamental rights, data protection and liability. We are delighted to support organisations seizing this opportunity, both by assessing the compliance of their AI systems and by developing future-oriented compliance mechanisms.
Author: Christoph Cudlik, Partner (Vienna, Austria)