Over the last few years, European legislators, authorities and various bodies have intensified their efforts towards a better, broader and more comprehensive regulation of artificial intelligence. On 21 April 2021, after more than three years of intense work and countless related publications, the European Commission published its legislative proposal for the Artificial Intelligence Act ("AI Act"). The AI Act combines a legal framework on AI with a new Coordinated Plan with Member States and aims to ensure the safety and fundamental rights of people and businesses, while also supporting innovation. "[O]n Artificial Intelligence, trust is a must, not a nice to have," says Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age. "With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted […] Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake."
When drafting the AI Act, the Commission pursued specific objectives and assigned the AI Act the following functions:
While the above objectives are ambitious and appear reasonable, the AI Act must also fit within the rules and regulations that already exist at both the European and national levels. The AI Act has already sparked controversy, especially in the area of high-risk AI and of practices which, under the AI Act, should be limited or even banned. Artificial intelligence, particularly in its applications, is closely tied to the processing of data in the broadest sense. Simply put, AI does not exist without data. That is why the provisions on prohibited AI applications, as well as on high-risk uses, should be developed further.
The group of prohibited AI practices includes:
As regards high-risk practices and applications, the AI Act contains extensive provisions on, among other things, the classification of high-risk AI systems, risk management systems, data governance, security and other requirements that a high-risk AI system must meet.
Fines for non-compliance with the requirements of the AI Act can reach EUR 30m or up to 6 % of total worldwide annual turnover for the preceding financial year. The approach is similar to the one taken by the GDPR, which became applicable in 2018.
Although personal data has been regulated in some form for years, it was not until the GDPR that data breaches began to be taken more "seriously". Hopefully, a healthy balance can be struck between what is permitted and what is functional and needed, so that the high penalties do not jeopardise the efforts put into developing AI.
The full text of the AI Act and its annexes can be found here.
author: Daria Rutecka