
10 March 2025
newsletter
hungary

Understanding the EU AI Act in practice: 10+1 questions and answers for Hungarian companies

1. What is the EU AI Act and how does it affect businesses in Hungary?

The EU AI Act is an EU regulation that establishes harmonised rules for the development, distribution and use of artificial intelligence systems within the EU. It aims to ensure the safe and ethical use of AI, enhancing the internal market while protecting health, safety and fundamental rights.

The AI Act applies to all businesses that develop, use, import or distribute AI systems in the EU, regardless of their location. This includes Hungarian companies that deploy AI systems in the course of a professional activity.

2. Is it already implemented in Hungary? When will it enter into force?

EU regulations, including the AI Act, are directly applicable in Hungary without the need for national implementing legislation. The AI Act's provisions apply gradually, with the first significant provisions having become applicable on 2 February 2025.

Hungary is also expected to adopt additional secondary legislation soon. So far, only government decisions have been adopted, indicating that a new enforcement authority will be established to implement the AI Act. This authority is expected to operate under the supervision of the Minister for National Economy, manage the domestic regulatory sandbox, and perform regulatory and market surveillance tasks. Furthermore, a Hungarian Artificial Intelligence Council is expected to be established under the leadership of the recently appointed Government Commissioner responsible for AI. This body will issue guidelines and opinions on the implementation of the AI Act in Hungary.

3. Is it only relevant for IT businesses and large international companies? Can SMEs and microenterprises stop reading this article?

Unfortunately, no. The EU AI Act establishes mandatory rules for any Hungarian business applying AI solutions, including SMEs and microenterprises. Therefore, we recommend continuing to read to learn the basic information relevant to your business. Certain companies, such as developers, importers and distributors of AI systems, must comply with special rules and are advised to seek more detailed legal advice.

4. What are AI systems and how do I know if I use one? What if a company or its employees only use ChatGPT occasionally?

Properly defining AI systems is not entirely straightforward – the Commission even issued a standalone guideline on the topic. In simple terms, AI systems encompass machine-based systems (typically software) designed to operate with varying levels of autonomy. What sets AI systems apart from traditional software is their ability to infer, learn, reason and model from data or inputs. The most common examples of AI-based systems are certain varieties of

  • recommendation engines: systems that suggest products, services or content based on user preferences and behaviour;
  • predictive analytics: systems that forecast future trends or behaviours based on historical data;
  • virtual assistants: AI-powered assistants like Siri, Alexa or Google Assistant that perform tasks based on voice commands;
  • robotic process automation: systems that automate repetitive tasks in business processes;
  • smart home devices: devices like smart thermostats, lights and security systems that adapt to user preferences and behaviours;
  • interactive chatbots: AI-powered chatbots that engage in conversations with users to provide information or support.

In case of doubt, you should consult the developer or distributor of the specific software or other system on whether it qualifies as an AI system.

ChatGPT and other similar generative applications also qualify as AI systems. If employees use such applications for professional purposes, the provisions of the EU AI Act will apply.

5. Are all AI systems treated equally? Should businesses stop using them altogether?

The AI Act introduces a risk-based approach that requires businesses to categorise their AI systems based on the level of risk they pose. Developers and users of AI systems must assess and classify their systems into one of the following categories:

  • Unacceptable risk: These AI systems present a clear threat to people's safety, livelihoods or rights and therefore are banned under the AI Act. The regulation lists eight such prohibited practices (e.g. AI systems that manipulate human behaviour or exploit weaknesses).
  • High risk: AI systems that pose a threat to a person's health, safety or social status (e.g. assessing creditworthiness during a bank loan application or AI systems used in employee evaluations) are classified as high risk. While these systems can be used, they are subject to strict obligations before being deployed on the market.
  • Limited risk: These AI systems (e.g. chatbots) carry lower risks but are still subject to transparency obligations. For instance, if an AI system interacts directly with humans, users must be informed that they are interacting with an AI system, unless this is obvious from the circumstances of use.
  • Minimal or no risk: The AI Act does not introduce specific rules for minimal or no risk AI systems (e.g. spam filters). However, it is still recommended that businesses develop a code of conduct aimed at promoting AI literacy, ensuring that those involved in the development, operation and use of AI are aware of best practices and ethical considerations.

6. What are prohibited AI practices and when will their ban come into effect?

AI practices such as harmful manipulation and deception, harmful exploitation of vulnerabilities, social scoring, individual criminal offence risk assessment and prediction based solely on profiling, untargeted scraping to develop facial recognition databases, emotion recognition in the workplace and educational institutions, biometric categorisation to infer sensitive characteristics and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes have been prohibited since 2 February 2025. Businesses should not engage in such practices, otherwise they may face severe sanctions.

7. What action should I take if my business uses only minimal or no risk AI systems?

As a first step, from 2 February 2025, all businesses must ensure that their staff and those responsible for operating or using AI systems have sufficient AI literacy. In particular, businesses should:

  • assess the current AI literacy levels within their workforce;
  • develop and implement tailored AI literacy training programmes;
  • establish internal policies and procedures, such as a code of conduct for AI usage.

Additionally, businesses should:

  • proactively assess their current use of AI systems;
  • classify AI systems according to the risk level defined by the regulation;
  • provide transparent information regarding the use of AI systems in consumer services.

Businesses deploying limited-risk or high-risk AI systems must further ensure that appropriate measures are in place to address the specific requirements of the AI Act.

8. What about businesses that are actively using AI systems? What about AI developers?

Such businesses are recommended to seek in-depth legal and technical advice on the implementation of the EU AI Act. AI developers and other providers of AI systems should also keep an eye on the expected introduction of AI regulatory sandboxes. These sandboxes are controlled frameworks set up by a competent authority, offering providers or prospective providers of AI systems the possibility to develop, train, validate and test (where appropriate in real-world conditions) an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision.

The Hungarian Government has also recently acknowledged the need to establish a domestic sandbox through a government decision. Further details are expected to be revealed soon.

9. How will all this be monitored and what are the expected penalties for non-compliance with the EU AI Act?

As mentioned, the detailed rules on monitoring in Hungary are still pending, as the necessary secondary legislation has yet to be issued. Nevertheless, once a monitoring regime is established, non-compliance can result in substantial fines of up to EUR 35m or 7% of the company's total worldwide annual turnover for the preceding financial year, whichever is higher. These penalties will be enforceable from 2 August 2025.

10. So, is this everything one needs to know about the EU AI Act? Are there any other publicly available resources?

No, these are just the very basics. The EU AI Act is more than 140 pages long and further implementation rules are expected at both the EU and national levels. The Commission has also issued guidelines to assist affected companies in complying with the AI Act's requirements. Businesses are strongly advised to seek customised legal and technical advice to ensure compliance with this new regulatory regime, stay informed about regulatory changes and mitigate legal risks.

10+1. Is the EU Commission's announced work programme expected to result in the review and potential simplification of the rules prescribed by the EU AI Act?

The 2025 EU Commission work programme foresees a broader assessment of whether the expanded digital acquis of the EU (which also includes the EU AI Act) adequately reflects the needs and constraints of businesses with special regard to SMEs and small midcaps. Nevertheless, the already published information on the upcoming "Digital Package" primarily signals a revision of EU legislation on cybersecurity and data protection, without explicitly mentioning the EU AI Act.

Authors: Gergely Horváth, Barbara Darcsi and Ákos Kovács
