
12 February 2025

Building AI compliance and literacy in your company

On a typical day in the office, the managing director receives a newsletter from the company's trusted law firm about the new AI regulation. The newsletter advises companies to assess their use of AI systems to determine whether the regulation applies to them. Since the managing director has not actively or knowingly procured any AI solutions, he asks an employee about the company's current use of AI systems.

Assumption of AI tools

To verify the accuracy of the employee's responses, the managing director asks the company's law firm for advice. The partner assigns an associate to draft a response to the managing director's question. The associate first reviews the law firm's own AI systems. ChatGPT-4 and Harvey provide the following answers:

Harvey

"The boss's assumption that the office does not use AI is incorrect. While the employee mentions that they do not officially use AI, the occasional use of Google Translate by employees does involve AI, as Google Translate's functionality is based on machine learning, a subset of AI. Therefore, the office is indeed utilising AI, albeit unofficially and on a small scale."

ChatGPT-4

"The boss's assumption that they do not use AI is incorrect. Although the office does not officially use AI-specific tools, Google Translate, which employees occasionally use, is an AI-powered application. Therefore, the office is indeed using AI, even if indirectly."


Assumption of a trusted law firm

AI has long since found its way into the office environment, and many everyday office applications incorporate AI capabilities. Google Translate, a range of printer solutions and a significant number of Microsoft apps are already equipped with AI functionality. The AI Act, which entered into force on 1 August 2024, defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments". Under this broad definition, many corporate IT systems fall within the scope of the AI Act.

The AI Act outlines several obligations for deployers, i.e. any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the system is used for personal, non-professional purposes. Certain parts of the AI Act take effect sooner than others; for example, its general provisions will apply from 2 February 2025. Among other requirements, providers and deployers must ensure that their staff, and anyone else operating or using AI systems on their behalf, have an adequate level of AI literacy, taking into account their technical knowledge, experience, education and training, as well as the context in which the AI systems are to be used.

Companies should implement organisation-wide policies governing the use of AI systems for all employees. Additionally, specific policies can be developed for individual departments. The same approach applies to training: basic AI training should be provided to all employees, with advanced, department-specific training offered as appropriate. For example, employees in the marketing department, who may already use AI to generate stock photos for websites or social media channels, may require more advanced training.

As a first step, companies must carefully assess whether they are using AI systems. They should then ensure AI literacy among their staff by implementing appropriate AI policies and providing targeted training. Additionally, companies should develop strategies to raise awareness of AI and ensure its use complies with the AI Act.

authors: Denise Stahleder, Christian Kracher
