As technology evolves, more and more organisations are turning to automation and AI models to meet their anti-money laundering (AML) and know-your-customer (KYC) obligations. This presents significant opportunities as well as challenges, particularly in terms of data protection under the General Data Protection Regulation (GDPR). As organisations seek to leverage AI to strengthen their AML and KYC checks, understanding the legal framework governing data processing is critical, as several data protection requirements come into play.
Organisations may often rely on legitimate interest as the legal basis for automating their AML and KYC checks, provided that a Legitimate Interest Assessment (LIA) is documented and reaches a positive conclusion. However, where decisions that produce legal or similarly significant effects for individuals are made solely on the basis of AI results, the additional rules of Art. 22 GDPR apply, which generally require an applicable exception and safeguards such as the right to human intervention.
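In practice, one way to stay outside the scope of solely automated decision-making is to route every adverse AI result to a human reviewer before any decision takes effect. The following is a minimal sketch of such a review gate; KycResult, route_decision and the 0.7 threshold are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    CLEARED = "cleared"
    HUMAN_REVIEW = "human_review"  # queued for a compliance officer

@dataclass
class KycResult:
    customer_id: str
    risk_score: float  # hypothetical model output in [0, 1]
    flagged: bool

def route_decision(result: KycResult, threshold: float = 0.7) -> Outcome:
    # No customer is rejected or restricted solely on the model output:
    # anything the model flags is escalated to a human reviewer.
    if result.flagged or result.risk_score >= threshold:
        return Outcome.HUMAN_REVIEW
    return Outcome.CLEARED
```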
If AI models are trained on pre-existing data that was originally collected from data subjects for other purposes, organisations must ensure that the new processing purpose is compatible with the initial one. A compatibility test must be performed and documented to verify this.
Organisations must review their existing privacy notices to ensure compliance with transparency requirements and clearly communicate to data subjects what they can expect regarding the use of AI in AML or KYC processes.
Additionally, organisations must input only strictly necessary data into the AI system, namely, data essential for the AML or KYC check. This must also be observed during the testing phase of the AI system, where additional technical measures, such as anonymisation or pseudonymisation, should be considered.
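As an illustration of what this can look like in a test environment, the sketch below drops fields that are not needed for the check and replaces direct identifiers with keyed hashes. The field names and the minimise/pseudonymise helpers are hypothetical; note that a keyed hash is pseudonymisation, not anonymisation, because the key holder can still re-identify the data subject.

```python
import hmac
import hashlib

# The key must be kept outside the test environment, e.g. in a secrets manager.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(value: str) -> str:
    """Keyed hash: the same customer maps to the same token, but the
    raw identifier itself never enters the test environment."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

def minimise(record: dict, allowed: frozenset) -> dict:
    """Drop every field that is not strictly necessary for the check."""
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "Jane Doe", "iban": "AT611904300234573201", "hobby": "chess"}
test_input = minimise(record, frozenset({"name", "iban"}))
test_input = {k: pseudonymise(v) for k, v in test_input.items()}
```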
Adequate retention periods must be defined for storing data in the AI system. These periods should also take into account whether the AML or KYC results are extracted from the AI system and subsequently stored in other internal systems of the organisation.
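A simple way to make such periods operational is a retention schedule per data category, checked by a periodic purge job. The categories and durations below are illustrative assumptions; the actual periods follow from the applicable AML laws (EU rules generally require five years after the end of the business relationship) and the organisation's own retention policy.

```python
from datetime import datetime, timedelta, timezone

# Illustrative schedule only; actual periods come from applicable law.
RETENTION = {
    "ai_input_data": timedelta(days=5 * 365),
    # Intermediate output can be kept shorter if the final result is
    # extracted and stored in other internal systems.
    "ai_intermediate_output": timedelta(days=90),
    "final_kyc_result": timedelta(days=5 * 365),
}

def is_expired(category: str, stored_at: datetime) -> bool:
    """Return True once a stored item has outlived its retention period."""
    return datetime.now(timezone.utc) - stored_at > RETENTION[category]
```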
Given the significant risks associated with AI-driven AML and KYC checks, a Data Protection Impact Assessment (DPIA) under Art. 35 GDPR is mandatory. This assessment evaluates the risks to the rights and freedoms of data subjects and identifies appropriate mitigating measures.
Organisations should carefully review the data protection clauses agreed with service providers, especially where those providers act as data processors on the organisation's behalf (Art. 28 GDPR).
Additionally, internal policies and procedures, as well as the Records of Processing Activities (ROPA), should be updated to accurately reflect processing activities conducted through AI systems.
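For illustration, a ROPA entry for an AI-assisted check could be structured along the lines of Art. 30(1) GDPR; the fields and values below are hypothetical examples, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class RopaEntry:
    """Illustrative record of a processing activity, loosely
    following the items listed in Art. 30(1) GDPR."""
    activity: str
    purpose: str
    legal_basis: str
    data_subject_categories: list[str] = field(default_factory=list)
    data_categories: list[str] = field(default_factory=list)
    recipients: list[str] = field(default_factory=list)
    retention: str = ""
    security_measures: list[str] = field(default_factory=list)

ai_kyc = RopaEntry(
    activity="AI-assisted KYC screening",
    purpose="Customer identity verification and AML risk assessment",
    legal_basis="Legitimate interest (Art. 6(1)(f) GDPR)",
    data_subject_categories=["customers", "beneficial owners"],
    data_categories=["identity data", "transaction data"],
    recipients=["AI service provider (processor)"],
    retention="5 years after end of business relationship",
    security_measures=["pseudonymisation in test environments",
                       "access controls"],
)
```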
Organisations must ensure that the data fed into AI systems is accurate and kept up to date, to prevent the further processing of inaccurate data. They must also implement appropriate technical and organisational measures to protect the data against unlawful or accidental alteration, as well as unauthorised access or disclosure.
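As a sketch of what such measures can look like at the input stage, the following combines basic accuracy checks with an integrity fingerprint that allows later alteration to be detected. The validation rules are illustrative placeholders, not a complete rule set.

```python
import hashlib
import re

def validate_input(record: dict) -> list[str]:
    """Basic accuracy checks before a record is passed to the AI system."""
    errors = []
    if not record.get("name", "").strip():
        errors.append("name is missing")
    # Simplified IBAN format check (structure only, no checksum validation).
    if not re.fullmatch(r"[A-Z]{2}\d{2}[0-9A-Z]{11,30}", record.get("iban", "")):
        errors.append("iban has an invalid format")
    return errors

def integrity_tag(record: dict) -> str:
    """Fingerprint stored separately from the record, so accidental or
    unlawful alteration can be detected by recomputing and comparing."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()
```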
Training staff on the appropriate use of AI in AML or KYC checks is crucial to ensure compliance and proper use of AI models within the organisation.
Depending on the risk level of the AI model, organisations must observe additional requirements under the AI Act. These may include conducting further impact assessments, using the AI system in accordance with its instructions for use, monitoring its operation and keeping the logs it generates.
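For the logging duty in particular, a minimal approach is one structured, timestamped entry per AI-assisted check. The sketch below assumes hypothetical field names and uses Python's standard logging module; actual log content and retention must follow the AI system's instructions for use and the applicable legal requirements.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="kyc_ai_audit.log", level=logging.INFO)
logger = logging.getLogger("kyc_ai_audit")

def log_ai_operation(customer_ref: str, model_version: str,
                     risk_score: float, reviewer: str | None) -> None:
    """Write one structured audit entry per AI-assisted check."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_ref": customer_ref,  # ideally a pseudonymised reference
        "model_version": model_version,
        "risk_score": risk_score,
        "human_reviewer": reviewer,
    }
    logger.info(json.dumps(entry))
```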
The integration of AI into AML and KYC processes offers significant advantages for enhancing compliance efforts. However, navigating the complex landscape of data protection laws, including the GDPR and the AI Act, is paramount. By adopting robust data protection measures, conducting necessary assessments, and maintaining transparency with data subjects, organisations can harness the benefits of AI while safeguarding individual rights. Ultimately, the successful implementation of AI in compliance functions hinges on a balanced approach that respects data protection principles.
author: Carla Filip