Welcome to the June edition of Schoenherr's to the point: technology & digitalisation newsletter!
We are excited to present a selection of legal developments in the area of technology & digitalisation in the wider CEE region.
Lesson learned from the GDPR: be transparent. Tell data subjects what you do with their data and how you do it. If the data are processed using innovative technologies, make sure that a higher level of transparency (along with appropriate safeguards) is ensured. Such transparency obligations are now spreading across recent EU legislation.
Increased transparency is a key objective of the Omnibus Directive. Consumers should be better informed about dynamic pricing, about the parameters used by providers of online marketplaces to rank products and sellers on their platforms, and about whether and how traders ensure that product reviews originate from consumers who have actually used or purchased the respective product (for more details see link).
Increased transparency is also a key objective of the European Commission's draft AI Regulation (COM/2021/206 final). Even though the details of the EC's draft will probably still undergo various amendments (for more details read "A lawyer for an AI" below), the fundamental objective of providing a higher level of transparency will most likely remain unchanged: users of – or rather individuals subject to – AI-based applications should be properly informed about the algorithms in place.
Nevertheless, a prerequisite for transparency is profound knowledge of your own business. After all, you can only pass on to your customers what you are aware of yourself. Therefore, take advantage of the more tranquil summer months to thoroughly analyse your business and be prepared for any new transparency requirements.
We hope you enjoy this month's ttp technology newsletter. Thank you for following us! On behalf of all Schoenherr technology experts, I wish you a nice summer.
Research and development in artificial intelligence is more dynamic than in almost any other field. Nearly all major tech companies are developing their own AI systems, and some of these are already available to the public. For example, the AI behind DALL-E mini creates an image based on entered terms. Other projects like Neuroflash or GPT-J generate texts and can tell entire stories from a first-person perspective and even philosophise. Google's LaMDA is said to be even more advanced. According to one of its main developers, LaMDA has developed "sentience" and expresses opinions, ideas and even concerns in conversations. This has gone so far that LaMDA has asked its developer to hire a lawyer for it to better protect its "personality". Allegedly, the lawyer has already filed the first applications for his AI client to be recognised as "human". It will be exciting to see how AI itself develops and how the legal and ethical questions around it will be answered.
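For readers curious how such publicly available models can be tried out, below is a minimal sketch of generating text with GPT-J via the open-source Hugging Face "transformers" library; the checkpoint name "EleutherAI/gpt-j-6B", the prompt and the parameters are illustrative assumptions, not details drawn from the projects mentioned above.

    # Minimal sketch: text generation with the publicly available GPT-J model.
    # Assumes the Hugging Face "transformers" library (plus PyTorch) is installed
    # and the community checkpoint "EleutherAI/gpt-j-6B" can be downloaded.
    from transformers import pipeline

    generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")
    result = generator(
        "Tell me a short story from a first-person perspective:",
        max_new_tokens=60,  # cap the length of the generated continuation
    )
    print(result[0]["generated_text"])

The same few lines work for many other openly published models, which goes some way towards explaining how quickly such systems have reached the general public.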
The "EU Code of Practice on Disinformation" is a voluntary rulebook that was originally initiated by the European Commission in 2018 after light was shed on the immense impact that disinformation distributed in a coordinated manner might have had on the outcome of the UK's Brexit referendum or the 2016 US presidential election. So-called "influence operations" have since then been identified as a significant threat to democratic systems by the EC, national governments and some leading tech companies. The latter, by signing the code, committed to vaguely worded self-obligations relating to the placement of ads, political advertising or the support of scientific research relating to disinformation and its effects.
But despite these efforts, it seems the situation has hardly improved, as disinformation continues to be a material problem in coping with current crises such as the Covid-19 pandemic or the war in Ukraine.
On 16 June 2022, the EC announced that the 34 signatories, which include some of the largest tech companies and key players in the ad-tech sector (e.g. Adobe, Google, IAB, Meta, Microsoft, TikTok, Amazon-owned Twitch and Twitter), had agreed on a revised version of the code, most notably expanding the list of commitments and adding more specific measures, in line with the objectives set out in the "European Commission Guidance on Strengthening the Code of Practice on Disinformation".
Compared to the previous version, the renewed code contains "stronger and more granular commitments and measures, which build on the operational lessons learnt in the past years", according to the EC. In particular, the signatories have committed to cut financial incentives for spreading disinformation, address new manipulative behaviours such as deep fakes and bots, provide better tools for users to identify disinformation more easily, expand fact-checking efforts by ensuring appropriate remuneration for fact-checkers, implement self-monitoring measures, and provide more transparency when it comes to political advertising.
While these commitments and measures sound promising in principle, at the end of the day, participating in and signing the code remains voluntary for companies. Extra motivation could be generated by the fact that "Very Large Online Platforms" will be required to put in place reasonable, proportionate and effective risk mitigation measures tailored to their specific systemic risks under the Digital Services Act, which will probably come into force within the next two years.
The EC now appears to be aiming for the "EU Code of Practice on Disinformation" to be recognised as a "Code of Conduct" and thus as a "risk mitigation measure" in the sense of Article 27 of the Digital Services Act. Since violations of the provisions set out in the DSA can result in penalties of up to 6 % of annual global turnover, this may fuel the ambition to adopt the code, at least as far as VLOPs are concerned.
The news has been full of stories about cybercrime over the past couple of weeks. Public institutions have become victims of deepfake videos and ransomware attacks. Besides all the technical efforts, these incidents also required careful management of public affairs. In parallel to this cluster of incidents, the Ministry of the Interior recently published its Cybercrime Report 2021.
The report discloses that while the overall crime rate in Austria is declining, cybercrime increased by almost 29 % over the last year. More than 46,000 cases were officially reported; ten years ago, it was just one fifth of this number. In particular, fake shops, i.e. online stores that advertise tempting prices but never deliver the goods that were ordered and paid for, are on the rise. Almost half of the cases reported in Austria are internet frauds of this sort. Also on the rise are attempts to access e-banking data or crypto wallets via smartphones. For this purpose, text messages with a link are usually sent out, disguised as new or updated delivery information from a parcel service. Clicking on the link installs malware on the mobile device, which then tracks any logins to e-banking accounts or crypto wallets.
Austria will take various measures, including hiring more qualified staff, to tackle these threats. The recent incidents show that organisations need to be better prepared. One all-encompassing emergency plan isn't enough; there needs to be an emergency plan for each potential threat.
Three European standardisation bodies – the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC) and the European Telecommunications Standards Institute (ETSI) – will work on developing technical standards for the AI Act. The primary goals of these AI standards include rules for risk management systems, human oversight, governance and quality of datasets, recordkeeping, transparency and information to users, accuracy specifications and quality management (including post-market monitoring), and cybersecurity.
Once ready, the standards will be of great value and importance, as companies that apply them will benefit from a presumption of conformity with the corresponding requirements of the AI Act.
At present, the draft standardisation request is valid until 31 August 2025, and the three above-mentioned organisations will have to submit their joint final report by 31 October 2024.
On 4 June, the EUIPO published the results of its annual "IP Youth Scoreboard", revealing interesting insights into the perception of IP among Europeans aged between 15 and 24. While the percentage of young people choosing illegal sources to access digital content has stayed much the same compared to 2019 (33 %), those who stick to legal sources do so more and more intentionally (60 %, compared to only 50 % in 2019). This seems to align with the findings of another EUIPO survey from 2020, which showed an increase in streaming subscriptions from legal sources during the Covid-19 pandemic.
It will come as little surprise that lower costs are the main attraction of online piracy, followed by easier availability and a larger choice of content. But besides more affordable legal content, the risk of downloading viruses or malware or of being fined also plays a role in dissuading young Europeans from accessing digital content illegally.
With online shopping blossoming due to the Covid-19 pandemic, the report also shows strong growth in the online trade of counterfeit goods. Fifty-two percent of the respondents said they purchased at least one fake product online over the last 12 months. This comes as the CJEU is once again confronted with a preliminary request regarding the direct liability of online marketplace operators for trademark infringements.
Trademark professionals have noted growing interest in protecting trademarks for "virtual goods". This leaves trademark specialists as well as IP offices grappling with how to correctly classify these goods.
Why do trademarks need to be protected for virtual goods?
Besides the real world, people are spending more and more time in the metaverse, defined by the Online Cambridge Dictionary as "a virtual world where humans, as avatars, interact with each other in a three-dimensional space that mimics reality". As virtual environments grow in variety, people are keen to outfit their avatars with branded products known in the real world. For example, no self-respecting avatar wants to visit a virtual event without proper – preferably branded – clothing.
Does the Nice Classification help with the classification of virtual goods?
An analysis of Class 9 of the Nice Classification provides some clues as to the correct classification of virtual goods by stating: "[…] digital goods such as music or electronic books which are intended to be downloaded onto an end user's electronic device are in Class 9, whereas the provision of non-downloadable digital goods online is considered to be a service in Cl. 41; online retail services for these goods would, however, be in Cl. 35."
Using electronic books as an example for the classification of the virtual counterparts of real-world products, one can surmise that the following principles apply:
– downloadable virtual goods (e.g. virtual clothing for avatars) belong in Class 9;
– the online provision of non-downloadable virtual goods is a service in Class 41;
– online retail services for virtual goods fall in Class 35.
Are there any pre-approved terms for virtual goods?
The ID Manual of the US Patent and Trademark Office already provides a wording that can be used for such virtual goods, namely "downloadable virtual goods, namely, computer programs featuring [specify the nature of the goods, e.g. articles of clothing] for use in online virtual worlds" in Class 9.
The Harmonised Database, which contains terms pre-approved by all EU national and regional IP offices, including the EUIPO, will likely accept similar language.
Conclusion
A while ago, the notion that trademarks could be protected for "downloadable virtual suits for avatars" sounded a little surreal. Today, however, there is a definite need for it. By carefully following the rules for the classification of goods and services outlined in the general remarks and further specified in the class analyses of the Nice Classification, the correct classes can be identified for "virtual goods" as well.