Drücke „Enter”, um zum Inhalt zu springen.
Hinweis zu diesem Datenschutz-Blog:
Anscheinend verwenden Sie einen Werbeblocker wie uBlock Origin oder Ghostery, oder einen Browser, der bestimmte Dienste blockiert.
Leider wird dadurch auch der Dienst von VG Wort blockiert. Online-Autoren haben einen gesetzlichen Anspruch auf eine Vergütung, wenn ihre Beiträge oft genug aufgerufen wurden. Um dies zu messen, muss vom Autor ein Dienst der VG Wort eingebunden werden. Ohne diesen Dienst geht der gesetzliche Anspruch für den Autor verloren.

Ich wäre Ihnen sehr verbunden, wenn Sie sich bei der VG Wort darüber beschweren, dass deren Dienst anscheinend so ausgeprägt ist, dass er von manchen als blockierungswürdig eingestuft wird. Dies führt ggf. dazu, dass ich Beiträge kostenpflichtig gestalten muss.

Durch Klick auf folgenden Button wird eine Mailvorlage geladen, die Sie inhaltlich gerne anpassen und an die VG Wort abschicken können.

Nachricht an VG WortMailtext anzeigen

Betreff: Datenschutzprobleme mit dem VG Wort Dienst(METIS)
Guten Tag,

als Besucher des Datenschutz-Blogs Dr. DSGVO ist mir aufgefallen, dass der VG Wort Dienst durch datenschutzfreundliche Browser (Brave, Mullvad...) sowie Werbeblocker (uBlock, Ghostery...) blockiert wird.
Damit gehen dem Autor der Online-Texte Einnahmen verloren, die ihm aber gesetzlich zustehen.

Bitte beheben Sie dieses Problem!

Diese Nachricht wurde von mir persönlich abgeschickt und lediglich aus einer Vorlage generiert.
Wenn der Klick auf den Button keine Mail öffnet, schreiben Sie bitte eine Mail an info@vgwort.de und weisen darauf hin, dass der VG Wort Dienst von datenschutzfreundlichen Browser blockiert wird und dass Online Autoren daher die gesetzlich garantierten Einnahmen verloren gehen.
Vielen Dank,

Ihr Klaus Meffert - Dr. DSGVO Datenschutz-Blog.

PS: Wenn Sie meine Beiträge oder meinen Online Website-Check gut finden, freue ich mich auch über Ihre Spende.
Ausprobieren Online Webseiten-Check sofort das Ergebnis sehen

The EU AI Regulation: obligations for companies


From February 2, 2025, German companies must comply with the general provisions and prohibitions of the AI Act if they use AI systems such as ChatGPT, Copilot, or Outlook's AI spam filter. This includes proof of AI competence and a risk assessment of the AI in use. An overview with recommendations.

Introduction

German companies that use AI will have to take action from February 2, 2025:

Article 113 of the AI Act stipulates that some provisions of the regulation apply from February 2, 2025. In German, the regulation is abbreviated AI-VO (KI-Verordnung); in English it is referred to as the AI Act.

Articles 1 to 4 of the AI-VO form Chapter I, which lays down general provisions.

Article 5 of the AI Act constitutes Chapter II and contains the prohibitions.

From February 2, 2025, German companies must therefore comply with these general provisions and prohibitions if AI systems are used in the company. Using ChatGPT, Copilot, or AI systems from other providers already qualifies as relevant use within the meaning of the AI Regulation.

Pursuant to Article 113 lit. a of the AI-VO, the general provisions (Articles 1 to 4) and the prohibitions (Article 5) apply from February 2, 2025.

When do the obligations apply?

From August 2, 2025, further provisions of the AI Regulation take effect, particularly those affecting general-purpose AI models. These include well-known AI systems such as ChatGPT, Microsoft Copilot, and the AI models from Mistral.

If recital 12 of the AI Act is taken as the benchmark, software systems that use AI also count as AI systems. Examples are search engines that generate AI-based recommendations and AI-based spam filters. For spam filters, the required training should be minimal; employees should nevertheless at least be instructed. According to recital 118 of the AI Act, large search engines are covered by another regulation, the Digital Services Act.

What obligations apply?

The obligations arise from Articles 4 and 5 of the AI Act. Article 3 contains important definitions, and the first two articles set out the subject matter and scope of the regulation.

Proof of AI competence

Art. 4 of the AI Regulation deals exclusively with proof of AI competence. Every company whose employees are to work with AI systems must provide this proof.

Employees must be instructed accordingly by the company; this also includes training. How a company demonstrates AI competence is not prescribed, which leaves room to do it sensibly and efficiently.

An example of how AI competence can be demonstrated is given on Dr. DSGVO. The proof consists of ([1]):

  • Technical competence
  • Professional competence

Technical expertise should at least be present and made plausible if AI systems are offered or AI programming takes place. The latter may already be the case when a programming interface (API) is used, such as the one offered for ChatGPT.
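The point about API use can be made concrete. The sketch below builds the request body for a single prompt in the common OpenAI-style chat-completions format; the model name and payload shape are assumptions for illustration, not taken from the article. Even such a minimal integration would already count as using an AI system via its programming interface:

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Build the JSON body for an OpenAI-style chat-completions call.

    The model name and payload layout are illustrative assumptions;
    check your provider's API documentation for the exact format.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature for more deterministic answers
    }
    return json.dumps(payload)

body = build_chat_request("Summarize our AI usage policy in one sentence.")
print(body)
```

Sending such a payload to a provider's endpoint is all it takes, so the competence and assessment questions discussed here arise long before any bespoke AI development.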

Classification of the AI systems used

Art. 5 of the AI Regulation contains numerous prohibitions, i.e., purposes for which AI systems may not be used. For example, the subliminal influencing of persons by AI is prohibited, as is the evaluation or classification of natural persons or groups of persons over a certain period of time on the basis of their social behavior. Article 5 lists further prohibitions.

To rule out the prohibited purposes for an AI system in use, the system must be examined and described. This can then be followed by a classification and a check against the provisions of the AI Regulation.

It should also be clarified whether a company is merely an operator of an AI system or actually a provider: providers of AI systems must fulfill far more obligations than operators.
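The examine-describe-classify steps above can be sketched as a simple internal inventory record. All names below are my own illustrative assumptions; the two prohibited purposes are paraphrased from Article 5, and the list is deliberately incomplete:

```python
from dataclasses import dataclass, field

# Illustrative, incomplete subset of Article 5 prohibitions (paraphrased).
PROHIBITED_PURPOSES = {
    "subliminal influencing of persons",
    "social scoring of natural persons or groups",
}

@dataclass
class AISystemRecord:
    """Inventory entry describing one AI system in use (hypothetical schema)."""
    name: str
    role: str  # "operator" or "provider" -- providers bear more obligations
    intended_purposes: set = field(default_factory=set)

    def prohibited_uses(self) -> set:
        """Intended purposes that collide with the prohibited ones."""
        return self.intended_purposes & PROHIBITED_PURPOSES

record = AISystemRecord(
    name="internal chatbot",
    role="operator",
    intended_purposes={"customer support drafting"},
)
print(record.prohibited_uses())  # → set(): no conflict found in this sketch
```

A real assessment needs legal review of each purpose against the full wording of Article 5; a record like this only makes the inventory step repeatable.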

Recommendations

Companies should, in their own interest, instruct their employees in working with AI systems and issue corresponding directives. How they proceed is up to them.

Onboarding is the initial instruction of employees. It can take place in writing or as in-person or digital training, and it serves to enable employees. Ultimately, AI is a means to an end for companies, so it should be ensured that AI systems actually benefit the company.

A directive, on the other hand, is more of a regulatory measure. It keeps employees' work with AI systems on orderly tracks, because not every use of AI is desired or sensible.

For onboarding, general information enriched with targeted information is recommended. In concrete terms, this means:

  • On-site training of employees, either at your premises or at a training center
  • Initial training can also take place online, for example in the form of a webinar
  • Creation of a guide that can be part of an AI organizational directive. The guide is intended to support employees and can be a living document, i.e., regularly updated; an online offering is therefore worth considering.

The instructions for employees working with AI in the company include in particular:

  • Requirements for the permitted professional use of AI
  • Rules on the impermissible professional use of AI
  • Contact persons (such as the data protection officer)
  • The instruction can also be part of an AI organizational directive

An AI organizational directive, as a complete document, therefore contains:

  • Information for the onboarding of employees and
  • Information for the instruction of employees.
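One possible way to keep such a document living and checkable is to record its onboarding and instruction parts in a machine-readable structure. The layout and all entries below are my own assumption for illustration; the AI Act prescribes no format:

```python
# Hypothetical structure for an AI organizational directive.
# Field names and example entries are assumptions, not regulatory terms.
directive = {
    "onboarding": {
        "training": ["on-site workshop", "webinar"],
        "guide": "living document, updated regularly",
    },
    "instruction": {
        "permitted_uses": ["drafting internal texts", "summarizing documents"],
        "impermissible_uses": ["entering personal data of customers"],
        "contact": "data protection officer",
    },
}

def is_permitted(use: str) -> bool:
    """Check a concrete use against the directive's lists.

    Anything not explicitly permitted is treated as not permitted,
    a conservative default for this sketch.
    """
    if use in directive["instruction"]["impermissible_uses"]:
        return False
    return use in directive["instruction"]["permitted_uses"]

print(is_permitted("summarizing documents"))  # → True
```

Keeping the lists in one structure makes updates to the living document easy to propagate, e.g., into an intranet page or a chatbot usage gate.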

Types of AI use

AI systems can be used for different tasks, and there are different kinds of AI systems, such as different types of AI models. The best known are language models ("chatbots"); there are also image generators, video generators, audio models, etc.

Companies should define the purposes for which employees are allowed to use which AI system.

In particular, it is important to understand that all AI-generated results must be checked before further use, because it is the user of the AI results, not the AI system, who is (initially and often ultimately) liable for them.

AI outputs can infringe the copyrights of third parties.

About the author on dr-dsgvo.de
My name is Klaus Meffert. I have a doctorate in computer science and have been working professionally and practically with information technology for over 30 years. I also work as an expert in IT and data protection. I achieve my results by looking at technology and law together, which seems essential to me when it comes to digital data protection. My company, IT Logic GmbH, also offers consulting and development of optimized and secure AI solutions.
