As of February 2, 2025, German companies must comply with the general provisions and prohibitions of the AI Act if they use AI systems such as ChatGPT, Copilot or the AI spam filter in Outlook. This includes proof of AI competence and a risk assessment of the AI used. An overview with recommendations.
Introduction
German companies that use AI must take action as of February 2, 2025:
Article 113 of the AI Act stipulates that certain provisions of the regulation apply from February 2, 2025. The regulation is abbreviated as AI-VO in German and referred to in English as the AI Act.
Articles 1 to 4 of the AI-VO form Chapter I, which lays down general provisions.
Article 5 of the AI Act constitutes Chapter II and contains prohibitions.
From February 2, 2025, German companies must therefore comply with general provisions and prohibitions if AI systems are used in the company. The use of ChatGPT, Copilot or AI systems from other providers already counts as relevant use within the meaning of the AI Regulation.
In accordance with Article 113 lit. a of the AI-VO, the general provisions (Articles 1 to 4) and the prohibitions (Article 5) of the AI-VO apply from February 2, 2025.
When do the obligations apply?
From August 2, 2025, further provisions of the AI Regulation take effect, particularly those concerning general-purpose AI models. The latter include well-known AI systems such as ChatGPT, Microsoft Copilot or the AI models from Mistral.
If recital 12 of the AI Act is taken as a benchmark, software systems that merely use AI also qualify as AI systems. Examples are search engines that generate AI-based recommendations or AI-based spam filters. For spam filters, the required training should be minimal; nevertheless, employees should at least be instructed. According to recital 118 of the AI Act, large search engines are covered by another regulation, the Digital Services Act.
What obligations apply?
The obligations arise from Articles 4 and 5 of the AI Act. Article 3 contains important definitions of terms. Articles 1 and 2 set out the subject matter and scope of the Regulation.
Proof of AI competence
Art. 4 of the AI Regulation deals exclusively with proof of AI competence: every company whose employees are to work with AI systems must provide this proof.
Employees must be instructed accordingly by the company; this also includes training. How a company provides proof of AI competence is not prescribed, so there is room to do this sensibly and efficiently.
An example of proof of AI competence is described on Dr. GDPR. According to it, the proof consists of ([1]):
- Technical competence
- Professional competence
Technical competence should at least be present and made plausible if AI systems are offered or AI programming takes place. The latter may already be the case when a programming interface (API) is used, such as the one offered for ChatGPT.
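To illustrate what such API-based use looks like in practice, here is a minimal sketch of assembling a chat request for OpenAI's chat completions endpoint. The model name, API key and prompt are placeholders; the payload fields follow OpenAI's published request format, but this is an illustrative sketch, not a complete integration.

```python
import json

# Endpoint for OpenAI's chat completions API.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str):
    """Assemble URL, headers and JSON body for a chat completion call.

    This kind of API usage is what the article counts as "AI programming"
    and thus as grounds for requiring technical competence.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return API_URL, headers, body

# Example: the request a hypothetical in-house helper tool might send.
url, headers, body = build_chat_request(
    "sk-...", "gpt-4o-mini",
    "Summarize this customer e-mail in two sentences: ...")
```

Sending the request (e.g. via `urllib.request` or the official client library) is omitted here on purpose; the point is that even this small amount of glue code makes a company a technical user of AI.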
Classification of the AI systems used
Art. 5 of the AI Regulation contains numerous prohibitions, i.e. purposes for which AI systems may not be used. For example, the subliminal influencing of persons by AI is prohibited, as is the evaluation or classification of natural persons or groups of persons over a certain period of time on the basis of their social behavior. Article 5 contains numerous further prohibitions.
To rule out the prohibited purposes for an AI system in use, the AI system must be examined and described. This can then be followed by a classification and a check against the provisions of the AI Regulation.
It should also be clarified whether a company is merely an operator of an AI system or in fact a provider. Providers of AI systems must fulfill far more obligations than operators.
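The examination described above can be supported by a simple inventory record per AI system. The following sketch is illustrative only: the field names, the `role` values and the two example prohibited purposes are assumptions loosely echoing Art. 5, not a complete or authoritative legal checklist.

```python
from dataclasses import dataclass

# Illustrative examples only; Art. 5 contains many more prohibitions
# and real legal review cannot be reduced to string matching.
PROHIBITED_PURPOSES = {
    "subliminal influencing of persons",
    "social scoring of natural persons",
}

@dataclass
class AISystemRecord:
    """Minimal inventory record for one AI system in use (hypothetical schema)."""
    name: str       # e.g. "ChatGPT", "Outlook spam filter"
    vendor: str
    purpose: str    # intended use in the company
    role: str       # "operator" or "provider"
    notes: str = ""

    def purpose_is_prohibited(self) -> bool:
        """Naive first-pass check: is the stated purpose on the prohibited list?"""
        return self.purpose in PROHIBITED_PURPOSES

# Example: documenting a chatbot used for drafting e-mails.
record = AISystemRecord(name="ChatGPT", vendor="OpenAI",
                        purpose="drafting e-mails", role="operator")
assert not record.purpose_is_prohibited()
```

A record like this also captures the operator-versus-provider distinction mentioned above, so the additional provider obligations can be tracked per system.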
Recommendations
Companies should instruct their employees on how to work with AI systems and, in their own interest, issue directives. How companies go about this is up to them.
Onboarding is the instruction of employees. It can take place in writing or in the form of in-person or digital training. Onboarding serves to enable employees. Ultimately, AI is a means to an end for companies; it should therefore be ensured that AI systems actually benefit the company.
A directive, on the other hand, is more of a regulatory measure. It serves to keep employees' work with AI systems on orderly tracks. Not every use of AI is desirable or sensible.
For onboarding, general information enriched with targeted content is recommended. In concrete terms, this means:
- On-site training of employees, either at your premises or at a training center
- Initial training can also take place online, for example in the form of a webinar
- Creation of a guide that can be part of an AI organizational directive. The guide is intended to support employees. It can also be a living document, i.e. regularly updated; accordingly, an online offering should be considered.
The instructions for employees working with AI in the company include in particular:
- Rules on the permitted professional use of AI
- Rules on professional uses of AI that are not permitted
- Contact person (such as the data protection officer)
- The instruction can also be part of an AI organizational directive
An AI organizational directive as a complete document therefore contains:
- information for the onboarding of employees, and
- information for the instruction of employees.
Types of AI use
AI systems can be used for different tasks, and there are different kinds of AI systems, such as different types of AI models. The best known are language models ("chatbots"). There are also image generators, video generators, audio models, and more.
Companies should define the purposes for which employees are allowed to use which AI system.
In particular, it is important to understand that all results generated with AI must be checked before further use. It is not the AI system but the user of the AI results who is liable for them, initially and often ultimately.
AI-generated output can infringe the copyrights of third parties.

My name is Klaus Meffert. I hold a doctorate in computer science and have worked professionally and practically with information technology for over 30 years. I also work as an expert in IT and data protection. I achieve my results by considering both technology and law, which seems to me absolutely essential when it comes to digital data protection. My company, IT Logic GmbH, also offers consulting and the development of optimized and secure AI solutions.
