AI systems deliver unpredictable results. The problem cannot be solved for AI systems with a general purpose (ChatGPT), but it can be solved for company-owned AI systems with a specific purpose. A transparency obligation can be derived from the GDPR alone. Operators and providers of AI systems must fulfill additional obligations under the AI Act.
Introduction
How can you make an AI system transparent? The answer to this question for general AI systems is: not at all. This is because these general systems, including ChatGPT, work on the basis of neural networks. How this network works is well known. If you were to write down a formula describing the network, nobody would understand it, let alone be able to read it properly.
The GDPR prescribes in Article 5 the obligation of transparency when processing personal data. This applies to all AI systems that process personal data. These are all systems into which personal data have flowed during training or at user input (often via a prompt). This is a fact that the Hamburg Data Protection Officer has (only?) negated in a dangerous way.
In Art. 5 Sec. 1 lit. d GDPR it is required that data be factually accurate, i.e., correct. This applies to all personal data in AI systems. At the latest at the time of inference, i.e., when an AI system generates an output, this legal provision should be fulfilled.
The AI-Regulation (AI Act) defines obligations that are particularly incumbent on providers of AI systems. Special obligations are imposed for high-risk AI. This type of system should be an exception in practice.
Most companies that use AI systems are operators. For operators, there are far fewer obligations than for providers. An operator is a company or organization according to Art. 3 No. 4 AI-VO, if one “uses an AI system at their own responsibility.” Everything beyond that falls under the provider term (Art. 3 No. 3 AI-VO).
An idea for increasing the transparency and documentation of AI systems came to the author at a meeting of the AI expert group of the State Data Protection Commissioner of Lower Saxony, of which the author is a member. The author has also previously published a book on test-driven software development.
On the one hand, transparency is an external presentation of AI results. However, internal transparency, i.e. for the operator of an AI, is almost more important: How does the AI work? What results does it produce?
Proof of the correctness of AI outputs
In general, it is not possible to completely ensure that an AI only spends correctly. However, it is possible to come close. Before a suggestion is made in this regard, an example is given by the very good DEEPL translator (from Germany!), who uses AI himself and, just like any other AI system, sometimes makes mistakes:

DEEPL was asked to translate a text containing a monetary amount. DEEPL translated €1,050.00 in such a way that the euro figure was replaced by a pound figure. This is obviously wrong. For anyone who wants to try it out for themselves: It depends on the overall text! This has been partially obscured in the screenshot above because it was semi-sensitive information. You will probably get a correct result if you only enter the last sentence in DEEPL. But if the preamble text is different, the error may occur. This alone shows how non-transparent AI systems work.
Errors can therefore not be avoided. How can you still fulfill your duty of transparency and ensure the correctness of AI outputs as much as possible?
The answer is: Through test cases.
Test cases are pairs of actual inputs and target outputs. A test case consists of an actual input and an actual output that is accepted as good. The AI Regulation (AI-VO) has apparently even taken this into account:
This is because Art. 3 No. 53 of the AI Regulation defines the term “plan for a real-life test” as “a document describing the objectives, methodology, geographical, population and time scope, monitoring, organization and conduct of a real-life test”.
The No. 56 of the same article defines AI competence as “the skills, knowledge and understanding that enable providers, operators and those affected, taking into account their respective rights and obligations within this regulation, to use AI systems competently and become aware of the chances and risks of AI and possible damage it can cause
With the help of test cases, operators (and even more so providers) can become more aware of the opportunities and risks of the AI they operate or offer.
These Deepfakes can also be created in accordance with Article 3 AI-VO No. 60. Here it is about a “content of images, sound or video generated or manipulated by AI that resembles real persons, objects, locations, institutions or events and would falsely appear as authentic or true”. When using image models, one would ensure that inputs targeting real people and aiming to defame them are best possible identified and prevented. In any case, it can already be documented with the help of test cases where (still) the weaknesses of the AI system lie.
Test cases are an excellent means of documenting the quality of AI systems. They can also make such systems more transparent and highlight their remaining weaknesses.
The obligation for providers of non-high-risk AI systems to evaluate their own system, as set out in Art. 6 (4) of the AI Regulation, can also take place via test cases.
The risk management system referred to in Art. 9 (1) of the AI Regulation can be underpinned very well with the help of test cases.
Numerous other provisions in the AI Act impose obligations on providers and operators of AI systems that can be served by documented test cases. These include:
- Art. 11 (1) AI Regulation: technical documentation of a high-risk AI system
- Art. 17 AI-VO: Quality management
- Art. 53 AI Regulation as a whole: Obligations for providers of general purpose AI models
- Art. 91 and 101 of the AI Regulation can have negative consequences for AI providers if their documentation does not appear to be sufficient.




My name is Klaus Meffert. I have a doctorate in computer science and have been working professionally and practically with information technology for over 30 years. I also work as an expert in IT & data protection. I achieve my results by looking at technology and law. This seems absolutely essential to me when it comes to digital data protection. My company, IT Logic GmbH, also offers consulting and development of optimized and secure AI solutions.
