Drücke „Enter”, um zum Inhalt zu springen.
Hinweis zu diesem Datenschutz-Blog:
Anscheinend verwenden Sie einen Werbeblocker wie uBlock Origin oder Ghostery, oder einen Browser, der bestimmte Dienste blockiert.
Leider wird dadurch auch der Dienst von VG Wort blockiert. Online-Autoren haben einen gesetzlichen Anspruch auf eine Vergütung, wenn ihre Beiträge oft genug aufgerufen wurden. Um dies zu messen, muss vom Autor ein Dienst der VG Wort eingebunden werden. Ohne diesen Dienst geht der gesetzliche Anspruch für den Autor verloren.

Ich wäre Ihnen sehr verbunden, wenn Sie sich bei der VG Wort darüber beschweren, dass deren Dienst anscheinend so ausgeprägt ist, dass er von manchen als blockierungswürdig eingestuft wird. Dies führt ggf. dazu, dass ich Beiträge kostenpflichtig gestalten muss.

Durch Klick auf folgenden Button wird eine Mailvorlage geladen, die Sie inhaltlich gerne anpassen und an die VG Wort abschicken können.

Nachricht an VG WortMailtext anzeigen

Betreff: Datenschutzprobleme mit dem VG Wort Dienst(METIS)
Guten Tag,

als Besucher des Datenschutz-Blogs Dr. DSGVO ist mir aufgefallen, dass der VG Wort Dienst durch datenschutzfreundliche Browser (Brave, Mullvad...) sowie Werbeblocker (uBlock, Ghostery...) blockiert wird.
Damit gehen dem Autor der Online-Texte Einnahmen verloren, die ihm aber gesetzlich zustehen.

Bitte beheben Sie dieses Problem!

Diese Nachricht wurde von mir persönlich abgeschickt und lediglich aus einer Vorlage generiert.
Wenn der Klick auf den Button keine Mail öffnet, schreiben Sie bitte eine Mail an info@vgwort.de und weisen darauf hin, dass der VG Wort Dienst von datenschutzfreundlichen Browser blockiert wird und dass Online Autoren daher die gesetzlich garantierten Einnahmen verloren gehen.
Vielen Dank,

Ihr Klaus Meffert - Dr. DSGVO Datenschutz-Blog.

PS: Wenn Sie meine Beiträge oder meinen Online Website-Check gut finden, freue ich mich auch über Ihre Spende.
Ausprobieren Online Webseiten-Check sofort das Ergebnis sehen

AI systems and AI Act: ensuring transparency and correctness

0
Dr. DSGVO Newsletter detected: Extended functionality available
More articles · Website-Checks · Live Offline-AI
📄 Article as PDF (only for newsletter subscribers)
🔒 Premium-Funktion
Der aktuelle Beitrag kann in PDF-Form angesehen und heruntergeladen werden

📊 Download freischalten
Der Download ist nur für Abonnenten des Dr. DSGVO-Newsletters möglich

AI systems deliver unpredictable results. The problem cannot be solved for AI systems with a general purpose (ChatGPT), but it can be solved for company-owned AI systems with a specific purpose. A transparency obligation can be derived from the GDPR alone. Operators and providers of AI systems must fulfill additional obligations under the AI Act.

Introduction

How can you make an AI system transparent? The answer to this question for general AI systems is: not at all. This is because these general systems, including ChatGPT, work on the basis of neural networks. How this network works is well known. If you were to write down a formula describing the network, nobody would understand it, let alone be able to read it properly.

The GDPR prescribes in Article 5 the obligation of transparency when processing personal data. This applies to all AI systems that process personal data. These are all systems into which personal data have flowed during training or at user input (often via a prompt). This is a fact that the Hamburg Data Protection Officer has (only?) negated in a dangerous way.

In Art. 5 Sec. 1 lit. d GDPR it is required that data be factually accurate, i.e., correct. This applies to all personal data in AI systems. At the latest at the time of inference, i.e., when an AI system generates an output, this legal provision should be fulfilled.

The AI-Regulation (AI Act) defines obligations that are particularly incumbent on providers of AI systems. Special obligations are imposed for high-risk AI. This type of system should be an exception in practice.

Most companies that use AI systems are operators. For operators, there are far fewer obligations than for providers. An operator is a company or organization according to Art. 3 No. 4 AI-VO, if one “uses an AI system at their own responsibility.” Everything beyond that falls under the provider term (Art. 3 No. 3 AI-VO).

An idea for increasing the transparency and documentation of AI systems came to the author at a meeting of the AI expert group of the State Data Protection Commissioner of Lower Saxony, of which the author is a member. The author has also previously published a book on test-driven software development.

On the one hand, transparency is an external presentation of AI results. However, internal transparency, i.e. for the operator of an AI, is almost more important: How does the AI work? What results does it produce?

Proof of the correctness of AI outputs

In general, it is not possible to completely ensure that an AI only spends correctly. However, it is possible to come close. Before a suggestion is made in this regard, an example is given by the very good DEEPL translator (from Germany!), who uses AI himself and, just like any other AI system, sometimes makes mistakes:

DEEPL translation error, source: Klaus Meffert

DEEPL was asked to translate a text containing a monetary amount. DEEPL translated €1,050.00 in such a way that the euro figure was replaced by a pound figure. This is obviously wrong. For anyone who wants to try it out for themselves: It depends on the overall text! This has been partially obscured in the screenshot above because it was semi-sensitive information. You will probably get a correct result if you only enter the last sentence in DEEPL. But if the preamble text is different, the error may occur. This alone shows how non-transparent AI systems work.

Errors can therefore not be avoided. How can you still fulfill your duty of transparency and ensure the correctness of AI outputs as much as possible?

The answer is: Through test cases.

Test cases are pairs of actual inputs and target outputs. A test case consists of an actual input and an actual output that is accepted as good. The AI Regulation (AI-VO) has apparently even taken this into account:

This is because Art. 3 No. 53 of the AI Regulation defines the term “plan for a real-life test” as “a document describing the objectives, methodology, geographical, population and time scope, monitoring, organization and conduct of a real-life test”.

The No. 56 of the same article defines AI competence as “the skills, knowledge and understanding that enable providers, operators and those affected, taking into account their respective rights and obligations within this regulation, to use AI systems competently and become aware of the chances and risks of AI and possible damage it can cause

With the help of test cases, operators (and even more so providers) can become more aware of the opportunities and risks of the AI they operate or offer.

These Deepfakes can also be created in accordance with Article 3 AI-VO No. 60. Here it is about a “content of images, sound or video generated or manipulated by AI that resembles real persons, objects, locations, institutions or events and would falsely appear as authentic or true”. When using image models, one would ensure that inputs targeting real people and aiming to defame them are best possible identified and prevented. In any case, it can already be documented with the help of test cases where (still) the weaknesses of the AI system lie.

Test cases are an excellent means of documenting the quality of AI systems. They can also make such systems more transparent and highlight their remaining weaknesses.

The obligation for providers of non-high-risk AI systems to evaluate their own system, as set out in Art. 6 (4) of the AI Regulation, can also take place via test cases.

The risk management system referred to in Art. 9 (1) of the AI Regulation can be underpinned very well with the help of test cases.

Numerous other provisions in the AI Act impose obligations on providers and operators of AI systems that can be served by documented test cases. These include:

  • Art. 11 (1) AI Regulation: technical documentation of a high-risk AI system
  • Art. 17 AI-VO: Quality management
  • Art. 53 AI Regulation as a whole: Obligations for providers of general purpose AI models
  • Art. 91 and 101 of the AI Regulation can have negative consequences for AI providers if their documentation does not appear to be sufficient.

Read full article now via free Dr. GDPR newsletter.
More extras for subscribers:
Offline-AI · Free contingent+ for Website-Checks
Already a subscriber? Click on the link in the newsletter & refresh this page.
Subscribe to Newsletter
About the author on dr-dsgvo.de
My name is Klaus Meffert. I have a doctorate in computer science and have been working professionally and practically with information technology for over 30 years. I also work as an expert in IT & data protection. I achieve my results by looking at technology and law. This seems absolutely essential to me when it comes to digital data protection. My company, IT Logic GmbH, also offers consulting and development of optimized and secure AI solutions.

AI: Which language model is the best?