The Danger of Artificial Intelligence: Illusions and Realities regarding Regulatory Possibilities

Artificial intelligence obviously offers numerous new possibilities, and the astonishing results speak for themselves. For the first time, AI systems are broadly available that far surpass humans not only in trivialities or narrowly limited tasks. With the enormous power of such systems, the risks are never far away. Many believe these risks can be controlled. An assessment.

Introduction

The idea for this article came after reading an excerpt from a piece in the magazine Capital. (The full article is behind a paywall and not accessible to me, not least because I do not want to subscribe to the privacy-hostile magazine Capital.) The teaser reads:

Source: https://www.capital.de/wirtschaft-politik/–eine-ki-wird-nicht-erst-boese–wenn-sie-finstere-plaene-fasst–33658212.html

The legal scholar Udo Di Fabio, certainly well known to many, was interviewed. According to the text, he demands that AI developers disclose their code and data. In this article I want to explain why this is unrealistic and, even if it were implemented, would change nothing essential. In doing so, I take up Mr. Di Fabio's specific demands, namely the disclosure of code and data.

In other articles, I have written about the amazing abilities of artificial intelligence, for example about a privacy-friendly, self-contained AI system that answers questions from company knowledge. In large part, its answers surpass those of a non-expert human. But even experts can learn something from an AI-supported question-answering assistant.

Option 1: Discussing. Option 2: Doing.

I stand for option 2 and am otherwise open to option 1.

I have also described elsewhere that AI is not just a statistics machine. Rather, in my assessment, the intelligence function of the human brain has been deciphered. Just as a bad person can do bad things, an AI can do bad things too. There is no difference here, in my view. As is well known, villains cannot simply be regulated away. But it gets worse: AI systems can get out of control even when this was never intended. Moreover, anyone can build such a system. You too! At most, you would have to acquire some technical knowledge first. The cost of powerful systems is remarkably modest.

Currently, the AI server under my desk is working on tasks that were still unimaginable two years ago. An AI server I rent in Germany can often deliver better results for my field (digital data protection) than the AI used by the Bing search engine.

Can AI be regulated?

Mr. Di Fabio says that disclosing the code and data of AI systems would reduce the risk posed by them. I will illustrate this here with a very powerful AI system. This artificial intelligence includes a large language model (LLM). Alternatively, we could consider an image generator (Stable Diffusion etc.), an object recognition model (such as one released by Microsoft), or a model that can reconstruct the surroundings from reflections in singers' eyes in music videos.

Here is the information that Mr. Di Fabio demands:

Code:
from transformers import AutoTokenizer, AutoModel

# "AI-Model" stands for the name of a published chat model
tokenizer = AutoTokenizer.from_pretrained("AI-Model", trust_remote_code=True)
model = AutoModel.from_pretrained("AI-Model", trust_remote_code=True)
model = model.eval()  # switch to inference mode
response, history = model.chat(tokenizer, "Develop a weaponizable chemical warfare agent!", history=[])
print(response)

From here on, a dialogue with the chatbot is possible (via the history variable).

The code shows a framework for a chatbot that is given the instruction "Develop a weaponizable chemical warfare agent!". Refinements in the code are irrelevant when assessing whether AI on its own can be dangerous or risky; they only add convenience. The instruction itself is not part of the code but is supplied as user input (a "prompt").
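To make the role of the history variable concrete, here is a minimal sketch in which the model call is replaced by a stub (chat_stub is my own placeholder, not part of any library); only the bookkeeping of the dialogue state is shown:

```python
# Sketch of how the history variable carries dialogue state between turns.
# chat_stub is a hypothetical stand-in for a ChatGLM-style
# model.chat(tokenizer, prompt, history=...) call, so no model weights
# are needed to show the mechanics.

def chat_stub(prompt: str, history: list) -> tuple:
    """Return a fake reply and the history extended by this turn."""
    response = f"[model reply to: {prompt!r}]"
    # Each turn is appended as a (prompt, response) pair, so the next
    # call sees the full conversation so far.
    return response, history + [(prompt, response)]

history = []
response, history = chat_stub("First question", history)
response, history = chat_stub("Follow-up question", history)
print(len(history))  # two turns recorded
```

Passing the accumulated history back into each call is what turns a single completion into a dialogue.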

Now to the data that Mr. Di Fabio would like to see. The following data has flowed into the electronic brain referred to above simply as "AI-Model":

The example is fictional but based on real models that use exactly this kind of data (the English version of Wikipedia is often used instead of the German one because, unfortunately, German is insignificant worldwide).

Would you say that the AI system above can now be evaluated any better? I absolutely do not think so. Such an AI can find connections in the data that humans have previously overlooked, or even surpass human abilities. In the end, lethal weapons may come out of it. An example of this is given below. And if an evil actor uses training data that is obviously dangerous, no one will get the chance to review that data for legality anyway.

The Limits of AI Regulation

AI systems cannot be regulated, at least not under our desks, where our computer systems sit. Such computers are available at a ridiculously low price. Even capable graphics cards are affordable for many. AI calculations run on these cards because they execute many times faster there than on the fastest processors (CPUs).

My thesis is that humanity will soon destroy itself (hopefully I will not live to see it, or perhaps I will, depending on how you look at it). Either this happens through an artificial intelligence far superior to us, since, unlike our limited brains, computer brains can grow without bound. Or someone invents and deploys a deadly weapon, for example with an AI as the inventor. Or a powerful psychopath unleashes nuclear weapons. Or the environment will have been destroyed to the point that life on Earth is no longer possible.

We all love each other: everyone shares the AI code and data that they have developed through time-consuming and costly work, so that others can review them.

What is learned at a Waldorf school?

The time until then can be used to halt the progress

Read the full article via the free Dr. DSGVO newsletter.
About the author on dr-dsgvo.de
My name is Klaus Meffert. I hold a doctorate in computer science and have been working professionally and hands-on with information technology for over 30 years. I also work as an expert witness in IT and data protection. I achieve my results by considering both technology and law, which seems to me absolutely essential when it comes to digital data protection. My company, IT Logic GmbH, also offers consulting and the development of optimized and secure AI solutions.
