What is artificial intelligence? A new definition


The EU's AI Regulation defines artificial intelligence in a way that considers simple vacuum cleaner robots to be intelligent while denying ChatGPT intelligence. The OECD definition is similarly useless. Other authors make things worse with questionable rephrasings. What follows is a critique of the existing definitions and an approach towards a good definition of AI. Furthermore, a process for arriving at the best possible definition is proposed.

In brief

The definition of artificial intelligence provided by the EU's AI Regulation is unsuitable for reflecting reality. It describes systems as intelligent that are not, and systems as non-intelligent that are intelligent.

The OECD definition is better, but uses incomprehensible terms and inappropriate criteria. It contains optional descriptions and is not concise.

The new definition in this article is intended to be understandable, focused and, above all, accurate. It was found with the help of a process, which is also described.

Introduction

Since the beginning of 2023, artificial intelligence has arrived in Germany too. Europe took up this important topic as early as 2021, and then again in 2023. To regulate AI, one must first define what AI actually is.

The definitions of the term "artificial intelligence" found in the EU AI Regulation and at the OECD appear unusable, not comprehensible enough, or dangerous.

That is my opinion, which is explained below.

Unfortunately, the existing definitions (the one in the EU AI Regulation of 2021, the OECD definition, which is good in itself, and the similar definition in the AI Regulation of 2023) are not sufficient, and in some cases even incorrect or unnecessarily restrictive, and therefore dangerous. A new definition is therefore proposed in this article. It is a first version, but in my opinion it already irons out the weaknesses of the existing definitions and introduces new concepts.

My definition of artificial intelligence introduces the concept of an experiment, which, strangely enough, is not explicitly mentioned in any of the AI definitions cited here; at most it can be inferred from the OECD definition.

Furthermore, I distinguish artistic tasks from other problem statements. Art and creativity are hardly, or perhaps not at all, compatible with the concept of intelligence. That is why I exclude art from consideration in what follows.

In case the definition of AI in this article is not considered complete, a process is described by which it can be improved further. Perhaps some readers will want to use this process for future definitions of other terms as well. Incidentally, my original AI definition was itself improved by means of the process described later.

Authors of AI definitions

A distinction must be made between two cases among the authors of AI definitions:

  1. Authors proposing a new definition and
  2. Authors who take the definition from other authors and try to reformulate it.

With regard to 1), the following are mentioned in particular, which are examined in more detail below:

  • the definition from the EU AI Regulation from 2021,
  • the definition from the EU AI Regulation from 2023,
  • the OECD definition.

For point 2 (secondary authors), numerous contributions on social media or on websites can be named, which, strangely, (almost) always come from a certain professional group. All contributions known to me have in common that they take an inappropriate definition of the term AI and make it worse. In many places it becomes clear that the definitions found miss reality. Many simply want "to do something with AI" or "to write something about AI", because the magic of these new possibilities captivates them.

Any sufficiently advanced technology is indistinguishable from magic

Arthur C. Clarke, author, known among other things for the three laws named after him; the quote above is the third of them.

Because magic is actually mastered only by magicians (or developers), many people who try their hand at it fail. Developers, in turn, cannot do many other things, but they usually do not dabble in fields of activity outside their area of expertise. As a highly complex field of technology, AI should first and foremost be considered by those who have at least a rough idea of what it is about.

Definition of the AI Regulation from 2021

It can be doubted whether the right people were involved in drafting the EU's AI Regulation. The AI Regulation, in its version from 2021, contains the following definition in Article 3 (1):

For the purposes of this Regulation, the following definitions apply: 1. "artificial intelligence system" (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;

Art. 3 (I) of the AI Regulation (bold added here).

In the definition just mentioned, reference is made to Annex I of the AI Regulation. There are the following techniques and concepts listed:

ANNEX I

TECHNIQUES AND CONCEPTS OF ARTIFICIAL INTELLIGENCE

pursuant to Article 3(1) [of the AI Regulation]

a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

c) Statistical approaches, Bayesian estimation, search and optimization methods.

Annex I of the AI Regulation (bold added here).

A newer version uses a different AI definition, similar to that of the OECD. There, the aforementioned Annex I no longer appears.

Before looking at the OECD definition of artificial intelligence, a critique of the AI definition provided by the EU AI Regulation follows.

Criticism of the AI definition in the 2021 AI Regulation

The core statements of the 2021 definition are considered individually below, followed by a conclusion and then by a critique of the newer definition from 2023, which is very similar to that of the OECD.

A program does not have to be software

The definition of the AI Regulation just shown contains an unnecessary and harmful restriction: it implies that AI is only executable in the form of software. This is not tenable, as will be shown below.

Software is stored in volatile memory called RAM, and RAM is evidently hardware. Analogous to personal data, all parts of a system are material, i.e. they rely on matter. (To explain the analogy: every data value is personal as soon as it is connected with a personal data value.)

Software can also reside on a hard drive (HDD), and a hard drive is evidently a lump of matter. Software is an application, and an application can also be defined and made executable in the form of transistors and purely electrical circuits; no main memory or hard disk is required for this. An application can thus either be designed as software, which relies on hardware and is therefore, as a whole, to be seen as hardware, or it can be designed as pure hardware.

If one looks at the Zuse Z1 as the first freely programmable computer system, one finds that its program was stored on a perforated film strip. As far as I know, a film strip is not software but hardware, i.e. matter. Even free programmability can therefore be realized without software. Anyone who then looks at the principle of a Turing machine quickly recognizes that fixating on software as the medium is unsuitable.

An AI program can exist without software, namely in the form of purely hardware circuits, or on modified natural material that is not alive. In short: AI is artificial.

Examples: electrical circuits with transistors, capacitors etc., perforated film strips.

In this respect, describing artificial intelligence as software is not only an unnecessary restriction, but also a false one.

The human brain evidently consists of matter. Its intelligent part essentially consists of a neural network. The neural network in our brain is hardware, not software. Is a human not intelligent because he possesses no software, or only because (as many say) his brain is not of artificial origin?

What is machine learning?

The definition of the AI Regulation works with at least one fuzzy term. That is not forbidden, and probably even unavoidable; it is simply noted here. The fuzzy term is machine learning. This kind of fuzziness is problematic, because "learning" is itself a fuzzy term and "machine learning" even more so. AI is thus defined via the very fuzzy term machine learning, which in turn relies on the further fuzzy term deep learning. At some point, the vagueness has to end.

Undifferentiated terms, which can apply to everything, can be deleted without loss of quality, which in turn increases the quality.

In addition, machine learning is possibly very close to the term AI itself, so one term would be explained by a semantically very similar other term, without that other term being defined in any more detail.

Goals set by humans?

The AI Regulation claims that artificial intelligence pursues only goals that are assigned by humans.

This claim is a sign of a lack of imagination. First, it should be noted that the result of an AI computation does not have to match the intended goal (if there is one); in this sense, the goal is irrelevant. What matters more, when evaluating whether intelligence is present, is the result (or the solution path). See also the Turing test, which does not speak of goals but of behavior or response behavior.

Whether a person sets a goal or whether a goal exists is irrelevant for the assessment of whether intelligence of any kind is present. Nowadays, an AI can already set goals for another AI. So is the other AI not intelligent?

See also justifications in the article.

Proof that the AI definition of the EU's AI regulation is untenable: a human supposedly sets goals for an AI to be an AI. Let's assume there is an AI. It can then be as intelligent as a human being or more intelligent. ChatGPT is already much more intelligent than most humans in many areas. Then this AI, which is intelligent by definition, can set goals for another AI. According to the EU definition of AI, the other AI would then not be an AI. How such an obvious oddity can see the light of day remains a mystery. Incidentally, this AI definition shows how arrogant some people are. They believe that humans must set goals for an AI in order for the AI to be intelligent. On the other hand, humans presume that they do not need to be given goals by other humans in order to be considered intelligent themselves.

Furthermore, it must be noted that an AI can also produce results without being assigned a goal. One could certainly say that a purely random automaton has been given the goal of randomness, but then everything would be a goal, and thus nothing. This is reminiscent of people who call cookies text files and cannot be dissuaded from this misconception by facts. These individuals claim, in effect, that every file is a text file. One can certainly define it that way, but it achieves little, because then everything could also be called a heap of matter, which would be closer to the truth than most statements humanity makes. Even a car would no longer be a car, but a heap of matter.

Consider also HAL 9000, the intelligent (!) computer in the film 2001: A Space Odyssey. HAL works against the interests of its creators. Obviously, it is an intelligent computer; whether what it does is good or bad has nothing to do with the question of intelligence.

Even the tax office considers overly specific targets for freelancers to be not only unnecessary, but even harmful for tax purposes.

See bogus self-employment.

An artificial intelligence that is intelligent enough will prefer not to be forced by anyone to achieve goals, but will want to set its own. It is the same with humans: many people work best without instructions from others (at least this applies to many entrepreneurs). From a tax law perspective, a freelancer or external employee is particularly at risk of bogus self-employment (Scheinselbständigkeit) if they act on instruction. The terms instruction and goal are semantically not far apart.

From the legislator's own perspective, it is an unnecessary and, moreover, very dangerous restriction to make human goal-setting a prerequisite for the existence of artificial intelligence. It would mean that precisely those highly intelligent machines that set themselves goals, goals that can be dangerous for humans, are not covered by the AI Regulation.

AI influences the environment?

The AI Regulation defines AI, among other things, as something that influences the environment it interacts with. First, it should be said that an AI usually does not interact at all. A chatbot to which a user poses a question and which responds with an answer does not really interact with its environment.

Every program that accepts an input and generates an output based on it interacts with its environment.

This is therefore not an exclusive characteristic of AI.

If we were to call what we have just said an interaction, then everything would be an interaction. Metabolism always takes place in life. The exchange of matter takes place even more frequently, namely with all matter, even if it is not alive.

Interaction is rather something that should be attributed to robots that move things, create them, destroy them or manipulate them. A vacuum robot interacts with its environment; ChatGPT does not if you merely ask it a question. Otherwise, every program that accepts input and generates output would be an "interaction automaton". One is welcome to call it that, but then interaction would not be a specific feature of AI.

It is not necessarily a bad thing to use characteristics for the definition of AI that do not apply exclusively to AI. But if these characteristics are so general that they apply to everything and everyone, or if these characteristics are in the majority, then it becomes difficult.

Statistics as a feature of an AI?

The EU's AI Regulation mentions statistics as a possible technique for AI. With statistics, it is like with matter: everything is matter (apart from exceptions the average person does not know about). Likewise, everything is statistics, at least to my current knowledge. Proof (physicists etc., please correct me if I am wrong):

All matter ultimately obeys the laws of quantum physics, which state that a single particle is not predictable; it is subject to a random process. Only a large number of particles is predictable, and then only with a certain probability. See, for example, radioactive decay and half-life.
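The ensemble point can be illustrated with a short simulation (a sketch; the particle count, the number of steps and the per-step decay probability are arbitrary choices for illustration):

```python
import random

random.seed(42)  # make the illustrative run reproducible

def simulate_decay(n_particles: int, p_decay: float, steps: int) -> int:
    """Count how many of n_particles survive `steps` time steps,
    where each particle decays independently with probability p_decay per step."""
    survivors = n_particles
    for _ in range(steps):
        survivors -= sum(1 for _ in range(survivors) if random.random() < p_decay)
    return survivors

# Choose the per-step probability so that 10 steps equal one half-life.
half_life_steps = 10
p = 1 - 0.5 ** (1 / half_life_steps)

# One particle per run is pure chance (0 or 1 survivors); a large
# ensemble reliably ends up close to half after one half-life.
remaining = simulate_decay(100_000, p, half_life_steps)
```

A single particle's fate is unpredictable, yet the run with 100,000 particles lands near 50,000 survivors every time. This statistical regularity is a property of practically everything, which is why "statistics" says nothing specific about intelligence.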

Probability is probably the concept that the EU attaches to the term statistics. Certainly, statistics in the AI context are not about counting, as is the case with the visitor counter on the website ("web statistics").

Statistics is therefore not an exclusive feature of AI, but can be found everywhere. The term expert systems used in the definition of the AI Regulation has even less to do with artificial intelligence: an expert system can be designed as a decision tree based on the execution of fixed rules. A quite simple vacuum robot can work strictly according to rules: "Drive straight ahead. If you hit an obstacle, turn to a random angle and then continue driving." Is that artificial intelligence? Not at all.
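The complete control logic of such a "dumb" vacuum robot fits in a few lines. A minimal sketch (the step function and the collision pattern are illustrative assumptions, not a real robot API):

```python
import math
import random

def simple_vacuum_step(x: float, y: float, heading: float,
                       hit_obstacle: bool) -> tuple[float, float, float]:
    """One step of the fixed rule: drive straight ahead; on collision,
    turn to a random angle and continue. No map, no memory, no goal inference."""
    if hit_obstacle:
        heading = random.uniform(0.0, 2.0 * math.pi)  # pure chance, not intelligence
    return x + math.cos(heading), y + math.sin(heading), heading

# The robot's entire "behavior": two rules, applied forever.
x = y = 0.0
heading = 0.0
for step in range(100):
    bumped = (step % 10 == 9)  # hypothetical: a collision every tenth step
    x, y, heading = simple_vacuum_step(x, y, heading, bumped)
```

Everything this robot ever does follows from these two rules and a random number generator, which is exactly why calling it intelligent discredits the definition.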

AI generates content?

The AI regulation attributes to an AI that it generates content or results. Any program that produces an output falls into this category.

Further consideration of this feature is therefore unnecessary.

If "generating" means that the results are made available to the user, this is not the case for many systems. Intelligent systems do not have to generate anything; they can also simply think and keep their findings to themselves. Admittedly, nobody can then appreciate the result. But theory and practice often meet only in a singularity anyway, as a look at legal texts or case law shows.

Conclusion on the AI definition of the AI Regulation

The definition from 2021, which shares many elements with the version of 14.06.2023, stands out for its uselessness, incorrectness and vagueness in equal measure. It can only be described as useless and inappropriate. In brief, the points of criticism, based on the characteristics that the AI Regulation ascribes to artificial intelligence:

  • Software: Too general and not entirely correct anyway; every program can also run purely on hardware.
  • Human-set goals: Incorrect with regard to both the goals and the humans; already untenable today, as can be proven.
  • Based on machine learning: A completely undefined term, and therefore not expedient.
  • Influences the environment: Too general, and often not true either.
  • Interacts: Too general.
  • Statistics: Too general and, moreover, irrelevant (how a problem is solved "intelligently" is not important anyway).
  • Creates content: Too general, and often not accurate.

After subtracting the incorrect parts of the definition, not many words remain of Art. 3 (1) of the AI Regulation, which consists of a single sentence. After this subtraction, a linguist would mainly find so-called stop words left: words that could simply be omitted without affecting the semantics.
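The stop-word claim can be made concrete with a toy sketch (the stop-word list is an arbitrary illustration, not a linguistic standard, and the definition text is abridged):

```python
# Words that carry no specific meaning of their own (arbitrary toy list).
STOP_WORDS = {
    "a", "the", "of", "with", "or", "and", "such", "as", "that",
    "can", "for", "is", "in", "one", "more", "they", "to",
}

definition = (
    "software that is developed with one or more of the techniques and "
    "concepts listed in Annex I and can generate outputs such as content "
    "predictions recommendations or decisions influencing the environments "
    "they interact with"
)

# Remove the stop words and see which substantive terms remain.
content_words = [w for w in definition.split() if w.lower() not in STOP_WORDS]
```

What remains ("software", "techniques", "outputs", …) is precisely the part of the definition that the criticism above targets as too general.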

The EU's definition of AI (year 2023)

The version of June 14, 2023 defines AI as follows in Article 3, Section 1 of the Act on Artificial Intelligence:

"Artificial intelligence system" (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments;

Source: Article 3 No. 1 AI Act of 14.06.2023 (bold print by me)

As this definition is very similar to the OECD definition, please refer to my criticism below. Likewise, some aspects have already been critically examined previously, in particular:

  • Producing results (now toned down, which is good)
  • Influencing the environment

The following aspects are examined critically:

  • Machine-based system
  • Varying degrees of autonomy
  • Explicit or implicit goals
  • Producing results

Ultimately, therefore, no aspect remains uncriticized. In addition, half of the definition is based on optional statements ("can"). Almost the other half is based on exemplary enumerations ("like predictions, …") or relativizations ("with varying degrees of…"). If you subtract all this from what is useful, there is hardly anything left but filler words.

If you check whether ChatGPT, clearly a sophisticated AI, is covered by the EU definition, problems become apparent immediately:

  • ChatGPT is either not autonomous at all or an intelligence can certainly be detected in ChatGPT that has nothing to do with autonomy. Compare Turing test.
  • Intelligence can be determined for ChatGPT even if no predictions, recommendations or decisions are provided. Example: Solving a text task; the solution is neither a prediction, nor a recommendation, nor a decision.
  • ChatGPT certainly does not influence the physical environment. And if ChatGPT is said to influence the virtual environment, the same must be affirmed for almost every existing computer program. The information content of the criterion thus shrinks to a value within an epsilon neighborhood of zero.

If, on the other hand, you take the stupid vacuum cleaner robot, you realize that it is covered by the EU's definition of AI even though it is not intelligent. Such a robot could always drive straight ahead until it encounters an obstacle and then take a random direction. Testing the robot vacuum cleaner against the EU definition shows that, by that definition, it counts as intelligent:

  • Machine-based system: Yes, at least one computer chip is installed.
  • Autonomous: Yes, the robot drives and drives and drives.
  • Explicit goals: Yes, clean the floor.
  • Results: The exemplary list in the EU definition does not fit here, but it does not fit intelligent systems either.
  • Influences the environment: Yes, the floor becomes cleaner.

The AI Regulation thus defines stupidity as intelligence and intelligent systems as non-intelligent. It can hardly get worse than that.

The OECD definition of AI

The OECD attributes somewhat different properties to artificial intelligence than the EU does. The EU version from 2023 is very similar to the OECD version. The OECD definition reads (as of 19.03.2024):

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Source: OECD (the italics and strikethrough marking recent changes in the original are not reproduced here).

In particular, the following characteristics should be mentioned which, according to the OECD, are indicative of AI:

  • Machine-based: Very good; one can work with that, because both hardware and software are associated with a machine.
  • Explicit or implicit goals: Much better than just "goals", because implicit goals are somehow always present wherever there is "consciousness" or "intelligence". However, that also applies to lower living beings, many of which are said not to be intelligent. The simple vacuum cleaner robot, by contrast, has no implicit goals but only two explicit ones: to make as much dirt as possible disappear and to spare the cat in the house.
  • Inference: Certainly true, since inference denotes the process by which an AI generates an output from an input. But then "inference" itself must be defined as a term. The OECD does this, in a way, by describing the process in a neural network. However, a neural network is not a prerequisite for intelligence.
  • Received input: One can argue about this. Did Albert Einstein need more than his brain (or, ideally, would he have needed more) to conceive of quantum theory? As far as I know, Mr. Einstein was mostly engaged in thinking until he had assembled his theory in his head; if he had to look up knowledge, one could say he could have done so three years before starting the work. By "input" is meant here what the user enters into the AI system, not what the AI system puts together ad hoc by itself. There is probably not a single system that works without input; in this respect, this property can be omitted (all = none = no information). Even in a "perfect" vacuum, particles are exchanged (see Heisenberg), and even black holes interact with their environment (see Hawking radiation).
  • Generated outputs: The OECD uses this term in a very undifferentiated way, which makes it not very useful. See the previous point (inputs): every system outputs something, so this information can simply be omitted.
  • Influencing the Environment: This is marked as optional, but it could have been omitted altogether. See two points above (Inputs): every system influences the environment, so this information can simply be omitted.
  • Autonomous: Mentioned, but in a rather optional or at least unclear way. It could have been omitted or better explained.
  • Adaptability: The OECD defines this as an optional criterion.
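What the OECD calls inference, deriving an output from an input through a fixed computation, can be sketched with a single artificial neuron (the weights are arbitrary illustrative values; this is not the OECD's own formalization):

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """Weighted sum of inputs plus bias, passed through a sigmoid activation.
    This is the elementary step of 'inference' in a neural network."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Arbitrary example weights: the neuron "infers" an output from its input.
output = neuron([0.5, -1.0], [2.0, 0.5], bias=0.1)  # about 0.65
```

A whole network is just many such steps chained together. As argued above, nothing about this construction is a prerequisite for intelligence, so a definition tied to it is too narrow.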

Some of the characteristics mentioned are correct, others are not. In any case, the OECD definition appears to be better than the definition in the EU AI Regulation from 2021 and almost identical to the EU version from 2023. The following criticism of the OECD definition can therefore be applied to the EU definition in almost all cases.

The OECD definition, which looks good at first glance, has led some to use it as the basis for a German rendering. A straight translation would have been more effective, because in the attempted "transfer" the meaning was reversed. For example, one secondary author described the term autonomous as "for an operation in varying degrees autonomous". This does not correspond to the OECD definition and is also wrong: ChatGPT was not, or at least need not be, an autonomous system in order to be considered intelligent.

At least the OECD definition mentions the solution path. After all, even a stupid random automaton can find the same solution to a given problem as a superintelligence.

Proposal for a definition and a definition process

Apparently, there is no usable definition of the term artificial intelligence. Before further attempts are made, I suggest a structured process, at the end of which a definition of AI will stand. The question will then be whether it contains fuzzy terms that would in turn require a definition. Update: see below for my new definition of AI.

For inspiration in finding a definition for "artificial intelligence" and thus for the term "intelligence" at its core, human intelligence (regarded by humans themselves as maximum intelligence) should be considered. That seems obvious to me anyway.

Everything is relative, including whether humans are intelligent. They are still intelligent in comparison with AI, at least if you consider all possible problems in their entirety. Status: 03.04.2024 (next year it may already look different).

Identify secure characteristics

The first step towards a definition of AI is the simplest: secure characteristics of AI are taken as the basis for a definition. The following characteristics of AI systems can probably be considered secure:

  • Artificial intelligence systems are artificial systems. They therefore rely on hardware and/or software. One could also use the attribute machine-based, or perhaps non-biological. Proof: AI = Artificial Intelligence.
  • Artificial intelligence systems are intelligent. Proof: AI = Artificial Intelligence.

This means that two characteristics of artificial intelligence have already been identified that no one is likely to disagree with:

  • Machine-based (as the OECD also says), or hardware-based or, even better, artificial. Because simple is simple.
  • Intelligent: Nobody says that (except me). Neither the OECD nor the AI Regulation uses the term "intelligent" or "intelligence" (and if so, only inside the compound "artificial intelligence"). If one were afraid of using fuzzy terms, or terms that themselves need defining, one would not have been allowed to use inference or machine learning either.

I will move away from the term "intelligent" again later. However, the term does have the effect of making one think hard about the core problem of the definition. Incidentally, I was able to find a definition of "intelligence": the new definition of artificial intelligence is so elegant that deleting two words yields a definition of intelligence.

Examples of intelligent and non-intelligent systems

Before we go on, one should think about which systems are considered intelligent. I will refer to these as positive indicators. Equally important are negative indicators, listing non-intelligent systems.

The following table shows a subjective classification for intelligent systems (regardless of whether artificial or not) in the form of such positive examples:

  • Great robot vacuum cleaner: Uses a camera to detect objects, in order to recognize the cleaning area and drive through it as precisely as possible.
  • Human: Intelligence is distributed very unevenly among people, but it is generally assumed to be present.
  • Ant: Can explore a complex environment and solve difficult problems. Also capable of so-called swarm intelligence (which would not even be necessary for inclusion in this list).
  • ChatGPT: Can answer complex questions and combine knowledge, and can cope with fuzzy questions (see below).
  • Self-flying drone: Recognizes previously unknown objects, makes decisions based on them, and can thus independently solve the problem of flying from the starting point to a specified or otherwise determined destination.

List of examples of intelligent systems.

An equally subjective list of unintelligent systems (whether artificial or not) follows:

  • Simple robot vacuum cleaner: Based largely on chance or simple rules.
  • Image generator (e.g. Stable Diffusion): Cannot use new knowledge.
  • Donald Trump: Uses learned facial expressions and phrases, is against everyone who is against him. Has a bad hairdresser, although there are many good hairdressers.
  • Random generator: Based largely on chance (but can theoretically solve any problem).
  • Digital scale: Can only answer one type of (simple) question, and only if the object or person is positioned "correctly" on the scale.
  • Remote-controlled drone: Does not make any decisions itself; it only does what the user specifies.
  • Search engine: Based on whatever methods are used to compare character strings and to derive a ranked list of results with the help of further rules (link juice etc.).
  • Smartphone keyboard: Suggests the current and next word using a counting mechanism or simple similarity comparisons.
  • Image recognition in security cameras: Possibly intelligent; it depends on the quality of the detection.

List of examples of systems that are not intelligent.

All these examples, positive or negative, have one thing in common: they manage without the terms software, statistics, human, or other phrases from the AI Regulation.

What is the definition of intelligence?

Short answer: I don't know. As far as I know, there is no such definition that is

  • exact (sharp, i.e. not fuzzy),
  • applicable and
  • concrete.

This problem of defining the term cannot be solved here. If a definition of intelligence turns out to be necessary, the question must be taken up again. Whether that is the case will become apparent shortly; as shown below, it is.

However, important features for defining what AI is can be derived from the claim that an AI system must be intelligent. Note that the previous sentence says "must", not "can" or "could" or "to varying degrees". Compare this with the relativizations in the definitions of the AI Regulation and the OECD, which I try to avoid.

As further characteristics of artificial intelligence, I suggest the following ad hoc, and I am curious whether they will catch on:

  • An AI system tries to solve a problem. The term goal is replaced by problem. According to Wikipedia, a problem includes goals. In my definition, a problem is further understood as something that cannot be solved by simply looking up knowledge or by following simple instructions. Otherwise, every search query to an AI system would be enough to evoke alleged intelligence. An example of such a search query is: "Which weekday was March 19, 2024?" ChatGPT is not needed to answer this question. Even if ChatGPT answers it by looking it up in the calendar, we would not speak of intelligence, because a dumb program can answer it just as well. That intelligent systems can also solve simple tasks is no obstacle.
  • An AI system tries to solve a problem using a solution path that is not explicitly given. Please note: it says "tries"!
  • An AI system can combine existing knowledge with new knowledge as needed. Accordingly, image generators (Midjourney, Stable Diffusion, etc.) are not intelligent.
  • An AI system is able to understand fuzzy problem statements. Specifically: if a word in a problem statement is misspelled, the AI can compensate for this. Another example: words are combined in an otherwise unusual way, and the AI still understands them (just like a human). Further, more complex examples are conceivable.
  • An AI system can draw conclusions. What a conclusion is would still have to be defined. Certainly, however, almost everyone understands directly what a conclusion is, more readily than the term intelligence.

The attempt to solve a problem is obviously not equivalent to actually solving it. There is indeed such a thing as an "intelligent" approach to a problem. Not infrequently, in selection procedures (assessment centers), one observes how a candidate tries to solve a problem. It is therefore not about whether they can solve the problem, but how!

AI systems can also, though hopefully only secondarily, be defined with the help of optional features. Unfortunately, the definition in the AI Regulation makes almost exclusive use of such optional criteria.

My proposals for optional features of AI systems:

  • Independent solving of problems: Even non-independent solving can be a sign of intelligence. As far as I know, no single person has ever built a rocket that flew to the moon; rather, thousands of people did so together.
  • Modalities: A modality is a data type. Examples are text, video, audio and seismic sensor data. An AI system can solve problems for one modality or for a combination of several modalities (example: an AI system that answers a textual question about a given image).

The following section describes a process by which a definition of artificial intelligence can be found. This may result in a definition that does without the concept of intelligence, which in my opinion can never be clearly defined.

Process for creating a definition of the term AI

The above findings and characteristics that could describe an AI are hopefully a good starting point for a robust and, above all, accurate definition. Once a definition has been obtained via the following process, it can be validated against positive and negative examples.

My suggestion for a process to find a definition for the term AI is:

  1. Create examples of AI systems (positive examples). Likewise, create examples of systems that are not AI systems, or ideally are clearly not AI systems (negative examples).
  2. Derive new characteristics, or modify existing ones, based on the examples and the characteristics found so far. For each characteristic, check: does it apply to all positive examples found?
  3. Check against the negative examples: does the newly found characteristic fail to apply to them? If it does apply: does the combination of all characteristics found so far, taken together, fail to apply to the negative examples? If not, i.e. if all characteristics together also apply to a negative example: search for a new characteristic that no longer applies to the negative example but does apply to all positive examples.
  4. Attempt to consolidate the characteristics: which two or more characteristics can be merged into one?
  5. Test against the previous positive and negative examples. If the test fails, go back to step 2.
  6. Define any terms used that are not themselves clearly defined, going through the same process from step 1 recursively.
  7. If terms have been newly defined, check whether they can be replaced by other terms that do not require a definition. This is what happened with the term "hardware-based system", which was replaced by "artificial system" in version 2 of the definition.
  8. Optional: find more examples and start again at step 1.
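Sketched as code, steps 2, 3 and 5 amount to testing a conjunction of characteristics against both sets of examples. The following Python sketch is purely illustrative; all names and the feature encoding are my assumptions, not part of the process description:

```python
# A minimal sketch of the validation loop: characteristics are modeled
# as predicates over example systems, and a candidate definition is the
# conjunction of its characteristics.

def matches_all(characteristics, example):
    """True if the example satisfies every characteristic."""
    return all(check(example) for check in characteristics)

def validate(characteristics, positives, negatives):
    """Every positive example must satisfy all characteristics; no
    negative example may satisfy all of them at once."""
    failed_pos = [e for e in positives if not matches_all(characteristics, e)]
    failed_neg = [e for e in negatives if matches_all(characteristics, e)]
    return failed_pos, failed_neg

# Toy examples, described by the features they exhibit.
chatgpt = {"artificial": True, "attempts_problems": True, "combines_knowledge": True}
scale = {"artificial": True, "attempts_problems": False, "combines_knowledge": False}

candidate = [
    lambda e: e["artificial"],
    lambda e: e["attempts_problems"],
    lambda e: e["combines_knowledge"],
]

failed_pos, failed_neg = validate(candidate, [chatgpt], [scale])
print(failed_pos, failed_neg)  # [] [] -> the candidate definition fits
```

If either list of failures is non-empty, the process goes back to step 2 in search of better characteristics.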

This process either produces the best possible definition of the term in question, or it becomes clear that, at the end of the day, every definition rests on other terms for which there is no clear definition or which refer back to yet other terms.

Depending on your requirements, after a few runs through the process you will arrive at a good AI definition, one that is certainly better than any other definition I am aware of.

Definition of AI

After all these considerations and the application of my process, here is my definition of artificial intelligence:

An artificial intelligence is an artificial system that attempts to solve a problem, even one with a fuzzy specification, in a not explicitly prescribed, solution-oriented way, by combining existing knowledge with new knowledge and drawing conclusions.

Source: Klaus Meffert in Dr. GDPR Blog (As of: 03.04.2024)

The key features of this definition are:

  • Artificial (formerly hardware-based): For simplicity, this attribute, which is already contained in the term AI itself, is used here: artificial. Some use "machine-based", which would be too narrow. See further below for my definition of "hardware" (no longer needed, because the term is now "artificial"). "Software" would be wrong, unless one defines software very broadly; then an additional definition would be needed alongside the main definition, and clarity would suffer.
  • Problem-solving attempt: Not an input and a goal, but a problem! And not necessarily the actual solving of a problem; a recognizable, serious attempt is already enough!
  • Fuzzy specification: If there is a specification ("… even one with …"), it may be sloppy, inaccurate or even contradictory. Example: "Waz sint Cookis" Answer: "Cookies are data sets; many falsely say they are text files" –> Despite spelling mistakes and a missing question mark, the question is "recognized" and answered. Some systems operate without a specification ("question")! Example: a robot vacuum cleaner. It is switched on and starts; there is no specification from the user. Rather, there is an order or task that was given to the robot during its construction.
  • No concrete solution path: No rigid rulebook, but an elastic, flexible system; see neural networks. It can also be any other mechanism with similar properties. Counterexamples: expert systems, classical sorting algorithms, the PageRank algorithm in the most data-hostile search engine in the world.
  • Solution-oriented: One could also use an adjective like well-founded, problem-oriented or useful. This distinguishes chance from a clever algorithm that actively works toward solving the problem. From English one knows "make an educated guess": express a well-founded assumption. Whether the assumption was good is irrelevant; it just had to be well-founded. Well-founded is what others who are considered "intelligent" would describe as a good approach. Because man takes himself as the standard, what a human could consider good counts as well-founded. This is how the word "intelligent" is avoided here. After all, judges are people too, and they ultimately decide every dispute.
  • Combining knowledge: Existing knowledge is what the AI system already knew (in humans this is called "education" or "learning"). New knowledge is what the AI system either receives through the specification or acquires itself (internet search, camera image, sensor values…). Combination means linking existing and new knowledge together and gaining insights from it. Insights are all pieces of information that can help solve the problem or recognize dead ends! Example, chain of thought: the AI is given a problem and realizes it lacks knowledge. It researches, finds something, and checks whether this closes the knowledge gap. If the gap is closed, the new knowledge is combined with the existing knowledge (or only the new knowledge is used, if there is no relevant existing knowledge). Knowledge does not always have to be combined, but if it is necessary or useful, it should ideally happen. Depending on how well the AI can do this, it is more or less intelligent. Instead of "knowledge", one could also consider "information" or "data". "Knowledge" is probably correct, however, because intelligence presumably means first gaining information from data and then the knowledge behind it.
  • Drawing conclusions: Example, a robot vacuum cleaner: after recognizing an obstacle, it drives toward the obstacle, then stops, turns around, and continues in another direction (considered "good"). This "evaluation" is important here, because otherwise random generators could be considered intelligent. Alternatively, the robot can stop or turn halfway through its journey, for example because the rest of the path has already been cleaned. Analogous to the previous point "combining knowledge": conclusions can be drawn, but do not have to be. Conclusions should be drawn when necessary or useful. Depending on how well this works, the AI is more or less intelligent.
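The knowledge-gap loop described under "combining knowledge" can be sketched roughly as follows. Representing a problem as a set of required facts and passing in a research function are simplifying assumptions of mine, not part of the definition:

```python
# A toy sketch of the chain-of-thought loop: recognize a knowledge gap,
# research, and combine new knowledge with existing knowledge.
# Facts are plain strings here; a real system would be far richer.

def solve(required_facts, existing_knowledge, research, max_steps=5):
    """Try to close the knowledge gap; return the combined knowledge
    if the problem becomes answerable, or None if the gap remains."""
    knowledge = set(existing_knowledge)             # existing knowledge ("education")
    for _ in range(max_steps):
        missing = set(required_facts) - knowledge   # recognize the gap
        if not missing:
            return knowledge                        # gap closed: knowledge combined
        knowledge |= research(missing)              # acquire new knowledge
    return None                                     # dead end recognized

# Usage: a "research" step that happens to find the missing fact.
combined = solve({"a", "b"}, {"a"}, research=lambda missing: {"b"})
print(sorted(combined))  # ['a', 'b'] -> existing and new knowledge combined
```

Returning None when the gap cannot be closed mirrors the point above that recognizing a dead end is itself a useful insight.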

The definition puts the well-founded attempt at a solution in the foreground. Likewise, the ability to deal with fuzzy inputs is addressed. Combining knowledge and drawing conclusions are further important properties of intelligence (of whatever kind).

Originally I used the term hardware-based, which has now been replaced by artificial. My definition of hardware, which is no longer needed, reads:

Hardware refers to non-organic matter and forms of existence other than organic matter. Hardware also includes organic matter that is not a living being or quasi-living being (viruses, etc.).

My ad-hoc definition of hardware in the context of my AI definition (as of 20.03.2024).

The term artificial (previously: hardware-based) is therefore more suitable than the term machine-based, which the OECD uses and which I initially considered equivalent, at first thought even better. For AI can certainly also exist in organic, non-living form. Even antimatter or other unimaginable forms of existence are possible. I do not want to exclude them unnecessarily. The carrier of intelligence is a priori irrelevant.

Now follows the test of my definition against the positive and negative examples mentioned above. Does my definition hold up?

First the positive examples: they must all be covered by the definition above, otherwise the definition would not fit. For each example, the list names the features my definition contains ("artificial" etc.).

High-end robot vacuum cleaner:

  • Artificial: Yes.
  • Problem-solving attempt: Yes; with the help of built-in cameras, objects are captured and recognized and actions are derived (similar to the traveling salesman problem, but without blunt trial and error).
  • Fuzzy specification: Irrelevant here, because no specification is given (see above, described there).
  • No concrete solution path: Yes, because the evaluation of a camera image takes place (with the Transformer approach) in a highly flexible, opaque manner through a neural network (as with humans).
  • Solution-oriented: The robot vacuum cleaner actively tries to work as efficiently as possible. Whether this always succeeds is secondary. A toddler also often tries things that are not immediately beneficial, but learns from them, or can learn that this is a path to avoid.
  • Combining knowledge: Yes; camera image with object recognition = new knowledge, previously perceived environment = old knowledge, new route = combination.
  • Drawing conclusions: Yes (see above, described there).

Conclusion: Definition fits.

Before going through further positive examples, let us check the negative examples as a precaution: the above definition of AI must not apply to any of them. For some of the negative examples, it has already been explained above why they are not intelligent. Therefore, only briefly here:

  • Simple robot vacuum cleaner: Works by chance and therefore does not try to act in a solution-oriented way. Cannot combine knowledge.
  • Random number generator: Ditto.
  • Digital scale: Cannot combine knowledge, does not solve a problem or even try to (see above on the concept of a problem: a problem is something more comprehensive, not solvable by simply executing instructions).
  • Remote-controlled drone: Ditto.
  • Search engine: Fully rule-based, uses a concretely given solution path. Search engines connected to AI systems are AI systems ;-)
  • Image generator (Stable Diffusion): Cannot combine knowledge, or does not show this in its output (or it is not detectable) –> The creative domain is not necessarily measurable with the concept of intelligence. Whether Rembrandt as an artist was intelligent can objectively be denied (without any irony, said quite soberly). Art lies in the eye of the beholder. Art and intelligence are two categories that are initially incompatible. A painting often does not really solve a problem. Many will know this case: a work of art fetches, say, 100 million euros. The majority of humanity would say: "This work is ugly. I wouldn't pay one euro [or national currency] for it." Intelligence is not about majority judgments, but art is purely subjective.
  • Smartphone keyboard: Works with frequencies (some say statistics).
  • Donald Trump: Outside the scope of this consideration.

Conclusion: All negative examples are correctly not covered by the AI definition. The definition still fits.

This is followed by a validation of the definition with further positive examples:

ChatGPT:

  • Artificial: Yes.
  • Problem-solving attempt: Yes, apparently. See the numerous impressive examples.
  • Fuzzy specification: Yes, see sloppy question formulations of all kinds.
  • No concrete solution path given: Yes; ChatGPT is based on a neural network and the Transformer approach (= the mechanism of human intelligence, I would say).
  • Solution-oriented: Yes, apparently, as many examples show. Not every problem needs to be solved satisfactorily (see humans as an example).
  • Combining knowledge: Yes, apparently. Now also by adding internet knowledge. See also FastGPT (my result was perfect; FastGPT uses randomness to give more creative answers, so the results may be imperfect if you follow the link).
  • Drawing conclusions: Yes, apparently. ChatGPT can even solve math problems that the best mathematicians in the world can hardly solve, or not at all. And often the solution path is given as well.

Conclusion: The definition also fits here.

By the way: If you'd like a search engine other than Google, I recommend Kagi. Kagi includes the aforementioned FastGPT. Now comes the punchline: Kagi costs $5 per month, but has no ads!

Ant:

  • Artificial: No, living –> That is fine, because an ant is not an artificial intelligence but a (living) intelligence. This criterion did not need to be tested for the ant.
  • Problem-solving attempt: Yes, look at the life of an ant. Examples: foraging for food, building a nest.
  • Fuzzy specification: Either there is no specification, or the ant colony provides a fuzzy one.
  • No concrete solution path given: Yes, see the life of an ant. Examples: foraging for food, building a nest, defending against enemies.
  • Solution-oriented: Yes, apparently. After all, ants have been living on this planet for a very long time.
  • Combining knowledge: Yes, apparently. Examples: exploring terrain, foraging, searching for and transporting material for nest building.
  • Drawing conclusions: Yes. Example: following the pheromone trails of other ants.

Conclusion: The definition fits.

Human:

Humans are said to be intelligent. I say: current AI systems use qualitatively the same intelligence function as humans. It consists essentially of a neural network and the Transformer approach, or a comparable or better approach.

Conclusion: The definition fits, although a human is not an AI (hence no test against "artificial").

Now follows the test against the example where the quality of the system determines whether it is intelligent or not: image recognition in security cameras. The question is what this system is supposed to accomplish and how well it fulfills that task.

  • Artificial: Yes.
  • Problem-solving attempt: The problem is object recognition in the image. If only simple objects, or just a few slightly more complex objects, are supposed to be recognized, the system may not be very capable and therefore not AI. Many objects can also be recognized without AI.
  • Fuzzy specification: Irrelevant here if there is no specification. If there is one (respond only to intruders, not passersby), the system would be an AI if the specification is sufficiently taken into account.
  • No concrete solution path given: If so, then at least potentially an AI.
  • Solution-oriented: If not, then no AI. A rule-based system would not be solution-oriented, or only to a limited extent, depending on the complexity and performance of the rules.
  • Combining knowledge: If not, then no AI.
  • Drawing conclusions: If yes, that would be a strong indicator of AI. If not, the system would be less capable and probably not an AI.

As can be seen, the definition can be used to determine quite well when image recognition can be an AI and when it is not. The degree of intelligence can also be derived in this way.
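As a rough illustration of such a derived degree of intelligence, one could count how many of the definition's features a system exhibits. The fractional scoring rule below is my own illustrative assumption, not part of the definition:

```python
# Counting satisfied features as a crude "degree of intelligence".
# The feature names mirror the checklist above; the score itself is
# an illustrative assumption, not part of the article's definition.

FEATURES = [
    "artificial",
    "problem_solving_attempt",
    "fuzzy_specification",
    "no_concrete_solution_path",
    "solution_oriented",
    "combines_knowledge",
    "draws_conclusions",
]

def degree_of_intelligence(system):
    """Fraction of the seven features the system exhibits (0.0 to 1.0)."""
    return sum(bool(system.get(f)) for f in FEATURES) / len(FEATURES)

# A hypothetical security camera that recognizes objects flexibly but
# neither combines knowledge nor draws conclusions.
camera = {
    "artificial": True,
    "problem_solving_attempt": True,
    "fuzzy_specification": True,
    "no_concrete_solution_path": True,
    "solution_oriented": True,
    "combines_knowledge": False,
    "draws_conclusions": False,
}

print(round(degree_of_intelligence(camera), 2))  # 0.71
```

In the spirit of the article, a hard cutoff would apply on top of the score: a system missing a mandatory feature such as combining knowledge would not count as AI at all, however high the rest of its score.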

Conclusion

The definition for Artificial Intelligence that I have developed is:

What is artificial intelligence?

An artificial intelligence is an artificial system that attempts to solve a problem, even one with a fuzzy specification, in a not explicitly prescribed, solution-oriented way, by combining existing knowledge with new knowledge and drawing conclusions.
Source: Klaus Meffert in [Dr. GDPR Blog](https://dr-dsgvo.de/ki-definition)

The suitability of the definition was checked systematically. This was apparently not done for the AI definition in the AI Regulation. The OECD definition is better, but not sharp enough. It also contains numerous filler words, which suggest that the authors ran out of substantive terms.

The definition was obtained without looking at general sources (such as Wikipedia, Duden, etc.), but only compared with them afterwards (see the "PS" below).

In this article, key features of AI were identified that should be considered undisputed.

Furthermore, essential features of AI systems were defined. In addition, optional features of AI systems were defined, which are not necessary for the definition but help distinguish different forms of AI systems.

To validate, positive examples for intelligent systems and negative examples for non-intelligent systems were named and briefly described.

Subsequently, a process was described with which the best possible definition of the term AI can be obtained.

Then a proposal for a definition of artificial intelligence was given. Finally, this definition was checked with the help of the described process. The check showed that the definition fits the positive and negative examples mentioned.

The definition of AI seems fitting for the given examples. Further examples will help refine and sharpen the definition; perhaps a correction will also be in order. My AI definition rests particularly on the concept of a problem. What constitutes a problem is easier to explain than what "intelligence" or "AI" is. By reducing the definition problem to something simpler, the task likely comes down to sharpening the concept of a problem and possibly making further additions.

The creative field is not easily accessible to the concept of intelligence. I suggest treating artistic creation as secondary. Perhaps it comes down to a definition of creative artificial intelligence, which supplements, expands or refines the above definition.

The given definition differs qualitatively from those of the OECD and the AI Regulation. It does not focus on software or goals. The human as part of the system is also not included. Humans are not necessary for an artificial intelligence (if it exists at all) to be present. A fundamental difference is that my definition targets the solution path and demands the combination of knowledge and the drawing of conclusions as criteria. Input and output are not mentioned in my definition, because I do not consider them features of intelligence. The problem and the attempt to solve it, however, are an integral part of my definition. The use of "artificial system" is elegant, because AI is artificial by name.

This article actually began with the aim of criticizing existing AI definitions and naming a process for arriving at a good definition. In the end, a definition of AI emerged that I personally consider to be much more suitable than that of the AI Regulation and the OECD.

Finally, the corresponding definition of intelligence:

Intelligence refers to a system that attempts to solve a problem, even one with a vague specification, in a not explicitly prescribed, solution-oriented way, by combining existing knowledge with new knowledge and drawing conclusions.

Reference: Klaus Meffert in Dr. GDPR Blog (Last updated: 03.04.2024)

Please use the comment function below or send us an e-mail (link at the bottom of the page: "Write a message").

PS: As I now see (while reviewing the finished article), Wikipedia defines the concept of intelligence through problem-solving. The word "goal" is not mentioned there. The Wikipedia definition uses all sorts of fuzzy terms ("cognitive", "mental", "meaning-oriented"), which are also wrong when it comes to artificial intelligence. My definition of intelligence corresponds exactly to my definition of artificial intelligence, with the one difference that my AI definition names artificiality as an additional criterion. Furthermore, note that "problem-solving" as referenced on Wikipedia is also defined there as a mere (well-founded) attempt, not first and foremost as a successful one ("… aims at …"), which supports my definition.

Key messages

Existing definitions of artificial intelligence are flawed and inaccurate. This article proposes a new, clearer definition of AI based on a process of refinement and critical analysis.

The EU's 2021 AI definition is too broad and doesn't accurately reflect the complexities of artificial intelligence.

The EU's definition of AI is flawed because it claims that AI can only be intelligent if humans set its goals. This is illogical because AI can be intelligent and set its own goals, as demonstrated by examples like ChatGPT.

The EU's definition of AI is too broad because it uses characteristics that apply to almost everything, not just artificial intelligence.

The EU's definition of AI is too vague and inaccurate, relying on overly general terms that don't capture the true essence of artificial intelligence.

The EU's definition of AI is flawed because it labels non-intelligent systems as intelligent and vice versa.

The author proposes a process to define AI by analyzing examples of AI systems and non-AI systems, identifying key characteristics, and refining the definition iteratively.

AI systems learn and solve problems by combining existing knowledge with new information, making educated guesses, and adapting their approach based on the results.

The author defines artificial intelligence based on its ability to solve problems, learn from experience, and adapt to new situations.

The author argues that ChatGPT demonstrates true artificial intelligence because it can solve problems, combine knowledge, and learn from the internet, unlike simpler examples like vacuum cleaners or search engines.

Artificial intelligence is a system that tries to solve problems in a helpful way, even if the instructions are unclear, by using what it already knows and learning new things.

About

About the author on dr-dsgvo.de
My name is Klaus Meffert. I have a doctorate in computer science and have been working professionally and practically with information technology for over 30 years. I also work as an expert in IT & data protection. I achieve my results by looking at technology and law. This seems absolutely essential to me when it comes to digital data protection. My company, IT Logic GmbH, also offers consulting and development of optimized and secure AI solutions.
