Many think ChatGPT is just another incremental improvement in AI systems. That is fundamentally wrong. Rather, a whole series of breakthroughs has recently taken place that seems unknown to most people. I myself can, for example, set up several high-performance AI systems in no time at all and run them on my own hardware without needing an internet connection.
In brief
The contribution answers the following questions:
What is the AI Revolution and what stage is it at?
The AI Revolution refers to the rapid development and improvement of artificial intelligence, which has profound impacts on human history. It has already reached stage 3 of the five so-called World Scopes, with stages 4 (materialization of AI) and 5 (socialization) yet to be achieved.
How will the AI revolution affect society?
The AI Revolution could for example lead to stock prices being so predictable that the stock market will cease to exist in its current form.
What are some examples of AI applications that the author has developed himself?
The author has developed an AI program for audio transcription of podcasts and a system for calculating the similarity of images and texts.
What advantages do local AI systems offer compared to cloud-based systems?
Local AI systems allow better control and adaptation, reduce legal uncertainty and data protection issues, and incur no costs with third parties.
How has the speed of AI development changed?
The pace of AI development has snowballed, with daily news from research and an open culture of sharing knowledge and source code.
Key words:
Artificial Intelligence, Computing Power, Storage Capacities, AI Revolution, Model Sizes, Availability of Resources.
Introduction
The AI Revolution has neither just begun nor merely started; for what is called World Scope 3 of 5, it is already fully complete. The Turing Test was recently (in my opinion) finally passed by OpenAI's ChatGPT and comparably intelligent systems (such as LLaMA from Meta).
World Scope 1 refers to large databases with collections of texts or images; Scope 2 relates to mass data from the internet; Scope 3 means multimodal models (more on that below). The next two stages are still unattained: Scope 4 is the materialization of AI (robots etc.) and Scope 5 is socialization. Until then, it will likely take another ten or more years, I humbly assume. You will at least live to see it, provided you are not yet of retirement age and live an averagely long life.
What is the AI Revolution?
Many have not yet noticed that, due to the enormous new capabilities of artificial intelligence, a turning point in societal development is imminent. That is not an assumption but, for me, a certainty. Believe it or not, it will happen. And probably as soon as this year (2023).
A first glimpse: stock prices will become so easy to predict that anyone with a little play money in their account can gamble almost risk-free to increase their wealth. This will (so runs my naive assumption) lead to the stock market ceasing to exist in its current form. Historical stock prices are available as CSV files. News archives are available on various platforms. Wow, we already have the data needed for a basic model… Maybe we still need to add weather reports and environmental events? What else could we possibly need?
What I write about the stock market, I could now program myself: a forecasting program for stock prices that works in unprecedented quality, and on my own old personal computer, which I bought about three years ago on a moderate budget. Such a program takes some effort. But at least I know how it works, what the steps are and which problems need to be solved. The problems that appear solvable rest on high-quality mass data. You would therefore need a great deal of historical stock prices as well as past economic news.
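The ingredients just described (historical prices as CSV files, plus news) can at least be sketched as a baseline. Everything below is hypothetical: the file layout, the column name and the naive moving-average "model" are my illustration of the first step, not the high-quality predictor described above.

```python
import csv
from statistics import mean

def moving_average_forecast(prices, window=5):
    """Naive baseline: predict the next price as the mean of the last `window` closes."""
    if len(prices) < window:
        raise ValueError("not enough price history")
    return mean(prices[-window:])

def load_closes(path):
    """Hypothetical CSV layout: one row per trading day with a 'close' column."""
    with open(path, newline="") as f:
        return [float(row["close"]) for row in csv.DictReader(f)]

# Example with made-up prices instead of a real CSV file:
history = [101.0, 102.5, 101.8, 103.2, 104.0]
print(moving_average_forecast(history))  # prints 102.5
```

A real model would replace the moving average with a learned predictor and fold in the news and weather data mentioned above; the data-loading step, however, looks much like this.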
Real-life example: Audio transcription with AI
Here's a real example of what I have programmed with modest means: Stephan Plesnik and I recently recorded episode #24 of the Data Protection Deluxe podcast. It is about ChatGPT and artificial intelligence as well as the dangers regarding data protection.
I wrote an AI program and ran it, with the MP3 file of our podcast episode 24 as input. The program uses an AI model optimized for audio transcription: it converts spoken language into text. Many use this to add subtitles to videos with far less effort than doing it manually.
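The post does not name the model or library used. One plausible local setup is OpenAI's open-source Whisper package (`pip install openai-whisper`); the sketch below is my reconstruction under that assumption, including a helper that produces the timestamp format seen in the output further down. The import is guarded so the formatting helper works even without Whisper installed.

```python
# Hypothetical reconstruction of a local podcast transcription program.
# Assumes the open-source "openai-whisper" package; the actual model and
# code used by the author are not given in the post.
try:
    import whisper
    HAVE_WHISPER = True
except ImportError:
    HAVE_WHISPER = False

def fmt_segment(seg):
    """Format one transcript segment like '[00:00.000 - 00:12.000] text'."""
    def ts(seconds):
        minutes, secs = divmod(seconds, 60.0)
        return f"{int(minutes):02d}:{secs:06.3f}"
    return f"[{ts(seg['start'])} - {ts(seg['end'])}] {seg['text'].strip()}"

if HAVE_WHISPER:
    model = whisper.load_model("medium")            # runs locally, offline after download
    result = model.transcribe("episode24.mp3",      # hypothetical file name
                              language="de")        # German-language audio
    for seg in result["segments"]:
        print(fmt_segment(seg))
```

With a GPU (see the `torch.cuda` note further down), transcription of a full episode takes minutes rather than hours.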
Here is the audio recording from which my AI program extracted the spoken text:
Here is the unedited original output of my AI program for the podcast episode (my only edit was changing "Stefan" to "Stephan"):
[00:00.000 - 00:12.000] Data Protection Deluxe, the podcast about data protection and IT with Dr. Klaus Meffert and Stephan Plesnik.
Yes, hello and a warm welcome to the Data Protection Deluxe podcast. I'm Stephan Plesnik and, as always, Dr. Klaus Meffert is here with me. Hi Klaus, how are you?
Hello Stephan, everything's fine as always. I hope so with you too and I'm looking forward to our conversation.
Oh, wonderful, everything is fine with me too and I've formulated it all consciously like that. I'll start with a quote and then it will explain itself for many people what we're talking about today.
[00:37.000 - 00:51.000] That quote means intelligence is the ability to accept one's environment. I find that a very beautiful saying, because we want to talk about a topic today.
[00:51.000 - 01:03.000] It's actually funny that we haven't talked about this for so long, namely ChatGPT and its implications, legally speaking and especially for privacy.
The ability to accept one's surroundings would also mean that we ultimately have to use our intelligence to accept that it exists.
[01:15.000 - 01:34.000] What's your general opinion on the topic of ChatGPT and these developments, like we've already seen in Italy where it was banned due to unlawful processing of personal data?
Yes, so maybe first the quote. That can only be said by a person who talks exclusively about human intelligence, I'd say now, because a computer system is quite indifferent to its environment.
[01:46.000 - 02:00.000] And I claim that there is a test by which one can determine whether a computer system possesses human intelligence, at least the same abilities as human intelligence.
...
The text is as error-free as it could be, even though we phrase things completely differently when speaking than in written text. Furthermore, three speakers can be heard in the podcast, namely Stephan Plesnik, myself, and the intro voice. In the intro, music even lies underneath the speech. Nevertheless, my program managed to convert the speech into text. With simple post-processing by a conventional program, one could automate further smoothing and push the text closer to grammatically correct prose.
The time indices in the program output above were also generated by the program. It would also be possible (with modest effort) to distinguish between speakers; I just haven't implemented that yet.
Many challenging tasks have been solved, including:
- "Uh" and some other filler words removed.
- Word repetitions recognized and filtered out.
- Understanding exotic and also English terms correctly.
- Simultaneous speaking of multiple speakers is no problem for AI.
- Several speakers.
- Even less "simple" words and expressions are recognized, such as "all of a sudden".
- Music filtered out etc.
My budget: 0 euros. Time expenditure: small enough that there is time left over to program an AI for your problem as well.
Are you now convinced that AI has reached a level never seen before? Not only that: today's AI systems have further very notable qualities that make everything so explosive and also interesting.
The AI Revolution is over
Of course there will be further developments. But a truly significant, extreme milestone was reached very recently, in a positive way.
The AI Revolution at the level of the Turing Test is over. Anyone who thinks today's artificial intelligence is not particularly revolutionary has not understood the development and knows nothing about the current state of affairs.
Further revolutions will follow.
Recently there was a talk show on ZDF (it must have been Maybrit Illner). In it, Ranga Yogeshwar debated with Saskia Esken. I don't want to express an opinion about politicians here, only about Mrs. Esken's statements on artificial intelligence.
Mrs. Esken is a trained IT specialist, just like me (were it not for today's gendered language in German, that sentence could not be misread as suggesting I had undergone a sex change). Unlike Mr. Yogeshwar, Mrs. Esken did not see the revolution but pointed to earlier (quite recent) successes of artificial intelligence. In doing so she misses some essential qualities of current approaches such as GPT ("today's" instead of "current" would not do justice to the speed of development).
State of the Art
Let's start with something obvious that initially has nothing to do with AI. As everyone knows, and as is undisputed (except by lawyers who are in a dispute…), modern computer systems have two main features:
- Very high processing power (including large and fast hard drive space and main memory).
- Very low price: my first printer cost more than 1,200 DM in 1988 and was a 9-pin printer. It was similar with the very slow home computers of the time, whose exact prices I no longer remember. My model ran at 4.77 MHz and had 2 × 64 kilobytes of main memory as well as a floppy disk drive. Of course the good piece had only a single-core CPU. Back then, nobody even knew what a CPU core was.
The development of PCs has reached a point where anyone can have a high-performance computer under their desk for little money. Or do you know of affordable hardware that could have transcribed an audio file in very high quality eight years ago?
In addition, network connections have kept getting faster, on the internet as well as on the intranet. Only Vodafone seems to have missed this trend, as any Vodafone cable customer who wants to hold a video conference over that line will confirm.
The main memory sizes are exploding, which is extremely important for AI applications. With 32 GB you can at least get halfway there when it comes to training new models.

Hard disk storage has also become extremely large and affordable. The basis of AI applications is mass data, so this point is extremely important. My old 2 TB hard drive is at least large enough to store several larger datasets for AI applications at the same time. The next drive will be at least 4 TB, so that space does not immediately become a problem again.
Graphics cards, originally (I suppose) optimized for computationally and graphically intensive computer games, "accidentally" include exactly the kind of chips that can perform AI calculations particularly quickly: GPUs. Mid-range gaming graphics cards from Nvidia have 5,888 CUDA cores (CUDA = Compute Unified Device Architecture). CUDA cores work much like CPU cores. However, as you have probably noticed, CPUs contain only a few cores (12, for example, in already very good Intel chips). Now compare the number 12 with the number 5,888 and consider how much faster a calculation could then run!
Graphics card processors, CPU speeds, main memory sizes, hard drive space, cloud computing: highest performance at lowest price, that's the fertile ground for the AI revolution.
Here is an example of how graphics card processors (GPUs) can be used instead of computer processors (CPUs) in AI applications:
```python
import torch

# Use the GPU's thousands of CUDA cores if available, otherwise fall back to the CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
```
The instruction does the following: if an AI-capable graphics card is present, its thousands of processor cores are used for the calculation. Otherwise, the meager 8 or 12 CPU cores are used and you will have to wait a little longer. The result arrives at the speed of light at best and at the speed of sound at worst (the comparison is not scientifically defensible, but it shows how far we have come).
Furthermore, cloud applications are available in very high quality (performance, convenience) at a very low price. By cloud applications I primarily mean cloud computing, less so cloud storage, although online storage often has its justification in the AI area.
So much for the technology, which can benefit any kind of application. The AI Revolution only became possible because of these parameters. Too little CPU and GPU power renders useless any mathematical model that requires extreme resources. Who today waits eagerly for tomorrow's weather forecast only to be able to announce it the day after tomorrow? Then better the Jehovah's Witnesses, who proclaim more effectively (a small detour into data protection: an EU Court of Justice ruling holds that individual Jehovah's Witnesses and their faith community are jointly responsible for the data processing involved in their door-to-door proclamation).
Type of algorithm
Current AI systems, in my opinion, beautifully replicate the functioning of the human brain. The brain receives a series of stimuli, namely through
- Eyes
- Ears
- Nose
- …
These stimuli are converted into analog signals, which are processed in the brain's neural network between neurons. Synapses serve as the connections between neurons, and these connections vary in strength. The system learns through upbringing by parents as well as feedback from the environment. On that basis, thinking processes take place, which are often followed by actions (speech output, text output, movement). That's basically it, isn't it?
Analogously, an ANN (artificial neural network), i.e. current AI technology, works like this:
- Input stimuli come in through input data: texts, images, audio files, video files
- A transformer encodes these signals into discrete sequences of numbers
- These data are processed in an artificial neural network
- The neural network generates an output in the form of text, speech, image or video
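The four steps above can be shown in miniature. This toy forward pass (pure Python, made-up weights) only illustrates the principle of weighted neuron connections, i.e. the artificial counterpart of synapse strengths; it is not a real transformer.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, squashed by a sigmoid.
    The weights play the role of synapse strengths between neurons."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy "network": encoded input signal -> one hidden neuron -> one output neuron.
encoded_input = [0.2, 0.7, 0.1]                      # stand-in for encoded stimuli
hidden = neuron(encoded_input, [0.5, -0.3, 0.8], bias=0.1)
output = neuron([hidden], [1.2], bias=-0.5)
print(round(output, 3))                              # a number a real model would decode into text/image/audio
```

Training adjusts the weights based on feedback, which is the artificial analogue of the learning-through-environment described above.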
The analogy is striking. The argument that AI systems do not know what their results mean does not count, in my view. Who can show that humans know what the results they produce mean? When a human has to explain a term, he invariably refers to other terms, until finally the hundredth term is explained by the first one again. If a real object is to be understood, one can point at the object; the computer will soon manage that too. If it is an abstract object, the computer can do just as much as a human.
Model size
A model is almost like the wiring of the neurons in an AI's brain. Downloading a model is then like downloading an (often specialized) brain.
The more connections an artificial brain has, the greater the chance that it will have higher abilities. At least, one could naively deduce this from the development of GPT's model sizes.
GPT-2 had 1.5 billion parameters in its model. A parameter refers (simplified) to the configuration of a neuron connection, since GPT is also based on a neural network. GPT version 3 already had 175 billion parameters and took up approximately 350 GB. The storage size is impressive but can be handled by any budget notebook. However, it is also about working with the model, which involves performing complex calculations. It is not enough just to carry the brain around.
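The quoted 350 GB for 175 billion parameters corresponds to 2 bytes per parameter, i.e. 16-bit floating point, a common storage format for large models (my assumption; the post does not state the format). A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the model sizes mentioned above.
BYTES_PER_PARAM_FP16 = 2          # 16-bit floats, a common storage format

def model_size_gb(parameters, bytes_per_param=BYTES_PER_PARAM_FP16):
    """Approximate on-disk size of a model in gigabytes."""
    return parameters * bytes_per_param / 1e9

print(model_size_gb(1.5e9))       # GPT-2: prints 3.0 (GB)
print(model_size_gb(175e9))       # GPT-3: prints 350.0 (GB), matching the figure above
```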
You will agree with me that many neurons are better than a few; at least, this is generally true. Not coincidentally, humans have nearly the most neurons of any living being (according to Wikipedia); only elephants have more, with around 250 billion neurons. Here is a selection:
| Living being | Number of neurons |
| --- | --- |
| Human | 86 billion |
| Ape | 33 billion |
| Red monkey | 6 billion |
| Cat | 250 million |
| Rabbit | 71 million |
Looked at more closely, intelligence also arises from the ratio of brain size to body size. That could naively explain why a cat, which has fewer than a third as many neurons as a German shepherd, is in my experience with these animals clearly more intelligent than the dog. Of course, the quality, type and number of connections also have a bearing on the performance of a brain.
That model size, at least for AI models, drives an increase in the AI's performance capability is known as a power law and can be considered a given.
Let's get down to basics: humans have 86 billion neurons; GPT-3 has roughly twice as many neural connections (and 96 neuron layers) and thus at least an order of magnitude fewer neurons than our brain. Let's leave that standing without judgment. Clearly, this enormous number would certainly not prevent intelligence from arising. Apparently, the human brain simply had more time to develop than artificial brains have had. In itself, it is hardly surprising that more neurons in electronics still yield less than they do in nature. Not to mention the modalities.
I deliberately chose the word modalities, because the AI Revolution also obliges us to talk about multimodal systems. Multimodal means that the input data is diverse in form. So far, AI systems have mostly dealt with, for example, text processing alone. Now – as many of you certainly know – text and image can be combined. See Dall-E, Midjourney and other systems that can generate a synthetic image of unprecedented quality from a text prompt.
Availability of Resources
Earlier I installed and programmed a system that lets me calculate the similarity between images and texts. With it I can determine which pictures show a mouse and which show a mouse being chased by a cat. Furthermore, I can generate images that visualize text input.
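The post does not say which system this is; models in the style of OpenAI's CLIP map images and texts into a shared embedding space, where "similarity" reduces to the cosine between two vectors. A sketch under that assumption, with made-up embedding vectors standing in for real model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up 4-dimensional embeddings; real models use hundreds of dimensions.
text_mouse      = [0.9, 0.1, 0.0, 0.2]   # embedding of the text "a mouse"
image_mouse     = [0.8, 0.2, 0.1, 0.3]   # embedding of a photo of a mouse
image_cat_chase = [0.1, 0.9, 0.7, 0.0]   # embedding of a photo of a cat chasing a mouse

print(cosine_similarity(text_mouse, image_mouse))      # high: same concept
print(cosine_similarity(text_mouse, image_cat_chase))  # lower: different scene
```

Ranking all images by this score against a text query is exactly the "which pictures show a mouse" search described above.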
You know this story: You tell someone about something amazing you've accomplished, then they say they know someone who did that (or even better) too. I'd like to say: That can be done by me on my computer without needing to ask or pay anyone else, and I don't need an internet connection after creating the program. So much for one part of the AI revolution!
Above, I gave an example of an AI program that I created myself on my computer. To run it I need no internet connection, no service from Google or anyone else, nor a consultant (for once, not even a tax advisor). This program can create transcripts of my podcast (or other German-language podcasts) in high quality.
With the new AI technology, I can program top-notch AI systems in a flash and have them run on laughably cheap hardware.
"Best AI systems" is meant absolutely on one hand, but relatively compared to the abilities of earlier AI systems, which are completely incomparable with current possibilities.
It's now quite possible for me to build, program, improve, and use such systems. If you had asked me three months ago, I would have thought it impossible.
The whole thing takes place on quite pitiful hardware, since my PC is several years old and was fairly cheap even back then.
Please note that German plays a strongly subordinate role to English in international business and research. Nevertheless, I was obviously, and thus undoubtedly, able to have a German-language audio file automatically converted into text.
Recently I also built a question-answering machine. It is not as powerful as GPT-2 or newer versions, but only because I used a small initial model; my hardware cannot handle anything bigger. Tomorrow I will build a translator (running locally on my computer) that turns a German text into English. Guess how long the first (very good) version will take me. My estimate: a few hours.
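For such a local German-to-English translator, one ready-made route (my assumption; the post does not name the tools) is a pretrained MarianMT model via Hugging Face's `transformers` library. The import is guarded so the sketch stays runnable without the library installed.

```python
# Hypothetical sketch of a locally running DE->EN translator using a
# pretrained MarianMT model via the "transformers" library -- one possible
# route; the actual approach is not described in the post.
try:
    from transformers import pipeline
    HAVE_TRANSFORMERS = True
except ImportError:
    HAVE_TRANSFORMERS = False

def marian_model_name(src="de", tgt="en"):
    """Helsinki-NLP publishes MarianMT translation models under this naming scheme."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

if HAVE_TRANSFORMERS:
    translator = pipeline("translation", model=marian_model_name())  # downloads once, then runs offline
    result = translator("Die KI-Revolution hat bereits stattgefunden.")
    print(result[0]["translation_text"])
```

After the one-time model download, no internet connection is needed, which fits the local-systems point made below.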
To summarize: in no time at all I have managed to grasp certain types of AI systems intellectually, so that I can use them, program them and get them running on low-cost hardware to produce results of unprecedented quality for me. We are talking about achieving, without any significant prerequisites, top results that were previously achieved only by billion-dollar companies and their service providers.
Local Systems, Data Protection
I mention in passing that high-performance systems based on current AI algorithms and models can run locally on one's own hardware.
That has several positive consequences.
Firstly, no costs are incurred with third parties. Anyone who knows cloud services is aware that the pay-per-use model brings its own dangers: a program that accidentally calls itself infinitely often can quickly cause unwanted costs of several thousand euros in just a few days.
Secondly, one can control and adapt local systems oneself. Local also often means open source. Thus the control is as great as one could, until recently, only have imagined.
Thirdly, good data protection is possible on local systems, and legal uncertainties can be reduced to zero. Many of you have probably heard that transferring personal data to insecure third countries such as the USA without user consent is not permitted, and even with consent is hardly manageable legally.
Development Speed and High Availability
There is daily news from research. What was once pure theory now has a concrete practical application.
New algorithms, new frameworks, new models. Everything is almost new every day.
Developments on the market that used to take years now take place in weeks or a few months. I would like to call the current pace light speed. You will agree with me that this is exactly the right term when a program that previously took years, and was only possible with a lot of personnel and money, can now be written quickly.
Development is proceeding at an avalanche-like pace. This also has to do with the open culture of the AI scene: insights are exchanged, including source code. Source code was exchanged before too, but it was hardly relevant. Now one line of code can change the world.
Publicly available are algorithms, frameworks, computing power, APIs, assistance, vast datasets, and models. Even if some of that existed earlier: without the current excitement, it naturally interested hardly anyone.
Gray Box thanks to AI Kickstart
A white box is a fully adaptable system. A black box is a completely unadaptable system.
A Gray Box (or Grey Box) is a middle ground: the system can be adapted to a certain degree. Thus an AI system can easily be modified, updated, revised, and improved.

This solves many problems that come with prefabricated systems. Completely open systems, on the other hand, are too complex: one has to engage with them very intensively before the system generates individual benefit.
A Gray Box system, which in my opinion is what current AI systems offer, is completely different. You can start with a Black Box and turn it into a Gray Box as needed. If you have enough computing power and data, you can even create a White Box. That is the best way to do it.
How great is that then?
Downloadable brain
Let's make it short by giving an illustrative comparison of what AI used to be like and what it is now.
AI was "earlier" like when you know nothing about the Japanese language and buy a Handbook for learning Japanese, live in the country for three years, and then can speak Japanese quite well.
Earlier" meant that the term "today" referred to a time period from the present day back a few years into the past. Currently means "these days" a time period from today (this moment!) up to a few weeks or months into the past. Currently, "earlier" means a period that ended sometime last year or this year.
Those who don't understand this aren't dumb; they just don't know enough about artificial intelligence.
AI is now analogous to downloading the language center from a Japanese person's brain, loading it into your own brain (computer), and then being able to speak very good Japanese.
Do you believe me now that an AI revolution has already taken place?
Conclusion
If you still don't believe in the AI Revolution, then please read my post again, properly this time. Or send me a real problem from your company that you would like solved. It should, however, be a problem related to text processing, since I know AI technologies quite well but do not want to stray too far off course. That said, I can also tell you how image analysis currently works.
Ideally, you don't believe in the AI Revolution. That would increase my competitive advantage even more, and it would then be easier for you to take advantage of my consulting services.
In recent days, I have not been writing primarily about data protection but about artificial intelligence. This alone makes it clear that the AI Revolution is here. At the very least, I have come to realize that we do not have a data protection problem with many AI applications: with a few innovative approaches you can avoid legal problems.
A tip for everyone involved with language, text or image processing, i.e. designers, translators, authors: reconsider your business model and your future. Look for a new or additional activity, or sharpen your business model (“What do customers want from you that a computer can’t do?”). Before you sharpen your business model, consider whether you know what a computer is currently capable of, and what it will be capable of in a month.
Summary
This post was summarized by an AI for you and only lightly edited:
Artificial intelligence (AI) and data protection are important topics of our time. The AI Revolution is in full swing. With systems like OpenAI's ChatGPT and Meta's LLaMA, significant progress has been made toward passing the Turing Test. The development of AI systems is advancing rapidly, enabling complex tasks to be solved.
The contribution provides insight into the possibilities offered by AI systems, such as predicting stock prices, transcribing audio recordings and analyzing images and texts.
The availability of powerful hardware and cloud applications at low prices has accelerated the development of AI systems. AI systems can be run locally on one's own hardware, which enables data protection and control. The open culture of the AI scene promotes the exchange of knowledge, algorithms, and source code, which further advances development.
The AI Revolution also affects professions like designers, translators and authors. Business models should be rethought in order to focus on skills that a computer cannot provide.
Brief summary:
The AI Revolution is already a reality and has the potential to fundamentally change human history. With the development of systems like ChatGPT from OpenAI and LLaMA from Meta, significant progress has been made toward passing the Turing Test. These systems are capable of handling complex tasks such as text and image processing, speech recognition, and translation.
Key messages
- The rapid advancements in AI are creating a revolution with profound impacts on society, surpassing previous technological breakthroughs.
- The author believes AI has reached a revolutionary level, surpassing the Turing Test and marking a significant milestone in its development.
- The rapid advancements in computer hardware, particularly processing power, memory, and storage, at increasingly affordable prices, have created the ideal environment for the AI revolution.
- AI technology is rapidly advancing due to powerful GPUs and cloud computing, enabling complex calculations and mimicking the human brain's learning process through artificial neural networks.
- Larger AI models with more parameters tend to perform better, but the quality and type of connections, as well as the ability to process multiple types of data (like text and images), are also crucial for intelligence.
- Artificial intelligence is rapidly advancing and transforming many aspects of our lives.
- These systems can do complicated things like understanding and working with text and images, recognizing speech, and translating languages.




My name is Klaus Meffert. I hold a doctorate in computer science and have been working professionally and practically with information technology for over 30 years. I also work as an expert in IT & data protection. I achieve my results by considering both technology and law, which seems to me absolutely essential when it comes to digital data protection. My company, IT Logic GmbH, also offers consulting and the development of optimized and secure AI solutions.
