Ever since ChatGPT first amazed the world with its impressive results, many people believe they must use this chatbot for everything and everyone. What seems harmless in private life quickly leads to problems in business life that can just as quickly end up in court.
Introduction
Recently, a German company announced that it offers an AI solution for interpreting legal texts. The trigger was a post on Dr. GDPR about artificial intelligence for analyzing legal texts.
The company's AI application is still in beta. There is reason to suspect that a prototype was hastily built in order to explore the market for such applications. At no point is it mentioned which language model runs in the background. Investigation suggests that it is ChatGPT.
This article accompanies a presentation given on September 5, 2023 at the IT Club Mainz & Rheinhessen at the Gutenberg Digital Hub in Mainz.
Some impressions can be found below.
The AI system's answers were good, but not earth-shattering. It gave reasonably plausible answers to legal questions, including sources. However, the answers would not have been enough to win a legal dispute; in fact, the results would probably have led straight to defeat.
The fact that the German company's AI application uses ChatGPT-3 was established simply by asking the chatbot about it. The AI's response: "Yes, I am based on a model called GPT-3, which was developed by OpenAI. […]"
What is so problematic about ChatGPT? The same question can be asked of Microsoft Copilot. A simple test shows that even Copilot is overwhelmed by simple tasks.
Motivation
Chatbots typically draw on a pre-compiled knowledge base. Here is an example from the above-mentioned German company's application, which apparently uses ChatGPT in the background:

So the chatbot does not know a judgment that exists and is uniquely identified (for a lawyer) by its case number. The chatbot should have told the user where its limits lie instead of pretending that the judgment being sought does not exist.
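Such honesty is easy to build with one's own knowledge base. Here is a minimal sketch of the idea (the index below is invented for illustration; C-311/18 is the case number of the CJEU's Schrems II decision): look the case number up locally and, if it is unknown, admit the limit instead of denying the judgment's existence.

```python
# Hypothetical local index of court decisions, keyed by case number.
# The entries are invented for illustration.
KNOWN_JUDGMENTS = {
    "C-311/18": "CJEU, Schrems II (invalidation of the EU-US Privacy Shield)",
}

def lookup_judgment(case_number: str) -> str:
    """Return the indexed judgment, or honestly admit the system's limit."""
    if case_number in KNOWN_JUDGMENTS:
        return KNOWN_JUDGMENTS[case_number]
    # Honest fallback instead of pretending the judgment does not exist
    return (f"Case {case_number} is not in my knowledge base; "
            "I cannot say whether it exists.")

print(lookup_judgment("C-311/18"))
print(lookup_judgment("I ZR 1/23"))
```

The point is not the trivial dictionary lookup, but the fallback branch: a system that knows the boundaries of its own knowledge base can say so.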
To use ChatGPT, all you need is the OpenAI interface (API). Because this API is so easy to use, many people seem to feel compelled to do so. That is where the trouble begins, as the example above shows.
This is how easy it is to call ChatGPT from your own program (note: all the data flows to OpenAI/ChatGPT; only the following program code runs locally on the programmer's machine):
import openai

openai.api_key = "XYZ"  # Your paid OpenAI API key
completion = openai.ChatCompletion()

# Define a function chatgpt to ask a question:
def chatgpt(question):
    # The chat log starts with a system prompt and is extended by the user's question
    chat_log = [{
        'role': 'system',
        'content': 'You are a helpful assistant',
    }]
    chat_log.append({'role': 'user', 'content': question})
    response = completion.create(model='gpt-3.5-turbo', messages=chat_log)
    antwort = response.choices[0]['message']['content']
    return antwort
What is the answer to all questions?
Caption: Calling the OpenAI API to receive ChatGPT's answer to a question.
Anyone really can do it. At any rate, as many non-programmers can create and run this code as there are people who are not master mechanics yet can change a tire on their car. As indicated in the code, the fun costs money once the free test quota per call has been exceeded. The response time could also be faster (though this is complaining at a high level, merely to dampen the illusion some people hold that ChatGPT can do everything).
Another example of a false statement generated by ChatGPT:

In translation, my conclusion, which can also flow into my own AI but not into ChatGPT: cookies are not text files, but data records. ChatGPT will therefore continue to answer this question incorrectly.
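That a cookie is a structured data record rather than a text file can be seen directly in Python's standard library. A minimal sketch (name, value, and domain are invented for illustration) using `http.cookies`:

```python
from http.cookies import SimpleCookie

# A cookie as transmitted in an HTTP header: a named value plus attributes,
# i.e. a structured data record, not a file on disk.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["domain"] = "example.org"
cookie["session_id"]["secure"] = True

print(cookie["session_id"].value)        # abc123
print(cookie["session_id"]["domain"])    # example.org
print(cookie.output())                   # the Set-Cookie header line
```

The browser stores and transmits exactly this record structure (name, value, attributes); whether a particular browser persists it inside a database or a file is an implementation detail.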
As a reminder: ChatGPT is an extremely advanced language model that's in a league of its own.
Unfortunately, ChatGPT was trained on so many millions or billions of documents that too much imprecise general knowledge and too little precise specialist knowledge has found its way into the artificial brain.
Microsoft's Bing search is also AI-driven. Microsoft recently invested in OpenAI to profit from the ChatGPT hype. Especially with simply formulated search queries, the results quickly become wrong or absurd when the Bing AI gets involved. Conversely, the results usually lie within the positive range of expectations when Bing searches conventionally; the conventional route often works best for getting good hits. The difference: in the conventional case, answers from the search results are merely quoted, not abstracted, i.e. not reproduced by the AI in its own words.

Better a correct, quoted answer than a wrong answer in your own words.
The better solution
A better solution provides correct answers more often and gives users a better chance of recognizing potentially inaccurate or incorrect answers so that they can review them critically. With a company's own AI, it is easy to determine whether an answer may be inaccurate or incorrect; with the black box called ChatGPT, it is not.
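How such a check might look, as a minimal sketch (knowledge base, scoring, and threshold are all invented for illustration): a company-owned retrieval step can expose a match score alongside each answer, so low-confidence answers are flagged for critical review instead of being presented with false certainty.

```python
def answer_with_confidence(query: str, knowledge_base: list, threshold: float = 0.5):
    """Return the best-matching entry, its match score, and a review flag."""
    terms = set(query.lower().split())
    def overlap(entry: str) -> float:
        # Fraction of query words found in the entry (naive scoring)
        return len(terms & set(entry.lower().split())) / max(len(terms), 1)
    best = max(knowledge_base, key=overlap)
    confidence = overlap(best)
    flagged = confidence < threshold  # low confidence -> flag for human review
    return best, confidence, flagged

kb = ["Cookies are data records, not text files.",
      "The GDPR regulates the processing of personal data."]
answer, conf, needs_review = answer_with_confidence("are cookies text files", kb)
print(answer, conf, needs_review)
```

A black-box chatbot offers no such score; a system you control can always say how well-founded its answer is.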

My name is Klaus Meffert. I hold a doctorate in computer science and have worked professionally and hands-on with information technology for over 30 years. I also work as an expert witness in IT & data protection. I achieve my results by considering both technology and law, which seems to me absolutely essential when it comes to digital data protection. My company, IT Logic GmbH, also offers consulting and the development of optimized and secure AI solutions.
