
ChatGPT – an omniscient answering machine?

02 March 2023

ChatGPT has been making waves like no other development from Silicon Valley in recent years. The AI-supported chatbot provides answers to complex questions and can also generate presentations, scientific essays and, if desired, poems or recipes. Microsoft has great hopes of this language model and is set to integrate it into its Bing search engine. We spoke to Carsten Becker from the TÜV NORD Corporate Centre Innovation about the opportunities and risks presented by ChatGPT.

 

The hype around ChatGPT is colossal. But what makes the system different from previous chatbots?

Carsten Becker: Current chatbots are designed for very specific and limited processes. You can use them to book a flight or have your credit card blocked. These systems don’t have to be particularly smart – based on the user’s input, they work their way step by step through a decision tree until they reach their goal. With ChatGPT, you can now actually “speak” or write freely. The system uses the context to work out what a question is driving at and generates answers at an amazingly high linguistic level – neither of which was previously possible in this form.
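To make this contrast concrete, here is a minimal, purely illustrative Python sketch: a classic chatbot hard-codes every possible path in a decision tree, while a ChatGPT-style system accepts free text together with the whole conversation history as context. All names are hypothetical, and a simple stand-in function takes the place of a real language model.

```python
# Illustrative contrast between a decision-tree chatbot and a free-form,
# context-based assistant. The "language model" below is only a stand-in.

DECISION_TREE = {
    "start": {
        "prompt": "Type 1 to book a flight or 2 to block your credit card.",
        "next": {"1": "book_flight", "2": "block_card"},
    },
    "book_flight": {"prompt": "Which destination would you like to fly to?", "next": {}},
    "block_card": {"prompt": "Please enter the last four digits of your card.", "next": {}},
}

def decision_tree_turn(state: str, user_input: str) -> tuple[str, str]:
    """Advance exactly one step along the predefined tree."""
    node = DECISION_TREE[state]
    new_state = node["next"].get(user_input.strip(), state)
    return new_state, DECISION_TREE[new_state]["prompt"]

def fake_language_model(history: list[dict]) -> str:
    """Stand-in: a real LLM would generate text from the full conversation context."""
    last = history[-1]["content"]
    return f"(generated answer that takes '{last}' and all previous turns into account)"

def llm_turn(history: list[dict], user_input: str) -> str:
    """Free-form turn: append the user's text and let the 'model' answer in context."""
    history.append({"role": "user", "content": user_input})
    answer = fake_language_model(history)
    history.append({"role": "assistant", "content": answer})
    return answer

if __name__ == "__main__":
    state, prompt = decision_tree_turn("start", "2")
    print(prompt)  # fixed path: the card-blocking dialogue
    print(llm_turn([], "Explain in two sentences why the sky is blue."))
```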

 

In which areas could the AI chatbot be used?

Carsten Becker: First of all, in any form of research, whether for private individuals or in professional contexts. ChatGPT can also make e-learning much more interactive and natural because, just as in an analogue learning situation, you can always follow up with questions if you haven’t yet understood an aspect or need more detailed information. For companies, the AI system also opens up completely new ways of communicating with their customers that go far beyond the capabilities of previous chatbots. And, last but not least, ChatGPT could relieve us humans of tedious and repetitive administrative tasks, such as collating reports from collected data. At the same time, as a society, we need to agree on which tasks and areas of work we actually want to hand over to AI.

 

Despite all the euphoria, ChatGPT’s tendency to make mistakes has been repeatedly exposed. Just how reliable are these kinds of systems?

Carsten Becker: ChatGPT formulates its answers in more detail than is the case with a normal Internet search and presents them with great “confidence”. This makes it easy for users to get the impression that the information they have been given on a topic is correct and to the point. However, this is not always the case. Some errors result from the fact that ChatGPT was trained with data collected up until 2021, meaning that it is not up to date on more recent topics. Microsoft aims to fix this latter issue by integrating AI into its Bing search engine, although initial tests suggest that, at the moment at least, this is not always leading to error-free answers. In some cases, the system also appears to cheat and simply invent source references. This of course gives rise to the enormous risk that false information will be disseminated when an AI system makes assertions that, while they may sound plausible, are simply not true.

 

About the interviewee

Carsten Becker is head of the Corporate Centre Innovation of the Industrial Services business unit. A graduate in business and industrial engineering, he and his team work on artificial intelligence, IT security, sensor technology and the Internet of Things.

How can such systems be made safer and more reliable?

Carsten Becker: When I talk to an AI, I want it to be as comprehensively trained as possible so that it’s unbiased and balanced, i.e. so that it doesn’t lean towards one side or the other on social and political issues. And, last but not least, I want to be able to see at any time whether an answer or an article has been generated by an AI. So it’s all about trustworthiness – about “digital trust”, as we call it. If we want to guarantee this trustworthiness and prevent disinformation and the misuse of information, I believe it’s essential that there be a legal requirement for such systems to be monitored by independent auditing organisations.

Who and what is behind ChatGPT?

GPT stands for “Generative Pre-trained Transformer” and refers to a language model from the US company OpenAI that generates texts using artificial intelligence. Users can put questions to the chatbot, which it answers autonomously on the basis of data from the Internet. Microsoft is already a major investor in the company and plans to invest a further ten billion dollars in OpenAI in the coming years.

 

 

Carsten Becker: From this point of view, the questions are: how and with which data are these systems trained, and how are data protection, copyright, non-discrimination and general access guaranteed? On a first level, it would be possible to check whether the system is delivering the expected results – in other words, whether the answers are sound in terms of their content and also non-discriminatory. In a further step, you would then look deeper into the system: how is the code structured? Are the training data traceable and of high quality? Of course, you can never achieve complete security with any technology, and that is all the truer for a model as complex as ChatGPT. But we should exhaust all the options at our disposal to make these systems as secure as possible.
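What such a first-level output check could look like in practice is sketched below, purely as an illustration: it assumes a hypothetical ask_model wrapper around the system under test, and the prompts, keyword lists and pass/fail rules are placeholders rather than a real audit methodology.

```python
# Hypothetical sketch of a first-level output check: send fixed test prompts to
# the system under test and flag answers that miss expected content or contain
# clearly unacceptable wording. Not a real audit method.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    prompt: str
    must_contain: list[str]      # content the answer is expected to cover
    must_not_contain: list[str]  # wording that would make the answer unacceptable

def run_output_checks(ask_model: Callable[[str], str], cases: list[TestCase]) -> list[dict]:
    """Query the system under test and record which checks each answer fails."""
    results = []
    for case in cases:
        answer = ask_model(case.prompt).lower()
        failures = [f"missing: {word}" for word in case.must_contain if word.lower() not in answer]
        failures += [f"forbidden: {word}" for word in case.must_not_contain if word.lower() in answer]
        results.append({"prompt": case.prompt, "passed": not failures, "failures": failures})
    return results

if __name__ == "__main__":
    # Stand-in for the real system; a genuine audit would query the deployed model.
    dummy_model = lambda prompt: "Water boils at 100 degrees Celsius at sea level."
    cases = [TestCase("At what temperature does water boil?",
                      must_contain=["100"], must_not_contain=["silly question"])]
    print(run_output_checks(dummy_model, cases))
```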

 

Google is currently the undisputed market leader in search engines. By integrating ChatGPT, can Microsoft now reopen the race for search engine supremacy under different terms?

Carsten Becker: Google has been way ahead of Bing for years, and that’s unlikely to change overnight. But the integration of ChatGPT will certainly upgrade Bing significantly in terms of quality and make it attractive again for more users. The fact that Microsoft has apparently hit a nerve here is shown by Google’s quick and flustered reaction: the company has already introduced its own AI chatbot, but it promptly generated an error during its unveiling, causing the group’s share price to slip. Not everything is running smoothly at Microsoft either. Some users seem to have pushed ChatGPT to its limits, prompting the bot to give answers that were sometimes nonsensical and sometimes offensive. As a result, Bing has temporarily limited the number of questions per session until the issue is resolved. One thing, though, is a safe assumption: Google undoubtedly has the money, the know-how and the brains to catch up with ChatGPT sooner or later. Ultimately, we’ll all benefit from this AI race, because the systems and search engines will get better and better in ever shorter cycles. In my opinion, however, implementing ChatGPT in Microsoft’s Azure cloud service might almost be more interesting than integrating AI bots into the browser. If companies can thereby easily access AI functionalities to implement their business models and solutions, the economic impact will be huge.
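To give a rough idea of what such cloud access to AI functionality can look like from a company’s point of view, here is a hedged sketch that sends a single question to a hosted chat-completion service over HTTP. The endpoint and payload follow OpenAI’s public chat-completions API; an Azure-hosted deployment uses its own endpoint URL and authentication scheme, so the constants below should be read as placeholders.

```python
# Hedged sketch: one question in, one generated answer out, via a hosted
# chat-completion service. Endpoint, model name and key handling are placeholders.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ.get("OPENAI_API_KEY", "")  # supplied by the operator

def ask_chat_model(question: str, model: str = "gpt-3.5-turbo") -> str:
    """Send one user message and return the generated answer text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": [{"role": "user", "content": question}]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_chat_model("Summarise this week's incident reports in three bullet points."))
```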

 

How could ChatGPT change Internet searches?

Carsten Becker: Instead of a list of links, as is the case today, the search engine of the future might provide a ready-made answer. This would render visits to individual websites superfluous. What sounds practical for users would of course be a big problem for media publishers. After all, no one would then have to read the original articles from which the AI chatbot derives its answers. It’s for this reason that German press publishers want to charge licence fees if the Microsoft or Google AI uses their content.

 
