Interview

What can Artificial Intelligence do?

19 September 2019

Depending on your point of view, Artificial Intelligence (AI) is either the next quantum leap of digitalisation that will make our lives easier, more efficient and safer or the harbinger of the downfall of the human race. What AI can actually do today, how the systems learn and where the dangers of artificial intelligence might lie are all explained in conversation with physicist and neurobiologist Christoph von der Malsburg.

#explore: Where are we already encountering Artificial Intelligence in our daily lives?
Christoph von der Malsburg: The closest we get to it is probably when we talk to our smartphones – if, in other words, the mobile phone understands language and can translate it into text and respond to it. This comparatively recent development goes back to Sepp Hochreiter's diploma thesis from 1991 at the Technical University of Munich. He invented the principle of long short-term memory (LSTM) and published his findings in 1997.
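
To make the LSTM idea concrete, here is a minimal sketch of a sequence model of the kind used in speech and language processing. It is purely illustrative and not taken from the interview; it assumes the PyTorch library, and all layer sizes, names and data are invented for the example.

```python
# Minimal sketch (illustrative only): an LSTM maps a sequence of
# acoustic-style feature vectors to per-time-step output scores,
# keeping a "long short-term memory" of earlier time steps.
import torch
import torch.nn as nn

class TinySpeechModel(nn.Module):
    def __init__(self, n_features=40, hidden=128, n_classes=30):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, n_features)
        seq, _ = self.lstm(x)        # seq: (batch, time, hidden)
        return self.out(seq)         # class scores for every time step

model = TinySpeechModel()
dummy = torch.randn(2, 100, 40)      # 2 utterances, 100 feature frames each
print(model(dummy).shape)            # -> torch.Size([2, 100, 30])
```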

You’ve been working on Artificial Intelligence for more than 50 years. How have the systems evolved in that time?
The ideas that dominate the field are actually 50 to 60 years old. In terms of content, you can say that the Artificial Intelligence field is stagnating. But a few years ago, a technological explosion took place under the name of “Deep Learning”. This, too, is an old idea and goes back to the American computer scientist Frank Rosenblatt, who first publicised it in 1962. Deep Learning works today because we now have very powerful computers and, above all, huge amounts of data: for example, photographs on which people have labelled the objects that can be seen in them. With millions of such images and millions of trial-and-error adjustments, the artificial neural networks are trained until their connections are tuned so comprehensively that, in most cases, they give the right answer. In other words, the learning principle rests entirely on the statistics of answers dictated by human beings from the outside.
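
The supervised training loop described here can be sketched in a few lines. The following is a minimal illustration only, again assuming PyTorch; the network, the random stand-in data and the hyperparameters are invented for the example and are not from the interview.

```python
# Illustrative sketch: supervised "deep learning" as described above -
# a network is shown labelled images and nudged, by trial and error,
# towards the answers people have already provided. Data here is random.
import torch
import torch.nn as nn

net = nn.Sequential(                     # a tiny image classifier
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, 10),                  # 10 human-defined object classes
)
optimiser = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 3, 32, 32)      # stand-in for labelled photographs
labels = torch.randint(0, 10, (64,))     # answers "dictated from the outside"

for step in range(100):                  # millions of such steps in practice
    optimiser.zero_grad()
    loss = loss_fn(net(images), labels)  # compare prediction with human label
    loss.backward()                      # adjust connections towards the label
    optimiser.step()
```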

How is this similar or different from our way of learning?
It is, in fact, fundamentally different. After all, children literally soak up the world in the first three years of life: they learn new words, processes, objects and their uses at breakneck pace; they act in and understand their immediate surroundings. Our brain does this based on about one gigabyte of genetic information and a few gigabytes of environmental information. To describe the human brain's connection matrix, however, you need a petabyte – a million times more data. It follows that the brain must have a very powerful means of structuring itself. The nerve cells must organise themselves – there is no other meaningful explanation for this discrepancy – and this self-organisation is, in my view, responsible for 99.99 percent of the brain's structure. As a result, our brains work many orders of magnitude more efficiently than any AI can even begin to do today. So far, however, I think this key insight hasn't yet been incorporated into the development of Artificial Intelligence.

"So today's AI isn't really all that smart yet."

Christoph von der Malsburg, physicist and neurobiologist

For which purposes is AI already particularly well suited and for which is it less so?
In all situations that are predictable or routine, today's Artificial Intelligence can already be put to good use – that is, wherever there are data sets with which the machines can be or have already been fed, so that human beings dictate the answers. Object recognition in images and translation already work well – at least for standard texts; but when it comes to complicated chains of reasoning, the systems come up against their limits.

In situations that can’t be foreseen in detail, things are completely different. Autonomous cars are a controversial example of this. Driving on the motorway, keeping a safe distance and changing lanes are no problem at all for the systems and were possible in principle as long as 20 years ago. But in urban traffic, every situation is new – and this is where the data sets fall short. Fevered attempts are now being made to tackle this problem by using computer graphics to reproduce every possible variant of these kinds of exceptional situations, known as corner cases or edge cases. But because these systems can only work statistically, the AI needs something like 10,000 examples of each traffic situation – in which, say, the colour of the car or the exact geometric arrangement varies – before it can recognise that situation. Reality, however, has so much imagination that this problem will never be fully mastered.
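
A toy sketch of this corner-case approach: real pipelines render such variants with computer graphics, but simply enumerating a few varied parameters already shows where the quoted order of magnitude comes from. The parameters and values below are invented for illustration and are not from the interview.

```python
# Illustrative sketch only: counting synthetic variants of one exceptional
# traffic situation, produced by varying a handful of scene parameters.
import itertools

car_colours = ["red", "blue", "black", "white", "silver"]
pedestrian_offsets_m = [round(0.5 * i, 1) for i in range(20)]   # 0.0 .. 9.5 m
sun_angles_deg = range(0, 180, 5)                               # 36 angles
weather = ["dry", "rain", "fog"]

variants = list(itertools.product(car_colours, pedestrian_offsets_m,
                                  sun_angles_deg, weather))
print(len(variants), "rendered examples for a single corner case")
# -> 5 * 20 * 36 * 3 = 10800 examples, roughly the order of magnitude
#    mentioned above - and that still covers only the variations we imagined.
```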

So today's AI isn't really all that smart yet. After all, real intelligence means being able to realise general goals in changing situations. In traffic, that would be something like: “Never cause any damage, either to yourself or to others.” Current AI is incapable of understanding and applying such abstract principles and concepts. So, while primary school pupils can already manage transfer tasks in maths lessons, AI cannot manage them at all.

How and in what fields will AI develop in the coming years?
These artificial neural networks are another tool on the path towards digitalisation. Digitalisation and Artificial Intelligence are still very expensive, because every problem has to be tackled individually, and in far too much detail, by expensive IT experts. We would of course love to avoid this. The next logical development step would therefore be the automation of automation: letting the system create a smart city, for example, by setting it targets at an abstract level – flowing traffic, low energy consumption, clean roads – which it then realises on its own. If a big football match were coming up or an accident were to take place, this kind of smart city would autonomously divert traffic flows and alert people. Assuming, of course, that the AI is intelligent not just in name but also in fact.

"The current development is being blocked by ingrained prejudices."

Christoph von der Malsburg, physicist and neurobiologist

When it comes to “AI”, many people think of a kind of superbrain that can do everything better, faster and more flawlessly than we humans, that can teach itself things at a frenzied pace and develop self-awareness and will one day replace us. How realistic is such a scenario from your point of view?
American author and futurist Ray Kurzweil claims – as do others – that such higher intelligence will be a reality in the next 20 years. However, I consider this to be the sensationalist prediction of crazy people who know nothing about the subject. Reputable experts believe that we’re still more than 200 years away from true Artificial Intelligence. Experts working on autonomous cars also reckon that it will take 30 to 40 years for the systems to actually understand traffic situations – unless a conceptual breakthrough occurs.

I believe that the current development is being blocked by ingrained prejudices. The first prejudice is “intelligent design” – the assumption that if human beings want to instil intelligence into a machine, they have to impart it from their own minds. This is the algorithmic principle that forms the backbone of digitalisation: a person solves a problem in his or her head and programs this solution into the machine as an algorithm. It must be replaced by a concept through which intelligent structures grow in the machine in a self-organised way.

The other prejudice lies in the question – which has never yet been satisfactorily answered – of how the nerve cells in our brains create this drama that we live through every day from within, such as the scene before me in my office. The answer that currently prevails is that every nerve cell stands for an elemental symbol: the colour red, for instance, a vertical edge, my grandmother, a pencil or a dog. But what is missing from this theory of elemental symbols is what we use to compose words and sentences from letters: an arrangement. I described this as the “binding problem” 40 years ago. If this question about the structure of the brain were answered correctly, the vision of an intelligent and conscious machine that can think about itself and the world could very quickly become a reality.

So real intelligence in a machine would also have self-awareness, motivations and self-defined goals?
Well, after all, we all came into the world with predetermined abstract goals: we don't want to freeze, we don't want to starve, we want to be with our mother, we want to keep out of danger and to reproduce. In this sense, goals would also have to be built into at least the first generation of such an Artificial Intelligence – because they don't develop by themselves. The Artificial Intelligence could then reflect on those goals and define new ones for a second generation. And at some point we could lose control.

Shouldn’t we abandon the further development of AI now to avoid the possibility of danger in the future?
That’s a nice thought. But the thing is that our entire social and economic system is designed to become ever better and more efficient. The economy must grow, costs need to be reduced, human work should be either made easier or replaced altogether. This objective is deeply rooted in our economic system. And the logical means of achieving this goal is automation – that is, rendering processes and, by extension, people superfluous. If you were to think that through to its conclusion, you would end up in a nightmarish dystopia.

"I see a much greater danger in the fact that our human lives are becoming increasingly meaningless."

Christoph von der Malsburg, physicist and neurobiologist

Real Artificial Intelligence is still the stuff of science fiction. Should we be afraid of AI at this moment, or should we rather be wary of what people can do with it?
There’s no doubting the fact that today’s evildoers are human beings, not machines. Current technology already makes it possible to manipulate every single individual and society at large. From personalised advertising based on the observation of our search and click behaviour to the possible manipulation of elections, as has long been suspected in the case of Cambridge Analytica and the election of Donald Trump. And, of course, automated weapons of war can and will be developed with the Artificial Intelligence we already have today.

As a society, can we ensure that AI is only used “for the good of humanity”?
The legislature can of course have an influence. In terms of data protection, the General Data Protection Regulation (GDPR) is already a first step in the right direction at European level. And international agreements can of course be reached to rein in the development of automatic weapons. After all, the atomic bomb didn’t wipe us out across the board; if anything, it’s probably prevented war. In this respect, I’m not that scared of automatic weapons.

I see a much greater danger in the fact that our human lives are becoming increasingly meaningless. At the moment, our development is entirely geared towards improving the quality of life of the individual. And this in a completely misguided sense – in terms of more consumption, less work, more leisure time and increasing uselessness. But we’re instinctively wired to fight for life every day in those social interactions with others with whom we have a relationship of trust. This way of life is increasingly being rationalised out of us.

If Artificial Intelligence were to continue to drive this development, our lives would be permanently hollowed out to the point of utter futility. If a higher intelligence actually develops one day, one might almost hope in this regard that it would force us to return to something close to our best – by making sure that we need to work harder again to achieve our goals. Of course, this intelligence could also come to the conclusion that we’re a plague on the face of the Earth and need to be culled. I'm not going to be around to experience that, and I don’t suppose you will be, either. But it could really get this dystopian if we don’t adopt countermeasures at the regulatory and social levels.

ABOUT

Physicist, neurobiologist and university lecturer Christoph von der Malsburg is a pioneer of technical facial recognition and one of the most distinguished German researchers in the field of Artificial Intelligence. He has worked in the Department of Neurobiology at the Max Planck Institute for Biophysical Chemistry in Göttingen and at the University of Southern California in Los Angeles and is co-founder of the Institute for Neuroinformatics at the Ruhr-Universität Bochum. He is currently a Senior Fellow at the Frankfurt Institute for Advanced Studies where he is researching neural networks and computer vision.