Artificial Intelligence

Fakes at the touch of a button

04 April 2024

Artificial Intelligence is opening the door to a host of new possibilities – including fraud and manipulation. Today’s AI tools can be used to create deceptively realistic fakes of images, videos or sound recordings. What risks do these “deepfakes” pose for society and for companies? Read on for some tips on how to expose them.


Pope Francis appears sporting a fashionable puffy jacket. Russian President Vladimir Putin kneels before Chinese President Xi Jinping during a state visit. Donald Trump is overpowered by police officers and arrested. All these images and videos look real at first glance, but they are actually deepfakes.


What is a deepfake, and how is it created?

The term derives from the process of generation: artificial neural networks are trained via “deep learning” on images, videos or audio recordings to generate a fake that is as convincing as possible – in other words, a deepfake.

A very common method is the use of Generative Adversarial Networks (GANs). In this method, two networks are pitted against each other: One – known as the “generator” – is trained to create fakes that are as realistic as possible. The other – the “discriminator” – is trained to detect and evaluate these fakes. “As these two networks are played off against one another, both the counterfeiter and the detector become more and more accurate, which means that the result gets more and more realistic,” explains Vasilios Danos, AI expert at TÜVIT.
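The adversarial training loop described above can be illustrated with a deliberately tiny sketch. The example below is a hypothetical toy, not how deepfake software actually works: instead of images, the “real data” is just numbers drawn around a target value, the generator is a simple affine map of random noise, and the discriminator is logistic regression. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

REAL_MEAN = 4.0          # the "real data" lives around this value
lr, batch = 0.05, 64     # learning rate and samples per step (arbitrary choices)

w, b = 0.1, 0.0          # discriminator: D(x) = sigmoid(w*x + b)
a, c = 1.0, 0.0          # generator: G(z) = a*z + c, starts far from the real data

c_history = []
for step in range(1500):
    # --- discriminator step: learn to score real samples high, fakes low ---
    x_real = REAL_MEAN + rng.normal(size=batch)
    x_fake = a * rng.normal(size=batch) + c
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    # gradient ascent on log D(real) + log(1 - D(fake))
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # --- generator step: learn to make the discriminator score fakes high ---
    z = rng.normal(size=batch)
    x_fake = a * z + c
    d_fake = sigmoid(w * x_fake + b)
    grad = (1 - d_fake) * w        # gradient of log D(G(z)) w.r.t. G's output
    a += lr * np.mean(grad * z)
    c += lr * np.mean(grad)
    c_history.append(c)

# After training, the generator's output should have drifted from 0
# toward the real data's centre (the two sides "trained each other").
fake_center = float(np.mean(c_history[-300:]))
print(f"real mean: {REAL_MEAN}, generated samples centre on about {fake_center:.2f}")
```

The same contest, scaled up to deep convolutional networks and millions of images, is what lets a GAN produce photorealistic faces: the generator never sees the real images directly, it only learns from the discriminator’s verdicts.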

This is a highly complex process, but one that can be carried out at the touch of a button. All you need is appropriate software and sufficient source material to train the AI. And video or audio recordings of celebrities from the worlds of politics, business and culture are everywhere on the Internet and thus easy to find.

There are now a variety of websites and smartphone apps that make the creation process even easier. “It's never been easier to create realistic fakes and then distribute them on social media,” says Mr. Danos. “And by the time it’s clear that it’s a deepfake, the damage has usually already been done.”


About Vasilios Danos:

Vasilios Danos is an expert in artificial intelligence with responsibility for AI Security and Trustworthy AI at TÜVIT. The graduate electrical engineer turned his attention to the possibilities of artificial intelligence and neural networks while still at university. In 2006, he and a university team took part in the “RoboCup”, the world championships for robot football, in Dortmund.

Fake presidents and line managers

The big worry is that deepfakes will be used to manipulate political sentiment, deepen social rifts and influence elections. That this concern is no flight of fancy was demonstrated by a case that caused widespread consternation in the US in January 2024: In the state of New Hampshire, a robocall mimicking the voice of Joe Biden urged people not to participate in the primary.

Deepfakes are also often used to degrade women – ex-partners or celebrities, for instance – by producing apparently naked pictures of them. In January, X (formerly Twitter) was flooded with fake photos of singer Taylor Swift. The case even reached the White House, which called on Congress to take legislative action.

But cybercriminals have also long since discovered the new opportunities: According to media reports, an employee of a company in Hong Kong transferred 23 million euros to criminals who had used a deepfake to impersonate his superiors in a video conference. This well-known CEO fraud is being taken to a new level by AI, says Vasilios Danos of TÜVIT: “That's why it’s important to make employees aware of possible deepfake fraud attempts.”



How the EU intends to tackle deepfakes

The EU has responded to this artificially generated danger. Ahead of the upcoming European elections, the EU Commission has published guidelines on how to combat political misinformation in general and deepfakes in particular. Based on the Digital Services Act and in anticipation of the AI Act that has now been passed, major platforms such as TikTok and Facebook are being called upon to clearly identify AI-generated content.

Many platforms already hold their users accountable: AI-generated content on TikTok, YouTube, Facebook, Instagram and Threads must now be tagged, otherwise there will be consequences. TikTok and Facebook’s parent company Meta say they are also working on ways to automatically identify and flag deepfakes.


Automated fake scanners

Research is being carried out worldwide on deepfake detectors of this kind. Microsoft and Intel have already released relevant tools. “The systems aren’t perfect yet, but they can already detect a lot of deepfakes,” says Mr. Danos. “So, it would be ideal if all platforms were to implement detectors like these. Otherwise, sooner or later, the legislature will make their use compulsory.” However, a 100 percent detection rate is not to be expected, even in the future. After all, according to Mr. Danos, deepfakes are like computer viruses or other “classical” IT attacks. “As detection systems improve, attackers also evolve their methods – so you end up with a never-ending game of cat and mouse.”

There is some consolation, however: ordinary citizens are not yet helplessly at the mercy of digital fraud, because a lot of AI-generated content can be recognised on close inspection or careful listening. Here’s how to spot deepfakes:


Ensure good image quality

The larger the image or the higher the resolution, the easier it is to detect inconsistencies and flaws. So, if you doubt a video’s authenticity, watch it on a larger monitor rather than on your smartphone. Pausing the video or playing it in slow motion can also reveal errors, thereby exposing manipulation.


Stay alert

As with other sensational content on the internet, the following applies in this case too: Healthy caution is often a wise move. The key questions to ask yourself are these: Can this content also be found on reputable news sites? Have the images or videos already been reviewed by fact-checking portals such as Mimikama or Correctiv? And, above all, are the statements and behaviour of the person shown consistent with what he or she usually says and does? “If, for example, German Chancellor Olaf Scholz appears to say or do something scandalous, your first response should be one of scepticism,” says Mr. Danos. “Especially when you realise how easy it is to create and distribute deepfakes.”


Take a closer look at transitions

In many cases, AI is not used to generate a completely new video but only to replace a person’s face or the area around the lips in order to put other words in the speaker’s mouth. For this reason, in suspect pictures or videos, pay attention to the “seams” – the transitions between the face and the hair or the face and the neck. These are often still blurry in deepfakes. It is also suggestive of a deepfake if the face has a different quality than the rest of the image or video, or if the shadows, lighting or movement of the mouth do not match the rest of the face.


It’s all in the face

When we speak or hold conversations, our faces execute a complex series of motions: we blink and frown, and, when things get heated, a pulsing vein of anger can appear. AI is not yet able to mimic such subtleties of human facial expression well. A closer look, especially when the footage is slowed down, can reveal that an image is a fake. In the case of photo deepfakes, looking at the hands and fingers can be revealing: generators often fail to reproduce these details accurately, leading to anomalies. For example, one hand may suddenly have six fingers, or arms may merge with other objects.