The rapid development of artificial intelligence enables cybercriminals to create realistic forgeries of voices, faces, and texts. These so-called deepfakes can be digital images, audio, video, or text content generated using artificial intelligence.
Fraud attempts using deepfakes are becoming more frequent – both in professional and private contexts. Therefore, special caution is required.
In the following article, we show you how to recognize deepfakes more easily and avoid common mistakes.
Key points:
Deepfakes deceive targeted individuals with realistic voices, faces, and text generated by AI – and are used by cybercriminals both in professional and private settings.
Possible signs of deepfakes include unnatural movements, distortions, or language clues such as repetitions of words and patterns or an inconsistent voice style – pay attention to warning signals and trust your gut feeling.
Protect yourself by verifying identities and being suspicious of unusual requests or content. Sharpen your digital media literacy and use available digital tools if necessary.
Detecting deepfakes: what to look out for
With the rise of artificial intelligence (AI), artificially generated voices, faces, and texts are becoming increasingly realistic – and can be hard to spot at first glance. It is therefore critical to know the signs to look out for and to rely on your instincts.
Key points to consider:
Trust your instincts
If something seems strange to you, ask specific questions – either directly in conversation or through another known contact method. For example, ask for details that only the real person would know, or contact the person for verification through a phone number, email address, or messaging app you already know to be theirs.
Inconsistencies in facial movements
In videos, pay attention to jerky movements, inappropriate facial expressions, strange proportions, or unsynchronized lip movements. AI-generated faces often have problems with natural movements like blinking, frowning, or small muscle twitches.
Inconsistencies in the voice
Distortions, pauses, echoes, or a monotonous speech melody in calls or voice messages can be signs of a fake, AI-generated voice.
Strange language
AI-generated texts are usually free of spelling errors but may read as impersonal, contain repetitions, stock phrases, or terms the supposed sender would never use.
Visual inconsistencies
Check if light, shadows, or reflections in images look realistic. If you are unsure about images, use a reverse image search to find the images elsewhere on the internet.
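The reverse-image-search idea can be illustrated with a small sketch. Many duplicate- and manipulation-detection tools build on perceptual hashing; the `average_hash` and `hamming_distance` functions below are a minimal, hypothetical pure-Python illustration of that technique (real tools first resize a photo to a small grayscale grid before hashing):

```python
# Toy sketch of a perceptual "average hash", a technique that
# reverse-image-search and duplicate-detection tools build on.
# Images are represented here as flat lists of grayscale pixel
# values (0-255); real tools would first downscale a photo to a
# small grid (e.g. 8x8) before hashing.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming_distance(h1, h2):
    """Count differing bits between two hashes of equal length."""
    return sum(a != b for a, b in zip(h1, h2))

original = [10, 200, 30, 220, 15, 210, 25, 230, 12]
edited   = [10, 200, 30, 220, 15, 210, 25, 230, 250]  # one region altered

h1 = average_hash(original)
h2 = average_hash(edited)
# A small Hamming distance means the images are near-duplicates;
# a large one suggests heavy editing or an unrelated image.
print(hamming_distance(h1, h2))  # prints 1
```

A near-zero distance from an image found elsewhere online suggests the picture is a lightly edited copy – exactly the kind of clue a reverse image search surfaces.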
Unexpected messages
If a call, message, or video seems suspicious to you, question it. Is it plausible? Did I expect it? Also question the sources. Where does the video or image come from? Is the source trustworthy? Is the identity of the person clearly confirmed?
Use good image quality
Watch videos on a larger screen rather than on your phone. High resolution and correct colour settings help to highlight details and possible inconsistencies – for example, in the skin or facial expressions.
The dangers of deepfakes
Deepfakes are more than just technical gimmicks – they pose a serious threat.
- Targeted fraud: cybercriminals use deepfakes to forge identities and gain trust. A fake call or a realistic-looking video message from the "boss" or your bank can be enough to get people to disclose sensitive data or transfer large sums of money.
- Spread of disinformation: deepfakes can manipulate opinions by making politicians, celebrities, or journalists appear to make specific statements. This can influence political debates and undermine trust in reputable sources of information.
- Reputation damage and extortion: manipulated videos or images can be used to expose, blackmail, or discredit people – for example, through alleged compromising content.
- Social impact: the more realistic deepfakes become, the harder it is to distinguish between real and fake. This fosters distrust of real content – a development that can endanger democratic processes.
Common fraud variants using deepfakes
Whether fake audio, video, or photographs – deepfakes are becoming increasingly sophisticated. Cybercriminals use them specifically to gain trust and cause harm. Let’s look at some of the most common fraud schemes currently in circulation – and how to recognize them.
Fake voice messages
Attackers can create voice messages that appear to come from someone you know. You might be asked to disclose sensitive information or open a document that contains malware.
Manipulated phone calls or video calls
Thanks to advanced technologies, it is possible to fake voices in real time during a phone call. A well-known example is the case of a British energy company whose CEO believed he was speaking with the managing director of the parent company and subsequently transferred 220,000 euros to a fraudulent account.
Deepfakes in video conferences
In Hong Kong, a financial employee of a company was deceived when fraudsters used deepfake technology in a video conference to pretend to be the CFO and other employees. The employee was instructed to initiate transactions, resulting in a loss of around 23.5 million euros.
Shock calls with fake voices
Fraudsters use deepfake technologies to pose as family members or friends and ask for money in fake emergencies. An example is the so-called grandparent scam, where older people are called by supposed grandchildren and asked for financial support.
Fake news reports
Criminals create deepfake videos in the style of well-known news formats, in which famous people allegedly promote lucrative investments. One goal could be to get viewers to visit fraudulent websites in order to invest money.
Manipulated photos
Deepfake photos created with image generators can show people in situations that never took place. Examples include images of celebrities in embarrassing or fabricated situations, used to spread false information or damage their reputation.
Protective measures against deepfake fraud
- Identity verification: verify the identity of the sender through questions or alternative communication channels.
- Awareness: educate yourself and those around you about digital media and raise awareness of the dangers of deepfakes.
- Technical solutions: use software and tools that help detect deepfakes.
- Stay vigilant: critically question unusual communications to protect yourself from the increasing dangers of deepfakes.
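As a rough illustration of what such detection tools look for under the hood, the sketch below flags audio whose loudness barely varies between passages – one weak signal of a flat, synthetic voice. The function names, sample data, and threshold are invented for this toy example; real detectors rely on far more sophisticated acoustic features:

```python
# Toy illustration of a signal-level heuristic related to the
# "monotonous speech melody" warning sign: synthetic voices can
# sound flat, so unusually low variation in loudness across a
# clip is one (weak) indicator. This is a sketch, not a real
# deepfake detector.

import statistics

def loudness_profile(samples, window=4):
    """Average absolute amplitude per fixed-size window."""
    return [
        sum(abs(s) for s in samples[i:i + window]) / window
        for i in range(0, len(samples) - window + 1, window)
    ]

def seems_monotonous(samples, threshold=5.0):
    """Flag clips whose loudness barely varies between windows."""
    profile = loudness_profile(samples)
    return statistics.stdev(profile) < threshold

# A lively clip: loudness swings between quiet and loud passages.
natural = [2, 3, 2, 3, 40, 50, 45, 48, 5, 4, 6, 5, 60, 55, 58, 62]
# A flat clip: nearly constant loudness throughout.
flat = [20, 21, 20, 19, 20, 21, 19, 20, 21, 20, 20, 19, 21, 20, 20, 21]

print(seems_monotonous(natural))  # False: loudness varies a lot
print(seems_monotonous(flat))     # True: suspiciously uniform
```

A single heuristic like this produces many false alarms on its own, which is why dedicated detection software combines many such features – and why human judgment and identity verification remain essential.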
Frequently asked questions about deepfakes
What exactly are deepfakes and how are they created?
Deepfakes are deceptively realistic media content – usually videos, audio, or images – created or altered using artificial intelligence. Criminals use AI models trained on recordings of a person's voice, photos, videos, or writing style. "Training data" for deepfakes can even be gathered by calling the person whose voice is to be cloned under a pretext, simply to record their voice.
How do deepfakes differ from other forms of digital manipulation?
Unlike simple image edits or fake profiles, deepfakes create realistic and dynamic content – such as people talking in videos or believable phone conversations. The forgery is often hard to detect.
Why are deepfakes a growing threat?
Deepfake technology is getting better and more accessible, with many basic software solutions even available for free. Additionally, recordings for training AI-based applications are becoming easier to find – especially on social media. Deepfakes can exploit trust, for example, by convincing people to transfer money or making false information appear credible.
Where are deepfakes already being used?
Deepfakes are found in entertainment (e.g., movies, memes), but also in politics, for spreading disinformation, and in journalism, where manipulated content can distort real news.
What risks do deepfakes pose to individuals, companies, and society?
Individuals can be emotionally or financially harmed by the impact of deepfakes, while companies can suffer financial or reputational damage. Socially, deepfakes promote distrust, disinformation, and manipulation in public discourse.
How should I behave during a suspicious call or video call?
End the conversation if something seems strange to you. Ask specific questions that only the real person would know. Call back – but only through a known, secure number.
Can a fraudulent conversation via email or messenger be AI-generated?
Yes. There exist AI models specifically trained for fraudulent purposes. They generate deceptively real, personalized responses and questions designed to build trust and prompt action.
Can a person's handwriting be faked using artificial intelligence?
Yes, it is technically possible. Just a few lines of real handwriting are enough to create a deceptively real forgery – for example, for signatures or notes. The use can be legitimate, e.g., for design, animation, or accessibility, but can also be misused for fraud and identity theft.