Most of us believe that we’re capable of telling truth from fiction. The rise of deepfakes may make that much harder. Deepfakes are digital content, including audio, video, and still images, created by artificial intelligence (AI) to look and sound real. They often depict people doing and saying things they didn’t do or say. Increasingly, cybersecurity experts and technologists warn that this content could be used in a variety of malicious ways – including in fraud schemes or the spread of misinformation.


One area IEEE experts are warning about regarding the rise of deepfakes is the world of celebrities and politics, but attacks can also target individuals.

IEEE Member Rebecca Herold shares an anecdote that illustrates the risk of deepfakes.

“A friend of mine who was on a business trip received a call from his wife,” Herold said. “She was very upset, sobbing, telling him she was in an accident and didn’t have any money with her to pay for a tow truck. My friend almost believed her; she sounded just like his wife. But after telling her that he was going to wire some money to the tow truck business, he put her on hold and first called his wife’s phone number. His wife was safe. He said he came pretty close to being scammed; the audio was quite good.”

IEEE Member Aiyappan Pillai noted a couple of recent attacks using deepfake voice calls. “One was used to impersonate the CEO of a company and effect a money transfer. Another involved a deepfake hologram of a senior executive on a video call at a cryptocurrency firm to extract confidential information from other executives. Details are available in the public domain.”

These attacks are of concern because the stakes go beyond financial losses, loss of reputation, and threats to peace and privacy: the digital and virtual world that has enabled great progress in the 21st century, especially during the pandemic, is at risk of unravelling due to issues of trust. The ease and minimal cost of creating deepfakes make containment an even greater challenge.


While deepfakes can be incredibly convincing, they often contain subtle clues that they may not be real.

“At present, deepfakes aren’t completely indistinguishable from genuine videos, but we’re edging closer,” said IEEE Member Yale Fox. “Most people can still identify a deepfake video.”

But as they get better, Herold urges consumers to look for these “tells”:

AI is getting very good at creating front-facing images of people. However, it still has problems with details such as how people look from the side and from behind, so be sure to examine images from all angles. Deepfake photos and videos also often put far more teeth in people’s mouths than humans actually have. If someone is smiling or otherwise showing their teeth, look at them closely.

Deepfake photos and videos also often give people too many, or too few, fingers, so count how many fingers the photo or video subject has. Deepfakes likewise have a problem with profile and angled views of people. If you are communicating with someone via livestream video and you suspect it is a deepfake, ask them to look to the side, or ask a question that would cause them to turn their head, such as, “Hey, I like that painting behind you on the wall. Who is the artist?”

Deepfake photos and videos often contain inconsistencies in lighting, reflections and shadows. Deepfake videos often contain quick but unnatural-looking movements; for example, look for unusual jerking motions or apparent skips in time. Deepfake audio often has a tinge of, or occasional, unnatural or inconsistent qualities.


The potential for deepfakes to create societal harm has sparked a wave of research into detection tools by governments, universities and private industry.

Some techniques evaluate the suspicious content itself. For example, some tools evaluate blood flow in a human face, look for unnatural blending or blurring, or check whether the reflections in a person’s eyes match the surroundings as an indicator of authenticity.

“Threat actors generating deepfakes will likely learn from the detection algorithms and adjust their own technologies accordingly,” said IEEE Senior Member Kayne McGladrey. “What’s essential here is that the primary distributors of video and audio content invest in deploying these solutions at scale to prevent the spread of misinformation or disinformation.”