Six things you need to know about deepfakes

We’re at the beginning of a deepfakes revolution.

Artificial intelligence that can create video and audio of people doing or saying things they never did is becoming widespread. Techniques that were once accessible only to Hollywood studios are now on the phone in your pocket.

Here are six things you need to know about deepfakes – from the risks they pose to democracy to their potential in health care.

1. "Deep", from "deep learning"

The "deep" in deepfakes comes from "deep learning", the type of advanced artificial intelligence used to create these fake videos, audio clips and images.

Deep learning involves feeding an algorithm data from which it learns to perform a task without being explicitly programmed to do so. To make deepfake videos, an algorithm is fed images of a subject, such as a celebrity, and learns how to transfer that subject's face onto a "target" in a video.
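The original face-swap systems paired a shared encoder (which learns a compact, identity-agnostic description of expression and pose) with one decoder per person. The sketch below is a toy illustration of that idea only: the weights are random and untrained, the dimensions are tiny, and every name is invented for illustration. In a real system the weights would be learned by minimising reconstruction error over thousands of face images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: a real system works on face crops of e.g. 256x256 pixels.
FACE_DIM, LATENT_DIM = 64, 8

# One shared encoder, plus a separate decoder per identity.
# (Random, untrained weights here, purely for illustration.)
W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM))
W_dec_a = rng.normal(size=(FACE_DIM, LATENT_DIM))  # reconstructs person A's face
W_dec_b = rng.normal(size=(FACE_DIM, LATENT_DIM))  # reconstructs person B's face

def encode(face):
    # Compress a face into a small code capturing expression/pose,
    # not identity (in a trained model; here it is just a projection).
    return np.tanh(W_enc @ face)

def swap_face(face_of_a):
    # The core face-swap trick: encode A's frame, then decode it with
    # B's decoder, producing B's face wearing A's expression.
    return W_dec_b @ encode(face_of_a)

frame = rng.normal(size=FACE_DIM)  # stand-in for one face crop from a video
fake = swap_face(frame)
print(fake.shape)  # → (64,)
```

In practice this is done frame by frame across a whole video, with convolutional networks rather than the single matrix multiplications shown here.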

2. Origins in porn

The origins of deepfakes are in the dark underworld of online pornography.

Sam Cole, a journalist at Vice Media, first spotted the phenomenon in 2017 in a forum on the discussion website Reddit. She wrote about how a user called “deepfakes” was posting videos of celebrities’ faces pasted into porn videos.

These disturbing videos were made without the permission of the women being impersonated, or the porn performers who had made the original videos. Reddit has since banned sharing this type of content on its platform.

The technology to create deepfakes has since become more and more accessible – ordinary women, not just celebrities, have become the targets of these abusive videos. “Nudifying” websites can now synthetically strip women of their clothes in photos.

Victims of deepfake image abuse have campaigned to end the practice, but it can be difficult to track down the creators of the videos and hold them to account.

3. Deepfakes can be used to spread disinformation

Since deepfake technology first became widely available, experts have warned that deepfakes could be used to impersonate politicians to undermine elections and threaten global security.

There haven’t been many examples of deepfakes being used to spread disinformation so far, but in the first few weeks of the war in Ukraine, one significant video emerged. In mid-March, a video began circulating on social media showing Ukraine’s President Zelensky apparently calling on Ukrainians to lay down their arms and give up fighting.

It was a deepfake, and not a very convincing one: the voice didn’t sound like Zelensky’s, and his head looked as if it had been pasted onto his body.

Not many Ukrainians were fooled. Ukraine’s authorities had warned that a deepfake of Zelensky capitulating might appear, so people were primed to be sceptical, and soon after the fake emerged, Zelensky debunked it with a video on his own Instagram account.

Sam Gregory, Program Director at the human rights organisation Witness, says it was in many ways a best-case scenario: “You warn people specifically that it's a bad deepfake, and then you have the person who's in that situation able to speak very credibly and debunk it.”

The worry is that the next time one of these malicious deepfakes pops up, it might not be as easy to spot.

4. But… the biggest threat of deepfakes might be undermining trust in real media

As it becomes easier to create fake videos, the risk starts to grow of real videos being dismissed as fake.

The so-called “liar’s dividend” is the idea that when anything can be faked, liars gain power: by claiming that something true has been falsified, they benefit from the erosion of our trust in images.

It’s not just liars who have dismissed real videos as deepfakes though. Sometimes people just really don’t want to believe something is real.

In June last year, a video of a political prisoner in Myanmar accusing Aung San Suu Kyi of corruption was shared online. Many people thought the video looked odd and jumped to the conclusion that it was a deepfake. Some used unreliable online "deepfake detectors" to back up their views.

Sam Gregory and deepfakes expert Henry Ajder analysed the video and concluded that it was most likely a forced confession. The stilted delivery and the low video quality made it look odd, but it wasn’t a deepfake.

5. In politics, deepfakes are more commonly used to make you laugh or advocate for causes

Satire is a growth area for deepfake technology. Spitting Image used puppets to skewer people in the public eye – now it’s easier than ever to puppeteer politicians for a laugh.

Brazilian comedian and journalist Bruno Sartori has made a name for himself by creating satirical videos of Brazilian politicians like President Bolsonaro and former President Lula singing songs and saying things they never would. The purpose is to amuse people – not deceive them – and sometimes to make a political point at the same time.

Deepfakes have also been used by campaigners to resurrect the victims of injustice.

Joaquin Oliver was 17 years old when he was shot and killed in the Parkland high school shooting in America. His parents set up a gun safety campaign in his memory and created a powerful deepfake of Joaquin calling on voters to prioritise changing gun laws ahead of the 2020 US election.

6. Deepfakes have broad potential – in healthcare and business

As synthetic media and deepfakes take off, their potential in various fields is beginning to be recognised.

Synthetic avatars can be used in advertising and internal communications, limiting the cost of producing, filming and translating videos.

In healthcare, the Ukrainian company Respeecher is developing deepfake voice technology for patients who are unable to speak, creating more expressive, realistic voices to replace robotic-sounding prototypes.

There is great potential to use deepfake technology for positive purposes, as long as it’s clear to those who encounter it that it is synthetic, and it’s not used to deceive or abuse people.

Listen to The Future Will Be Synthesised now on BBC Sounds.
