In an age where ‘fake news’ is a near-unavoidable minefield, it’s tricky whittling things down to the truth. With the introduction of deepfakes, the online landscape has become even more treacherous to navigate. This technology can make it look as though anyone said anything, at any point in time.
Anyone with a computer and internet access can create a deepfake. Although they have circulated in the porn industry for years, deepfakes have recently been causing quite a stir in the broader political and economic world. The technology is readily available through apps like Faceswap, Zao and DeepFaceLab, free for anyone with a phone or laptop, making it easy to spread altered information in seconds.
Recently, the CEO of a UK-based energy firm was scammed out of $243,000 when a deepfake audio call mimicked his boss’s voice, down to the subtle German inflection and general melody. In response, Facebook has just sunk $10 million into deepfake detection.
Here’s why deepfakes are causing such a stir.
Deepfakes have the ability to create fake news and malicious hoaxes. They can even make a person appear far more aggressive than they really were, and they pose a serious threat for years to come.
What are deepfakes?
‘Deepfake’ combines the terms ‘deep learning’ and ‘fake’, and was originally coined in 2017 by a Reddit user of the same name. Deepfakes are a product of artificial intelligence: ‘deep learning’ refers to layered neural networks that learn patterns and structure from large amounts of data, allowing the software to make intelligent decisions on its own.
The danger in this is “the technology can be used to make people believe something is real when it is not,” said Peter Singer, a cybersecurity and defence strategist.
How do they work?
The voice side of deepfakes builds on the same technology that powers assistants like Amazon’s Alexa and Apple’s Siri. Alexa has even introduced celebrity voices like Samuel L. Jackson’s, so it’s no surprise that these technologies have been put to insidious purposes, too.
Similar technology is even used to create powerful Instagram influencers like Lil Miquela, who has 1.5 million followers and was listed among Time Magazine’s Top 25 most influential people on the internet last year.
Deepfakes work by feeding a deep-learning program photographs and videos of a target from multiple angles; the program studies this material until it can produce a convincing counterfeit image and mimic the target’s behaviour and speech patterns.
This ability to fabricate actions and convincingly clone people means that anything seen or heard digitally could be an imitation, and that carries some truly malicious consequences.
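The face-swap systems behind many deepfakes are built on an autoencoder: one shared encoder compresses every face into a common code, and a separate decoder per identity learns to reconstruct that person’s face from it. The sketch below is a heavily simplified illustration of that training loop only — the “faces” are single numbers and every name and value is invented for the example, whereas real systems train convolutional networks on thousands of video frames.

```python
import random

random.seed(42)

# Stand-in "faces": person A's frames cluster near 1.0, person B's near -1.5.
faces_a = [1.0 + random.gauss(0, 0.05) for _ in range(40)]
faces_b = [-1.5 + random.gauss(0, 0.05) for _ in range(40)]

w_enc = 0.5                        # ONE shared encoder weight
decoders = {"A": 0.5, "B": 0.5}    # one decoder weight per identity
lr = 0.02                          # learning rate for plain gradient descent

for _ in range(2000):
    for name, faces in (("A", faces_a), ("B", faces_b)):
        x = random.choice(faces)
        z = w_enc * x                  # encode into the shared latent space
        x_hat = decoders[name] * z     # decode with that identity's decoder
        err = x_hat - x                # reconstruction error
        # Gradient of the squared error with respect to each weight:
        decoders[name] -= lr * err * z
        w_enc -= lr * err * decoders[name] * x

# After training, each encoder/decoder pair reconstructs its own identity
# well. In a real system, the "deepfake" step is the swap: run person A's
# frames through the shared encoder but decode with person B's decoder,
# rendering B's appearance with A's pose and expression.
recon_error_a = abs(decoders["A"] * w_enc * 1.0 - 1.0)
print(f"reconstruction error for A: {recon_error_a:.4f}")
```

In this scalar toy the swap is only architectural; the identity-specific detail that makes the trick work lives in the image textures each real decoder memorises.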
How to detect deepfakes
With the growing prevalence of deepfakes and their spread through media outlets, various companies are investing in deepfake-detection software in the hope of reining the technology in.
“Presently, there are slight visual aspects that are off if you look closer, anything from the ears or eyes not matching to fuzzy borders of the face or too smooth skin to lighting and shadows,” said Singer from New America.
The answer to detecting deepfakes is in the recipe. The same AI that is used to create manipulations can also recognise the telltale patterns they leave behind and flag a faked video. Companies like Microsoft and Facebook have recently invested significant amounts of money in this sort of regulation, and are even organising a competition to spur detection research.
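Singer’s “too smooth skin” cue gives a feel for how such detectors work: generated faces often show less texture variation than real ones. The sketch below is purely illustrative — the “skin patches” are synthetic stand-ins, not real images — reducing each patch to a single texture score and fitting a logistic-regression threshold between real and fake.

```python
import math
import random

random.seed(7)

def make_patch(noise):
    """A flat grey patch plus pixel noise standing in for skin texture."""
    return [0.5 + random.gauss(0, noise) for _ in range(64)]

def texture_score(patch):
    """Standard deviation of pixel values: a crude texture measure."""
    mean = sum(patch) / len(patch)
    return (sum((p - mean) ** 2 for p in patch) / len(patch)) ** 0.5

# Real skin is textured (more noise); deepfake skin is "too smooth".
real = [make_patch(0.20) for _ in range(200)]
fake = [make_patch(0.05) for _ in range(200)]

data = [(texture_score(p), 1) for p in real] + [(texture_score(p), 0) for p in fake]
random.shuffle(data)

# Logistic regression on the single texture feature, trained by SGD.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        w -= lr * (p - y) * x
        b -= lr * (p - y)

def looks_real(patch):
    x = texture_score(patch)
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5

accuracy = (sum(looks_real(p) for p in real)
            + sum(not looks_real(p) for p in fake)) / 400
print(f"detector accuracy: {accuracy:.2%}")
```

Production detectors combine many such cues — mismatched ears and eyes, fuzzy face borders, inconsistent lighting — and learn them from images with deep networks rather than a single hand-picked feature.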
That’s right, Facebook has dropped a smooth $10 million on the Deepfake Detection Challenge, which releases well-crafted deepfakes and challenges entrants to compete, for prizes, to analyse and locate them as quickly as possible.
ZeroFox, a cybersecurity company founded in 2013, now dedicates significant effort to deepfake detection. The field has seen significant advancements in the last few years, as deepfakes have become an increasingly troublesome threat for politicians and celebrities and have moved further into the public spotlight.
What about shallowfakes?
Shallowfakes are essentially low-tech doctored videos: real footage that has been sped up, slowed down or selectively trimmed. Because the edits are so subtle, these clips can be far more believable and much harder to detect.
There was controversy recently surrounding a doctored video of President Trump’s confrontation with CNN reporter Jim Acosta at a press conference. The original footage shows a female White House intern attempting to take the microphone from Acosta, but subtle editing made it look as though the reporter attacked the intern.
The intern’s reach for the mic is slowed down, and the “chop” motion is accelerated. Here’s an annotated side by side comparison: pic.twitter.com/wLCG5GVdo1
— Aymann Ismail (@aymanndotcom) November 8, 2018
It’s this very subtle embellishment of the truth that makes shallowfakes far more insidious. Such cunning surgery on the political fabric can have disastrous effects on people’s opinions, their tempers and, ultimately, their life choices.
This is especially relevant as the US nears its 2020 election. If the Trump administration is manipulating videos of the opposition, it could sway countless voters.
Stay tuned for the dangers of deepfakes on society.