The word is a portmanteau: ‘deep’ refers to deep learning, and ‘fake’ to the untrue or sham nature of the video, audio or image created. Deepfake technology has evolved to such an extent that companies and law enforcement are currently struggling to differentiate between real and fake media (images, video and sound clips).

Some companies are even offering financing to start-ups to develop more sophisticated technology to identify whether media is a deepfake or not. 

Tools for editing media are not new; consider, for example, Photoshop. The power of deepfakes, however, is that the technology has made it far cheaper and easier to produce realistic deepfake video or audio clips.

The production of a deepfake begins with feeding data into the software. The data can be in the form of photos (if one were creating a video) or voice clips (if one were making audio).

The more data that is fed into the software, the more accurately it can compute the statistical patterns of the target’s face or voice.
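To make that idea concrete, here is a minimal, purely illustrative sketch in Python (using the PyTorch library) of the kind of model at the heart of many deepfake tools: a small autoencoder that is fed a folder of face photos and learns to reconstruct them. The folder name, image size and network layout are assumptions chosen for illustration; real deepfake software is far larger and combines such models in more elaborate ways.

# Illustrative sketch only: a tiny autoencoder that learns to rebuild
# face photos from example images. Paths, sizes and layers are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Load a folder of face photos (assumed layout: faces/<person>/*.jpg),
# resized to 64x64 pixels and converted to tensors.
transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("faces", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Encoder/decoder pair: compress each image to a small code,
# then try to rebuild the original image from that code.
model = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1),            # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1),            # 32x32 -> 16x16
    nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16x16 -> 32x32
    nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32x32 -> 64x64
    nn.Sigmoid(),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# The more photos in the folder, the better the model captures the
# statistical regularities of that face -- which is the point made above.
for epoch in range(5):
    for images, _ in loader:
        reconstruction = model(images)
        loss = loss_fn(reconstruction, images)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: reconstruction loss {loss.item():.4f}")

Once a model like this has learned one person’s face well enough, it can be paired with a second model trained on another person’s face to swap the two, which is roughly how face-swap deepfake videos are assembled.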

The concern about deepfakes originally centred on fears that the technology would be used to manipulate politics and even elections; however, the effect on ordinary citizens is equally concerning.

Some examples of deepfakes and their impact:

1. A deepfake video is released right before election day depicting the lead candidate engaging in some form of nefarious activity. The realistic nature of these videos makes it very difficult for the average person (or, for that matter, even technology experts and law enforcement) to consider the possibility of fake news, especially because we generally accept the veracity of video content;

2. Another recent trend is the prominence of non-consensual deepfake pornography (placing someone’s facial image onto a pornographic video), which one study found accounts for 96% of all deepfake videos online. Unfortunately, the same study found that all such videos targeted women.

Fortunately, South Africa has recently passed legislation, the Films and Publications Amendment Bill, which criminalises “revenge porn”. Depending on how the new law is interpreted, non-consensual deepfake pornography is likely to fall within the ambit of revenge porn.

In September 2019, a deepfake cybercrime was reported in which a deepfake audio clip was used.

The clip imitated a chief executive’s voice to trick an unsuspecting subordinate into transferring $240 000 (about R4,2 million) into a secret account. The company’s insurer, Euler Hermes, explained that the company’s managing director was called late one afternoon by what sounded like his superior’s voice, demanding that he wire money to a Hungarian account to save on “late-payment fines”, with the financial details sent over email while the call was in progress.

A spokeswoman from Euler Hermes said, “The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent.”

The rise of deepfakes shows how your personal information (think of all those selfies you have taken, or the voice data your devices have gathered) can be misused in these strange times in which we now live.

Have you been a victim of a tech scam or revenge porn? Tell us your story here.

This article was originally published by ENSAfrica. Read the original article here.
