22 May 2024

Lahore, Pakistan

Social awareness, public education key to reducing harms of deep fakes

A doctored video clip has ‘enough power’ to sway public opinion, manipulate young minds, target famous personalities, endanger democracy, and cause societal chaos on a huge scale

By Muhammad Saad

Imagine a world of uncertainty where no one believes anyone and distinguishing the real from the fake is impossible. A doctored video clip has ‘enough power’ to sway public opinion, manipulate young minds, target politicians and famous personalities, endanger democracy, and thus cause societal chaos on a huge scale. The challenges brought by technological advancement have become a curse for humanity and a blessing for brutality, and gaps in the judicial system allow anyone to misuse anyone else’s digital property unchecked. This imagined world of chaos is not much different from our reality, where the existing problems caused by deep fakes are only the tip of the iceberg.

‘Technological advancements become a ‘curse’ for humanity and a ‘blessing’ for brutality’

With advancements in artificial intelligence, the chances of harm from fabricated videos are higher than ever. Such fake videos are created using deep fake technology. You may have come across fabricated content that looked real yet felt hard to believe; if so, you might have been exposed to a deep fake. A deep fake is a video, audio clip, or picture manipulated using deep learning, a branch of artificial intelligence. The term “deep fake” traces back to 2017, when a Reddit user of that name superimposed celebrities’ faces onto pornographic content using deep learning. With the growing availability of computing power over the years, machine learning algorithms have become increasingly sophisticated, raising the quality of deep fakes.

Much of this evolution has been fueled by generative adversarial networks, in which two neural networks compete: one generates fake content while the other tries to detect it, and each improves against the other. Like any technology, deep fakes can be used for both good and evil. On the side of human betterment, deep fake technology has use cases in industries such as healthcare and entertainment. During the coronavirus pandemic, diagnosing diseases arising from coronavirus infection was difficult because of a shortage of X-ray, CT, and MRI images, and of the resources to interpret them. Here deep fake technology proved useful: computer scientists first produced synthetic deep fake images with the help of artificial intelligence and then used them to train AI models.

The trained models could then compare those images with a patient’s scans to help diagnose whether he had the disease. Moreover, training AI models on real people’s data can create privacy concerns and accuracy problems; to tackle these challenges, realistic synthetic data is produced using deep fake technology. In 2019, Canny AI, an Israeli startup, created a doctored video of Facebook CEO Mark Zuckerberg in which he appeared to say, “imagine a man controlling billions of people’s data and thus their lives and future.” Shockingly, the video was nearly indistinguishable from the real thing. It was made by applying deep fake technology to 2017 footage of Zuckerberg.

‘Govt can implement legislation against dissemination of malicious content on social media’

The video was made to raise awareness about the harm deep fakes can cause in society. Clearly, deep fakes can have a severe impact on the public: through such content, bad actors can spread misinformation to fulfill their ambitions, whether illicit financial gain, generating more clicks, or igniting social unrest by misleading the masses. One such incident occurred recently, when a manipulated video of Elon Musk was spread on social media for someone’s monetary interest. In the video, Musk appeared to promote a new cryptocurrency; many people, particularly in Europe, invested heavily, causing a major swing in the crypto price.

In countries like Pakistan, where only a small share of even the literate population has technical knowledge, the odds of havoc are highest. Politically doctored deep fakes also pose an ominous danger to today’s democratic landscape, as malicious actors can use them to sway public opinion about a specific politician. For instance, a fake video of American politician Nancy Pelosi went viral on social media in which she appeared to be drunk. Earlier this year, nearly 25,000 robocalls were made to residents of New Hampshire, an American state, in which a fake voice of US President Joe Biden told people not to vote in the primary election and instead to save their votes for the general election.

‘Identifying deep fake is a daunting task for those without tech understanding’

All these fake videos can tarnish a person’s reputation, and in this way even democracies can be targeted by deep fake technology. Recent innovations in artificial intelligence have added fuel to the fire. Producing a close-to-real deep fake usually took a few days, until OpenAI launched its text-to-video model, Sora, a highly capable tool that can create realistic videos almost indistinguishable from genuine footage. Another problem is the open availability of such AI tools: anyone, anywhere, can generate content with them. Technology is becoming ever more sophisticated with the passage of time, and this chaos is just the beginning.

Identifying deep fakes is a daunting task for those without technological understanding. Nonetheless, various methods exist for spotting these deceptive digital creations. Foremost, a zero-trust mindset is important: never trust anything without verifying it. Fake videos often carry telltale signs, including inconsistencies in skin texture and body parts, poor synchronization between lip movements and voice, abnormal blinking patterns, and unusual facial expressions. Still, as generative AI models grow more sophisticated, discerning deep fakes is becoming increasingly challenging, so technological detection systems are also necessary.

Governments and big tech companies can both play a role in preventing the proliferation of deceptive content. Governments can implement legislation against the dissemination of malicious content on social media, and through social awareness and public education the harms of deep fakes can be greatly reduced. Tech companies, for their part, can promote the development of robust machine learning algorithms to detect and remove such content from their platforms. Additionally, investing in research and development of advanced detection algorithms and forensic tools can augment our capacity to identify and mitigate the impact of deep fakes.

Muhammad Saad, born and raised in Lahore, is a student of BS Computer Science at COMSATS University