Combating Deepfakes: The AI Threat

AI-powered tools and deepfakes pose a significant challenge in the fight against misinformation, as the ease with which AI can be misused to create fake news carries worrying consequences. With the number of internet users in India continuing to grow, the use of deepfakes to spread misinformation is a growing concern. Recent examples highlight the threat of AI-generated fake news, even as AI and machine learning have given journalism several task-facilitating tools. Countering deepfakes and misinformation calls for media literacy and critical thinking curricula in basic education, to build awareness and help people protect themselves, and for a multi-pronged, cross-sector approach that prepares people of all ages to navigate today’s complex digital landscape and stay vigilant against deepfakes and disinformation.

The Rise of Deepfakes and AI-Powered Misinformation

The spread of misinformation has always been a challenge for internet users, but the rise of AI-powered tools and deepfakes has made it much harder to distinguish what is real from what is fake. Deepfakes are synthetic photos and videos in which one person’s face is realistically swapped with another’s. Many of these AI tools are available to internet users on their smartphones at little or no cost.

The ease with which AI can be misused to create fake news has worrying consequences. In India, deepfakes have emerged as a new frontier of disinformation, making it difficult for people to tell false information from truthful information. According to Syed Nazakat, founder and CEO of DataLEADS, the problem will only worsen as different AI bots and tools drive deepfakes across the internet.

The next generation of AI models, known as generative AI, can produce an image, text, or video from a prompt alone, with no source material to transform. Examples include OpenAI’s DALL-E and ChatGPT and Meta’s Make-A-Video. These models are still in the early stages of development, but their potential for harm is clear: with nothing transformed, there is no original content to point to as evidence.
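To make the distinction concrete, here is a minimal sketch of prompt-to-image generation using the open-source diffusers library and the publicly released Stable Diffusion checkpoint (an illustrative choice, not one of the models named above). The text prompt is the only input, so there is no source photo for fact-checkers to trace the output back to.

```python
# Minimal sketch: synthesizing an image from a text prompt alone.
# Assumes the `diffusers` and `torch` packages are installed; Stable
# Diffusion stands in here for the generative models named in the article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")  # requires a GPU; drop `.to("cuda")` to run (slowly) on CPU

# The prompt is the only input: no existing photo is edited or
# transformed, so there is no "original" to compare the output against.
image = pipe("a newspaper front page on a desk, photorealistic").images[0]
image.save("generated.png")
```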

Two AI-generated videos and a digitally altered screenshot of a Hindi newspaper report, shared last week on social media platforms, highlighted the unintended consequences of AI tools: altered photos and doctored videos carrying misleading or false claims.

Those same AI-powered tools have made deepfakes harder to detect across social media platforms. AI chatbots, meanwhile, automate the creation of human-level writing, and newer versions can draw on current information pulled from the internet in real time. OpenAI’s ChatGPT, backed by Microsoft, and Google’s Bard are two such tools in ongoing competition.

In conclusion, the rise of deepfakes and AI-powered tools poses a significant challenge for internet users in the battle against misinformation. With the development of generative AI models, the potential for harm has only increased. It is therefore crucial to keep developing ways to detect deepfakes and to educate internet users on how to identify misinformation.

Deepfakes and Digitally Altered Content: The Unintended Consequences of AI Tools in Spreading Misinformation

The proliferation of deepfakes and digitally altered content has become a major concern in recent years, particularly as these tools become increasingly accessible to internet users. Three recent instances – a doctored video of Bill Gates, a fake video of US President Joe Biden, and an edited photo of a Hindi newspaper report – were widely circulated on social media as real, highlighting the unintended consequences of AI tools in creating altered photos and videos with misleading or false claims.

According to PTI’s Fact Check team, all three were debunked: the videos were deepfakes and the screenshot was digitally edited, all produced with AI-powered tools readily available on the internet. While AI in journalism was once seen as a way to curb the spread of fake news, it has also become a significant challenge for journalists, who must now detect and debunk deepfakes and digitally altered content.

One weakness of deepfakes is that they require original content to work from. The Bill Gates video, for instance, overlaid fake audio on the original footage. Such videos are relatively easy to debunk if the original can be identified, but that takes time and the ability to search for the original content.
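One common way to hunt for an original is perceptual hashing, which gives visually similar images near-identical fingerprints even after re-encoding or light edits. Below is a minimal sketch using the open-source ImageHash library (an illustrative choice, not a tool mentioned in the article); the file names are hypothetical placeholders.

```python
# Minimal sketch: checking whether a suspect video frame derives from
# known archive footage, via perceptual hashing. Assumes the `Pillow`
# and `ImageHash` packages; file names are hypothetical.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_frame.png"))
archive = imagehash.phash(Image.open("archive_frame.png"))

# Subtracting two hashes gives their Hamming distance: small values
# mean the frames are visually near-identical despite re-encoding.
distance = suspect - archive
print(f"hash distance: {distance}")
if distance <= 10:  # threshold is an assumption; tune on real data
    print("Suspect frame likely derives from the archived original.")
```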

While recent deepfakes are still relatively easy to track, Azahar Machwe, an enterprise architect for AI at British Telecom, is concerned that debunking such synthetic videos will become harder in the future. Transforming an original video can introduce defects such as lighting and shadow mismatches, which AI models can be trained to detect; in response, deepfake videos are often published at lower quality to hide those defects from both algorithms and human viewers.
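As a rough illustration of training a model to spot such artefacts, here is a minimal sketch of a binary real-versus-fake frame classifier in PyTorch (an illustrative setup, not a production detector from BT or anyone else). It fine-tunes a pretrained ResNet-18 on labelled frames, where the fake class would include defects like lighting and shadow mismatches; the data/ folder layout is a hypothetical assumption.

```python
# Minimal sketch: fine-tuning a ResNet-18 to classify video frames as
# real or fake. Assumes frames sorted into data/real/ and data/fake/
# (hypothetical paths) and the `torch`/`torchvision` packages.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=tfm)  # classes: fake, real
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two outputs: fake / real

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # token epoch count for the sketch
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```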

AI and machine learning have given journalism several task-facilitating tools, from content generation to voice-recognition transcription. The same technology, however, also underlines the threat of AI-generated fake news. Fake news comes in many forms, and deepfakes can now be produced with very basic AI-powered tools. While most social media platforms claim to reduce the spread of misinformation by building fake-news detection algorithms based on language patterns and crowd-sourcing, such systems can never be 100% accurate, as the sketch below suggests.
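For a sense of what a language-pattern detector looks like, here is a minimal sketch using scikit-learn (an illustrative pipeline, not any platform’s actual system): a TF-IDF bag-of-words representation feeding a logistic regression classifier, trained on a tiny set of hypothetical labelled headlines. Real systems train on far larger corpora and still misclassify, which is why accuracy never reaches 100%.

```python
# Minimal sketch: a language-pattern fake-news classifier.
# The training headlines and labels are hypothetical; real platforms
# train on large labelled corpora and still make mistakes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "SHOCKING: miracle cure the government is hiding from you",
    "You won't BELIEVE what this celebrity said next",
    "Central bank holds interest rates steady at policy meeting",
    "Local council approves budget for new school building",
]
labels = [1, 1, 0, 0]  # 1 = fake-style, 0 = legitimate-style

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(headlines, labels)

# Probability that an unseen headline reads as fake-style:
print(clf.predict_proba(["Doctors STUNNED by this one weird trick"])[0][1])
```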

In conclusion, the unintended consequences of AI tools, namely altered photos and videos carrying misleading or false claims, have become a major concern in the fight against fake news and misinformation. While AI and machine learning have given journalists several task-facilitating tools, it remains crucial to keep developing ways to detect deepfakes and to educate internet users on how to identify and debunk misinformation.

AI and Journalism: The Need for Human-In-The-Loop and Media Literacy

According to Machwe, AI continues to help journalists develop quality content while ensuring timely, rapid distribution. However, a human in the loop will still be needed to check the consistency and veracity of content shared in any format, including text, image, video, and audio.

In India, which had over 700 million smartphone users in 2021, the use of deepfakes is a growing concern. A Nielsen report shows that rural India had more than 425 million internet users, 44% more than the 295 million using the internet in urban India. Given that scale, labelling deepfakes as ‘synthetically generated’ is an important step toward raising awareness and preventing the spread of misinformation.

Nazakat emphasized the need for a media literacy and critical thinking curriculum in basic education to build awareness and help people protect themselves from misinformation. He suggests a multi-pronged, cross-sector approach across India to prepare people for today’s complex digital landscape and keep them vigilant against deepfakes and disinformation. Every educational institution, he added, should prioritize information literacy over the next decade.
