Rashmika Mandanna's fake video has triggered a debate about AI and deepfakes

Some AI tools are free to use, which only exacerbates the problem of fake photos, videos, and audio

Rashmika Mandanna Deepfake Controversy: What is a deepfake video and how can you spot one?

SUMMARY
  • Deepfake refers to a video that has been edited using an algorithm to replace the person in the original video with someone else
  • Deepfake videos have risen in prevalence after the onset of numerous AI tools
  • While deepfake videos can be highly convincing, there are several telltale signs that can help you identify them


In a recent turn of events, popular actress Rashmika Mandanna has found herself at the center of a controversy involving a deepfake video. The video, which has gone viral on social media, shows a woman entering an elevator, but her face has been digitally altered to resemble Mandanna. The incident has sparked widespread concern. Bollywood icon Amitabh Bachchan, her co-star in the film Goodbye, voiced his concern about the trend of deepfakes and pushed for legal action.

What is a deepfake video?

Deepfake is a term that combines “deep learning” and “fake”. It refers to a video in which an algorithm has replaced the person in the original footage with someone else, often a public figure, in a way that makes the video look authentic. Deepfakes use a form of artificial intelligence called deep learning to fabricate convincing images of events that never happened. Deepfake videos have risen in prevalence with the arrival of numerous AI tools, some of which are free to use and only exacerbate the problem of fake photos, videos, and audio.

How can you spot a deepfake video?

While deepfake videos can be highly convincing, there are several telltale signs that can help you identify them:

1. Unnatural Eye Movements: Look for unnatural eye movements, such as no blinking or erratic movements.

2. Mismatches in Color and Lighting: Notice mismatches in color and lighting between the face and the background.

3. Audio Quality: Check whether the audio quality matches the rest of the video and whether it stays in sync with the lip movements.

4. Visual Inconsistencies: Analyze visual inconsistencies, such as strange body shape or movement, artificial facial movements, unnatural positioning of facial features, or awkward posture or physique.

5. Reverse Image Search: Run a reverse image search on frames of the video, or on the person shown, to check whether the footage has appeared elsewhere in its original form.

6. Video Metadata: Inspect the video's metadata to see whether the file has been altered or edited.

7. Deepfake Detection Tools: Use deepfake detection tools, such as online platforms or browser extensions, that can flag suspicious videos.
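Points 5 and 6 above both rely on the same underlying idea: comparing a file against a record of known originals. As a minimal sketch (the registry below is hypothetical, not any real detection service's database), here is how a fingerprint check on a media file might look:

```python
import hashlib

def file_fingerprint(path: str) -> str:
    """Compute a SHA-256 fingerprint of a media file, reading in chunks
    so large videos don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical registry mapping fingerprints of known authentic
# originals to their source files (placeholder entries only).
KNOWN_ORIGINALS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b":
        "original_elevator_clip.mp4",
}

def is_known_original(path: str) -> bool:
    """True if the file exactly matches a registered original."""
    return file_fingerprint(path) in KNOWN_ORIGINALS
```

Note that an exact hash only detects unmodified copies; real reverse-image-search services use perceptual fingerprints that survive re-encoding and cropping, which is a much harder problem.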

Several technologies are being developed to tackle the problem of deepfake videos and photos:

1. AI-Based Detection: Many tools use AI to detect tampering in videos. For instance, Microsoft has developed a tool that analyses photos and videos to give a confidence score about whether the material is likely to have been artificially created. The tool was built with the public FaceForensics++ dataset and has been tested using the Deepfake Detection Challenge Dataset.

2. Browser Plugins: The AI Foundation created a browser plugin called Reality Defender to help detect deep fake content online. Another plugin, SurfSafe, also performs similar checks.

3. Startups: Several startups are working on innovative solutions to counter fake content. For example, OARO offers tools to authenticate and verify digital identity, compliance, and media. Sentinel is tackling information warfare.

4. Unfakeable Records: OARO Media creates an immutable data trail that allows businesses, governing bodies, and individual users to authenticate any photo or video.
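The "immutable data trail" idea behind such records can be illustrated with a simple hash chain, where each entry commits to the one before it so that any later edit is detectable. This is a minimal sketch of the concept only, not OARO's actual system or API:

```python
import hashlib
import json
import time

def record_entry(chain: list, media_hash: str, author: str) -> dict:
    """Append a tamper-evident record: each entry includes the hash
    of the previous entry, linking the whole trail together."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    payload = {
        "media_hash": media_hash,   # fingerprint of the photo/video
        "author": author,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload["entry_hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    chain.append(payload)
    return payload

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edited entry breaks verification."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

Because each entry's hash covers the previous entry's hash, rewriting any record invalidates every record after it, which is what makes the trail effectively unfakeable without detection.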

The Rashmika Mandanna deepfake controversy has underscored the urgent need for legal and regulatory measures to combat the proliferation of such content. While many of the above-mentioned technologies are being developed at a fast pace, they are either not broadly and easily available or not 100 per cent accurate. The onus of detecting such videos, and of not sharing them, still lies with the user.