Concern has been growing over deepfake videos of celebrities and Bollywood stars surfacing on the internet and over the misuse of artificial intelligence technology to create morphed videos.
Prime Minister Narendra Modi on Friday flagged the misuse of artificial intelligence for creating deepfake videos and called it a ‘big concern’. PM Modi said he has asked the AI chatbot ChatGPT to flag deepfakes and issue a warning when such videos are circulated on the internet.
Addressing journalists at the BJP headquarters in New Delhi, PM Modi cited a deepfake video of himself performing Garba. “I recently saw a video in which I was seen singing a Garba song. There are many other such videos online,” PM Modi said.
The statement comes a week after a viral deepfake video of actress Rashmika Mandanna put the spotlight on the issue of AI deepfakes and the largely unregulated access to fast-growing artificial intelligence (AI) technology.
The video shows a woman with Rashmika’s face, wearing a black fitted outfit, entering a lift. The woman’s face has been morphed and edited to resemble Mandanna. The deepfake was created from a video featuring Zara Patel, a British Indian woman who had uploaded the original clip to Instagram.
This is not an isolated incident of AI deepfakes being used against celebrities. A digitally manipulated video of Bollywood actress Kajol has also surfaced on the internet, in which a woman with Kajol’s face superimposed on her body is seen changing clothes on camera. The video actually features a British social media influencer, who had shared the original clip on TikTok as part of the ‘Get Ready With Me’ trend.
Reactions to AI Deepfakes
Mandanna voiced her concerns about the viral deepfake video and said that such misuse of technology is scary not only for her but also for ordinary internet users.
“I feel really hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused," she posted on her Instagram story.
Mandanna’s morphed video left celebrities in shock, with many, including actors Amitabh Bachchan, Keerthy Suresh, Mrunal Thakur, Ishaan Khatter and Naga Chaitanya, calling for legal action.
“yes this is a strong case for legal” https://t.co/wHJl7PSYPN — Amitabh Bachchan (@SrBachchan), November 5, 2023
Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, also called out social media platforms for failing to tackle content containing deepfakes and misinformation.
What is Deepfake?
AI deepfakes are a form of manipulation that uses artificial intelligence to create highly convincing fake content in the form of images or videos. Tools such as machine-learning models, Photoshop and other software readily available online have been used extensively to create deepfake videos, clips and other content.
This AI-generated fake content is designed to appear as if it was created by or features real individuals when, in fact, it is entirely fabricated. Deepfake technology can create fictional photos, morphed videos or even ‘voice clones’ of public figures.
Deepfakes often transform existing content, such as an image or a video in which one person is swapped for another, to generate realistic morphed media. The technology can also be used to create original content in which someone is shown doing or saying something they never did or said.
While the technology has been around for several years, it has become increasingly sophisticated and accessible recently, raising concerns about its potential misuse.
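To make the “swap” idea concrete, here is a deliberately crude sketch using only classical computer-vision tools. Genuine deepfakes are produced by neural networks (typically autoencoders or GANs) trained on many images of the target person; the snippet below, which assumes the opencv-python package and two hypothetical input files, source.jpg and target.jpg, only illustrates pasting one detected face region onto another image, not the learning step that makes real deepfakes look convincing.

```python
# Illustration only: a crude face-region swap with classical CV tools.
# This is NOT how neural deepfakes are made; it merely shows the "swap" concept.
import cv2
import numpy as np

def largest_face(image):
    """Return (x, y, w, h) of the largest detected face, or None."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])

source = cv2.imread("source.jpg")   # hypothetical: image whose face is copied
target = cv2.imread("target.jpg")   # hypothetical: image receiving the face

src_face, tgt_face = largest_face(source), largest_face(target)
if src_face is None or tgt_face is None:
    raise SystemExit("No face detected in one of the images")
sx, sy, sw, sh = src_face
tx, ty, tw, th = tgt_face

# Resize the source face to the target face's size and blend it in place.
patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
mask = np.full(patch.shape[:2], 255, dtype=np.uint8)
center = (tx + tw // 2, ty + th // 2)
result = cv2.seamlessClone(patch, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", result)
```

The point of the sketch is only that existing media is reused: a real face is detected, copied and blended into someone else’s image. Modern deepfake tools automate and vastly improve this step with trained models, which is why the results can be so difficult to distinguish from genuine footage.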
How Are Women Increasingly Becoming Victims of Deepfake Porn?
Deepfakes have mostly been used to generate pornographic content, much of it non-consensual. There has been a recent rise in photo apps that digitally undress women, sexualized text-to-image prompts that create “AI girls”, and manipulated images fuelling “sextortion” rackets across the world, including in Europe and the US.
Women are a particular target of AI tools and apps that are widely available for free and require no technical expertise. These apps allow users to digitally strip clothing from a person’s pictures or insert their face into sexually explicit videos.
Celebrities including singer Taylor Swift and actress Emma Watson have been victims of deepfake porn. Around 96 percent of deepfake videos online are non-consensual pornography, and most of them depict women, according to a study by the Dutch AI company Sensity.
How to Protect Against Deepfakes?
Some of the basic tips for protecting against deepfakes are:
- Limit the amount of personal information shared online to reduce the data available for creating deepfakes.
- Enhance your personal security and refrain from sharing photos or videos that could be used to create deepfakes.
- If you are concerned about your personal data, consider switching your Instagram account from public to private.
- If you have a business account, consider hiding personal images and videos on Instagram.
- Always exercise caution while using social media and keep your accounts secure.
- Educate yourself and others about the technology and how to spot fakes; public awareness is one way to combat the spread of deepfakes.
- Several tools and techniques can be used to detect deepfakes, such as looking for inconsistencies in facial expressions, skin texture and lighting (a rough illustrative check is sketched after this list).
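As a very rough illustration of such checks, the Python sketch below, which assumes the opencv-python package and a hypothetical frame.jpg grabbed from a suspect video, compares the sharpness of the detected face region with the rest of the frame; pasted or AI-smoothed faces often look blurrier than their surroundings. The 0.5 threshold is an arbitrary assumption, and real detectors rely on trained models rather than a single heuristic.

```python
# Minimal sketch of one simple deepfake "tell": manipulated faces are often
# smoother or blurrier than the rest of the frame. Heuristic only, not a detector.
import cv2

def sharpness(gray_region):
    """Higher values mean sharper detail; low values suggest smoothing."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

image = cv2.imread("frame.jpg")          # hypothetical frame to inspect
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_sharpness = sharpness(gray[y:y + h, x:x + w])
    frame_sharpness = sharpness(gray)
    ratio = face_sharpness / max(frame_sharpness, 1e-6)
    # A face noticeably smoother than its surroundings is worth a closer look.
    flag = "suspicious" if ratio < 0.5 else "ok"
    print(f"face at ({x},{y}): sharpness ratio {ratio:.2f} -> {flag}")
```

A low ratio alone does not prove manipulation; lighting, compression and camera focus can all affect it, which is why such cues are best treated as prompts for a closer human look rather than verdicts.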
Indian Laws Against Deepfake
India does not have a law that specifically deals with deepfake cybercrime, but several existing laws can be invoked against it.
- IT Act: Section 66D of the Information Technology Act, 2000 penalises cheating by personation using a communication device or computer resource. The offence carries imprisonment of up to three years and a fine of up to Rs 1 lakh.
- Section 66E of IT Act: Section 66E punishes violations of privacy through the capture, publication or transmission of a person’s images on the internet without consent. The offence is punishable with imprisonment of up to three years or a fine of up to Rs 2 lakh, according to a report in Outlook.
- The Copyright Act, 1957: Under Section 51 of the Act, copyright is infringed when any property over which another person holds an exclusive right is used without authorisation.
- Data Protection Bill 2021: The Bill has provisions to penalise breaches of personal and non-personal data of any type. The legislation can play a crucial role in dealing with cybercrimes, including deepfakes.
- IT Rules 2023: Under the IT Amendment Rules, 2023, digital platforms are legally obliged to ensure that no misinformation is posted on their websites or social media. They must also ensure that misinformation reported by any user or by the government is removed within 36 hours.