CERT-In Develops Anti-deepfake Tech To Tackle AI Scams

Indian government agency ramps up efforts to detect and prosecute AI-driven scams with new anti-deepfake tools

The Indian Computer Emergency Response Team (CERT-In), operating under the Ministry of Electronics and Information Technology (MeitY), is currently testing anti-deepfake technology to combat the malicious use of Artificial Intelligence (AI) in scams targeting unsuspecting individuals.

Recently, Sunil Bharti Mittal, Chairman of Bharti Enterprises, recounted a shocking AI-driven scam that had targeted his company.

At the NVIDIA AI Summit in Mumbai, a source familiar with CERT-In’s efforts confirmed that the agency is actively working on this anti-deepfake technology, saying, “The technology will not only detect deepfakes but will also support prosecution of such offenders in a court of law.”

Referring to the Airtel Chairman’s experience, the source elaborated that the technology can detect fake audio and video content, aiding the fight against false information that could stoke public concern. MeitY also works with social media platforms to promptly remove such harmful content.

A recent high-profile case involved SP Oswal, Chairman of Vardhman Group, who lost Rs 7 crore after scammers, posing as government officials and using fake documents alongside virtual settings, convinced him to transfer funds.

The Election Commission of India (ECI) has also voiced serious concerns about deepfake AI. Chief Election Commissioner Rajiv Kumar warned that individuals using the technology to spread election-related misinformation and fabricated narratives will face stringent action.

In December 2023, CERT-In issued an advisory to citizens, alerting them to the growing trend of AI and deepfake technology used in scams.

According to the advisory, scammers collect social media and other publicly available information to produce convincing deepfakes that mimic the voices and faces of victims’ friends or family members. These deepfakes are then used in scams such as fake kidnapping or ransom calls to create urgency and persuade victims to transfer funds, often in the form of gift cards or cryptocurrency.

In November 2023, the Minister of Electronics and Information Technology, Ashwini Vaishnaw, shared that the government would begin drafting specific regulations for deepfakes. Following consultations with multiple stakeholders and platforms, Vaishnaw stated, “We will start drafting the regulations today, and very soon, we will have specific regulations for deepfakes.”

The upcoming regulations are expected to introduce penalties for creating or sharing deepfakes and may include guidelines to help users recognise deepfake content.

Meanwhile, Google has partnered with the Election Commission of India to share key voting information through Google Search and YouTube. The company also unveiled its support for Shakti: The India Election Fact-Checking Collective and joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member. C2PA is a global standards body committed to certifying the authenticity of digital content.

Meta, collaborating with the Misinformation Combat Alliance (MCA), has launched a WhatsApp helpline to address AI-driven misinformation, especially deepfakes. Furthering its public education efforts, Meta introduced initiatives such as the ‘Know What’s Real’ campaign on WhatsApp and Instagram, aimed at teaching users to identify and report suspicious content.
