Tag: Deepfakes

  • Shocking! Telegram AI Bots Can Generate Deepfakes Of Women And Girls: Report

    New Delhi: The growing threat of deepfakes and AI misuse has taken a troubling turn, with a recent investigation revealing that AI-powered chatbots on Telegram are being used to generate explicit images of real people. Millions of users are reportedly engaging with these tools, raising serious concerns about privacy, consent, and the potential harm caused by this technology. Authorities and individuals worldwide are now grappling with the implications of this alarming trend.

    Several celebrities, including Taylor Swift, Jenna Ortega, Alia Bhatt, and Rashmika Mandanna, have fallen victim to these deepfakes. What’s even more alarming is that teenage girls are now being targeted, with deepfakes increasingly being used in sextortion schemes.

    These AI-powered bots allow users to alter photos with just a few clicks, creating deepfakes that remove clothing or depict fabricated sexual activity. According to a recent report by WIRED, around 4 million people use these chatbots every month to produce such deepfakes, posing significant risks to the privacy and safety of women and young girls in particular.

    Four years ago, deepfake expert Henry Ajder uncovered a Telegram bot designed to “undress” photos of women using AI. Since then the problem has grown rapidly: a new study reveals that at least 50 similar bots are now active on the platform, attracting over 4 million monthly users. These tools let users generate nude images of real people by editing photos with just a few clicks, and some even create fake images of individuals performing sexual acts.

    WIRED’s analysis of Telegram groups involved in explicit content reveals that at least two bots have over 400,000 monthly users, with another 14 bots attracting more than 100,000 subscribers. Deepfake expert Henry Ajder has called this situation “nightmarish,” emphasising the serious harm these tools pose, particularly to young girls.

    As per the WIRED report, at least 25 Telegram channels support the identified bots, attracting over 3 million members combined. These channels provide updates on new bot features, offer special deals on “tokens” needed to use the bots, and often direct users to alternative bots if the originals are removed by Telegram. Further, demand for “nudify” websites has grown so high that Russian cybercriminals, as 404Media reports, have begun creating fake sites to infect users with malware.

  • Tesla And SpaceX CEO Elon Musk's X Cracks Down On Deepfakes With Improved Image Matching Update

    Shallowfakes are photos, videos, and voice clips created without the help of artificial intelligence (AI), using widely available editing software and tools.
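
    The article does not detail how X’s image-matching update works, but a common approach to catching re-uploads of known manipulated media is perceptual hashing. The Python sketch below is a hypothetical illustration of that general technique, not X’s actual system; the file names, the 64-bit difference hash, and the Hamming-distance threshold are all assumptions.

    ```python
    # Illustrative sketch of image matching via a difference hash (dHash):
    # downscale the image, compare adjacent pixel brightness, and match new uploads
    # against a blocklist of hashes from previously removed images.
    from PIL import Image


    def dhash(path: str, hash_size: int = 8) -> int:
        """Compute a 64-bit difference hash of the image at `path`."""
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
        pixels = list(img.getdata())
        bits = []
        for row in range(hash_size):
            for col in range(hash_size):
                left = pixels[row * (hash_size + 1) + col]
                right = pixels[row * (hash_size + 1) + col + 1]
                bits.append("1" if left > right else "0")
        return int("".join(bits), 2)


    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")


    # Hypothetical usage: flag an upload that is close to any known removed image.
    blocklist = {dhash("known_deepfake.jpg")}   # hashes of previously removed images
    upload_hash = dhash("new_upload.jpg")       # incoming upload to check
    if any(hamming(upload_hash, h) <= 10 for h in blocklist):
        print("Likely re-upload of a known manipulated image")
    ```

    Production systems typically rely on more robust perceptual hashes and large-scale nearest-neighbour indexes, but the matching principle is the same.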

  • Misinformation Spread Via Deepfakes Biggest Threat To Upcoming Polls In India: Tenable

    New Delhi: Misinformation and disinformation spread through artificial intelligence (AI)-generated deepfakes and fake content are the biggest threats to the upcoming elections in India, exposure management company Tenable said on Sunday.

    According to the company, these threats will be shared across social media and messaging platforms like WhatsApp, X (formerly Twitter), Instagram, and others.

    “The biggest threats to the 2024 Lok Sabha elections are misinformation and disinformation as part of influence operations conducted by malicious actors against the electorate,” Satnam Narang, Senior Staff Research Engineer at Tenable, told IANS.

    A recent report by Tidal Cyber highlighted that this year, 10 countries will face the highest levels of election cyber interference threats, including India.

    Recently, deepfake videos of former US President Bill Clinton and current President Joe Biden were fabricated and circulated to confuse citizens ahead of the upcoming US presidential election.

    Experts note that the proliferation of deepfake content surged in late 2017, with over 7,900 videos online. By early 2019, this number nearly doubled to 14,678, and the trend continues to escalate.

    “With the increase in generative AI tools and their use growing worldwide, we may see deepfakes, be it in images or video content, impersonating notable candidates seeking to retain their seats or those hoping to unseat incumbents in parliament,” Narang added.

    The Indian government has recently issued directives to social media platforms such as X and Meta (formerly Facebook), urging them to regulate the proliferation of AI-generated deepfake content.

    Additionally, ahead of the Lok Sabha elections, the Ministry of Electronics & IT (MeitY) issued an advisory directing these platforms to remove AI-generated deepfake content.

    Tenable suggests that the easiest way to identify a deepfake image is to look for nonsensical text or language that appears almost alien-like.
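
    As a rough illustration of that tip (not a tool Tenable describes), the Python sketch below OCRs an image with the open-source pytesseract library and estimates what fraction of the extracted tokens look like ordinary words; the file name and the 50 per cent threshold are assumptions for the example.

    ```python
    # Hypothetical heuristic based on Tenable's tip: AI image generators often render
    # garbled, "alien-looking" text, so OCR the image and measure how word-like it is.
    import re

    from PIL import Image
    import pytesseract  # requires the Tesseract OCR engine to be installed


    def gibberish_ratio(image_path: str) -> float:
        """Return the fraction of OCR'd tokens that do not look like ordinary words."""
        text = pytesseract.image_to_string(Image.open(image_path))
        tokens = re.findall(r"\S+", text)
        if not tokens:
            return 0.0  # no text detected; this check is inconclusive
        word_like = re.compile(r"^[A-Za-z]{2,}$")  # crude "looks like a word" test
        odd = sum(1 for t in tokens if not word_like.match(t.strip(".,!?\"'()")))
        return odd / len(tokens)


    if __name__ == "__main__":
        ratio = gibberish_ratio("suspect_image.jpg")  # hypothetical file name
        if ratio > 0.5:  # illustrative threshold, not a calibrated value
            print(f"{ratio:.0%} of detected text looks nonsensical -- possibly AI-generated")
        else:
            print("Text in the image looks mostly ordinary")
    ```

    Such a check is only a first-pass signal: garbled text can also come from low-quality photos, and many deepfakes contain no text at all.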

  • Company Loses Rs 200 Crore In Deepfake Scam Via Fake ‘CFO’ Video Call

    New Delhi: The prevalence of deepfakes on the internet, often impersonating prominent figures for malicious ends, has become increasingly alarming. In one such incident in Hong Kong, scammers used deepfake technology to fabricate a video meeting, resulting in the theft of about US$25.6 million (roughly Rs 200 crore). The episode underscores the significant risks posed by deepfake technology and the urgent need for measures to combat its misuse.

    As per a South China Morning Post report, the scammers used highly sophisticated deepfake technology to deceive the local branch of a Hong Kong-based company during a manipulated video conference call. The fraudsters allegedly impersonated the company’s Chief Financial Officer digitally to issue instructions for money transfers.

    According to the publication, every individual participating in the video call except the victim was a fake representation of an actual person. “The scammers applied deepfake technology to turn publicly available video and other footage into convincing versions of the meeting’s participants,” the report said.

    According to the Hong Kong Police, this scam is unprecedented in Hong Kong’s history. “This time, in a multi-person video conference, it turns out that everyone you see is fake,” Baron Chan Shun-ching, the acting senior superintendent, was reported as saying.

    The officer further added, “They used deepfake technology to imitate the voice of their targets reading from a script.” In total, 15 transfers totaling HK$200 million (about US$25.6 million) were made to multiple bank accounts in Hong Kong.

    This incident comes after several instances of celebrity deepfakes that have garnered attention online. Last year, there was an incident involving Indian actress Rashmika Mandanna where her face was superimposed onto a video of an online influencer. More recently, fake explicit videos purportedly featuring singer Taylor Swift have also circulated widely on the internet.