New Delhi: Microsoft and OpenAI announced on Wednesday that hackers are using large language models (LLMs) such as ChatGPT to enhance their existing cyber-attack methods. The companies have identified attempts by groups backed by Russia, North Korea, Iran, and China to use tools such as ChatGPT for researching targets and developing social engineering tactics.
In partnership with Microsoft Threat Intelligence, OpenAI moved to disrupt five state-affiliated actors that sought to use AI services in support of malicious cyber operations.
“We disrupted two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard,” the Sam Altman-run company said.
The OpenAI accounts associated with these actors have been terminated. The actors had sought to use OpenAI services for querying open-source information, translating content, finding coding errors, and running basic coding tasks.
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a statement.
While attackers will remain interested in AI and will probe the current capabilities and security controls of these technologies, it is important to keep the risks in context, the company said.
“As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts,” the tech giant noted. (With IANS Inputs)