Google’s Gemini AI Abused by Multiple State-Sponsored Hackers for Cyber Attacks

Numerous state-sponsored threat actors are using Google’s AI-powered assistant, Gemini, in their cyber attack operations. Google Threat Intelligence Group (GTIG) observed the attackers using Gemini to increase productivity rather than to develop novel attack capabilities.

“While we do see threat actors using generative AI to perform common tasks like troubleshooting, research, and content generation, we do not see indications of them developing novel capabilities.”

Google identified threat actors from at least twenty countries abusing its generative AI assistant to support various stages of the attack lifecycle, with Iran and China being the heaviest users.

How state-sponsored hackers use Gemini AI for cyber attacks

Google warned that the attackers attempted to bypass Gemini AI’s safety controls using basic measures or publicly available jailbreak prompts intended to make the AI model behave in ways it was trained to avoid.

“Rather than engineering tailored prompts, threat actors used more basic measures or publicly available jailbreak prompts in unsuccessful attempts to bypass Gemini’s safety controls,” the company stated.

Instead, the attackers experimented with Gemini AI to achieve productivity gains, such as troubleshooting code and localizing phishing content, without using the generative AI chatbot to execute novel cyber attacks.

Additionally, they used Gemini AI to support various phases of the attack lifecycle, such as researching potential target infrastructure, finding free hosting providers, conducting reconnaissance on target organizations, developing payloads, writing malicious scripts, and evading detection.

Iranian state-sponsored hackers were the heaviest users, accounting for nearly three quarters of all use by government-linked malicious actors. They abused Gemini AI to research defense organizations, create cybersecurity-themed phishing content, study security vulnerabilities, and conduct reconnaissance on defense experts and organizations.

They also used the generative AI assistant to research defense technology such as unmanned aerial vehicles (UAVs), anti-drone systems, and satellite technology.

Tehran-linked malicious actors primarily targeted their Middle Eastern neighbors and regional U.S. and Israeli interests. GTIG observed over ten Iranian APTs, including APT42, abusing Gemini AI to support their cyber attacks.

Chinese state-sponsored hackers also leveraged Gemini AI in cyber attacks for reconnaissance, troubleshooting code, scripting, and researching post-exploitation activities such as lateral movement, privilege escalation, and data and intellectual property theft. Primary Chinese targets include the U.S. military, IT companies, and intelligence organizations.

North Korean government-linked hackers also used the generative AI assistant to research potential infrastructure and free hosting providers, conduct reconnaissance on target organizations, develop payloads, perform malicious scripting, and evade detection.

Additionally, they used Gemini to research topics of interest to the North Korean government, such as the South Korean military and cryptocurrency. DPRK APTs also used Gemini to research jobs and draft cover letters, potentially related to the regime’s campaign of placing clandestine IT workers in Western tech companies.

In contrast, Russian government-linked APTs used Gemini AI only sparingly, mainly for coding tasks such as converting publicly available malware into other programming languages and adding functionality such as encryption.

However, Gemini AI’s safety controls blocked other attempted abuses, such as researching Gmail phishing, coding a Chrome infostealer, and bypassing Google account verification.

Concerns with other AI models

Meanwhile, attempts to jailbreak AI models have been reported across the board, including on OpenAI’s chatbot ChatGPT. In October 2024, the company disrupted over 20 operations that attempted to leverage the AI model for deceptive or dangerous purposes.

Concerns have also been raised about AI models with lax safety controls, such as DeepSeek R1 and Alibaba’s Qwen 2.5, which could potentially be abused for cyber attacks.