ChatGPT on phone screen showing OpenAI credentials sold on dark web

Over 200,000 Compromised OpenAI Credentials Available for Purchase on the Dark Web

Security researchers have discovered over 200,000 OpenAI credentials for sale on the dark web, bundled within stolen infostealer logs.

The compromised credentials would allow buyers to use ChatGPT’s premium features for free and access chat histories containing confidential information such as trade secrets, source code, and business plans.

Flare shared with Bleeping Computer its analysis of 19.6 million leaked logs, in which it discovered 400,000 corporate credentials for various online accounts, including Google Cloud Platform, AWS, Salesforce, QuickBooks, and HubSpot.

The company also discovered 205,447 compromised OpenAI account credentials pilfered via commodity malware log harvesting. It remains unclear whether Flare’s discovery overlaps with Group-IB’s.
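Stealer logs of this kind are typically unstructured text dumps whose layout varies by malware family, so defenders usually triage leaked dumps by searching for their own domains and forcing resets on any exposed accounts. The Python sketch below illustrates that idea; the pipe-separated "url | username | password" layout, the dump.txt filename, and the monitored hosts are illustrative assumptions, not a description of Flare’s actual tooling.

from pathlib import Path

# Hypothetical triage of a stealer-log dump. Real formats vary by malware
# family (Raccoon, Vidar, and RedLine each differ); a pipe-separated
# "url | username | password" layout is assumed purely for illustration.
MONITORED_HOSTS = ("chat.openai.com", "logins.example-corp.com")  # placeholders

def triage(log_path: Path) -> list[dict]:
    """Return entries mentioning monitored hosts, discarding the passwords."""
    hits = []
    for raw in log_path.read_text(errors="ignore").splitlines():
        parts = [p.strip() for p in raw.split("|")]
        if len(parts) != 3:
            continue  # line doesn't match the assumed layout
        url, user, _password = parts
        if any(host in url for host in MONITORED_HOSTS):
            # Keep only what is needed to force a reset and notify the user;
            # never re-store the plaintext credential.
            hits.append({"url": url, "user": user})
    return hits

if __name__ == "__main__":
    for entry in triage(Path("dump.txt")):  # "dump.txt" is a stand-in path
        print(entry)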

Another 100,000 OpenAI credentials sold on the dark web

The discovery follows a report by cybersecurity company Group-IB’s threat intelligence team that over 100,000 ChatGPT account credentials were traded on dark web marketplaces between June 2022 and May 2023.

The OpenAI credentials were stolen using the Raccoon Infostealer (78,348), Vidar (12,984), and RedLine (6,773) malware variants.

According to the report, the Asia-Pacific region accounted for most (40.5%) OpenAI credentials offered for sale on dark web marketplaces, followed by the Middle East and Africa (24.6%) and Europe (16.8%).

India occupied the top spot with 12,632 OpenAI credentials listed on the dark web, followed by Pakistan (9,217), Brazil (6,531), Vietnam (4,771), and Egypt (4,588), while the United States was sixth with 2,995 compromised accounts.

However, the ChatGPT parent company clarified the compromised login credentials were not the result of any OpenAI data breach. Instead, they were the by-product of commodity malware-based log harvesting.

“OpenAI maintains industry best practices for authenticating and authorizing users to services including ChatGPT, and we encourage our users to use strong passwords and install only verified and trusted software to personal computers.”

Additionally, users should enable two-factor authentication and avoid entering sensitive information such as passwords and credit card numbers in the chat box.
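On the strong-password point, one low-effort safeguard is to check whether a password already circulates in known breach corpora. The sketch below does this with the public Have I Been Pwned "Pwned Passwords" range API, which only ever receives the first five characters of the password’s SHA-1 hash; it is a minimal illustration of the technique, not guidance from OpenAI or Group-IB.

import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the HIBP breach corpus.

    Uses the k-anonymity range API: only the first five hex characters of
    the SHA-1 hash ever leave the machine, never the password itself.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():  # each line: "<hash suffix>:<count>"
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A deliberately weak example; expect a count in the millions.
    print(pwned_count("password123"))

A nonzero result means the password has appeared in at least one known breach and should be retired, regardless of how strong it looks.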

“Employees enter classified correspondences or use the bot to optimize proprietary code,” said Group-IB Head of Threat Intelligence, Dmitry Shestakov. “Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”

Shestakov also corroborated OpenAI’s claim that the compromised OpenAI credentials were “not a result of any weaknesses of ChatGPT’s infrastructure.”

Philipp Pointner, Chief of Digital Identity at Jumio, explained why generative AI chatbots “bring an additional concern.”

“With over 200,000 OpenAI credentials up for grabs on the dark web, cybercriminals can easily get their hands on other personal information like phone numbers, physical addresses, and credit card numbers,” noted Pointner. “With these credentials, fraudsters can gain access to all types of information users have inputted into the chatbot, such as content from their previous conversations, and use it to create hyper-customized phishing scams to increase their credibility and effectiveness.”

ChatGPT’s rising popularity on the dark web

Flare also found a phenomenal increase in interest in ChatGPT on hacking forums, with threat actors mentioning the chatbot 27,000 times across Telegram and dark web forums and marketplaces.

Other trending ChatGPT-related topics on the dark web include “jailbreaking” ChatGPT to write malware and weaponizing the OpenAI chatbot to execute cyber-attacks at scale.

According to the email security platform SlashNext, threat actors have successfully weaponized AI for nefarious purposes.

In July 2023, the firm discovered a new malicious AI tool, WormGPT, capable of generating highly compelling business email compromise (BEC) messages. The chatbot is based on the GPT-J language model and trained on malware-related data.

SlashNext noted that the chatbot, described as “ChatGPT’s evil twin” by NordVPN, allowed threat actors to craft professional phishing messages with impeccable grammar, even in foreign languages.

Considering that grammatical errors and misspelled words are among the top indicators of phishing scams, WormGPT increases attackers’ odds of success.