
AI Aggregator OmniGPT Suffers a Security Breach Exposing Sensitive Data Including Credentials

Artificial Intelligence (AI) aggregator OmniGPT has reportedly suffered a security breach that exposed the personal information of 30,000 individuals.

OmniGPT allows users to access OpenAI’s GPT-4, Anthropic’s Claude 3.5, Google LLC’s Gemini, and Midjourney without requiring a separate subscription for each. The service has gained popularity with users who want to try various models and find which works best for them without committing to a single provider.

The breach became public knowledge on February 9, 2025, when threat actor Gloomer listed the stolen data for sale on the infamous hacking forum BreachForums.

AI aggregator OmniGPT security breach leaked sensitive information

Gloomer claims the security breach leaked 30,000 user email addresses and files uploaded by the AI aggregator’s users. The data breach also exposed some users’ phone numbers. Leaking contact information such as email addresses and phone numbers exposes users to cybersecurity risks such as phishing, which could lead to the disclosure of more sensitive information such as credit card numbers, Social Security numbers, and account credentials.

The threat actor also claims the leaked files contain sensitive data such as API keys, login credentials, and billing information. The security breach allegedly leaked at least 34 million user chats.

“This leak contains all messages between the users and the chatbot of this site, as well as all links to the files uploaded by users and also 30k user emails,” Gloomer claimed.

It remains unclear how the threat actor breached the AI aggregator. However, some of the leaked data suggests abuse of the AI aggregator’s endpoint https://app.omnigpt[.]co/, potentially resulting from flawed user authentication or session handling.
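The breach vector remains speculation, but flawed session handling on an API endpoint often takes the form of broken object-level authorization, where the server trusts a client-supplied identifier instead of the session. The Python sketch below is purely illustrative: every route, token, and data structure is hypothetical, and none of it is drawn from OmniGPT’s actual code.

# Hypothetical illustration of the class of flaw the leaked data hints at:
# an endpoint that trusts a client-supplied user ID instead of deriving it
# from the session. All routes and names here are invented, not OmniGPT's.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy stand-ins for a session store and a chat database.
SESSIONS = {"token-abc": 1}                           # session token -> user id
CHATS = {1: ["hello"], 2: ["here is my billing info"]}  # user id -> chats

@app.route("/api/users/<int:user_id>/chats")          # VULNERABLE pattern (IDOR)
def chats_insecure(user_id):
    # Any caller can read any user's chats simply by changing the
    # user_id in the URL; the session is never checked against it.
    return jsonify(CHATS.get(user_id, []))

@app.route("/api/me/chats")                           # Safer pattern
def chats_secure():
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    user_id = SESSIONS.get(token)
    if user_id is None:
        abort(401)                                    # unknown or expired session
    # The user id comes from the server-side session, not the client,
    # so tampering with the request cannot expose another user's data.
    return jsonify(CHATS.get(user_id, []))

if __name__ == "__main__":
    app.run()

The safer handler derives the user identity from the server-side session, so manipulating the URL cannot expose another user’s chats; whether OmniGPT’s endpoint failed in this particular way is unconfirmed.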

APIs, including undocumented shadow APIs, expose new “gray boxes” that widen the attack surface malicious actors can exploit to compromise AI platforms.

OmniGPT has not confirmed the security breach. However, the threat actor is unlikely to be making unsubstantiated claims, given a “God” level forum status reserved for the most trusted cybercriminals.

The AI aggregator’s security breach highlights the risk posed by emerging technologies, including artificial intelligence. Users have regularly been advised against sharing sensitive details with various AI chatbots to avoid this specific scenario.

The presence of user-uploaded files with financial and billing information suggests that corporations risk employees uploading sensitive corporate data to vulnerable AI platforms, and that persistent warnings against this behavior have gone unheeded.

The threat actor said the leaked files containing sensitive information were uploaded during conversations with the AI aggregator, adding that buyers would find juicy details in users’ interactions.

“You can find a lot of useful information in the messages, such as API keys and credentials. Many of the files uploaded to this site are very interesting because sometimes they contain credentials/billing information,” the attacker stated.

Risk of sensitive company information uploaded to AI chatbots

“This breach should serve as a wake-up call to anyone still putting sensitive company information into AI chatbots and LLMs,” warned Jacob Ideskog, CTO of Curity. “With over 34 million user-chatbot interactions leaked, who knows what confidential information, trade secrets or personal information has now been exposed?”

Similarly, many users still rely on AI chatbots for highly sensitive personal conversations that, when leaked, put them at heightened risk of cyberattacks and extortion.

Meanwhile, OmniGPT users should remain vigilant for potential scams and account takeover attempts. Additionally, they should take immediate steps to secure their accounts by changing passwords and rotating API keys where possible. They should also avoid sharing sensitive data with AI models, given the risk that such platforms will leak their information.

“OmniGPT is one of the lesser-known AI chatbots; just imagine the fallout if such a breach occurred to ChatGPT,” Ideskog said. “Similar to posting a picture to a public Facebook or Instagram account (or any platform using cloud storage), once that information is out there, there’s no way to reel it back in.”

In March 2023, ChatGPT users reportedly saw other users’ data after a suspected security breach. Around the same time, Samsung discovered that employees were uploading sensitive code to ChatGPT, resulting in a companywide ban of the AI chatbot.

“The reported OmniGPT breach highlights the risk that rapid AI innovation is outpacing basic security, neglecting privacy measures in favor of convenience,” warned Jason Soroko, Senior Fellow at Sectigo. “Unchecked progress in AI inevitably invites vulnerabilities that undermine user confidence and technological promise.”