Cybercriminals targeted Microsoft Bing AI chatbot users with malicious ads that led to trojanized software downloads.
Security researchers at Malwarebytes Labs found that cybercriminals inserted malicious ads into Bing Chat conversations; the ads appeared when a user hovered over a link provided in the chatbot’s response.
“Ads can be inserted into a Bing Chat conversation in various ways. One of those is when a user hovers over a link and an ad is displayed first before the organic result,” Malwarebytes’ Director of Threat Intelligence Jérôme Segura wrote in a blog post.
Although the promoted link carried an ad label, users typically click the first result they see; in this case, that took them to a malicious website that initiated trojanized software downloads.
Malicious ads in Bing AI chatbot
Threat actors behind the Bing AI chatbot malvertising campaign employed ingenious tactics to target viable victims and avoid detection.
They redirected Bing AI chatbot users to a malicious website mynetfoldersip[.]cfd that analyzed traffic to filter out bots, sandboxes, and security researchers by checking the visitors’ IP addresses, time zones, virtual machine rendering, and other characteristics.
After determining that a visitor was a real human, they redirected them to the spoofed domain advenced-ip-scanner[.]com, which mimicked the official Advanced IP Scanner site through typosquatting.
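Typosquatted domains of this kind can often be flagged automatically by measuring how closely a hostname resembles the brand it imitates. The following is a minimal, illustrative sketch, not part of the Malwarebytes analysis; the brand list, the similarity threshold, and the use of Python’s standard difflib are assumptions chosen for demonstration.

```python
from difflib import SequenceMatcher

# Hypothetical list of legitimate domains to protect; in practice this would
# come from a curated brand list or a threat-intelligence feed.
KNOWN_DOMAINS = ["advanced-ip-scanner.com", "mycase.com"]

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two hostnames."""
    return SequenceMatcher(None, a, b).ratio()

def looks_typosquatted(candidate: str, threshold: float = 0.85) -> bool:
    """Flag hostnames that closely resemble, but do not exactly match, a known domain."""
    host = candidate.lower().rstrip(".")
    for legit in KNOWN_DOMAINS:
        if host != legit and similarity(host, legit) >= threshold:
            return True
    return False

if __name__ == "__main__":
    # The spoofed domain from this campaign scores very close to the real one.
    print(looks_typosquatted("advenced-ip-scanner.com"))  # True
    print(looks_typosquatted("example.org"))              # False
```

Real-world detection pipelines combine this kind of string similarity with signals such as domain age, registrar, and TLS certificate data, but the basic idea is the same: a near-miss on a well-known name is a strong warning sign.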
The victims were then tricked into downloading an MSI installer containing three files, one of them a malicious, heavily obfuscated script. When executed, the script contacted an external IP address, presumably to call home and receive additional payloads.
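Analysts often spot heavily obfuscated scripts with simple heuristics before attempting full deobfuscation. As a rough, hypothetical illustration (not something described in the researchers’ report), a file’s Shannon entropy can hint at packed or encoded content, because obfuscated text tends to look closer to random data than ordinary source code; the 5.5 bits-per-byte threshold below is an assumption for demonstration only.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def flag_possible_obfuscation(path: str, threshold: float = 5.5) -> bool:
    """Crude triage heuristic: high-entropy script files deserve a closer look.

    The threshold is an illustrative assumption; real triage tools combine
    entropy with many other signals (strings, imports, known-bad hashes).
    """
    with open(path, "rb") as fh:
        return shannon_entropy(fh.read()) > threshold

if __name__ == "__main__":
    plain = b"for i in range(10):\n    print(i)\n" * 50
    print(round(shannon_entropy(plain), 2))  # ordinary code sits well below 5.5
```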
However, Malwarebytes researchers did not pursue the analysis far enough to identify the final payload or the threat actor’s objectives. By impersonating Advanced IP Scanner, the threat actors likely targeted network administrators and others managing IT infrastructure.
Nevertheless, the researchers determined that the cybercriminals served the malicious ads through a compromised Australian business. The same fraudsters had published another malicious ad targeting lawyers by impersonating MyCase legal practice management software.
Risk of malicious ads in AI chatbots
By incorporating artificial intelligence into search, Bing intends to challenge Google’s dominance and make the search experience more precise, intuitive, and user-friendly. Since its launch in February 2023, the Microsoft Bing AI chatbot has established itself as a notable industry player, surpassing 100 million users and 1 billion chats.
The expanding user base encouraged Microsoft to begin serving ads on the Bing AI chatbot in March 2023 to generate additional revenue. Like other search engines, Bing displays promoted content before organic results, based on advertisers’ keyword bids and the user’s search intent.
By providing a handful of links instead of the traditional endless list of snippets and URLs, Bing likely encourages users, however inadvertently, to trust them. When fraudsters sneak malicious links into the AI search flow, users are more likely to click on them, potentially leading to infection.
“Considering that tech giants make most of their revenue from advertising, it wasn’t surprising to see Microsoft introduce ads into Bing Chat shortly after its release,” Segura noted. “However, online ads have an inherent risk attached to them.”
Malvertising: A recurrent problem
Threat actors have repeatedly exploited search engines to deliver malicious ads and trick unsuspecting users into downloading malicious software.
“Malicious ads have been a problem for decades,” said Roger Grimes, a data-driven defense evangelist at KnowBe4. “This is just a current example of them being used in AI-related tools. Malicious ads, and the legitimacy they have with many viewers, does make them ripe for exploitation.”
Malicious ads on the AI chatbot are not an indictment of Bing Search but the application of a common cybercriminal tactic to a new platform. Nevertheless, tech giants should do more to prevent cybercriminals from abusing their platforms to target users and spread malware.
“Of course, we need Microsoft and other vendors to do more to prevent malicious ads. They’ve been around for decades. There have to be better ways to prevent them. It’s a travesty that we are still dealing with them decades later and that they are invading our newest platforms,” Grimes added.
Noting that the Internet is inherently untrustworthy, Grimes emphasized the need for users to learn how to identify and avoid malicious ads.
“They need to understand the concept of malicious poisoned ads; how to recognize them, and be told to make sure they don’t click on them. Until content filtering tools are better at detecting and preventing them, education is really the only way to fight them,” Grimes concluded.