Attackers Deliver Redline Stealer via Poisoned AI Tools
AI-based end-user tools are growing in popularity. Unfortunately, that popularity has also attracted cybercriminals, who use various social engineering tricks to lure potential victims. Recently, a malicious advertising campaign was observed abusing Google Search to push malicious executables disguised as popular AI tools such as ChatGPT and Midjourney.

Abusing Midjourney via poisoned search

Trend Micro researchers have revealed details about ongoing malvertising campaigns impersonating Midjourney, an AI-based tool that generates images from instructions provided in natural language.
  • The campaign displays SEO-poisoned search results for Midjourney-related keywords, which redirect users to malicious websites that eventually deliver Redline Stealer.
  • Upon clicking an ad, the user’s IP address is sent to the attackers’ backend server. If the IP address belongs to a web-crawling bot, or if the user reaches the URL by typing it manually, a non-malicious version of the site is displayed to avoid detection.
  • However, if the user arrives through the malicious ads, a malicious executable masquerading as a desktop version of Midjourney is served instead (a simple way to probe for this kind of cloaking is sketched after the note below).

It is important to note that the genuine Midjourney tool is available only as a web version.
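
The conditional serving described above can sometimes be surfaced by comparing what a suspect site returns to a direct visit versus a visit that resembles an ad click. The sketch below is a minimal, hypothetical illustration of that comparison in Python; the URL and headers are placeholders rather than indicators from this campaign, and it is not a reproduction of the attackers' infrastructure.

```python
import hashlib
import requests

# Placeholder URL; not a real indicator from this campaign.
SUSPECT_URL = "https://suspect-site.example/"

def fetch_fingerprint(headers: dict) -> str:
    """Fetch the page with the given headers and return a hash of the body."""
    resp = requests.get(SUSPECT_URL, headers=headers, timeout=10)
    return hashlib.sha256(resp.content).hexdigest()

# A "direct" visit, as if the URL were typed into the browser.
direct_visit = fetch_fingerprint({"User-Agent": "Mozilla/5.0"})

# A visit that superficially resembles arriving via a search ad. Real campaigns
# key on stronger signals (IP reputation, ad-click parameters), so a matching
# response here does not prove the site is clean.
ad_style_visit = fetch_fingerprint({
    "User-Agent": "Mozilla/5.0",
    "Referer": "https://www.google.com/",
})

if direct_visit != ad_style_visit:
    print("Responses differ between direct and ad-style visits: possible cloaking.")
else:
    print("No difference with these signals; cloaking may rely on other checks.")
```

Identical responses do not rule out cloaking, since the check does not simulate every signal the backend may inspect, and any such probing should be done from an isolated research environment.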

Post-infection process

  • When the malicious installer (Midjourney-x64.msix) is executed, it shows a fake installation window while a malicious PowerShell script (frank_obfus.ps1) runs in the background.
  • This script downloads the actual payload, Redline Stealer, from the server (openaijobs[.]ru) and executes it on the infected machine.
  • Redline Stealer then exfiltrates sensitive data, including credentials, web cookies, file information, and cryptocurrency wallet data. The file names and domain above are indicators defenders can hunt for, as sketched after this list.
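
The sketch below is a minimal hunting example built only on the indicators named in this report; the log path and plain-text log format are assumptions, and a production hunt would normally use a SIEM or EDR query instead.

```python
from pathlib import Path

# Indicators reported for this campaign (lower-cased for matching; the
# defanged domain openaijobs[.]ru is written here in plain form).
INDICATORS = [
    "openaijobs.ru",
    "midjourney-x64.msix",
    "frank_obfus.ps1",
]

# Placeholder log location; point this at your own proxy/DNS export.
LOG_FILE = Path("/var/log/proxy/access.log")

def scan(log_file: Path) -> list[str]:
    """Return log lines that mention any of the indicators."""
    hits = []
    for line in log_file.read_text(errors="ignore").splitlines():
        lowered = line.lower()
        if any(indicator in lowered for indicator in INDICATORS):
            hits.append(line)
    return hits

if __name__ == "__main__":
    for hit in scan(LOG_FILE):
        print(hit)
```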

Additional info

  • To evade detection, the campaign uses Telegram's API for its C2 communication, blending malicious traffic with legitimate traffic (a triage idea for this is sketched after this list).
  • Furthermore, some variants of the campaign use fake ChatGPT and DALL-E webpages.
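
Because the C2 traffic rides the legitimate Telegram API, blocking by destination alone is impractical. One hedged triage idea, sketched below with assumed telemetry column names and an assumed process allow-list, is to flag connections to api.telegram.org that originate from processes other than known Telegram clients or browsers.

```python
import csv

# Known-legitimate callers of the Telegram API in a given environment; this
# allow-list is an assumption and should be tuned per fleet.
ALLOWED_PROCESSES = {"telegram.exe", "chrome.exe", "firefox.exe", "msedge.exe"}

def suspicious_telegram_connections(telemetry_csv: str) -> list[dict]:
    """Return telemetry rows where api.telegram.org is contacted by an
    unexpected process. Expects columns: process_name, dest_host (assumed)."""
    findings = []
    with open(telemetry_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            dest = (row.get("dest_host") or "").lower()
            proc = (row.get("process_name") or "").lower()
            if "api.telegram.org" in dest and proc not in ALLOWED_PROCESSES:
                findings.append(row)
    return findings

if __name__ == "__main__":
    # Hypothetical export of per-process network connections.
    for finding in suspicious_telegram_connections("network_telemetry.csv"):
        print(finding)
```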

Concluding notes

Attackers are increasingly attempting to tap users’ interest in AI-based tools, such as Midjourney and ChatGPT. It is important to know the legitimate channels through which these tools are distributed. Users are advised not to fall for suspicious ads and to avoid downloading software from unofficial sources.