
Beware! AI Generates a Truly Polymorphic Malware BlackMamba

To demonstrate the capabilities of AI-based malware, researchers have developed an attack system in which the malicious code is dynamically regenerated at runtime, with no need for a C2 server. The result is a proof-of-concept attack dubbed BlackMamba. In the researchers' tests, the malware evaded automated security detection without raising any red flags.

More on BlackMamba

Researchers from HYAS Labs developed this truly polymorphic malware by leveraging a Large Language Model (LLM), the same class of technology that powers ChatGPT.
  • BlackMamba has a built-in keylogger designed to collect sensitive information from targeted devices. This includes usernames, passwords, and credit card numbers. 
  • Once collected, the malware sends the data to a malicious channel on Microsoft Teams.
  • Because the exfiltration rides on Teams' legitimate, encrypted traffic, it passes through common firewalls and intrusion detection systems; from there, the data can be moved to the dark web or other destinations.
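The Teams exfiltration path above relies on a legitimate, documented feature: incoming webhooks, which accept HTTP POSTs of simple JSON payloads. The following benign sketch shows that mechanism only; the webhook URL is a placeholder, and the function names (`build_teams_payload`, `post_to_teams`) are illustrative, not taken from the researchers' code.

```python
import json
import urllib.request

def build_teams_payload(text: str) -> bytes:
    """Wrap a message in the minimal JSON body a Teams incoming webhook accepts."""
    return json.dumps({"text": text}).encode("utf-8")

def post_to_teams(webhook_url: str, text: str) -> None:
    """POST the payload to a channel's incoming-webhook URL over HTTPS."""
    req = urllib.request.Request(
        webhook_url,
        data=build_teams_payload(text),
        headers={"Content-Type": "application/json"},
    )
    # Traffic to *.webhook.office.com blends in with ordinary Teams/HTTPS use,
    # which is why this channel is hard for perimeter tools to flag.
    urllib.request.urlopen(req)

# Usage (placeholder URL): post_to_teams("https://example.webhook.office.com/...", "status update")
```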

Manufacturing the malware

To develop the autonomous BlackMamba, researchers combined two concepts.
  • First, a malware sample was equipped with intelligent automation in a way that requires no C2 communication. The stolen data reaches a designated server via a legitimate communication channel, Microsoft Teams.
  • Second, researchers used AI code-generation techniques (via OpenAI APIs) to synthesize new malware code dynamically on each run, making the malware truly polymorphic. 
  • Furthermore, the code is packaged with the open-source Python utility Auto-py-to-exe, which converts Python code into standalone executable files for Windows, Linux, and macOS. 
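The second concept, synthesizing code at runtime and executing it only in memory, can be sketched as follows. Here `generate_code()` is a hypothetical stand-in for the LLM API call: it returns a fixed, harmless snippet so the example is self-contained, whereas BlackMamba would receive freshly generated (and therefore different) code on every execution.

```python
def generate_code() -> str:
    # Stand-in for an LLM API call (e.g. the OpenAI API). In the real attack,
    # the source text returned here would differ on every run, which is what
    # makes the resulting malware polymorphic.
    return "def greet(name):\n    return 'hello ' + name\n"

def run_generated(source: str) -> str:
    # Compile and execute the generated source purely in memory, then call
    # the function it defines. Nothing is written to disk, so there is no
    # static artifact for signature-based scanners to inspect.
    namespace: dict = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace["greet"]("world")

print(run_generated(generate_code()))  # prints "hello world"
```

This benign sketch illustrates the pattern only; the danger described in the article comes from pairing it with malicious prompts and payloads.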

The malware was tested against an industry-leading EDR system and produced no alerts or detections.

Concluding notes

BlackMamba represents an entirely new breed of malware, using AI to generate new, unique, benign-looking code on every run. It demonstrates that LLMs can be abused to synthesize malicious code automatically, producing variants that can be harder to detect than human-written code and can evade predictive security solutions. Organizations and security professionals must keep pace with these evolving threats and adopt and operationalize cutting-edge security measures to stay protected.
Cyware Publisher