A new attack method has been discovered that allows malware to be hidden inside the neurons of an image-classification neural network while bypassing security scanners. Surprisingly, the model’s accuracy remained above 93%.

What was discovered?

Researchers Zhi Wang, Chaoge Liu, and Xiang Cui demonstrated that up to 50% of the neurons in an AlexNet model can be replaced with malware that goes undetected by security scanners.
  • In the demonstration, they embedded a 36.9 MB malware sample inside a 178 MB AlexNet model with less than 1% accuracy loss.
  • To do this, the researchers selected a layer within the already-trained image classifier and embedded the malware in that layer's parameters.
  • According to them, if a trained model does not have sufficient neurons to hold the payload, the same attack method can be applied to an untrained model as well.
  • Attackers could then train that model on the same data used for the original model, achieving comparable performance.
  • Moreover, the malware-laden model was uploaded to VirusTotal to check whether the hidden malware would be detected. None of the 58 antivirus engines flagged the model as suspicious, indicating a successful evasion technique.
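The core idea, embedding payload bytes directly into model weights, can be illustrated with a minimal, hypothetical sketch (not the researchers' actual tooling). It assumes NumPy, float32 weights, and a little-endian machine; the names `embed_bytes` and `extract_bytes` are illustrative. Each weight donates its three low-order bytes to the payload, so the model's size and structure are unchanged and only the weight values shift slightly:

```python
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the three low-order bytes of each float32 weight.

    On a little-endian machine, byte 3 of a float32 holds the sign bit and
    most of the exponent, so it is left untouched. Overwriting bytes 0-2
    perturbs the mantissa (and at most the lowest exponent bit), which is
    why the carrier model keeps most of its accuracy.
    """
    flat = weights.astype(np.float32).ravel()        # contiguous copy
    if len(payload) > flat.size * 3:
        raise ValueError("payload too large for this parameter tensor")
    padded = payload + b"\x00" * (-len(payload) % 3)  # pad to a multiple of 3
    raw = flat.view(np.uint8).reshape(-1, 4)          # 4 raw bytes per weight
    chunks = np.frombuffer(padded, dtype=np.uint8).reshape(-1, 3)
    raw[: len(chunks), 0:3] = chunks                  # stash 3 payload bytes/weight
    return flat.reshape(weights.shape)

def extract_bytes(weights: np.ndarray, n: int) -> bytes:
    """Recover the first n hidden bytes from a carrier tensor."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    rows = -(-n // 3)                                 # ceil(n / 3)
    return raw[:rows, 0:3].tobytes()[:n]
```

In a real attack the receiver would also need to know the payload's length and location (for example, from a small header embedded the same way); this sketch simply assumes the length is known.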

Similar discoveries

In the past few months, similar discoveries were made by researchers, demonstrating that machine learning-related tricks or models can be used to evade detection or perform other malicious activities.
  • Last month, researchers from Adversa revealed a new attack technique named Adversarial Octopus that targets AI-driven facial recognition tools, allowing attackers to evade security systems or compromise them.
  • Another study demonstrated that adversarial learning methods can be used to target cybersecurity defenses in industrial control systems by crafting adversarial samples that exploit the classifiers' behavior.

Conclusion

Malware developers keep exploring covert ways to distribute malware while evading security checks. New technologies such as machine learning and neural networks are still in their nascent stages, leaving them vulnerable to several kinds of misuse (hiding malware from security solutions) or even attacks (compromising the systems that run them). Organizations and users adopting today’s pathbreaking technologies have to employ appropriate security measures to safeguard themselves.

Cyware Publisher
