Cybercriminals and security defenders are locked in an ever-evolving game of cat and mouse. This is primarily because technological advancements can be used to empower defenders and also abused to enhance attackers’ capabilities.
A newly created malware strain called DeepLocker serves as a prime example of how technology can advance the agendas of security defenders and cybercriminals alike.
DeepLocker is the brainchild of security experts at IBM. It represents the next generation of cyberthreats: malware powered by artificial intelligence (AI) and packed with advanced detection-evasion features.
IBM researchers unveiled DeepLocker at the Black Hat USA 2018 conference as a new breed of highly stealthy and targeted cyberthreat.
The malware is designed to completely conceal its presence and intent until it reaches a particular victim. Its AI component identifies the target via facial recognition, voice recognition, and geolocation.
“You can think of this capability as similar to a sniper attack, in contrast to the ‘spray and pray’ approach of traditional malware. DeepLocker is designed to be stealthy. It flies under the radar, avoiding detection until the precise moment it recognizes a specific target,” IBM security experts said in a blog post.
“This AI-powered malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected. But, unlike nation-state malware, it is feasible in the civilian and commercial realms.”
According to IBM researchers, the detection-evasion techniques used by DeepLocker differ sharply from those used by existing malware variants. To bypass the checks performed by most antivirus programs and malware scanners, DeepLocker hides its malicious payload inside benign applications, such as video conferencing software.
“What is unique about DeepLocker is that the use of AI makes the ‘trigger conditions’ to unlock the attack almost impossible to reverse engineer,” IBM researchers added. “The malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model.”
DeepLocker is also capable of transforming the concealed trigger condition into a key or password, which is then required to unlock and drop the malicious payload.
IBM researchers created a proof-of-concept in which they concealed the WannaCry ransomware inside a video conferencing application so that malware detection tools could not spot it. They then trained the AI model to recognize the face of a specific individual, which served as the condition to unlock the ransomware and execute it on the targeted system.
“Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms,” IBM researchers explained. “When launched, the app would surreptitiously feed camera snapshots into the embedded AI model, but otherwise behave normally for all users except the intended target.”
In essence, the proof-of-concept shows that when the victim is in front of the targeted system, the malware-laced video conferencing app uses the camera to identify the victim’s face. The target’s facial features function as the key that unlocks the ransomware.
In other words, the victim’s face acts as the trigger that activates the ransomware and delivers the malicious payload.
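The face-as-key mechanism described above can be illustrated with a minimal sketch. Everything here is hypothetical: the embeddings are stand-ins for whatever a face-recognition model would emit for a given person, the XOR cipher stands in for a real symmetric cipher such as AES, and the function names (`derive_key`, `try_unlock`) are invented for illustration. The point it demonstrates is the one IBM describes: the decryption key never appears in the binary, so static analysis cannot recover the payload or the target's identity.

```python
import hashlib

MAGIC = b"PAYLOAD!"  # sentinel that proves a successful unlock

def derive_key(embedding: bytes) -> bytes:
    """Hash a recognition-model embedding into a symmetric key."""
    return hashlib.sha256(embedding).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (stand-in for AES) with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def try_unlock(blob: bytes, candidate: bytes):
    """Derive a key from whoever the camera sees and attempt decryption.
    A wrong face yields garbage, so the payload simply stays locked."""
    plaintext = xor_stream(blob, derive_key(candidate))
    return plaintext[len(MAGIC):] if plaintext.startswith(MAGIC) else None

# Hypothetical quantized embeddings a face-recognition model might emit.
target_face = bytes([3, 141, 59, 26, 53, 58, 97, 93])
other_face = bytes([2, 140, 60, 25, 50, 60, 99, 90])

# The attacker ships only this encrypted blob; the key is never stored.
locked_payload = xor_stream(MAGIC + b"<malicious code>", derive_key(target_face))

print(try_unlock(locked_payload, other_face))   # None: app stays benign
print(try_unlock(locked_payload, target_face))  # unlocked for the target only
```

Because the trigger condition is baked into the key derivation rather than written as an `if` statement, a reverse engineer inspecting the binary sees only an opaque blob and a hash function, with no way to enumerate which face would unlock it.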
“While a class of malware like DeepLocker has not been seen in the wild to date, these AI tools are publicly available, as are the malware techniques being employed — so it’s only a matter of time before we start seeing these tools combined by adversarial actors and cybercriminals. In fact, we would not be surprised if this type of attack were already being deployed,” IBM researchers said.