DeepSloth: An Adversarial Attack on Machine Learning Systems

A new adversarial attack technique developed by researchers at the University of Maryland can force machine learning systems to slow down, causing critical failures. The technique neutralizes optimization methods designed to speed up deep neural network (DNN) operations.

What's the discovery?

The slowdown adversarial attack, named DeepSloth, was presented at the International Conference on Learning Representations (ICLR). It targets the efficacy of multi-exit neural networks.
  • The attack perturbs the input data to prevent the neural network from taking early exits, forcing it to perform the full computation. This negates the advantages of multi-exit architectures (a minimal sketch of the idea follows this list).
  • These architectures can cut the energy a DNN model requires at inference time roughly in half. The researchers showed that an attacker can craft a small perturbation for any input that wipes out those savings entirely.
  • The researchers tested DeepSloth on several multi-exit architectures. When the attacker has full knowledge of the target architecture, the attack can reduce the efficacy of the early exits by 90% to 100%.
  • Even when attackers lack exact information about the target model, DeepSloth can still reduce that efficacy by 5%–45%. In effect, this amounts to a denial-of-service (DoS) attack on neural networks.
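The core idea can be illustrated with a toy sketch. The model, exit thresholds, and attack loss below are illustrative assumptions, not the researchers' exact implementation: a multi-exit network attaches a small classifier to intermediate layers and returns early once one of them is confident enough, while a DeepSloth-style perturbation pushes every exit's output toward the uniform distribution so that no exit ever fires.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiExitNet(nn.Module):
    """Toy multi-exit network: each block is followed by an internal
    classifier ("exit"); inference stops at the first confident exit."""

    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3 if i == 0 else 32, 32, 3, padding=1),
                          nn.ReLU())
            for i in range(3)
        ])
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(32, num_classes))
            for _ in range(3)
        ])
        self.threshold = threshold

    def forward(self, x):
        """Return the logits of every exit (used by the attack objective)."""
        logits = []
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            logits.append(exit_head(x))
        return logits

    @torch.no_grad()
    def predict_with_early_exit(self, x):
        """Normal inference for a single input: stop at the first
        sufficiently confident exit."""
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            probs = F.softmax(exit_head(x), dim=1)
            if probs.max().item() >= self.threshold:
                return probs.argmax(dim=1)   # early exit taken
        return probs.argmax(dim=1)           # fell through to the final exit


def slowdown_attack(model, x, eps=8 / 255, alpha=1 / 255, steps=30):
    """PGD-style slowdown perturbation (illustrative loss, not the paper's
    exact formulation): push every exit toward the uniform distribution so
    no exit is confident enough to fire, forcing the full forward pass."""
    num_classes = model.exits[0][-1].out_features
    uniform = torch.full((x.size(0), num_classes), 1.0 / num_classes)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # KL(uniform || exit distribution), summed over all exits.
        loss = sum(F.kl_div(F.log_softmax(logits, dim=1), uniform,
                            reduction="batchmean")
                   for logits in model(x_adv))
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Descend on the loss: lower confidence at every exit.
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```

With such a perturbation, predict_with_early_exit always falls through to the final exit, so the victim pays the full inference cost on every input.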

Additional insights

According to the researchers, when a multi-exit model is served directly from a server, targeted DeepSloth attacks can tie up the server's resources and prevent it from operating at full capacity.
  • In scenarios where a multi-exit network is split between the cloud and an edge device, the attack can force the device to send all of its data to the server (see the sketch after this list).
  • As a result, the edge device may miss critical deadlines. For example, in a health-monitoring application that uses AI to quickly detect accidents and call for help when needed, such delays could have fatal consequences.
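A minimal sketch of that split setting, reusing the toy MultiExitNet above (the split point and the offload_fn transport are hypothetical assumptions, not part of the original report): the edge device evaluates only the first exits locally and offloads the remaining computation to the cloud when none of them is confident, which is exactly the path a DeepSloth-perturbed input always takes.

```python
import torch
import torch.nn.functional as F


def edge_inference(model, x, offload_fn, split_at=1):
    """Split inference between an edge device and the cloud (hypothetical
    deployment sketch). The edge runs the first `split_at` blocks and their
    exits; if none is confident enough, the intermediate activation is
    shipped to the server via `offload_fn` for the remaining blocks."""
    with torch.no_grad():
        h = x
        for block, exit_head in zip(model.blocks[:split_at],
                                    model.exits[:split_at]):
            h = block(h)
            probs = F.softmax(exit_head(h), dim=1)
            if probs.max().item() >= model.threshold:
                return probs.argmax(dim=1), "handled on-device"
        # A DeepSloth-perturbed input never clears the threshold above, so
        # every sample pays the round trip (bandwidth + latency) below.
        return offload_fn(h), "offloaded to cloud"
```

Under normal traffic, most inputs would stay on-device; under a targeted slowdown attack, every input falls through to the offload path, which is what lets an attacker saturate the link and cause missed deadlines.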

Conclusion

The researchers stated that this could be one of the first attacks to target multi-exit neural networks in this way. Moreover, adversarial training, the usual way of protecting machine learning models from adversarial examples, is not very effective against it. Although the technique is not yet harmful in practice, more such devastating slowdown attacks may be discovered in the future.

Cyware Publisher
