OPAD: A New Adversarial Attack Targeting Artificial Intelligence
Researchers have demonstrated a new adversarial attack that can fool AI technologies. The attack, called the OPtical ADversarial attack (OPAD), requires only three components to carry out: a camera, a low-cost projector, and a computer.

About the attack

OPAD uses a low-cost projector-camera system in which the researchers project computed patterns onto 3D objects to modify their appearance.
  • To perform the attack, the researchers altered how existing physical objects appear to an AI system. For example, they made a basketball be recognized as something else.
  • This was done by projecting specifically calculated illumination patterns onto the objects.
  • OPAD is non-iterative and can therefore attack real 3D objects in a single shot. It supports untargeted, targeted, black-box, and white-box attacks.
  • It is possibly the first method to explicitly model the environment and the instrumentation (the projector-camera system), so the adversarial loss function in the OPAD optimization accounts for these distortions directly, as illustrated in the sketch after this list.
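The following is a minimal sketch of the underlying idea, not the authors' actual OPAD code. It assumes PyTorch with a pretrained ResNet-18 standing in for the victim model, and the projector_camera function, the single_shot_attack helper, and the gain/offset values are hypothetical stand-ins rather than the calibrated instrument model the researchers use. It shows how a single gradient step (no iteration) can compute a targeted, white-box projection pattern when a model of the projection physics is folded into the loss.

    # Illustrative sketch only; the radiometric model and parameters are assumptions.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Pretrained classifier standing in for the victim AI system.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def projector_camera(scene, pattern, gain=0.6, offset=0.2):
        # Toy stand-in for the projector-camera model: the camera sees the
        # scene modulated by the projected illumination pattern.
        return torch.clamp(scene * (offset + gain * pattern), 0.0, 1.0)

    def single_shot_attack(scene, target_class, epsilon=0.1):
        # One gradient step on the projected pattern (FGSM-style), so no
        # iterative feedback from the physical scene is needed.
        pattern = torch.full_like(scene, 0.5, requires_grad=True)  # neutral grey projection
        logits = model(projector_camera(scene, pattern))
        # Targeted adversarial loss: push the prediction toward target_class.
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        # Step the pattern against the gradient to minimize the targeted loss.
        return torch.clamp(pattern - epsilon * pattern.grad.sign(), 0.0, 1.0).detach()

    # Example: a random 224x224 "scene" pushed toward an arbitrary target class.
    scene = torch.rand(1, 3, 224, 224)
    adversarial_pattern = single_shot_attack(scene, target_class=850)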

More insights

A critical aspect of the attack is that no physical access to the target objects is required. OPAD can translate known digital adversarial attacks into attacks on real 3D objects.
  • The attack's feasibility is limited mainly by the surface material of the object and the saturation of its color.
  • OPAD could be used to fool self-driving cars, whether for intentional accidents or pranks; for instance, it could make a stop sign appear to the vehicle as a speed limit sign. AI-powered security cameras could likewise be fooled, with serious consequences.
  • Additionally, the successful demonstration of OPAD shows that an optical system could be used to alter how faces appear to recognition and surveillance systems.

Conclusion

OPAD shows that organizations developing AI technologies should stay alert to security weaknesses within the AI models themselves. They should also invest more in securing and testing AI technology before real-world deployment.

Cyware Publisher
