How AI-Driven Systems Can Be Hacked

Posted on: Feb 20, 2018

Nowadays, AI seems to be taking over everything, and examples are everywhere. One of the areas it has touched is cybersecurity, giving both attackers and defenders greater opportunities to reach their goals. Still, with great power comes great responsibility: AI programs are not immune to attacks.

As an engineer involved in developing machine learning engines for user anomaly detection in ERP systems, I aim to build a system that can not only detect attacks but also withstand them. The first cases of fooling machine learning algorithms were published a while ago, and the first real-life example probably involved spam filter bypasses.

The deep learning craze began in 2012, when new machine learning applications for image recognition such as AlexNet appeared. They seemed so cool that people didn't even think about security issues. Unfortunately, the ease of fooling these models turned out to be a core architectural weakness, documented in 2013 by a group of researchers in their paper "Intriguing Properties of Neural Networks." These applications are vulnerable to adversarial examples -- synthetically crafted inputs that look as if they belong to one class but are classified as another. For complex objects, you simply can't compose a formula that cleanly separates apples from oranges; there will always be an adversarial example. What can be done by fooling the networks? Let me give you some suggestions (a minimal code sketch of one such attack follows the list):

• Fooling autonomous vehicles into misinterpreting stop signs as speed limit signs.

• Bypassing facial recognition, such as the systems used in ATMs.

• Bypassing spam filters.

• Fooling sentiment analysis of reviews of movies, hotels, etc.

• Bypassing anomaly detection engines.

• Faking voice commands.

• Misclassifying machine learning-based medical predictions.
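As promised, here is a minimal sketch of one classic way to craft such inputs: the fast gradient sign method (FGSM), which perturbs an input in the direction that most increases the classifier's loss. The tiny softmax classifier, random weights, and random input below are hypothetical stand-ins for illustration, not any model from the scenarios above.

```python
# Minimal FGSM sketch (assumption: a toy linear softmax classifier with
# random "trained" weights stands in for a real model).
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(4, 2))  # hypothetical trained weights: 4 features, 2 classes
b = np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return softmax(x @ W + b)

def fgsm(x, label, eps=0.25):
    """Shift x by eps in the sign of the loss gradient, nudging the
    classifier away from `label` while changing each feature only slightly."""
    p = predict(x)
    onehot = np.eye(2)[label]
    grad_x = W @ (p - onehot)      # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad_x)

x = rng.normal(size=4)
y = int(np.argmax(predict(x)))     # the class the model currently assigns
x_adv = fgsm(x, y)

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))
```

The key point is that the perturbation is bounded by eps per feature, so the adversarial input can stay nearly indistinguishable from the original while the predicted probabilities drift toward the wrong class.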

Now, let’s move from theory to practice. One of the first examples was demonstrated on MNIST, a popular database of handwritten digits. It showed that small changes to the initial picture of a digit could make the system recognize it as another digit: not just a 1 as a 7, but even a 1 as a 9, and there are examples covering every possible digit-to-digit misclassification. The perturbations were crafted so that a human couldn't recognize the fake.
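To make the digit scenario concrete, here is a hedged sketch of a targeted variant of the same attack: many tiny gradient steps that raise the probability of a chosen target digit while keeping each pixel change small. The random weights and random "image" are stand-ins for a trained MNIST classifier and a real picture of a 1; only the dimensions mirror MNIST's 28x28 inputs and ten classes.

```python
# Targeted adversarial perturbation sketch (assumption: random weights
# stand in for a trained MNIST-like classifier; the input is synthetic).
import numpy as np

rng = np.random.default_rng(1)
N_CLASSES, N_PIXELS = 10, 784          # MNIST-like: ten digits, 28x28 pixels

W = rng.normal(scale=0.05, size=(N_PIXELS, N_CLASSES))
b = np.zeros(N_CLASSES)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return softmax(x @ W + b)

def targeted_attack(x, target, step=0.01, iters=50):
    """Take many tiny steps that make `target` more likely, so the
    cumulative per-pixel change stays visually negligible."""
    x_adv = x.copy()
    onehot = np.eye(N_CLASSES)[target]
    for _ in range(iters):
        p = predict(x_adv)
        grad = W @ (p - onehot)            # gradient of loss toward target
        x_adv -= step * np.sign(grad)      # descend: raise p(target)
        x_adv = np.clip(x_adv, 0.0, 1.0)   # keep pixels in a valid range
    return x_adv

x = rng.uniform(size=N_PIXELS)             # stand-in for an image of a "1"
x_adv = targeted_attack(x, target=7)

print("clean top class:", int(np.argmax(predict(x))))
print("adv top class:  ", int(np.argmax(predict(x_adv))))
print("max pixel change:", float(np.abs(x_adv - x).max()))
```

Taking many small steps instead of one large one is what keeps the total change per pixel low, which is why a human looking at the perturbed digit sees nothing wrong while the classifier's answer flips.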