Wednesday, August 30, 2017

Welcome to the world of adversarial machine learning | IDG Connect

Photo: Dan Swinhoe
"Professor Giovanni Vigna, CTO and co-founder of Lastline, on why Machine Learning in security can be a tricky game" according to Dan Swinhoe, Senior Staff Writer at IDG Connect. 
 
Photo: IDG Connect

From startups such as Darktrace, Cylance, and ZoneFox to established giants like FireEye and IBM, there are few companies in the security space today that don’t claim to use either Machine Learning or Artificial Intelligence in some way or another.

And there’s good reason. Once you brush aside the “me too” marketing hype – of which there is no shortage – Machine Learning has the potential to help automate processes, reduce the number of false positives, and generally make life easier for the overworked and often beleaguered security professional.

But as interest in and use of Machine Learning for security purposes increases, so too will awareness among hackers and cyber criminals, who will inevitably try to counter these technologies any way they can. And for companies looking to deploy their own Machine Learning-based systems for security use, this could lead to problems if they’re not careful.

“I see people taking machine learning techniques that we have been using in image processing and language processing and transferring them directly to the malware or the security domain,” says Professor Giovanni Vigna, CTO and co-founder of security startup Lastline. “And that doesn't work for a number of reasons.”

Vigna co-founded the California-based Lastline in 2011 to focus on offering breach detection and sandboxing technologies. Vigna himself is a Professor in the Department of Computer Science at the University of California, Santa Barbara, and part of the Shellphish group, which took 3rd place at the DARPA Cyber Grand Challenge last year.

“Recognising images or language processing – in those domains, Machine Learning is operating on data that is not actively polluted or actively resisted by an adversary. This is different from recognising cats. The pictures are not fighting you.”

Adversarial Machine Learning 
We’re yet to enter the realm of hackers and cyber criminals deploying super advanced AI to hack our systems. The main offensive capabilities they’re using currently seem to be deploying chatbots to harvest data.

“I would say those are a niche type of activity, because for them their goal is to bypass or to extract as we learn.”

While there’s little evidence of widespread use of AI for actively malicious purposes, Vigna is becoming increasingly concerned by hackers and criminals actively trying to mess with the training data of Machine Learning models in order to craft stealthier malware that avoids detection. Vigna labels this “adversarial machine learning”.
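To illustrate the kind of training-data poisoning Vigna is describing, here is a minimal, hypothetical sketch: the nearest-centroid “detector”, the two features, and all the sample values are invented for illustration and are not from the article or from any Lastline product. It shows how mislabelled points injected into the training set can drag a model’s decision boundary so that a malware-like sample slips through.

```python
# Hypothetical sketch of training-data poisoning (illustrative only).
# The "detector" is a toy nearest-centroid classifier over two made-up
# features, e.g. (file entropy, rate of suspicious API calls).

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: list of (feature_vector, label), label 'malware' or 'benign'."""
    by_label = {"malware": [], "benign": []}
    for vec, label in samples:
        by_label[label].append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, vec):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vec))

# Clean training data: malicious samples cluster high, benign ones low.
clean = [([0.9, 0.8], "malware"), ([0.8, 0.9], "malware"),
         ([0.1, 0.2], "benign"),  ([0.2, 0.1], "benign")]

sample = [0.6, 0.6]  # a new, malware-like sample
print(classify(train(clean), sample))     # -> malware (detected)

# Poisoning: the attacker injects malware-like points mislabelled "benign",
# dragging the benign centroid toward the malicious region of feature space.
poisoned = clean + [([0.8, 0.7], "benign"), ([0.7, 0.8], "benign")]
print(classify(train(poisoned), sample))  # -> benign (evades detection)
```

The same idea scales up: if an attacker can influence even a fraction of the samples a security model learns from, the model can be steered to misclassify the attacker’s own malware.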
Read more... 

Recommended Reading

Photo: IDG Connect
Be warned: AI won’t fix all your security issues by Dan Swinhoe
"Javvad Malik & Chris Doman of AlienVault on why getting the basic stuff done is more useful than AI right now."   

Source: IDG Connect  


If you enjoyed this post, make sure you subscribe to my Email Updates!