
Wednesday, April 28, 2021

Machine learning security vulnerabilities are a growing threat to the web, report highlights | Cybersecurity - The Daily Swig

Technical writer Ben Dickson summarizes the report's warning: the security industry needs to tackle nascent AI threats before it's too late.

Attacks against machine learning and AI systems are set to increase over the coming years
As machine learning (ML) systems become a staple of everyday life, the security threats they carry will spill over into all kinds of applications, according to a new report.

Unlike traditional software, where flaws in design and source code account for most security issues, in AI systems, vulnerabilities can exist in images, audio files, text, and other data used to train and run machine learning models.
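The point about vulnerabilities living in the data itself can be made concrete with a minimal sketch of an adversarial perturbation. The toy linear classifier below is purely illustrative (it does not come from the report): because the gradient of a linear model's score with respect to its input is just the weight vector, an attacker can shift each input feature by at most epsilon and still swing the output sharply.

```python
import numpy as np

# Hypothetical linear classifier: score = w . x (all names here are
# illustrative). A higher score means the model trusts the input more.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # model weights
x = rng.normal(size=16)   # an input the model handles correctly

def score(v):
    return float(w @ v)

# Fast-gradient-style perturbation: for a linear model the gradient of
# the score w.r.t. the input is exactly w, so stepping against sign(w)
# lowers the score while changing each feature by at most epsilon.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

# Per-feature change is tiny, but the score drops by epsilon * sum(|w|).
print(score(x), score(x_adv))
```

The same principle, with gradients computed by backpropagation rather than read off a weight vector, underlies the image, audio, and text attacks the report describes.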

This is according to researchers from Adversa, a Tel Aviv-based start-up that focuses on security for artificial intelligence (AI) systems, who outlined their latest findings in their report, The Road to Secure and Trusted AI, this month...

Future trends in AI security

Eugene Neelou, Adversa's co-founder and CTO, warned that while “AI is extensively used in myriads of organizations, there are no efficient AI defenses.”

He also raised the concern that under currently established roles and procedures, no one is responsible for AI/ML security.

“AI security is fundamentally different from traditional computer security, so it falls under the radar for cybersecurity teams,” he said. “It’s also often out of scope for practitioners involved in responsible/ethical AI, and regular AI engineering hasn't solved the MLOps and QA testing yet.”

Read more... 

Source: The Daily Swig