Security industry needs to tackle nascent AI threats before it's too late
By Ben Dickson, technical writer
As machine learning (ML) systems become a staple of everyday life, the security threats they entail will spill over into all kinds of applications we use, according to a new report. Attacks against machine learning and AI systems are set to increase over the coming years.
Unlike traditional software, where most security issues stem from flaws in design and source code, vulnerabilities in AI systems can exist in the images, audio files, text, and other data used to train and run machine learning models.
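To make that point concrete, the sketch below shows the simplest form of this data-level weakness, an evasion attack, in which a small, targeted change to an input flips the prediction of an otherwise well-trained classifier. It is an illustrative example, not taken from the Adversa report: the toy model, synthetic data, and attack budget are all assumptions chosen for brevity.

```python
# Minimal, self-contained sketch (illustrative; not from the Adversa report) of an
# evasion attack: a logistic-regression "model" is trained on synthetic data, then a
# correctly classified input is shifted by a small FGSM-style perturbation until the
# model's prediction flips. Only NumPy is required.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data in 100 dimensions, classes centred at -0.2 and +0.2.
d = 100
X = np.vstack([rng.normal(-0.2, 1.0, (200, d)), rng.normal(0.2, 1.0, (200, d))])
y = np.concatenate([np.zeros(200), np.ones(200)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Pick an input the model classifies correctly.
correct = (sigmoid(X @ w + b) > 0.5) == (y == 1)
x, label = X[correct][0], y[correct][0]

# FGSM-style evasion: step the input in the direction that increases the loss,
# with a per-feature step just large enough to cross the decision boundary.
grad_x = (sigmoid(x @ w + b) - label) * w         # d(loss)/d(input) for logistic loss
eps = 1.1 * abs(x @ w + b) / np.sum(np.abs(w))    # usually small next to the feature scale (std = 1.0)
x_adv = x + eps * np.sign(grad_x)

print(f"perturbation size per feature: {eps:.3f}")
print("clean prediction:      ", int(sigmoid(x @ w + b) > 0.5), " true label:", int(label))
print("adversarial prediction:", int(sigmoid(x_adv @ w + b) > 0.5))
```

The same idea scales up to the data types the report highlights: for images or audio, the perturbation is spread across thousands of features, so each individual change can be far too small to notice while still steering the model's output.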
This is according to researchers from Adversa, a Tel Aviv-based start-up that focuses on security for artificial intelligence (AI) systems, who outlined their latest findings this month in their report, The Road to Secure and Trusted AI...
Future trends in AI security
Adversa co-founder and CTO Eugene Neelou warned that while "AI is extensively used in myriads of organizations, there are no efficient AI defenses."
He also raised concern that under currently established roles and procedures, no one is responsible for AI/ML security.
“AI security is fundamentally different from traditional computer security, so it falls under the radar for cybersecurity teams,” he said. “It’s also often out of scope for practitioners involved in responsible/ethical AI, and regular AI engineering hasn't solved the MLOps and QA testing yet.”
Source: The Daily Swig