A new artificial intelligence platform developed by MIT and PatternEx
can identify up to 85 percent of cyberattacks, according to new research.
Dubbed AI2, the platform is said to be significantly better at predicting cyberattacks than similar systems because it continuously incorporates new input provided by human experts.
Photo: AI2 screenshot and photo of Ignacio Arnaldo, via PatternEx
“Today’s security systems usually fall into one of two categories: man or machine,” Adam Conner-Simon from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) wrote in a post on the MIT News site.
“So-called ‘analyst-driven solutions’ rely on rules created by human experts and therefore miss any attacks that don’t match the rules,” he said. “Meanwhile, today’s machine-learning approaches rely on ‘anomaly detection,’ which tends to trigger false positives that both create distrust of the system and end up having to be investigated by humans, anyway.”

The MIT and PatternEx platform attempts to merge those two approaches.
AI²: an AI-driven predictive cybersecurity platform
Note: Take a look at their work in the paper “AI2: Training a big data machine to defend” (PDF).
An Automated Analyst
AI2 predicts attacks by combing through data and using unsupervised machine learning to cluster suspicious activity into meaningful patterns, according to researchers at MIT. It then presents that activity to human analysts, who confirm which events are actual attacks, and incorporates their feedback into its models for the next set of data.
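That loop — unsupervised scoring, analyst confirmation, feedback folded back into the model — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the event format, the z-score outlier detector, and the threshold "retraining" step are all simplifying assumptions standing in for AI2's real clustering and models.

```python
import statistics

def outlier_scores(events):
    """Unsupervised step: z-score each event's byte count against the population."""
    values = [e["bytes"] for e in events]
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values) or 1.0
    return {e["id"]: abs(e["bytes"] - mean) / stdev for e in events}

def triage_loop(events, analyst, top_k=2, rounds=2):
    """Each round: show the analyst the top-k most suspicious unlabeled events,
    record their verdicts, and 'retrain' by moving the alert threshold down to
    cover every confirmed attack. Returns the ids flagged by the final model."""
    scores = outlier_scores(events)
    labels = {}                     # analyst feedback accumulated across rounds
    threshold = float("inf")
    for _ in range(rounds):
        queue = sorted((e for e in events if e["id"] not in labels),
                       key=lambda e: -scores[e["id"]])[:top_k]
        for e in queue:
            labels[e["id"]] = analyst(e)          # True = confirmed attack
        confirmed = [scores[i] for i, is_attack in labels.items() if is_attack]
        if confirmed:
            threshold = min(confirmed)
    return {i for i, s in scores.items() if s >= threshold}
```

Here `analyst` is a stand-in callable for the human in the loop; each round it labels only a handful of top-ranked events, which is the labor-saving point the article describes.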
“You can think about the system as an automated analyst,” said CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with Ignacio Arnaldo (pictured above), a chief data scientist at PatternEx and a former CSAIL postdoctoral associate. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.” Veeramachaneni presented a paper about the system at last week’s IEEE International Conference on Big Data Security in New York City.
Machine learning algorithms typically rely on the work of many individuals helping to “teach” them how to identify the relevant data. But the advanced technical nature of threat analysis makes it difficult for anyone who isn’t an expert in data security to contribute. With such experts in high demand and with little time to spare to pore over mountains of data, finding less labor-intensive ways to develop security algorithms has been crucial.
Source: Sci-Tech Today and MIT CSAIL channel (YouTube)