PatternEx merges human and machine expertise to spot and respond to hacks, reports Zach Winn of the MIT News Office.
Being a cybersecurity analyst at a large company today is a bit like
looking for a needle in a haystack — if that haystack were hurtling
toward you at fiber optic speed.
Every day, employees and customers generate loads of data that
establish a normal set of behaviors. An attacker will also generate data
while using any number of techniques to infiltrate the system; the goal
is to find that “needle” and stop it before it does any damage.
The data-heavy nature of that task lends itself well to the
number-crunching prowess of machine learning, and an influx of
AI-powered systems has indeed flooded the cybersecurity market over the
years. But such systems can come with their own problems, namely a
never-ending stream of false positives that can make them more of a time
suck than a time saver for security analysts.
MIT startup PatternEx starts with the assumption that algorithms
can’t protect a system on their own. The company has developed a
closed-loop approach whereby machine-learning models flag possible
attacks and human experts provide feedback...
Giving security analysts an army
PatternEx’s Virtual Analyst Platform is designed to make security
analysts feel like they have an army of assistants combing through data
logs and presenting them with the most suspicious behavior on their
network.
The platform uses machine learning models to go through more than 50
streams of data and identify suspicious behavior. It then presents that
information to the analyst for feedback, along with charts and other
data visualizations that help the analyst decide how to proceed. After
the analyst determines whether or not the behavior is an attack, that
feedback is incorporated back into the models, which are updated across
PatternEx’s entire customer base.
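To make that loop concrete, here is a minimal sketch of how a human-in-the-loop detection cycle of this kind could work: an unsupervised anomaly detector surfaces the most suspicious events, an analyst labels them, and a supervised model is retrained on the accumulated verdicts. This is an illustration of the general technique the article describes, not PatternEx's actual platform; the synthetic data, the scikit-learn model choices, and the ask_analyst stub are all assumptions made for the example.

```python
# Sketch of a closed-loop (human-in-the-loop) detection cycle.
# Illustrative only; models, data, and ask_analyst() are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "behavior" features standing in for log-stream data.
events = rng.normal(size=(5000, 8))
events[:25] += 4.0  # a handful of injected anomalous events

def ask_analyst(event_indices):
    """Stand-in for the analyst's verdict: 1 = attack, 0 = benign.
    In the real workflow this is a person reviewing charts and context."""
    return [1 if i < 25 else 0 for i in event_indices]

# 1. An unsupervised model surfaces the most suspicious events.
detector = IsolationForest(random_state=0).fit(events)
scores = detector.score_samples(events)   # lower score = more anomalous
top_k = np.argsort(scores)[:50]           # show the analyst the top 50

# 2. Analyst feedback becomes labeled training data.
labels = np.array(ask_analyst(top_k))
X_labeled = events[top_k]

# 3. A supervised model is (re)trained on the feedback and used to rank
#    the next batch of events; repeating steps 1-3 closes the loop.
classifier = RandomForestClassifier(random_state=0).fit(X_labeled, labels)
flags = classifier.predict(events)
print("events now flagged as likely attacks:", int(flags.sum()))
```

In a deployment like the one described above, step 3 would feed updated models back to every customer, so one analyst's verdict helps rank events for the whole customer base.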
Source: MIT News