Computerworld UK met with Andrew Tsonchev, director of cyber analysis at Darktrace, at the IP Expo show in London's Docklands late last month.
"A lot of solutions out there look at previous attacks and try to learn from them, so AI and machine learning are being built around learning from what they've seen before," he said. "That's quite effective at, say, coming up with a machine learning classifier that can detect banking trojans."
But what's the flip-side to that? If vendors are taking artificial intelligence seriously in threat detection, won't their counterparts in the criminal world do the same? And are hackers currently as sophisticated as some vendors would have us believe?
To understand where machine learning might be useful to attackers, it helps to consider some instances where it has demonstrated clear advantages in defence.
"Technologically simple attacks are very effective," says Tsonchev. "We do see a lot of compromises on networks that are not flashy in terms of custom exploit development, bespoke malware that's been designed to evade detection. A lot of the time it's the old fashioned stuff: password theft, phishing, all sorts of these things.
"The problem with those attacks are they're still very effective. But they're quite hard to detect. A lot of times – say you have a situation where an externally facing server is compromised using an existing employee's credentials; or situations where employees aren't good at not using the same passwords for their personal stuff as their work stuff.
"When there's a data breach and passwords get leaked, they end up in these traded and shared databases. There's a good chance those passwords will work on corporate systems.
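A common defensive counter to this reuse problem is to screen passwords against known breach corpora. Below is a minimal sketch using the public Have I Been Pwned range API; the function name is our own, and only the first five characters of the password's SHA-1 hash ever leave the machine (the k-anonymity scheme the API is built around).

```python
# Defensive sketch: check whether a password appears in public breach
# corpora, via the k-anonymity range API from Have I Been Pwned.
import hashlib
import requests

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-char hash prefix is sent; the full password never is.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, count = line.split(":")
        if candidate == suffix:
            return int(count)
    return 0

# A reused password that shows up here should be rejected at enrolment.
print(breach_count("P@ssw0rd"))  # likely non-zero: a classic breached password
```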
"There's nothing clever in those attacks, nothing inherently malicious if you look at them. If you're looking for threats by violation of policies, that's not a violation of policy. That's an authentication attack where someone's used a password that's meant to have access to the system, access to files that are meant to be taken out.
"It's unwanted, it's fraudulent, but it's not technically distinguishable as malicious in terms of violating access controls, which makes it hard to detect."
In those instances the technical indicators fall away: what remains is people simply acting suspiciously, a far more difficult signal to read than someone trying to get into a network through a backdoor. This is where behavioural understanding and AI come into the equation, to better navigate the often unpredictable, tricky complexities of humans acting like humans.
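As an illustration of what behaviour-based detection means in practice, here is a minimal unsupervised sketch: fit a model of an account's normal activity, then score new activity against that baseline rather than against known attack signatures. The features and numbers are invented for the example; this is not Darktrace's actual system.

```python
# Minimal sketch of behaviour-based detection: model what "normal" looks
# like for an account, then flag deviations, rather than matching known
# attack patterns. Feature choices here are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-login features: [hour of day, MB downloaded,
# distinct internal hosts touched]
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # daytime logins
    rng.normal(50, 15, 500),  # modest downloads
    rng.normal(3, 1, 500),    # a few familiar hosts
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# A valid credential used at 3am to pull 900 MB from 40 hosts violates
# no access control, but it does violate the learned behavioural baseline.
suspicious = [[3, 900, 40]]
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```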
Right now, Tsonchev said, Darktrace hasn't spotted a true machine learning attack in the wild.
"This is something we are super focused on – it's what we do – and we're very aware of the benefits so we are very worried about the stage when there is widespread access and adoption of AI-enabled malware and toolkits for attackers to use," explained Tsonchev.