
Cybersecurity as a Gateway to User Safety

A conversation with Orli Gan, Head of Threat Prevention Products at Check Point

While we ponder the potential dangers of artificial intelligence, AI is already selflessly protecting us from concrete dangers, such as cyberattacks. But make no mistake: AI isn’t merely making cybersecurity easier, quicker, and better – it’s what renders cybersecurity possible at all, explains Orli Gan, Head of Threat Prevention Products at Check Point: “AI is a means to addressing the scope of the problem in a way that human beings simply cannot. The sheer scale required to truly follow and combat modern-day threats would demand an amount of manual labor and analytics that is simply not achievable by any vendor or government.” Rest assured, AI isn’t out to get your menial or flashy jobs.

“Our use of AI technologies is confined to promoting better, more efficient cybersecurity,” promises Gan, addressing the debate about ethics and bias that has been marring AI’s image in recent years – a Google photo-analysis tool labeled black people as gorillas, an Amazon recruiting tool was biased against women, a criminal risk-assessment AI favored white defendants, and we’ve yet to find out how the different autonomous-car makers will solve the “trolley problem,” to name a few. However, Gan is confident that her company is immune to those pitfalls: “Detection accuracy is the key factor in practical prevention, i.e., the ability to employ cyber-defense technology in prevent mode, such that attacks are blocked at the gate rather than mitigated or remediated after the fact. This type of usage is typically not as susceptible to bias or unethical use, simply by its nature. For a given protected infrastructure, the definition of an adversary is clear and unambiguous, so our challenge is focused primarily on reaching accuracy in our detection, rather than on determining if an activity is ethical or bias-free. With market reach across the globe, and with presence in every part of the IT infrastructure, our learning data sets are as versatile as they can be, and offer us a trustworthy source for training our algorithms.”
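To make “blocked at the gate” concrete, here is a minimal, hypothetical sketch of a prevent-mode gate in Python: a detection engine returns a verdict with a confidence score, and traffic is blocked inline, flagged for human review, or allowed. The engine, threshold, and verdict structure are illustrative assumptions, not Check Point’s actual product API.

```python
# Minimal sketch of "prevent mode": block at the gate on a confident verdict,
# rather than remediating after the fact. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Verdict:
    malicious: bool
    confidence: float  # 0.0 - 1.0, as reported by the detection engine

def score(payload: bytes) -> Verdict:
    """Placeholder for an AI-driven detection engine."""
    # A real engine would run static analysis, sandboxing, and ML models here.
    return Verdict(malicious=b"evil" in payload, confidence=0.97)

BLOCK_THRESHOLD = 0.9  # only block inline when the engine is highly confident

def handle(payload: bytes) -> str:
    v = score(payload)
    if v.malicious and v.confidence >= BLOCK_THRESHOLD:
        return "BLOCKED"   # prevent mode: stopped at the gate
    if v.malicious:
        return "FLAGGED"   # low confidence: alert for human review
    return "ALLOWED"

print(handle(b"GET /index.html"))  # ALLOWED
print(handle(b"evil payload"))     # BLOCKED
```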

Gan notes that AI is often given the power to make decisions in real time:

“It has the ability to look at the data and then reach some conclusions, sometimes on its own and at other times in conjunction with other, non-artificial-intelligence engines. But very often, it makes decisions on its own.” Whether this could be a recipe for disaster is an issue we have to be aware of. “Artificial intelligence, as we all know, in its current incarnation, is very prone to errors, meaning it can produce false positives as well as missed detections. So if you rely solely on artificial intelligence, the chances of getting it wrong could be high. And of course, unlike in image categorization – where it’s no biggie if I mis-categorize a certain image, or if I have to tell my Alexa something twice instead of once in order for it to understand – with cybersecurity, missing an attack, letting it go through, or even categorizing something as an attack when in fact it isn’t, can have rather detrimental implications for the organization.”

So Check Point doesn’t let AI run around with scissors, unattended. “Our studies demonstrated that AI systems cannot be blindly trusted,” says Gan. “We are still at a point where human supervision is required, and the best results are achieved when several technologies, AI and traditional, are combined to reach higher levels of accuracy. We have also learned that field expertise is very much a necessity. Engines that claim to be general-purpose perform very poorly when applied to cybersecurity problems, and tweaks offered by people with knowledge of the domain have led to major improvements in overall performance.”
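A rough sketch of what combining AI and traditional engines could look like in code: an ML score is weighed against a signature engine, and ambiguous cases are escalated to a human analyst rather than decided by the model alone. The engines, signatures, and thresholds below are hypothetical, not a description of Check Point’s pipeline.

```python
# Hypothetical ensemble: a signature engine plus an ML score, with a
# human-review lane for cases where the model is suspicious but not certain.
KNOWN_BAD_SIGNATURES = {b"\x4d\x5a\x90\x00evil"}  # toy signature database

def signature_engine(sample: bytes) -> bool:
    """Traditional engine: exact match against known-bad signatures."""
    return sample in KNOWN_BAD_SIGNATURES

def ml_engine(sample: bytes) -> float:
    """Stand-in for a trained model returning a maliciousness probability."""
    return 0.85 if b"evil" in sample else 0.05

def combined_verdict(sample: bytes) -> str:
    if signature_engine(sample) or ml_engine(sample) >= 0.95:
        return "block"         # either engine is certain
    if ml_engine(sample) >= 0.6:
        return "human_review"  # suspicious, but not enough to auto-block
    return "allow"

print(combined_verdict(b"harmless document"))   # allow
print(combined_verdict(b"evil macro dropper"))  # human_review
```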

The tech community needs to safeguard the technology, Gan says: “AI is still in its infancy. As with many new technologies, people don’t tend to think about the potential threats they pose when they are first introduced. And we have an opportunity here, at this very early stage, to insist on cybersecurity being very much a part of this artificial intelligence revolution. If somebody were able to somehow poison the data from which the algorithm learns, it could influence the decision making in a way that benefits the bad actor and hurts everyone else using it. These are very real risks that we need to address from the get-go, and not come back to later on when they’ve already matured. It’s a little more difficult to fix something as an afterthought than to build it in from day one.”
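Data poisoning is easy to illustrate with a toy model. In the sketch below, a single mislabeled sample slipped into the training set flips a nearest-neighbor detector’s verdict on the attacker’s traffic from malicious to benign; the features and data points are invented purely for illustration.

```python
# Toy data-poisoning demo with a 1-nearest-neighbor classifier: the verdict is
# the label of the closest training point, so one well-placed mislabeled
# sample is enough to change the outcome. All data here is invented.
def classify(sample, labeled_points):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled_points, key=lambda p: dist2(sample, p[0]))[1]

# Features: (connection rate, payload entropy) -- purely illustrative.
training = [((1.0, 2.0), "benign"), ((1.2, 2.1), "benign"),
            ((9.0, 7.5), "malicious"), ((8.5, 8.0), "malicious")]

attack = (8.8, 7.7)
print(classify(attack, training))  # malicious

# Poisoning: the attacker slips one mislabeled sample into the training set.
poisoned = training + [((8.8, 7.6), "benign")]
print(classify(attack, poisoned))  # benign -- the verdict now favors the attacker
```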

Malicious hackers are naturally adopting AI as well. “A future in which one AI is battling another AI is not far-fetched, although, keeping in mind the attackers’ various motivations – be it earning money from crime or inflicting damage on the other side – the methodology may be vastly different and represent different uses and adaptations of the technology,” Gan says. “Of greater concern may be future attacks targeting the AI algorithms themselves. The future cyber wars may very well be all about modifying the expected behavior of an otherwise-trusted AI engine, which would offer attackers opportunities to generate bias or an alternate verdict in a way that benefits them. It may be very difficult to protect against such attacks, or even to identify their presence.”