There’s a lot of talk these days about how artificial intelligence (AI) and machine learning will transform every industry, from agriculture (as CNN Tech recently reported) to zoo-keeping (as reported by the BBC). Cybersecurity is no different. AI proponents speak in glowing terms of the many ways AI systems will empower security teams to respond faster and more effectively to cyberthreats.
One of the reasons AI is generating so much buzz in the cybersecurity space is the explosion of data in the enterprise that needs to be protected. Much of this data is coming from the ever-increasing number of mobile devices that are connected to the enterprise. In turn, all of these devices are connected to apps, cloud services, websites, data repositories and more.
More haystacks, more needles
IDC estimates that by 2025, approximately 80 billion devices will be connected to the internet and the total amount of digital data generated worldwide will hit 180 zettabytes. As mobile devices consume and create more data, it grows increasingly difficult for traditional security tools to monitor and manage it all. It also becomes exponentially harder to notice when there are problems and security threats. When you have more haystacks, you miss more needles.
From a certain vantage point, artificial intelligence seems like the obvious solution. The technology enables computers to be trained to spot suspicious patterns and actions within large volumes of data that would evade a human-based security defense.
What’s more, cybercriminals are already using AI to find holes in enterprise defenses. For example, machine learning makes it far easier to crack passwords. This means it’s no longer a war of human versus human, but machine versus human. The good guys must ensure they have some machines on their side.
The current shortcomings of artificial intelligence
However, AI is not the silver bullet that will solve all of today’s cybersecurity challenges. The reality is that AI and machine learning are not yet ready for prime time. Yes, the technology is promising in the security realm, but it can’t be the only weapon in your arsenal.
AI and machine learning may be the future of security infrastructure, but they are not the here and now. And remember that AI is only as good as you train it to be. A computer, no matter how smart, still can’t teach itself. It doesn’t know exactly what it should be looking for without proper guidance, and this training doesn’t happen overnight. It takes years of development and refinement to get it right. Make it too sensitive and your system will be flooded with false positives; make it too lax and you’ll miss major threats and breaches.
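The sensitivity trade-off described above can be illustrated with a toy detector. The anomaly scores, labels and thresholds below are invented purely for illustration; no real product works this simply:

```python
# Toy illustration of the detection-threshold trade-off.
# Each event is (anomaly_score, is_actually_malicious); higher
# scores mean "more suspicious". All values are hypothetical.
events = [
    (0.10, False), (0.25, False), (0.40, False), (0.55, False),
    (0.60, True),  (0.70, False), (0.80, True),  (0.95, True),
]

def evaluate(threshold):
    """Count false positives (benign flagged) and missed threats."""
    false_positives = sum(1 for s, bad in events if s >= threshold and not bad)
    missed_threats  = sum(1 for s, bad in events if s < threshold and bad)
    return false_positives, missed_threats

# Too sensitive: benign events get flagged.
print(evaluate(0.3))   # -> (3, 0)
# Too lax: real threats slip through.
print(evaluate(0.9))   # -> (0, 2)
```

Tuning that single threshold well takes large amounts of labeled data and iteration, which is exactly why the training process takes years rather than days.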
Is AI the next great security solution? Probably. But is AI enterprise-ready today? No — at least not as a stand-alone solution. There are security products on the market that are built exclusively on AI and machine learning, but these solutions fall considerably short.
False positives equal user frustration
A lot of AI solutions sound great on paper but simply don’t work as advertised. For example, one recent solution flagged every Wi-Fi hotspot at a coffee shop chain as a man-in-the-middle attack, even though nothing malicious was happening.
Another AI-based security solution is designed to notice all changes on a mobile device. If a user installs an app update that consumes more battery power than the previous version, the security solution deems this suspicious behavior and falsely flags the update as malware, preventing the user from accessing the app.
These sorts of false positives tend to hurt security more than they help. When they happen, users get frustrated. And when they get frustrated, they uninstall their security solution — which opens the enterprise up to greater risk.
AI: One component of a robust cyber defense plan
This is not to say that AI shouldn’t be a part of your cyber defense plan. It should. But it should be just one component, not your entire security system. AI, for now at least, is most effective when it augments existing solutions.
So how should security vendors deploy AI at present? By leveraging proven machine learning tools and models, not unproven ones. Image recognition, for example, is full of well-established algorithms: when you upload a picture to the internet, machine learning models can now tell whether it’s a dog, a cat or a dinosaur. Years and years of investment and research have gone into these types of machine learning models.
How might you use proven machine learning models to augment established security solutions? Here’s an implementation one organization has adopted. Some malicious apps use slight variations of popular app logos to trick users into downloading them. Using proven machine learning image-recognition models, this enterprise can spot Trojan apps both on workers’ mobile devices and in the official app stores. Its mobile app security solution uses machine learning to find these fake apps, then performs additional analysis based on traditional threat modeling to determine whether the apps are in fact malicious.
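A minimal sketch of how such logo-similarity matching might work, using a simple average-hash over grayscale icon grids. Real products use trained image-recognition models; the icon pixel data and the comparison here are invented for illustration only:

```python
# Average-hash sketch: reduce an icon to a bit string, then compare
# bit strings by Hamming distance. A small distance suggests a visual
# near-match worth deeper analysis via traditional threat modeling.

def average_hash(pixels):
    """pixels: flat list of grayscale values (0-255) for a small icon."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 grayscale icons: an official logo, a near-copy
# with slightly perturbed pixel values, and an unrelated icon.
official  = [200, 200, 50, 50,   200, 200, 50, 50,
             50, 50, 200, 200,   50, 50, 200, 200]
lookalike = [195, 210, 45, 60,   205, 190, 55, 40,
             60, 45, 210, 195,   40, 55, 190, 205]
unrelated = [10, 240, 10, 240,   240, 10, 240, 10,
             10, 240, 10, 240,   240, 10, 240, 10]

h_off = average_hash(official)
print(hamming(h_off, average_hash(lookalike)))   # -> 0 (near-match: flag for review)
print(hamming(h_off, average_hash(unrelated)))   # -> 8 (visually unrelated)
```

Note that a visual match alone is not a verdict — as described above, it only selects candidates for the follow-up threat-modeling analysis that decides whether an app is actually malicious.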
Going forward, AI will continue to evolve in the security space. But enterprises should be wary of security vendors jumping headfirst into AI and machine learning as their sole approach simply to differentiate themselves and capture market share. At this stage of the game, those solutions can actually make you less secure — not more — so be sure to integrate them into a robust security system that includes other proven solutions.