SOCs today are overwhelmed – so overwhelmed, in fact, that analysts are forced to ignore 62% of all alerts. And in a world that generates over 402 million terabytes of data per day, with growing legal obligations to secure it, organizations cannot afford to miss that many alerts. As such, many organizations are turning to AI to ease the burden on overstretched SOC teams.
As cybersecurity firm Prophet Security declares, “Integrating AI into SOC operations isn’t just about automating tasks; it’s about enhancing the entire ecosystem—people, processes, and technology.” In other words, the benefit of AI isn’t found in force-multiplying alone; it’s found in leveling up the capabilities of each area of the SOC in meaningful ways.
Here’s how:
Decision Support
When an analyst receives an alert, their top job is to determine whether it's legitimate. Chasing down false positives can be a massive drain on time and resources, yet without the right data, it really can't be avoided.
When around 89% of companies are running multi-cloud environments and using between 60 and 75 security tools, there are often too many telemetry sources to follow – much less consider in a split-second decision. AI-driven tools have no trouble scanning, classifying, and otherwise bringing together data intelligence from across multiple environments – multi-cloud, on-premises, hybrid – and telemetry sources (both inside and outside your network).
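To make that concrete, here is a minimal sketch of the first step such tools perform: normalizing events from different feeds into one common schema so they can be correlated at all. The NormalizedEvent class, its field names, and the simplified CloudTrail-style record are illustrative assumptions, not any vendor's actual format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical normalized event schema; real SOC pipelines map vendor-specific
# fields (cloud audit logs, EDR alerts, SaaS logs) into something similar
# before any correlation happens.
@dataclass
class NormalizedEvent:
    source: str          # e.g. "aws_cloudtrail", "edr", "saas_audit"
    timestamp: datetime
    principal: str       # user or service identity involved
    action: str          # what happened, in a source-agnostic vocabulary
    severity: int        # 0 (info) .. 10 (critical)

def normalize_cloudtrail(record: dict) -> NormalizedEvent:
    """Map one simplified CloudTrail-style record into the common schema."""
    return NormalizedEvent(
        source="aws_cloudtrail",
        timestamp=datetime.fromisoformat(record["eventTime"]).astimezone(timezone.utc),
        principal=record.get("userIdentity", {}).get("arn", "unknown"),
        action=record["eventName"],
        severity=3,  # placeholder: real severity would come from enrichment rules
    )

# Each telemetry feed gets its own adapter; downstream correlation logic then
# works on one event type instead of 60-75 vendor formats.
event = normalize_cloudtrail({
    "eventTime": "2024-06-01T12:34:56+00:00",
    "eventName": "ConsoleLogin",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
})
print(event)
```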
Today's threats make a living out of slipping in through the back door (sometimes literally) rather than walking straight past where our endpoint detection tools can see them. Insider threats are especially tricky because, many times, those insiders have permission to do the things they are doing – up until a certain point. Plus, it's easy to fly under the radar in complex cloud architectures, or in ones strewn with shadow SaaS or shadow APIs that can be exploited. These are environments in which low-and-slow attacks thrive.
With heuristics and AI-powered analysis, SOCs can turn these disparate systems to their advantage, corroborating various alert sources and using the surrounding context to determine which threats present the most real danger. AI also lets them apply machine learning algorithms that improve this process over time. That way, a high volume of behavior-driven attacks becomes an advantage (more data to learn from) rather than simply a nuisance.
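As a rough illustration of what that corroboration-and-learning loop might look like, the sketch below fits an off-the-shelf anomaly detector (scikit-learn's IsolationForest) on made-up per-user features drawn from several alert sources. The feature set and numbers are invented for the example; a real SOC pipeline would train on far richer, continuously updated behavioral data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-user feature vectors aggregated from several alert sources:
# [alerts_from_edr, alerts_from_cloud_audit, logins_outside_business_hours,
#  distinct_countries_seen]. Values are illustrative only.
baseline = np.array([
    [1, 2, 0, 1],
    [0, 1, 1, 1],
    [2, 0, 0, 1],
    [1, 1, 0, 2],
    [0, 2, 1, 1],
])

# Fit an unsupervised model on "normal" behavior; in production this would be
# retrained regularly so the model keeps learning from new behavioral data.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# A new observation that is corroborated by multiple sources at once scores as
# more anomalous than any single alert in isolation.
suspect = np.array([[6, 5, 4, 3]])
print(model.decision_function(suspect))   # lower score means more anomalous
print(model.predict(suspect))             # -1 flags an outlier worth triaging
```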
Saving SOC Brain Power
Another reason SOCs struggle is that there simply aren't enough experts to staff most of them, and new threats are giving even those experts who remain a run for their money. It's no secret that the ever-present cyber talent crisis drags on (we're still 4 million professionals short, according to the World Economic Forum). Couple that with new forms of attack hitting the landscape, and threat-chasing productivity slows even further as analysts face a learning curve on top of everything else. And let's not forget that SOCs also need to keep up with all relevant compliance requirements, secure data across all environments, and account for all devices.
Even if this were "easy" from a technical standpoint, the sheer volume of work required is still overwhelming for human teams. AI force-multiplies the few experts most organizations have on hand, handling the perfunctory tasks to give them back time and help them get to the parts of their jobs they were hired to do: analyze and make decisions. Only now they are analyzing and making critical decisions with better, more comprehensive data that has already been largely vetted with the aid of AI-powered scanning, detection, and investigation tools.
Detecting GenAI-Written Phishing
One of the hardest things for SOCs to control is what users do on their own time within their own inboxes. All the employee security awareness programs in the world can’t guarantee that a cybersecurity expert will get a visual on every phishing email before someone clicks. And even the most advanced email security tools can’t block newer forms of BEC and phishing scams that rely solely on persuasion – no tell-tale malware is included to give them away.
Additionally, cybercriminals are now leveraging generative AI not only to craft word-perfect scam emails but also to scrape more personal data off social media and other sites, so that the phony messages sound like they're from someone you know. As noted by Fortra, "Now [threat actors are] hitting us with personalized phone conversations, spoofed voices, and even our own nicknames." While AI in the hands of defenders can't stop personal information from being scraped, or make these messages read any less convincingly, AI-driven tools can help SOCs detect these subtly sophisticated threats with sophisticated methods of their own.
AI-powered email security tools can build profiles of your users' writing styles, email patterns, and behavior so that when an attacker compromises an employee and sends a BEC or phishing email from their internal account, these tools can flag the message. A notification will appear saying something like, "This email raises some red flags." Those flags could be time zone differences, discrepancies in style (signing off with a full name, for example), or differences in writing patterns.
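A heavily simplified sketch of that kind of check appears below, using only the two signals mentioned above (sending time and sign-off style) against a hypothetical per-sender baseline. Commercial tools model far richer stylometric and behavioral features; the baseline data, addresses, and function here are purely illustrative.

```python
from datetime import datetime

# Hypothetical per-sender baseline learned from historical mail. A real
# AI-driven tool would model far richer features (phrasing, punctuation,
# reply cadence); this sketch only checks two of the signals mentioned above.
BASELINE = {
    "alice@example.com": {
        "usual_utc_hours": range(8, 18),   # when this sender normally emails
        "usual_signoff": "Alice",          # she never signs with her full name
    }
}

def red_flags(sender: str, sent_at: datetime, body: str) -> list[str]:
    """Return human-readable reasons a message deviates from the sender's baseline."""
    profile = BASELINE.get(sender)
    if profile is None:
        return ["no baseline for this sender"]

    flags = []
    if sent_at.hour not in profile["usual_utc_hours"]:
        flags.append("sent outside the sender's usual hours / time zone")
    last_line = body.strip().splitlines()[-1].strip()
    if last_line != profile["usual_signoff"]:
        flags.append(f"unusual sign-off: {last_line!r}")
    return flags

# Usage: a message "from" Alice at 03:00 UTC, signed with her full name.
print(red_flags(
    "alice@example.com",
    datetime(2024, 6, 1, 3, 0),
    "Hi, can you wire the payment today?\n\nAlice Johnson",
))
```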
Conclusion
Overall, AI is an invaluable tool for combating sophisticated modern threats and reducing the enormous burden on SOCs. Thoughtful, strategic use of AI technologies can dramatically improve your ability to respond to threats, protect your organization, and even enhance your hard-working analysts' well-being. So, what are you waiting for?
About the author:
Josh is a Content writer at Bora. He graduated with a degree in Journalism in 2021 and has a background in cybersecurity PR. He’s written on a wide range of topics, from AI to Zero Trust, and is particularly interested in the impacts of cybersecurity on the wider economy.