What Are the Top Challenges Facing Cybersecurity Automation Adoption?

Written by zacamos | Published 2023/12/06
Tech Story Tags: cybersecurity | automation | cyber-security-automation | adoption | talent-shortage | compliance | explainable-ai | business-security

TL;DR: While automation can be an effective cybersecurity solution, there are many barriers to its adoption. These barriers include a lack of trust, high false positive rates, skills shortages, financial concerns, and regulatory compliance issues.

Automation has emerged as a promising solution to many common workflow woes. It’s particularly valuable in cybersecurity, where talent shortages are rampant and inefficiency becomes dangerous. Despite that potential, cybersecurity automation adoption has been relatively slow.

Businesses that want to automate security workflows effectively must address common shortcomings. Here are five of the most significant.

1. Lack of Trust

A late 2023 survey revealed that 31% of cybersecurity professionals say a lack of trust undermines security automation’s effectiveness — more than any other issue. Because many AI decisions happen in a black box, it’s hard to take what these models say at face value. That goes doubly for a field as mission-critical as cybersecurity.

How to Address It

Resolving this trust issue is a two-part process. First, the developers behind automated security tools must focus on AI explainability. Thankfully, explainable AI research is growing, leading to more models that let users see how they arrive at their decisions.
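Explainability can be as simple as exposing each input's contribution to a decision. The sketch below illustrates the idea for a linear risk-scoring model, where per-feature contributions are directly interpretable; the feature names and weights are hypothetical, not from any real product.

```python
# Illustrative sketch of explainable scoring: for a linear model,
# each feature's contribution (weight * value) explains the alert.
# Feature names and weights here are hypothetical.

FEATURE_WEIGHTS = {
    "failed_logins": 0.6,
    "off_hours_access": 0.3,
    "new_device": 0.1,
}

def score_with_explanation(event: dict) -> tuple[float, dict]:
    """Return an alert score plus each feature's contribution to it."""
    contributions = {
        name: weight * event.get(name, 0)
        for name, weight in FEATURE_WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"failed_logins": 8, "off_hours_access": 1, "new_device": 1}
)
print(round(score, 2))        # 5.2
print(max(why, key=why.get))  # failed_logins drove the alert
```

Real-world models are rarely this simple, but the principle scales: analysts trust an alert far more when they can see which signals produced it.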

Second, end users must accept that automation isn't a perfect tool. Even if it were, teams should verify every AI decision to be safe. Organizations should design a formal process for reviewing and authenticating automated actions so they can adjust these tools as needed and improve their accuracy.
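One way to formalize that review process is a queue where automated actions wait for human sign-off before execution. This is a minimal sketch of such a human-in-the-loop workflow; the class and field names are invented for illustration.

```python
# Minimal human-in-the-loop sketch: automated actions are held as
# PENDING until a named reviewer approves or rejects them.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class AutomatedAction:
    description: str
    status: Status = Status.PENDING
    reviewer: str = ""

class ReviewQueue:
    """Holds automated actions until a human reviewer signs off."""
    def __init__(self):
        self.actions = []

    def submit(self, description: str) -> AutomatedAction:
        action = AutomatedAction(description)
        self.actions.append(action)
        return action

    def review(self, action: AutomatedAction, approve: bool, reviewer: str):
        action.status = Status.APPROVED if approve else Status.REJECTED
        action.reviewer = reviewer

    def approved(self):
        return [a for a in self.actions if a.status is Status.APPROVED]

queue = ReviewQueue()
block = queue.submit("Block IP 203.0.113.7")
queue.review(block, approve=True, reviewer="analyst_1")
print(len(queue.approved()))  # 1
```

Keeping the reviewer's name on each action also produces the paper trail teams need when they later tune the tool based on which decisions were overturned.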

As more teams use automated tools — even cautiously — they’ll see automation is more reliable than previously thought. AI will become more accurate with increased data, further building trust.

2. High False Positive Rates

False positives are another big obstacle to cybersecurity automation adoption. AI often flags benign activity because it's better to be safe than sorry, and it's impossible to predict every legitimate way users might act on a network.

On the one hand, erring on the side of caution is better than letting malicious activity slide. On the other hand, false positives overwhelm security professionals, taking their time away from important work and increasing burnout.

How to Address It

Reducing false positives is often a matter of right-fitting an automated solution to the problem. A rules-based detection algorithm is too simplistic to accurately distinguish between suspicious and harmless behavior in a complex environment. Using more nuanced machine learning models instead allows more room for situation-based decision-making.

Security teams must also tweak detection algorithms over time. AI models may produce many false positives initially, but developers can adjust them, teaching them a wider variety of acceptable behavior. These solutions will become more accurate with additional data, reducing false alarms.

3. Skills Shortages

Labor challenges pose another obstacle. While automated systems remedy some skills shortages, they often require AI skills and experience to implement effectively. Errors like misconfiguration are common and damaging, with 75% of data loss stemming from human mistakes.

How to Address It

AI and coding skills are in high demand, so businesses may be unable to hire new professionals with the needed experience. The solution is to grow this talent from within.

Upskilling security professionals to train and implement AI models resolves skills shortages without requiring competitive, expensive hiring processes. This route also ensures the organization's AI leaders are familiar with company policies and workflows, making future security automation projects more effective.

Another possible solution is to opt for off-the-shelf automated systems instead of building custom applications. While the latter may offer more relevant help, the former offloads much of the training and adjustment work, lowering the need for AI-related skills. Businesses should also look for vendors offering extensive ongoing support.

4. Financial Concerns

As with most new technologies, costs are another common barrier to adopting security automation. Almost one-third of all enterprises cite insufficient upfront capital as a roadblock to new technologies. Even if businesses feel comfortable taking on these initial costs, slow returns on investment may hinder them.

How to Address It

Overcoming this barrier starts with recognizing automation's long-term financial benefits. Companies using security automation can save $3.05 million if a breach occurs, thanks to faster responses and more thorough backups. Automation also helps strained security teams become more productive, reducing relative operational costs.

Of course, these long-term benefits don’t mean automated security tools aren’t expensive upfront. In light of those initial costs, businesses should approach automation slowly. Instead of automating multiple processes simultaneously, they should take it one function at a time.

Automating a smaller scope and then getting used to adjusting the system before investing in further automation will spread out initial expenses. It’ll also help teams learn to automate more effectively, leading to faster ROIs.

5. Regulatory Compliance

As cybersecurity laws multiply, regulatory concerns are holding businesses back from adopting security automation. Regulations like Europe's GDPR and California's CCPA impose specific rules on how companies can store and use people's data. A security AI that accesses information under these protections may introduce noncompliance risks.

How to Address It

The primary issue between AI and data regulations is that it's often unclear how AI models access and use information. If a security tool operates in a black box, the company can't be certain the tool adheres to strict privacy requirements, exposing it to fines of tens of millions of dollars in some cases. Explainable AI is the solution.

Software vendors have started providing built-in compliance as these laws have become more common. Businesses should only work with AI providers that offer this assurance.

Security teams that build their own automated solutions should ensure transparency from the start of the development process. Thorough audits and an emphasis on explainability will clarify how AI models access and use protected data. Teams can then adjust the model to comply with regulations and prove this compliance.
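Transparency about data access can be built in from the start by logging every time a model touches a protected field. This is a minimal sketch under assumed requirements; the class name, the set of protected fields, and the record layout are all hypothetical, not drawn from any specific regulation's text.

```python
# Sketch of a data-access audit trail: every read of a protected field
# is logged with a stated purpose, so auditors can verify what the
# security model touched and why. Field names are hypothetical.
from datetime import datetime, timezone

class AuditedDataStore:
    """Wraps records so reads of protected fields are logged for compliance review."""
    PROTECTED_FIELDS = {"email", "ip_address"}  # hypothetical regulated fields

    def __init__(self, records):
        self._records = records
        self.access_log = []

    def get(self, record_id, field, purpose):
        if field in self.PROTECTED_FIELDS:
            self.access_log.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "record": record_id,
                "field": field,
                "purpose": purpose,
            })
        return self._records[record_id][field]

store = AuditedDataStore(
    {"u1": {"email": "a@example.com", "ip_address": "198.51.100.4"}}
)
store.get("u1", "ip_address", purpose="threat-detection")
print(len(store.access_log))  # 1
```

An audit trail like this turns "we believe the model is compliant" into evidence a team can actually show a regulator.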

Cybersecurity Automation Is Too Valuable to Overlook

Automation is an indispensable tool for modern cybersecurity teams. Threats are too numerous and teams are too busy for manual monitoring and responses to be sufficient. The only viable way to manage threats effectively amid the growing security skills gap is to automate where possible.

Effective implementation begins with recognizing where automation can go wrong. Businesses can more readily overcome these challenges if they understand the most common barriers to automation adoption and use. Doing so is crucial to capitalizing on this increasingly necessary technology.


Written by zacamos | Zac is the Features Editor at ReHack, where he covers cybersecurity, AI and more.
Published by HackerNoon on 2023/12/06