4 Machine Learning Challenges for Threat Detection

Posted on: May 4, 2020

While ML can dramatically enhance an organization's security posture, it is critical to understand some of its challenges when designing security strategies.

The growth of machine learning and its ability to provide deep insights from big data continues to be a hot topic. Many C-level executives are developing deliberate ML initiatives to see how their companies can benefit, and cybersecurity is no exception. Most information security vendors have adopted some form of ML; however, it’s clear that ML isn’t the silver bullet some have made it out to be.

While ML solutions for cybersecurity can and will provide a significant return on investment, they do face some challenges today. Organizations should be aware of a few potential setbacks and set realistic goals to realize ML’s full potential.

False positives and alert fatigue

The greatest criticism of ML-based detection software is the “impossible” number of alerts it generates: think millions of alerts per day, effectively delivering a denial-of-service attack against the analysts who must triage them. This is particularly true of “static analysis” approaches that rely heavily on how threats look.

Even an ML-based detection solution that is 97% accurate may not help because, simply put, the math is not favorable.

Let’s say an organization has one real threat among 10,000 users on its network. Multiplying 0.97 (for 97% accuracy) by that 1-in-10,000 base rate gives 0.0097%: the fraction of all users who are both a genuine threat and correctly flagged. Bayes’ law then requires comparing that figure against the false alarms: a 3% error rate across the remaining 9,999 benign users produces roughly 300 false alerts for every true one, so even with 97% accuracy, the chance that any given alert represents a real attack is well under 1%.
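To make the arithmetic concrete, here is a minimal Python sketch of that calculation. It assumes, purely for illustration, that “97% accurate” means a 97% true-positive rate paired with a 3% false-positive rate; the alert_precision helper is hypothetical, not drawn from any vendor’s tooling.

    # Minimal sketch: chance that an alert is a real attack, via Bayes' rule.
    # Assumes "97% accurate" = 0.97 true-positive rate and 0.03 false-positive rate.
    def alert_precision(true_positive_rate, false_positive_rate, base_rate):
        p_alert_and_attack = true_positive_rate * base_rate
        p_alert_and_benign = false_positive_rate * (1 - base_rate)
        return p_alert_and_attack / (p_alert_and_attack + p_alert_and_benign)

    # One real threat among 10,000 monitored users.
    precision = alert_precision(0.97, 0.03, 1 / 10_000)
    print(f"Chance an alert is a real attack: {precision:.2%}")  # roughly 0.3%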

Since improving accuracy beyond 97% may not be feasible, the best way to address this is to shrink the population under evaluation through whitelisting or prior filtering informed by domain expertise. This could mean focusing on highly privileged, heavily credentialed users or on a specific, business-critical unit.
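As a rough illustration of why shrinking the monitored population helps, the same hypothetical alert_precision sketch from above can be rerun against smaller, higher-value populations: the base rate of real threats rises, and alert precision improves dramatically even though the detector itself is unchanged.

    # Hypothetical illustration: the same 97%-accurate detector applied to
    # progressively smaller, higher-value populations (still one real threat).
    for population in (10_000, 500, 50):
        precision = alert_precision(0.97, 0.03, 1 / population)
        print(f"{population:>6} monitored users -> {precision:.1%} of alerts are real")
    # roughly 0.3% at 10,000 users, 6% at 500 users, 40% at 50 users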

Dynamic environments

ML algorithms work by learning the environment and establishing baseline norms before monitoring for anomalous events that can indicate a compromise. However, if the IT environment is constantly reinventing itself to meet business agility needs and never settles into a steady baseline, the algorithm cannot effectively determine what is normal and will raise alerts on completely benign events.
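As a rough sketch of why this matters, consider a toy baseline detector that flags any value far from the mean of a previously learned window (a simple z-score rule, used here only as a stand-in for real ML products, which are far more sophisticated). When a deliberate but benign change triples a service’s traffic, the stale baseline makes the change look like a compromise.

    import statistics

    # Toy baseline detector: flag values far outside the learned baseline.
    def is_anomalous(history, value, threshold=3.0):
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9  # avoid dividing by zero
        return abs(value - mean) / stdev > threshold

    # Baseline learned during a stable period: ~100 requests/hour.
    baseline = [98, 102, 99, 101, 97, 103, 100, 99]

    # A planned migration legitimately triples traffic...
    print(is_anomalous(baseline, 310))  # True: a benign change still raises an alert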

To help minimize this impact, security teams must work within DevOps environments to know what changes are being made and update their tooling accordingly. The DevSecOps (development, security, and operations) acronym is gaining traction precisely because these functions need to stay synchronized and operate with a shared awareness of the environment.