Base Rates And Security Monitoring Use Cases

As we continue our research on security monitoring use cases, a few interesting questions about technology implementation and optimization arise.
Any threat detection system designed to generate alerts (newer “analytics” products such as UEBA tools have been moving away from simple alert generation toward “badness level” indicators – an interesting evolution I’ll try to write more about in the future) will have an effectiveness level that indicates how precise it is, in terms of false positives and false negatives.
Many people believe that getting those rates to something like “lower than 1%” would be enough, but the effectiveness of an alert generation system involves more than just those numbers.
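To make those two rates concrete, here is a minimal sketch (in Python) of how they are computed from the confusion matrix of a single detection rule; the counts are purely illustrative assumptions, not numbers from the research.

```python
# Minimal sketch: the two error rates of a detection rule, computed from an
# assumed confusion matrix. All counts are illustrative, not real measurements.

tp, fp, tn, fn = 95, 800, 99_000, 5   # hypothetical counts over some period

fp_rate = fp / (fp + tn)   # benign events the rule wrongly flagged
fn_rate = fn / (fn + tp)   # malicious events the rule failed to flag

print(f"False positive rate: {fp_rate:.2%}")   # ~0.80%
print(f"False negative rate: {fn_rate:.2%}")   # 5.00%
```

Both rates describe the rule in isolation; as the rest of this post argues, they say nothing yet about how trustworthy any individual alert is.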
One thing that makes this analysis more complicated than it looks is the base rate fallacy: the tendency to ignore how rare the condition being tested for actually is when interpreting test results.
What makes this extremely important to our security monitoring systems is that almost all of them are analyzing data (log events, network connections, files, etc.) with a very low base rate probability of being related to malicious activity.
For a security system to detect that malicious activity based only on that data, it must have extremely low false positive and false negative rates to be usable by a SOC.
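To see why, here is a minimal sketch of the base rate arithmetic, again in Python and again with purely illustrative assumptions: one malicious event per 100,000, ten million events per day, and a 1% rate for both false positives and false negatives.

```python
# Minimal sketch of the base rate effect on alert quality.
# Every number below is an illustrative assumption, not a measurement.

base_rate = 1 / 100_000       # assumed fraction of events that are malicious
fp_rate = 0.01                # assumed false positive rate
fn_rate = 0.01                # assumed false negative rate
events_per_day = 10_000_000   # assumed daily event volume

malicious = events_per_day * base_rate
benign = events_per_day - malicious

true_alerts = malicious * (1 - fn_rate)   # malicious events that fire an alert
false_alerts = benign * fp_rate           # benign events that fire an alert

precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts per day:  {true_alerts:,.0f}")       # ~99
print(f"False alerts per day: {false_alerts:,.0f}")      # ~99,999
print(f"Chance a given alert is real: {precision:.2%}")  # ~0.10%
```

Even with both error rates below the “lower than 1%” target, only about one alert in a thousand points at real malicious activity under these assumptions – exactly the effect the base rate fallacy makes so easy to overlook.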
You don’t need to run a full statistical analysis of every detection use case to make use of this concept; simply keeping the base rate of the underlying data in mind when designing or optimizing a use case already helps.
That was all about base rates; there are other things to take into account when designing and optimizing use cases, such as the importance of the event being detected and the operational processes triggered by the alerts.
But that’s something for another post (and, of course, for that research report coming soon!).
