Updated: Oct 22
Threat hunting is a crucial aspect of information security, but measuring its effectiveness can be challenging. In this article, we will explore the good and bad metrics for threat hunting, helping you to understand what works and what doesn’t.
Picture threat hunting as a skilled archer. Just as the archer meticulously chooses the right arrow, studies the wind, and gauges distance to hit the target, a threat hunter sifts through data, evaluates patterns, and pinpoints anomalies. But as with any archer, the measure of accuracy isn’t just hitting the target but understanding which shots truly matter. Enter the world of threat hunting metrics. Quantifiable indicators can guide security professionals to improve their information security program’s effectiveness, and threat hunting is no exception.
However, not all metrics are created equal. Some offer clear value (‘The Good’), while others can mislead or divert resources (‘The Bad’), and then there are those that, if misunderstood, can jeopardize an entire security strategy (‘The Ugly’). Let’s explore threat hunting metrics, identify the best from the rest, and learn to use our data effectively.
What are the key differences between good and bad threat hunting metrics?
Good threat hunting metrics provide meaningful insight into the effectiveness of the threat hunting program. They are aligned with the organization’s security goals and objectives, quantifiable, measurable, and relevant to the hunting process. They are also actionable, meaning they supply information that can be used to improve the program.
Bad threat hunting metrics, on the other hand, offer no meaningful insight into the program’s effectiveness. They may be irrelevant, difficult to measure, or misaligned with the organization’s security goals. Worse, they may mislead, providing false or incomplete information that leads to incorrect conclusions and ineffective threat hunting strategies.
The “Good” Metrics
As a threat hunter, I’ve found the most valuable metrics are those which help you understand the efficiency and effectiveness of your threat hunting operations. It is important to remember that although finding previously undetected threats is the ultimate goal, there are plenty more benefits to threat hunting. Let’s explore some of these:
Metrics that don’t rely on finding new threats:
Data Source Coverage: This measures how complete and comprehensive the data collected from all sources across an organization’s infrastructure is. Threat hunting teams can use this metric to ensure:
Comprehensive visibility across systems: are there any parts of the network we lack visibility into?
Reduced blind spots, enhancing threat detection and future hunts: are there specific hosts from which we are unable to collect certain data sources? Do we have all the telemetry needed to execute various hunts and create detections where necessary?
Improved response times: the sooner you identify the source of an attack, the better your chances of reducing its impact. Are there any ingestion delays that could slow down response? Are there any inconsistencies, such as unparsed or non-consolidated data?
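As an illustration, coverage questions like these can be tracked with a simple script. This is a minimal sketch with hypothetical host and telemetry names, not tied to any particular SIEM or EDR product:

```python
from typing import Dict, Set

# Hypothetical set of telemetry sources every host should ship.
REQUIRED_SOURCES: Set[str] = {"process_events", "dns_logs", "auth_logs"}

# Hypothetical inventory: which sources each host actually ships.
host_sources: Dict[str, Set[str]] = {
    "web-01":  {"process_events", "dns_logs", "auth_logs"},
    "db-01":   {"process_events", "auth_logs"},  # missing dns_logs
    "kiosk-7": set(),                            # complete blind spot
}

def coverage_report(hosts: Dict[str, Set[str]], required: Set[str]):
    """Return per-host missing sources and the fleet-wide coverage percentage."""
    gaps = {h: sorted(required - s) for h, s in hosts.items() if required - s}
    covered = sum(len(s & required) for s in hosts.values())
    total = len(required) * len(hosts)
    return gaps, round(100 * covered / total, 1)

gaps, pct = coverage_report(host_sources, REQUIRED_SOURCES)
print(f"Fleet coverage: {pct}%")  # prints "Fleet coverage: 55.6%"
for host, missing in gaps.items():
    print(f"{host}: missing {', '.join(missing)}")
```

In practice the inventory would be generated from your logging pipeline rather than hard-coded, but the output answers the same questions: which hosts are blind spots, and what percentage of required telemetry is actually arriving.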
New Detections Submitted: The number of threats detected by the hunting team that weren’t previously known or detected by other means. This metric is one of my favourites because it shows the direct value the threat hunting team adds to the security program or service. When proposing new detections, the team is expected to have done much of the legwork of researching and documenting the proposed detection. Beyond newly proposed detections, this metric can also incorporate detection improvements, such as introducing new data points or reducing false positive rates for existing detection alerts.
Completed Hunting Sessions: Reflects the total threat hunting missions accomplished within a set time frame, indicating team activity and efficiency. Bear in mind that this is just a number: it doesn’t tell the full story, because the speed and complexity of each hunt can vary with factors such as required research, attack emulation, and so on.
Metrics that rely on finding new threats:
Uncovered Threats: This is the number of threats found during hunts, which need additional analysis to determine their level of impact on the organization.
Threat Severity Breakdown: This metric categorizes identified threats based on their potential impact. Using severity levels (e.g., low, medium, high, critical), threat hunting teams can:
— Prioritize response and resource allocation by surfacing the highest-severity threats first.
— Refine security measures and tools to tackle specific weak points according to their severity.
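Once each hunt finding records a severity, the breakdown is a simple tally. The sketch below uses a hypothetical list of findings; the names and levels are illustrative only:

```python
from collections import Counter

# Hypothetical findings uncovered during recent hunts.
uncovered = [
    {"name": "suspicious LOLBin use", "severity": "high"},
    {"name": "stale admin account",   "severity": "medium"},
    {"name": "C2 beaconing",          "severity": "critical"},
    {"name": "benign scanner noise",  "severity": "low"},
    {"name": "token theft attempt",   "severity": "high"},
]

SEVERITY_ORDER = ["critical", "high", "medium", "low"]
breakdown = Counter(t["severity"] for t in uncovered)

# Report counts from most to least severe, so triage priority is obvious.
for level in SEVERITY_ORDER:
    print(f"{level:>8}: {breakdown.get(level, 0)}")
```

Sorting the report by severity rather than by count keeps attention on what matters for response prioritization.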
Mean Time to Detection (MTTD): This metric measures the time from when an attack occurs to when it is detected. It assumes you have found evil during your hunt, and it complements the Uncovered Threats metric: the lower the time, the more efficient the operation. The ultimate goal is to reduce the time to detection and investigation while increasing the alert-to-case ratio and the percentage of threats detected proactively. Calculating MTTD is as simple as summing the detection times (from initial infection to detection) of all intrusions and dividing by the number of intrusions. This gives you the average MTTD.
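That calculation can be sketched in a few lines. The timestamps here are hypothetical; in practice, infection time would come from forensic analysis and detection time from your case records:

```python
from datetime import datetime
from statistics import mean

# Hypothetical (initial_infection, detection) timestamp pairs per intrusion.
intrusions = [
    (datetime(2024, 3, 1, 9, 0),   datetime(2024, 3, 3, 14, 0)),   # 53.0 h
    (datetime(2024, 4, 10, 22, 0), datetime(2024, 4, 11, 6, 30)),  #  8.5 h
    (datetime(2024, 5, 5, 1, 15),  datetime(2024, 5, 5, 19, 45)),  # 18.5 h
]

# Detection time per intrusion, in hours.
detection_hours = [
    (detected - infected).total_seconds() / 3600
    for infected, detected in intrusions
]

mttd_hours = mean(detection_hours)
print(f"MTTD: {mttd_hours:.1f} hours")  # prints "MTTD: 26.7 hours"
```

Tracking this value over successive quarters shows whether detection capability is actually improving, which is more informative than any single measurement.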
The “Bad” Metrics
Some metrics can mislead or offer little value to threat hunting teams. Here are some metrics viewed with skepticism or caution:
Time Spent Hunting: High productivity and success in threat hunting do not depend solely on the amount of time invested; more time does not equal higher effectiveness. Rather, it’s crucial to hunt efficiently and work methodically so that the hunting hypothesis is confirmed or refuted through tasks such as: ➡ Researching the attack method/tool. ➡ Emulating the attack. ➡ Checking for current detections. ➡ Identifying different ways the same attack could take place that might bypass current or future detections. ➡ Crafting queries. ➡ Analyzing query results and investigating true/false positives. ➡ Eliminating false positives.
Number of Reports Generated: Generating numerous reports may create the impression of productivity, yet it is the valuable insights within these reports that truly count.
The “Ugly” Metrics
Finally, there are metrics that can only harm the threat hunting program and lead to high employee turnover. The metrics below will drain the productivity from your threat hunters and shift their focus away from critical tasks:
Hunt Count per Hunter: This is a dangerous metric that many organizations use as a key performance indicator to compare the performance of hunters in the team. This promotes quantity over quality, with hunters potentially rushing through processes just to boost their numbers. If you don’t want to ruin your threat hunting team, DON’T USE THIS METRIC.
Queries Run: Simply counting how many times someone queried the environment doesn’t gauge the effectiveness of those queries or the relevance of their results.
In this article, I showed how selecting the appropriate metrics can illuminate the successes and efforts of a threat hunting team. Metrics are only as good as how we use them: it is crucial to differentiate between metrics that truly enhance threat hunting capabilities and those that merely give a false sense of progress. By prioritizing quality over quantity and actionable insights over raw data, we can communicate results more effectively and better support the organization’s mission.
Keep in mind that threat hunting is an ongoing process, and metrics should be regularly reviewed and updated to ensure they are still relevant and providing value to the organization.
Special thanks to Ashley for helping with the review of this post!
Follow me here and on Twitter for updates on the next posts for this series.