"Defining the Win" is a popular topic when Mandiant personnel talk to clients. In this blog post I will share three examples from my experience in incident response.

When I consulted for Foundstone as part of Kevin Mandia's Incident Response Team from 2002-2004, we defined the win as removing the adversary from a customer network. The unspoken corollary was that once we helped kick an adversary out of a victim organization, the intruder would not return.

In some cases the customer retained our team to identify when the adversary returned and take additional remediation actions as needed. I recall one case involving a threat actor who was part of a Romanian organized crime operation. Over the course of a few weeks, the Foundstone IR team of which I was part identified the adversary's sphere of control and removed him from the victim company. During the six months following our first remediation action, the intruder made two additional attempts to regain control of the victim. In both cases I detected his activity, advised the client, and assisted with additional remediation. The intruder did not return after that, and law enforcement later told me they believed our Romanian friend had outlived his usefulness.

After working for Foundstone I kept in touch with Kevin, and in 2006 I saw him "define the win" during a presentation. His talk emphasized predicting adversary activity. In other words, a security team was winning if the defensive actions it took maneuvered the adversary into taking certain counter-actions. Because this sort of security team was capable of predicting adversary activity, it could try mitigating the effects of the new assault, or at least better detect it when the intruder launched it. (Note I'm not talking about predicting specific tools, exploits, or compromised computers; rather, general modes of attack are the focus here.)

An example might clarify this concept: Imagine that an organization runs several poorly secured public-facing servers. An adversary is likely to compromise a victim using the path of least resistance, i.e., the weak servers. Should the security and IT teams secure the servers, the adversary will likely turn to client-side attacks, probably using phishing. If through some means the security team can reduce the likelihood of successful phishing, the adversary might try accessing the target via third party connections, and so on. I have seen and been part of security teams that can play this sort of game, and at least staying level with the adversary can be seen as one way to "win."

Later in my career I worked as the Director of Incident Response at General Electric, from 2007-2011, and learned a third way to define the win. In my DirIR capacity I defined the win using the metrics set by our Chief Information Officer, Gary Reiner. Mr. Reiner set two metrics for our IR program: 1) minimize digital incidents, with the goal of approaching zero, and 2) keep the time from detection to containment at or under one hour.

If you've read previous blog posts or listened to webinars, you may recognize these two metrics. I discuss the first as "classify and count security incidents" and the second as "measure time from detection to containment." An organization's CIO or CSO can then set goals for either of these, as Mr. Reiner did for my previous employer. The beauty of this approach is that it sets the organization free to meet the goals using a variety of means. Sometimes process improvement works best; in other cases, perhaps new technology is needed. Whatever the means, the security teams can measure the ends and present results to management in a form they requested and can best understand.
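The two metrics above can be tracked with very simple tooling. The following is a minimal sketch in Python, using hypothetical incident records and field names (nothing here reflects GE's actual systems), showing how a team might count incidents and check each against a one-hour detection-to-containment goal:

```python
from datetime import datetime, timedelta

# Hypothetical incident records; real ones would come from a ticketing
# or case-management system.
incidents = [
    {"id": 1, "detected": datetime(2011, 3, 1, 9, 0),
     "contained": datetime(2011, 3, 1, 9, 40)},
    {"id": 2, "detected": datetime(2011, 3, 5, 14, 0),
     "contained": datetime(2011, 3, 5, 15, 30)},
]

# Metric 1: classify and count security incidents.
incident_count = len(incidents)

# Metric 2: measure time from detection to containment,
# checked against a one-hour goal.
goal = timedelta(hours=1)
times = [i["contained"] - i["detected"] for i in incidents]
within_goal = sum(1 for t in times if t <= goal)

print(f"Incidents: {incident_count}")
print(f"Met one-hour goal: {within_goal} of {incident_count}")
```

Reporting "met goal in N of M incidents" each period gives management exactly the outcome-oriented view these metrics were designed to provide.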

As a final note, observe that all three examples are "results-oriented." None of them focus on "inputs." Rather, all concern an outcome.

How do you "define the win"?