Security people love compliance programs. Really! It’s why we have so many of them.
No. OK, we hate compliance programs. Even when I try to tell jokes about compliance programs, I hate them.
The reason I hate compliance programs is that they’re lists of things we must do, and those things often don’t make a great deal of sense. In threat modeling, I talk about the interplay among threats, controls, and requirements, and I joke that “a requirement to have a control absent any threat” is why we hate compliance programs. (I’m not really joking.)
So I enjoyed it when Anton Chuvakin recently offered this advice to security teams in an article called “Data Security and Threat Models”: “Don’t deploy security controls — whether data security or others — unless you know what problem you are solving.” This reminded me of work that I started several years back and never published (until now), because other work took priority. It is about the threat model that underlies the PCI standard.
I had to reverse engineer this threat model from the standard as it then stood. Put another way, the Payment Card Industry Data Security Standard (PCI-DSS) answers the question “What are we going to do?”, which is one of the key parts of threat modeling. But the other questions (“What are we working on?” and “What can go wrong?”) are left as an exercise for the reader.
And let me tell you, dear reader: that was a lot of exercise, on the order of weeks of painful work. To state the obvious, there are some real head-scratchers in there, and everyone who goes through PCI audits has their own. There were times I got angry, and times I was confused. What’s more, there’s no reason I should have had to do that work at all. I assume that the PCI folks work from an explicit list of problems they wish to address in their standard. If so, they should publish that list.
Having done that work, I am somewhat happy to share the result. I sincerely hope to never have to do such work again. But I would like to echo and amplify what Chuvakin said: “Explicit threat models do make security better.” We should demand threat model documents from those who craft or promulgate standards.
When I say a threat model document, I mean answers to what I call “The Four Questions”:
- What are we working on?
- What can go wrong?
- What are we going to do about it?
- Did we do a good job?
For a standards body, those questions get slightly tweaked:
- What’s your model of the systems your standard protects?
- What can go wrong?
- What must people do about it to become compliant?
- Did you do a good job? (Why are these the meaningful threats? The best controls? Was your approach sane? Are you precisely aligned with other standards where feasible?)
That last point matters: every time a standard uses slightly different words, every compliance analyst in the world must re-evaluate their controls to see whether they still match. Divergence is expensive. If we lived in a world where we judged programs by their outcomes, we could treat those differences as an evolutionary force, an opportunity to accumulate small advantages. But we sadly do not live in that world.
Explicit threat models make security better. The threat model that underlies a defensive standard ought to exist before the standard does. There is certainly work involved in taking it from “internal use” quality to release quality. But publishing it allows us to move from compliance to engineering.
This morning, I was talking to some rocket scientists. (OK, not really. They were aerospace engineers.) They emphasized to me that their folks are really, really good at engineering, at tackling a problem and solving it. But they get bogged down in compliance checklists. If we shared our threat models with them, they could create new solutions that work in their unique environment. In that, they’re not unique. There are a lot of great engineers out there. There are a lot of unusual problems.
Specifying why we care is not opening the door to reinvention of the wheel. It’s not an argument for ignoring the catalog of tested ways to manage problems. But it would be a great step toward opening the door to real and needed innovation.
Adam is a leading expert on threat modeling. He’s a member of the BlackHat Review Board, and helped create the CVE and many other things. He currently helps many organizations improve their security via Shostack & Associates, and helps startups become great businesses.