Rethinking security: Securing activities instead of computers
For many people involved in the infosecurity community, the notion of security is too often tied to the quality of code (resistance to specific classes of bug, for example) and effective patching – in short, to low-level security.
But independent security consultant Eleanor Saitta believes that software developers and security engineers need to take a step back and look at the bigger picture.
“Security is not a property of a technical system,” she noted in her talk at the Hack in the Box conference in Amsterdam. “Security is the set of activities that reduce the likelihood of a set of adversaries successfully frustrating the goals of a set of users.”
Software development teams that understand what users want and what adversaries they face are very rare, she noted. And security engineers have forgotten, or misunderstood, what their job is: not securing computers, but securing the activities that lead to the realization of greater goals.
Nowhere is that more obvious than in the situations high-risk users face, such as participants in the Occupy movement or dissidents around the world.
Saitta realized that much of what we know in the security world cannot be used effectively when someone in the real world is targeted by a determined adversary.
As she vividly put it: if you're on a rooftop, trying to get a connection and send out an encrypted message because your life or freedom, or that of others, depends on it, and you know that there are snipers waiting to take a shot at you, there is simply no room for a tool as complex as PGP.
“We forgot that our job was really to stop bad things from happening to good people,” she pointed out.
Security tools should be created with users' needs in mind. We shouldn't work from assumptions or go by intuition; we should set aside our egos and consult the end users to learn about their goals and adversaries.
So, how do we go about doing that? The answer: in an organized manner, with threat modeling, adversary modeling, and operational planning.
“A threat model is a formal, complete, human-readable model of the human activities and priorities and of the security-relevant features of in-scope portions of a system,” Saitta explained. “An engineering tool that will help us define what we are trying to get the system to do.”
Building a good threat model is not a trivial task, she warned, and that's why it's not done often. But there are tools out there that can help with it, and already-documented models that can be customized.
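For illustration only, here is a minimal sketch of what the core elements of such a model might look like as data. The structure and every entry in it are hypothetical examples, not drawn from Saitta's talk or from any particular tool:

```python
# A minimal, hypothetical sketch of a threat model's core elements as data.
# All names and entries are invented examples, not part of any specific tool.
from dataclasses import dataclass

@dataclass
class ThreatModel:
    activities: list[str]          # human activities the system supports
    assets: list[str]              # security-relevant features in scope
    adversaries: list[str]         # who might frustrate the users' goals
    threats: dict[str, list[str]]  # asset -> ways an adversary can attack it

model = ThreatModel(
    activities=["send a report to a journalist"],
    assets=["message contents", "sender identity"],
    adversaries=["state-level network observer"],
    threats={"sender identity": ["traffic analysis", "device seizure"]},
)
```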
Operational planning will help us detail other things we need to take into consideration, such as resource management, risk analysis, and a whole set of different practices (task domain, communication, community, and so on).
Here is where we choose which invariants – things that systems attempt to maintain – are important to us: simplicity, confidentiality, availability, integrity, deployability, trust, interoperability, and many, many more.
The thing to keep in mind, though, is that every invariant has a planning cost and influences other invariants. In general, the fewer the invariants, the easier the process.
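To make that trade-off concrete, consider a toy sketch; the costs and the conflict below are invented for illustration, as the talk offered no such numbers:

```python
# Toy illustration of weighing invariants: each carries a planning cost,
# and some pairs influence each other. All values here are invented.
costs = {"confidentiality": 3, "availability": 2, "deployability": 1}
# e.g. strict confidentiality (key management) can make deployment harder
conflicts = {("confidentiality", "deployability")}

def plan_cost(chosen: set[str]) -> int:
    base = sum(costs[i] for i in chosen)
    # each conflicting pair we keep adds extra coordination cost
    penalty = sum(1 for a, b in conflicts if a in chosen and b in chosen)
    return base + penalty

print(plan_cost({"deployability"}))                     # 1: fewer invariants, easier
print(plan_cost({"confidentiality", "deployability"}))  # 3 + 1 + 1 = 5
```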
Here is also where we make the important decisions: sometimes, for example, speed will be more important than security, effectively making a “bad” solution better than a “good” one. For high-risk users, usable security is a must.
Threat modeling is where development and security engineering meet.
But for this mapping of the security task to be truly effective, we also need to do adversary modeling and bring in the users to have a say in the design of the solution.
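As a rough idea of what an adversary model might record, here is an illustrative entry; the fields and values are assumptions made for the example, not a formal schema:

```python
# Illustrative adversary-model entry; fields and values are assumed
# for the example and do not come from the talk.
adversary = {
    "name": "state-level network observer",
    "capabilities": ["passive interception", "traffic analysis"],
    "resources": "high",  # budget, personnel, legal reach
    "goals": ["identify sources", "block publication"],
}
```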
All of these things together make for effective security design, and that is what we should be striving for, whether or not our solutions are meant for high-risk users.