Episode 206 of Down the Security Rabbithole features the crew talking with Steve Christey Coley from MITRE, and the conversation settles on a theme DtSR has hit on before: the need to define “security research”. I’m completely with them on this one. Especially as people are trying to go after terrible laws like the DMCA, the industry needs to define terms such as “security researcher”, or else someone else will end up doing it.
The discussion moves on to other venues of research and how security research fails to emulate their standards. The comparison to medical research (e.g., cancer) comes up a lot. Towards the end, Michael comments that these are old questions, so why not seek old answers? Again, this is not a bad path.
What’s missing from this conversation is an understanding of how security research differs from other, more formal types of research. Only if you understand what’s different can you take the existing solutions and tailor them to your problem. So what is different?
Bar to entry – when you think of research, you think of getting a PhD, then securing grants and funding. If you are studying human subjects (as in the medical realm), you need to find them and get their permission. There’s a huge bureaucracy, and with that bureaucracy comes a lot of rules and regulations.
Info sec, on the other hand, has almost no bar to entry. Someone with aptitude and a work ethic can learn all they need by reading online and experimenting on their own computer. They can download free tools and get started in the basement. They have tons of “subjects” in their possession, and can reach many more on the internet.
Speed – people treat problems like curing cancer, understanding archaeology, and answering questions about the universe as things that will take many years to figure out.
As much as we all probably hate hearing about the “speed of cyber”, it is just on a different timescale.
Discovery vs. error identification – perhaps the most important point is that when we talk about research, we typically mean humans trying to figure out how the world works, gaining some understanding we didn’t already have.
Security research isn’t that. It is one human looking for a way to make what some other human built behave in an unexpected way. The conversation in DtSR 206 touches on this. So maybe we shouldn’t call it research. But we have to call it something in order to define what we are talking about.
The DtSR team definitely has feelings about the term “security researcher”, and regularly uses Charlie Miller and Chris Valasek’s remotely disabling a Jeep with WIRED writer Andy Greenberg inside as their straw man. I’d submit that this is a bad example. The research here isn’t the problem. Two smart guys took apart a device they owned (a Jeep Cherokee) and came to understand it, vulnerabilities included. Given what they found, I’d argue that’s exactly the kind of research we want.
What the DtSR folks really didn’t like was the publication method. There’s responsible (or coordinated) disclosure, presenting at DEFCON, or dumping it all on Pastebin. And then there’s stunt hacking. Regardless, the quibble isn’t with the research; it’s with how they went public with it. Was it dangerous? That’s debatable.
I suppose publication could be wrapped into the definition of research, but I suspect there would be unintended consequences there that you’d want to think through carefully first.