CISO priorities: Implementing security from the get-go
Dr. David Brumley, CEO of ForAllSecure, a Carnegie Mellon computer science professor (on leave), and part of the team that won the DARPA Cyber Grand Challenge, was, at one time, a dishwasher and a line chef. That was before going back to get his high school diploma via correspondence courses and attending the University of Northern Colorado (UNCO), where he graduated with a B.A. in Mathematics while also working as a system administrator.
After graduation, he got his first security job: Chief Security Officer at Stanford University. Five years later, he earned a master’s degree in Computer Science from the university. After another five years, he completed a PhD in Computer Science at Carnegie Mellon University (CMU), where he began his teaching career and started a PhD program.
“Working as a CSO gave me thousands of hours of hands-on experience in the field and this shaped my research. In my role as a professor at CMU, I learned a lot about shaping research problems, getting a team of bright minds together to work on them, and keeping the team happy, engaged, and funded,” he told Help Net Security.
Among the problems he wanted to find an answer to was: “How can we automatically check the world’s software for exploitable bugs?”
“I’ve spent 15 years working on technology to help identify vulnerable software. In 2014 at CMU, I was working with two amazing students – Thanassis Avgerinos and Alex Rebert – on this problem, and we had a breakthrough result: we developed a system dubbed Mayhem, which allows users to check off-the-shelf Linux apps for unknown bugs and vulnerabilities,” he shared.
Academically, their work was very well received, but the infosec industry was not yet convinced. So the three founded ForAllSecure together and entered the DARPA Cyber Grand Challenge (CGC), the first computer security tournament designed to test the “wits” of machines rather than human experts.
The objective was to see whether the automated identification and repair of security vulnerabilities in software was possible, and Mayhem ended up winning the challenge.
Automation is the only real solution
That was three years ago. Since then, they’ve been working to turn the Mayhem DARPA research prototype into a product anyone can use, and have had the opportunity to interact with hundreds of cyber professionals to see how it can help protect the world’s software. They’ve also engaged with the Defense Innovation Unit (DIU), a new unit that brings radically new tech into the DoD to protect its systems.
“We’re learning a lot about customers, products, and how the market takes on new technologies. It’s not easy – Mayhem and similar tools are a new breed. Also, during the Cyber Grand Challenge, we didn’t have to worry about how to get apps inside the system for the check. In real life, we do, and we’re working on making it easy,” he added.
For Dr. Brumley, there’s not a shadow of a doubt that the security industry has to turn to technologies that don’t need humans to find security faults in software.
“Humans cannot react quickly enough to the pace of current threats. Every day attackers probe our networks, find new vulnerabilities, and come up with ingenious ways to circumvent security. We know we can’t out-scale attackers manpower-wise; no organization can hire more security experts than there are potential attackers,” he opined.
“Technology scales and works faster than any human can, but that doesn’t mean that there is no role for humans in this battle. What I’m saying is that we should automate as much as possible, leaving humans for what they do best: creative work, thinking of new problems, finding new solutions. And once they do, we should try to find a way to automate those as well.”
If you’re running it, you’re responsible for its security
Organizations must change the way they implement security and the way they look at it, he also said.
“When deciding which new tech to deploy in your IT environment, involve security in that decision. When you’re creating new applications, create an application security team that is integrated with your developers,” he advised.
Organizations should also stop asking themselves whether they are secure (there’s no such thing as absolutely secure) and start asking how quickly they can identify a new problem and react and whether they can move faster than attackers.
“Forty years of research has shown it’s near impossible to solve the ‘make it secure — period’ problem. I think we can solve the ‘how to move faster’ problem,” Dr. Brumley noted.
Thirdly, organizations need to start thinking about all the risks they inherit.
“When you use open source, you’re inheriting a risk. When you use third-party software you’ve not checked yourself, you’re inheriting a risk,” he explained. “I’ve run into many companies who say to me when I point out a huge gaping hole: ‘well, we didn’t develop or create that.’ That doesn’t matter! If you’re running it, you’re responsible for it.”
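To make the inherited-risk point concrete, here is a minimal sketch (not Dr. Brumley’s tooling) of how an organization might check a pinned open source dependency against the public OSV.dev vulnerability database. The package name, version, and ecosystem are placeholders; the point is simply that software you run gets checked by someone on your side.

```python
# Minimal sketch: ask the public OSV.dev vulnerability database whether a
# pinned open source dependency has known advisories. The package name,
# version, and ecosystem below are placeholders.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the OSV advisories recorded for one package version."""
    query = {"package": {"name": name, "ecosystem": ecosystem}, "version": version}
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])


if __name__ == "__main__":
    # Hypothetical pinned dependency; print any advisory IDs found for it.
    for vuln in known_vulns("jinja2", "2.4.1"):
        print(vuln["id"], vuln.get("summary", ""))
```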
And, finally, organizations must invest in their people. Yes, it’s hard and yes, it can be expensive, but people are often thrust into a security role with very little formal training or education, he noted, and they simply have to refine their skills.
“Personally, I’ve found two tricks. First, teach your security people the basics of coding if they don’t already know. The goal isn’t to turn them into developers; it’s to make sure they know how software and computers work deep down,” he advised.
“The second trick I’ve used is encouraging security teams to enter ‘Capture the Flag’ competitions. A hacking CTF is a closed world where security teams can practice and hone their skills, and it ultimately provides a rubric to see how they are doing compared to others. In short, if you play a CTF and get beat, you probably have some skills you can improve on.”
(At CMU, he co-founded and advised a very successful competitive hacking group named the Plaid Parliament of Pwning (PPP). They’ve also created a free online game called picoCTF to help high school kids – as well as others – learn how to hack.)
Please, no more FUD
We all know that Fear, Uncertainty, and Doubt (FUD) sells well, but Dr. Brumley would like to see companies start building trust instead.
“I think smart organizations actually do think in terms of trust. For example, Google provides a free service and incentives to check open source for security flaws with OSS-Fuzz. Why do they do this? One answer is that Google wants people to trust their products, like Google Chrome. They know that if there is a security flaw – even from open source components included in Chrome – people will trust Chrome itself less,” he pointed out.
“When you start thinking of security as a mechanism to build trust, it stops being a cost and becomes added value.”
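For readers curious what an OSS-Fuzz-style check looks like in practice, below is a minimal, hypothetical sketch of a Python fuzz target in the form OSS-Fuzz runs for Python projects via Google’s Atheris engine. The parse_record function is a stand-in for whatever parsing code a real project exposes; it is not an example from Google or ForAllSecure.

```python
# Minimal sketch of a fuzz target in the style OSS-Fuzz runs for Python
# projects via Google's Atheris engine. parse_record is a hypothetical
# stand-in for the code a real project would want checked.
import sys
import atheris


def parse_record(data: bytes) -> dict:
    # Hypothetical code under test: a naive "key=value;key=value" parser.
    text = data.decode("utf-8")
    return dict(pair.split("=", 1) for pair in text.split(";") if pair)


def TestOneInput(data: bytes) -> None:
    try:
        parse_record(data)
    except (UnicodeDecodeError, ValueError):
        pass  # Expected rejections of malformed input; anything else is a finding.


if __name__ == "__main__":
    atheris.instrument_all()              # enable coverage feedback
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```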