Should we add bugs to software to put off attackers?
A group of New York University researchers is testing a new approach to software security: adding more bugs to software instead of removing them. The idea is to “drown attackers in a sea of enticing-looking but ultimately non-exploitable bugs” and waste skilled attackers’ time.
This approach is aimed at disrupting the triage and exploit development stages of the attackers’ workflow by introducing many chaff bugs (the name is a nod to the strips of foil dispensed by military aircraft to confuse enemy radar).
“By carefully constraining the conditions under which these bugs manifest and the effects they have on the program, we can ensure that chaff bugs are non-exploitable and will only, at worst, crash the program,” the researchers explained.
“Although in some cases bugs that cause crashes constitute denial of service and should therefore be considered exploitable, there are large classes of software for which crashes on malicious inputs do not affect the overall reliability of the service and will never be seen by honest users. Such programs include most microservices (which are designed to gracefully handle the failure of a server by restarting it), server-side conversion utilities, or even web servers such as nginx that maintain a pool of server processes and restart any process that crashes.”
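The nginx case is easy to picture concretely. The sketch below is not nginx’s actual code, just a minimal illustration of the master/worker pattern the researchers allude to: a supervising process simply re-forks any worker that dies, so a worker crashed by a malicious input costs one process for a moment, not the service.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4

/* Placeholder for the real request-handling loop; a chaff-bug crash
 * would happen somewhere in here, on a malicious input. */
static void worker_loop(void)
{
    for (;;) {
        /* accept a connection, parse input, possibly crash ... */
        sleep(1);
    }
}

static pid_t spawn_worker(void)
{
    pid_t pid = fork();
    if (pid == 0) {        /* child: become a worker */
        worker_loop();
        _exit(0);
    }
    return pid;
}

int main(void)
{
    for (int i = 0; i < NWORKERS; i++)
        spawn_worker();

    /* Supervise: whenever a worker exits (including on SIGSEGV from a
     * crash-only bug), start a replacement. Honest users served by the
     * other workers never notice. */
    for (;;) {
        int status;
        pid_t dead = wait(&status);
        if (dead > 0)
            spawn_worker();
    }
}
```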
Initial successes
The effectiveness of the scheme also hinges on the bugs being both non-exploitable and realistic (i.e., indistinguishable from “real” ones). For the moment, the researchers have concentrated on the first requirement.
The researchers have developed two strategies for ensuring non-exploitability and used them to automatically add thousands of non-exploitable stack- and heap-based overflow bugs to real-world software such as nginx, libFLAC and file.
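The article doesn’t detail the two strategies, so the C sketch below should be read as an illustration of the general idea rather than the paper’s exact construction (the structs and functions are invented for this article). Strategy (a) lets the overflow spill only into data the program never uses; strategy (b) lets it reach live data, but constrains the value that can land there so that any later use of the corrupted data faults instead of handing the attacker control:

```c
#include <stdint.h>
#include <string.h>

/* Illustration of strategy (a): the overflow can only spill into data
 * the program never reads, so corrupting it changes nothing observable. */
struct record {
    char    name[16];
    char    unused_pad[64];  /* deliberately dead space placed after the buffer */
    int32_t id;              /* live data, kept out of the overflow's reach */
};

void copy_name_unused(struct record *r, const uint8_t *src, size_t n)
{
    if (n > sizeof r->unused_pad)   /* cap keeps the spill inside name[] + unused_pad[] */
        n = sizeof r->unused_pad;
    memcpy(r->name, src, n);        /* may overrun name[], but only into unused_pad[] */
}

/* Illustration of strategy (b): the overflow may clobber live data (a
 * pointer), but the value that can land there is constrained so that any
 * later use faults -- a crash at worst, never a redirect of control or
 * data flow. A real injection would constrain the attacker-influenced
 * bytes themselves; this sketch only shows the end state that the
 * constraint is meant to guarantee. */
struct conn {
    char  buf[16];
    char *next;              /* live pointer reachable by the overflow */
};

void copy_name_constrained(struct conn *c, const uint8_t *src, size_t n)
{
    memcpy(c->buf, src, n < sizeof c->buf ? n : sizeof c->buf);
    if (n > sizeof c->buf)                    /* "overflow" path taken */
        c->next = (char *)(uintptr_t)0x1;     /* constrained to an unmapped address */
}
```

In the researchers’ system such guarantees are established automatically during injection rather than by hand-written padding, and each bug is additionally gated by trigger conditions so that it only manifests on inputs honest users never send.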
“We show that the functionality of the software is not harmed and demonstrate that our bugs look exploitable to current triage tools,” they noted.
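Why a crash-only bug can still look exploitable is worth spelling out: automated triage typically rates a crash from surface signals (was the faulting access a write? does attacker input reach it? is control flow already corrupted?) rather than from a full exploitability proof. The C sketch below is a deliberate caricature of that kind of heuristic, not the logic of any real tool:

```c
#include <stdbool.h>
#include <stdio.h>

enum rating { LIKELY_EXPLOITABLE, UNKNOWN, LIKELY_NOT_EXPLOITABLE };

struct crash_info {
    bool is_write;              /* faulting access was a write */
    bool pc_corrupted;          /* crash occurred while fetching instructions */
    bool input_reaches_fault;   /* attacker input flows into the faulting access */
};

/* Coarse classification driven only by surface signals -- which an
 * injected chaff bug can exhibit despite being non-exploitable. */
enum rating triage(const struct crash_info *c)
{
    if (c->pc_corrupted)
        return LIKELY_EXPLOITABLE;      /* control flow already hijacked */
    if (c->is_write && c->input_reaches_fault)
        return LIKELY_EXPLOITABLE;      /* attacker-influenced out-of-bounds write */
    if (!c->is_write && !c->input_reaches_fault)
        return LIKELY_NOT_EXPLOITABLE;  /* e.g. wild read of a fixed address */
    return UNKNOWN;
}

int main(void)
{
    /* A chaff-induced crash: attacker bytes flow into an out-of-bounds
     * write, so the heuristic flags it -- even though the write can only
     * ever hit unused or constrained data. */
    struct crash_info chaff_crash = { .is_write = true,
                                      .pc_corrupted = false,
                                      .input_reaches_fault = true };
    printf("rating: %d\n", triage(&chaff_crash));  /* prints 0 (LIKELY_EXPLOITABLE) */
    return 0;
}
```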
Checking whether a bug can be exploited and then writing a working exploit for it is time-consuming, and the process currently can’t be automated effectively.
Making attackers waste time on non-exploitable bugs should frustrate them and, hopefully, in time, serve as a deterrent.
Could this work?
The researchers are the first to point out the limitations of this approach: the aforementioned need for the software to tolerate crashes, the fact that they still have to find a way to make these bugs indistinguishable from those occurring “naturally”, and the fact that defenders have to make sure the injected bugs are, indeed, non-exploitable.
Also, the technique can’t be used to protect software distributed as source code, only compiled builds of a program.
“Developers are unlikely to be willing to work with source code that has had extra bugs added to it, and more importantly future changes to the code may cause previously non-exploitable bugs to become exploitable. Hence we see our system as useful primarily as an extra stage in the build process, adding non-exploitable bugs,” they pointed out.
Nevertheless, they do believe that, in time and with additional research, chaff bugs can be a valuable layer of defense.