Secure Coding: Principles & Practices
Authors: Mark G. Graff and Kenneth R. van Wyk
Pages: 224
Publisher: O’Reilly & Associates
ISBN: 0596002424
Chapter 1, entitled “No Straight Thing”, is available for download.
Introduction
The security issues and challenges facing information technology today have their roots in the software development process. One might say that is the root of all evil, or rather, of all vulnerabilities. Written by two eminent software security experts, Mark G. Graff and Kenneth R. van Wyk, “Secure Coding” essentially tries to answer the question of why good people write bad software, and how this can be corrected.
About the authors
Mark Graff is Chief Cyber Security Officer for Lawrence Livermore National Lab and was formerly Network Security Architect and Security Coordinator at Sun Microsystems. He has been a Congressional expert witness, has lectured on network security topics at the Pentagon, and has appeared before the Presidential Commission on Infrastructure Survivability.
Ken van Wyk is Director of Technology for Tekmark Global Service’s Technology Risk Management (TGS-TRM) practice, and was Chief Technology Officer and Co-Founder of security firm Para-Protect Services. He was one of the founders of the Computer Emergency Response Team (CERT), and is also the co-author of O’Reilly’s “Incident Response”.
Inside the book
The book consists of six chapters that closely follow a typical software development methodology, the waterfall model, also known as the Systems Development Life Cycle (SDLC). Its phases are architecture definition, design, implementation, operations and, finally, automation and testing.
The first chapter, titled ‘No Straight Thing’, is a fine introduction to the problem. It is more of a philosophical and political text that examines who bears responsibility for today’s alarming state of software quality. Who is to blame: the consumer, the government or the producer? The authors choose a politically correct answer: all of them share the responsibility.
Yet it is evident that the software industry lacks features that constitute the core of other engineering disciplines (consider bridge building, for example): mandatory security and quality standards, measurement methodologies and product liability.
In this light, the authors correctly identify the main efforts needed to improve the situation: better education of both engineers and the public, which must come to understand how poor the security of today’s Internet software really is; the development of true secure coding standards that can be used by companies, governments and consumers; and, finally, the definition of security metrics, that is, methodologies for assessing how vulnerable a particular software system is.
The second chapter deals with security in the first, architectural stage of development. Here the authors identify 30 principles of security architecture, including guidelines such as understanding and respecting the chain of trust, being stingy with privileges, testing every proposed action against policy, and addressing error-handling issues appropriately, to name a few.
After discussing the architecture, the authors move on to the security issues of the design stage. Why is security so important here? Because a flawed design can later put you in a situation you cannot recover from without a complete redesign, and that is so costly that it can stop a project or, more often, result in a poor-quality product. The authors therefore sensibly advise readers to “construct a mental model” of the application and then consider how it might respond to a real-world event such as an attack. This is particularly hard to do, although it may seem trivial at first.
A well-known example of flawed design is the way the TCP protocol works: the fractured dialogue (the three-way handshake) preceding every TCP session opens the door to denial-of-service attacks such as the SYN flood. Had the protocol been designed more defensively in the first place, such an attack would never have been possible.
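To make the point concrete, here is a minimal sketch (our illustration, not code from the book) of why the handshake is exploitable: a naive listener reserves connection state the moment a SYN arrives, before the peer has proven it will ever complete the handshake, so a stream of spoofed SYNs can exhaust that state.

```c
#include <stdio.h>

#define BACKLOG 8               /* half-open connections we can hold */

struct half_open { unsigned int client_ip; int in_use; };
static struct half_open table[BACKLOG];

/* Called for every incoming SYN: reserve state, then (conceptually)
 * send a SYN-ACK and wait for a final ACK that may never come. */
static int on_syn(unsigned int client_ip)
{
    for (int i = 0; i < BACKLOG; i++) {
        if (!table[i].in_use) {
            table[i].in_use = 1;
            table[i].client_ip = client_ip;
            return 0;
        }
    }
    return -1;                  /* table full: further SYNs are dropped */
}

int main(void)
{
    /* An attacker sends SYNs from spoofed addresses and never ACKs. */
    for (unsigned int fake = 1; fake <= BACKLOG; fake++)
        on_syn(fake);

    /* A legitimate client now cannot get in. */
    if (on_syn(0x0A000001u) < 0)
        printf("backlog exhausted: the SYN flood succeeded\n");
    return 0;
}
```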
Next comes the implementation phase, covered in the fourth chapter. The most common vulnerabilities, buffer overflows among them, originate at this stage. How can such failures be avoided? The authors help by cataloguing good and bad practices. First of all, it is imperative to handle data with caution, especially input data, because improper bounds checking is the birthplace of buffer overflows. Another recommendation is to reuse good code whenever possible: it makes sense to reuse software that has been thoroughly reviewed and tested, and has withstood the test of time and users. Code review is also important, both peer review and independent validation and verification.
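As a simple illustration (ours, not the authors’), the following C fragment shows the classic missing-bounds-check mistake alongside a version that validates the input length before copying:

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy() copies until the NUL byte, regardless of the
 * destination size, so a long name overruns buf on the stack. */
void greet_unsafe(const char *name)
{
    char buf[16];
    strcpy(buf, name);          /* overflows if name is 16 bytes or more */
    printf("hello, %s\n", buf);
}

/* Safer: check the length of the input against the buffer first. */
int greet_checked(const char *name)
{
    char buf[16];
    if (strlen(name) >= sizeof(buf))
        return -1;              /* reject oversized input */
    memcpy(buf, name, strlen(name) + 1);
    printf("hello, %s\n", buf);
    return 0;
}

int main(void)
{
    greet_checked("world");                                   /* accepted */
    greet_checked("this string is far too long for the buffer"); /* rejected */
    return 0;
}
```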
Operations, the process of deploying and running the application in real-world scenarios, is the subject of the next chapter. The authors again go through a series of good and bad practices. What they advocate is close cooperation between application and systems staff: applications must be deployed by a unified team that ensures a secure environment at every level, network, operating system and application alike. An application can be reasonably secure, but if the underlying operating system is not properly secured, the hard work spent securing the application is simply wasted.
Finally, the software needs to be tested. The concluding chapter of the book addresses testing and other ways of automating parts of the development process.
Testing is indeed a fundamental part of the development process, but it cannot substitute for sound design and implementation. That said, you will find this chapter a good reference on automated testing tools and methodologies. During implementation, you can use static code checkers, which scan source code for security vulnerabilities and programming mistakes; these include RATS, Splint and UNO. At operations time, runtime checkers can be used; these install a hook between the application and the operating system and check all calls for correctness. The authors mention Libsafe, PurifyPlus and Immunix. Other helpful tools are so-called profilers (which monitor software behavior for anomalies from its normal mode of operation), application-level scanners such as AppScan and Whisker (which use fault injection to determine the robustness of an application), network protocol analysers, and so on.
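For a flavour of what such static checkers are designed to catch, here is a tiny example of our own (not taken from the book): passing untrusted data directly as a printf format string, a defect class tools like RATS report alongside unsafe buffer handling.

```c
#include <stdio.h>

/* Dangerous: the caller's data becomes the format string. */
void log_message(const char *user_input)
{
    printf(user_input);             /* flagged: format string vulnerability */
    /* printf("%s", user_input);       the safe form such scanners accept */
}

int main(void)
{
    log_message("hello %x %x\n");   /* conversion specifiers leak stack data */
    return 0;
}
```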
You will also find here an introduction to risk assessment methodologies such as ACSM/SAR and ASSET.
Final thoughts
Although it may at first seem like a highly technical book, “Secure Coding” is definitely not one, as it is meant for a much broader readership. The book aims to explain secure coding to a wide audience ranging from academics and software developers to executives, project managers, other security professionals and, why not, software users. That is the book’s main strength, because secure software is a goal that requires all parties to be adequately informed.
As for the primary party, the culprit if you wish, namely the software developer: this book does not offer detailed discussions of specific secure coding issues, but it will certainly produce a higher sense of awareness when coding the next project. And that’s a good start.