Interview with Michael Rash, Security Architect and Author of “Linux Firewalls”
Michael Rash is a security architect with Enterasys Networks, where he develops the Dragon intrusion detection and prevention system. He is a frequent contributor to open source projects and the creator of psad, fwknop, and fwsnort. Rash is an expert on firewalls, intrusion detection systems, passive OS fingerprinting, and the Snort rules language.
How did you become interested in computer security?
In 1996 I started working for Digex, Inc., which at that time was a tier-1 ISP in Beltsville, MD. My initial role as a support technician had little to do with computer security, but less than a year later I moved into a group that was tasked with maintaining a set of nearly 100 Check Point firewalls and a few Cisco NetRanger systems for network IDS. This exposure to both the policy enforcement and network intrusion detection sides of computer security sparked a keen interest in the field, and because we were responsible for a large set of systems I also developed an interest in automation. At the time, I had also decided to round out my academic pursuits, and entered graduate school in the Mathematics Department at the University of Maryland, initially to pursue a Ph.D. in pure mathematics. However, my interest in computer security became strong enough (mostly because of the exposure to the field of intrusion detection) to compel me to change my degree path to applied mathematics with a concentration in computer security. I finished in 2000 with a Master’s degree. There was nothing more intellectually humbling than attempting to do graduate-level work in pure mathematics, and I’m grateful for having had the chance to try, but my heart is in the applied aspects of computer security.
Which is your favorite Linux distribution? Which one do you consider to be the most secure?
These days I’ve become a fan of Ubuntu, and I run it on both my laptop and my desktop at work. With the completeness of the Debian repository tree, I find that Ubuntu meets my software and hardware support requirements. Also, Ubuntu is not “service happy”: it does not start a huge number of services by default that you might not need (or want) to run. At home, I have a Gentoo system and a Fedora system as well.
When it comes to security, I view the major Linux distributions as relatively similar; that is, they all provide security updates to interested users, many have installers that can deploy a firewall, and some take the next step and provide the ability to deploy kernel-level security mechanisms (such as the Mandatory Access Control layer provided by SELinux). Even with all of these protections, it is best to think of security as a process (particularly as something that requires monitoring), and as such it always needs to be applied regardless of the Linux distribution.
One area to pay special attention to is the kernel. Major Linux distributions have to compile the kernels they install with maximal hardware support because they need to be compatible with as many end systems as possible. This also extends to filesystems and other areas of the kernel that are not purely related to hardware support. Having a lot of extra compiled code around, especially code that is part of the kernel, is not good for security. In essence, Linux distributions carry a built-in layer of unnecessary complexity when installed on any particular system. So, it is a good idea to recompile the kernel with a set of configuration options limited to the hardware and usage specifics of the system where the OS will run, and this is an important step that applies to all major Linux distributions.
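To make this concrete, a minimal sketch of a trimmed-down kernel rebuild might look like the following (paths and build steps vary by distribution and kernel version, so treat this as illustrative):

    # Start from the distribution's config for the running kernel.
    cd /usr/src/linux
    cp /boot/config-$(uname -r) .config

    # Interactively disable drivers, filesystems, and subsystems that
    # this particular host does not need.
    make menuconfig

    # Build and install the slimmed-down kernel and its modules.
    make && make modules_install && make install

The point is not the exact commands but the habit: every configuration option switched off is compiled code that can no longer harbor a kernel-level vulnerability on that system.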
In your opinion, what is the current state of Linux security? What areas need improvement?
The Linux community has had major successes over the past few years, and many of these are a direct result of the high quality and stability of the code base. Linux server deployments solidified the platform’s indispensability to everyone from governments to major corporations, but it was not so long ago that the battle for the desktop seemed lost. Today, Firefox and OpenOffice have made major inroads into territory once completely within Microsoft’s grip, and the polish that projects like X.org, Gnome, and Beryl bring to the look and feel of the Linux desktop has never been greater. How does this all relate to security? Allow me to offer a few perspectives. First, it is easier to secure systems that are less complex, and server systems designed to perform a specific set of functions (say, a webserver) are generally less complex than desktop systems running lots of complex software derived from large code bases. This, together with high-quality projects such as Apache, helped Linux as a server platform to at least not acquire a poor reputation for security (and that is a drastic understatement). Securing systems against client-side vulnerabilities is harder because the complexity of the target is higher, but here is where the open source development model has a huge role to play.
Consider for a moment the power of the scientific and academic communities around the world. Why does science really work? The main reason is the principle of peer review. Research that is reviewed by knowledgeable peers is worth more than research performed in a vacuum. Peer review makes community endeavors stronger, and it is a driving factor behind Wikipedia’s meteoric success. Similarly, code that is reviewed by many developers (as only the open source model can make possible) is of higher quality than closed-source code written by a single entity. A great example of this is the Firefox web browser: by nearly every measure (not least security), it is a better alternative on Windows systems than Microsoft’s own IE browser.
There are still areas that need improvement:
- Linux distributions (you know who you are) that under most deployments install as much software as possible and start all sorts of services by default aren’t helping the state of Linux security. Of course, there is a tradeoff between making a distribution functional and making it secure, but it seems as though more emphasis should be placed on the “secure” part.
- Educating users about security, especially network and host security monitoring principles, is important, and good information is available. I highly recommend Richard Bejtlich’s book “The Tao of Network Security Monitoring”. Also, education is a primary goal of the Bastille Linux project.
- Fundamentally, achieving a high level of security requires that bugs be removed from software, and this means that strong source code review by knowledgeable developers is important. The best example I can think of for an operating system that builds this process into its core is OpenBSD.
You work on various security projects. Which one is your favorite creation, and why?
My favorite project is “fwknop” (the FireWall KNock OPerator) because I hope it is the most innovative. So far, I don’t think the security implications of Single Packet Authorization as implemented by fwknop (basically, next-generation port knocking on steroids) have been fully realized by the security community. An analogy can be drawn here between the evolution of email communications and the evolution of access control devices. Today, the effectiveness of email is being undermined by the pervasiveness of spam, so mechanisms such as Bayesian filters and the Sender Policy Framework are commonly used to cut down the rate of unwanted email. The result is that email as a communications medium is becoming more restricted in order to minimize the effects of malicious traffic. In some cases, people even reject all email except for certain whitelisted addresses. This situation is similar to how network access control devices and firewalls became important for restricting access to services from an increasingly hostile and untrustworthy network. SPA does for network services what whitelists do for email. The difference is that spam can just be deleted, whereas the compromise of a system because a service was accessible from a malicious source is much more damaging.
SPA is certainly not a silver bullet and is not suitable for many services or network deployments, but securing SSH communications is one area where SPA excels. Many people focus on password cracking attempts against the SSH daemon, and apply thresholds via log monitoring scripts to implement things like “if an IP address has N failed logins within 60 seconds, then automatically firewall off the IP”. The problem is that while password security is important, exploiting a software vulnerability rarely has anything to do with finding a weak password. The Gobbles challenge-response exploit from 2002 proved that OpenSSH could be remotely exploited, and there was no password guessing anywhere in sight. The actual vulnerability has of course long since been patched, but a glance at the SecurityFocus vulnerability tracking database shows that there have been recent security issues in some of the latest versions of OpenSSH. This is not meant to pick on OpenSSH; security is really hard, and a defense-in-depth approach is needed.
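For illustration, here is a minimal sketch of the kind of threshold defense described above, expressed directly in iptables with the “recent” match rather than a log monitoring script (it rate-limits new connections instead of counting failed logins, and the threshold values here are arbitrary):

    # Record each new SSH connection attempt per source address.
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
        -m recent --name SSH --set
    # Drop sources that open more than 3 new SSH connections in 60 seconds.
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
        -m recent --name SSH --update --seconds 60 --hitcount 4 -j DROP

As argued above, no such threshold helps against a single well-crafted exploit packet sent from a previously unseen address.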
The real problem is not password cracking; the real problem is that SSHD is accessible from arbitrary locations around the globe. Why should some random IP be able to scan for SSHD, see that it is accessible, and then be free to try an exploit (perhaps a new 0-day) against it? If you know that you only need to access SSHD from a limited set of IP addresses, then it is easy to write a firewall policy around these addresses, but what if you are traveling? This is where SPA comes in, by maintaining a default-drop firewall stance for all SSH communications. Then, by passively sniffing for specially constructed (that is, encrypted and non-replayed) packets on the wire, the default-drop firewall policy is modified to allow an SSH connection. Details can be found in my USENIX ;login: paper “Single Packet Authorization with Fwknop”. There are also two chapters in the book about port knocking and SPA.
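To make the mechanics concrete, here is a minimal sketch of the server-side stance together with an illustrative client invocation (fwknop option names vary across versions, so the client command line is an assumption rather than a canonical example):

    # Server: keep established sessions alive, but drop all new SSH
    # connections by default; fwknopd inserts a temporary ACCEPT rule
    # only after sniffing a valid (encrypted, non-replayed) SPA packet.
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP

    # Client (illustrative flags; option names vary by fwknop version):
    # authorize this source IP for tcp/22 on the server, then connect.
    fwknop -A tcp/22 -a 1.2.3.4 -D server.example.com
    ssh user@server.example.com

To a scanner that cannot produce a valid SPA packet, SSHD on such a host is indistinguishable from a closed port.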
What’s your take on projects such as IPCop and Sentry Firewall?
Providing an easy-to-use Linux firewall to the masses is important, and I think IPCop goes a long way toward accomplishing this. It looks as though development on Sentry Firewall has stopped, but the goal of the project, a bootable Linux CD that turns your system into a ready-made firewall and IDS, is a great concept. It allows anyone to try out a Linux firewall essentially for free on commodity hardware.
The knowledge barrier to deploying security technologies should be made as low as possible, and this means that ease of use is paramount. Also, not everyone is familiar with Linux as a network security technology, so projects like IPCop and Sentry Firewall help to increase the exposure of Linux in this role. Finally, I wish to add that IPCop provides a good firewall solution, and it is compatible with psad (discussed extensively in the book).
How long did it take you to write “Linux Firewalls: Attack Detection and Response with iptables, psad, and fwsnort” and what was it like? Any major difficulties?
It took me about two and a half years to write the book, which was slower than I had anticipated. Some books are harder to write than others, I suppose, and the most difficult part of this book was the simultaneous software development I needed to do in support of some of the concepts I wanted to present. So, writing the book resulted in new features being implemented in all three of psad, fwknop, and fwsnort. For example, here are a few of the features added over the course of writing:
- Support in fwsnort for Snort rules with multiple application layer content matches (illustrated in the sketch after this list).
- Support in fwknop for sending SPA packets over the Tor network.
- Support in psad for creating visualizations of iptables logs by interfacing with Gnuplot and AfterGlow.
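To illustrate the first item: fwsnort translates Snort content matches into iptables string matches, so a Snort rule with two content fields becomes, roughly, a single iptables rule carrying two “-m string” matches. This is a simplified sketch; real fwsnort output also handles connection state, offsets, and logging details:

    # Both payload patterns must be present for the rule to match,
    # mirroring a Snort rule with two content fields.
    iptables -A FORWARD -p tcp --dport 80 \
        -m string --string "GET " --algo bm \
        -m string --string "/etc/passwd" --algo bm \
        -j LOG --log-prefix "FWSNORT "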
What is the most interesting fact you’ve become aware of while researching for this book?
Intrusion detection systems and firewalls commonly offer the ability to tear down TCP connections by forging RST packets, but the specifics of how this is done vary quite a bit across different IDS and firewall implementations. The most interesting fact I stumbled across during my research concerns differences in the handling of the ACK control bit on RST packets. For example, the iptables REJECT target implements an inverse relationship between the ACK bit on the RST and the ACK bit on the packet that caused the RST to be generated. So, if a packet has the ACK control bit set and is processed by iptables and matches a rule with the REJECT target, then the resulting RST packet coming from the Linux kernel will not have the ACK bit set. In contrast, the Snort “react” detection plugin never sets the ACK control bit on an RST, regardless of whether the packet that caused the RST to be sent had it set, and both the “flexresp” and “flexresp2” detection plugins always set the ACK control bit on an RST. The REJECT target more closely emulates the behavior of a real TCP stack.
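For reference, the REJECT behavior described above comes into play with rules of the following form (a minimal example):

    # Answer matching packets with a forged TCP RST; the kernel sets
    # the ACK bit on the RST inversely to the ACK bit on the packet
    # that triggered it.
    iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset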
In your opinion, what are the most important things a Linux administrator has to do in order to keep a network secure?
Beyond the canonical tasks of deploying a restrictive firewall policy, making sure systems have the latest security patches applied, and educating users, it is important to recognize that security is a process. Applying security monitoring principles at both the host and network levels helps to catch suspicious activity early and provides time to do something about it. Deploy a good network IDS such as Snort, and run Nmap and Nessus regularly on the network to look for changes in how software is deployed and to watch for new exploit paths. Consider running iptables on every Linux system; the management burden of running iptables can be quite low considering the easily scriptable interface to iptables commands. If you have enough spare cycles, deploying SELinux can limit the damage from a successful compromise.
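As a minimal sketch of that scriptable interface, a default-drop policy that also logs what it drops (the iptables log data that psad analyzes) takes only a few commands; adjust the allowed services to the host in question:

    #!/bin/sh
    # Default-drop INPUT policy: allow loopback, established sessions,
    # and SSH; log everything else before the policy drops it.
    iptables -F INPUT
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -j LOG --log-prefix "DROP "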
One area that is often not given enough emphasis is the client-side exploit. All of the nice filtering provided by strong firewall policies means little when web connections are allowed out, and if a high-profile website is compromised and serving up malicious code, then many internal users could be affected. Given the complexity of web applications and encoding schemes, even proxies and inline IPS can be insufficient for detecting such exploits. Paying special attention to client-side vulnerabilities (particularly in web browsers) should be an integral part of a security administrator’s efforts.
What do you think about the full disclosure of vulnerabilities?
I’m a proponent of responsible full disclosure in the whitehat tradition. Security researchers should provide exploit details confidentially to vendors, giving them a chance to patch vulnerabilities and users a chance to upgrade. If a patch to fix a vulnerability is not forthcoming from a vendor, then I support the release of enough technical details about the vulnerability to allow security researchers to independently create a patch through a process of reverse engineering. Such a patch helps users achieve a higher level of security and repel exploits even if a vendor is unresponsive to fixing its own bugs.
In some cases, an extremely serious and pervasive vulnerability is discovered that affects many platforms and has many entry points for exploitation. Such vulnerabilities need to be patched as quickly as possible, and sometimes the response from researchers is faster than any possible response from a large vendor. A great example is the Windows WMF vulnerability announced in the last week of December 2005, which affected Windows operating systems from Windows 3.0 through Windows Server 2003. Before the week was out, and before Microsoft released its own fix, the security researcher Ilfak Guilfanov released a patch for the vulnerability on December 31st. This provided a huge service to the Microsoft user community.
I wish to add that the application by some entities of misguided laws (such as the DMCA) in an effort to stifle security research is unfortunate. Computer security can only be achieved (and maybe not even then) by well-tested software implementations, not through legislation. Poking holes in software is done with ease by people who care nothing for laws, and as evidence I cite the never-ending malware scourge, much of which is now well-organized and driven by profit. What we need is a vibrant research community to counter this trend. Full disclosure and discussion of software bugs is the only viable alternative.
What are your plans for the future? Any exciting new developments?
There are some exciting developments for the fwknop project; I’m collaborating with a few network security enthusiasts at Calsoft who are contributing open source code to fwknop. Hopefully, these efforts will result in several new features being implemented from the fwknop TODO list. Also, a contributor to fwknop, Sean Greven, has developed a Windows client UI (currently in beta testing) that can generate properly formatted SPA packets without requiring the fwknop client itself. I think this is an important step toward more widespread adoption of the technology.
In my professional life, I’m working with a set of engineers to extend the features offered by the Dragon IDS. Trying to achieve multi-gigabit speeds in full IPS mode is a real challenge, and interfacing with the appropriate hardware acceleration technology to offload parts of the pattern matching operations is an interesting integration problem.
What is your vision for Linux in the future?
Linux and other open source operating systems such as OpenBSD have proven that there are no limits to the effectiveness of the open source development model. People simply like to write code. Even the kernel used by Mac OS X, one of the most exciting operating systems available, is based on an open source kernel (Mach). The future of Linux is bright, I think, and I see Linux making huge strides, especially on the desktop. Hardware support in the kernel will continue to improve as hardware vendors realize that Linux users are a growing market for their products (see, for example, the release of ATI graphics card specs) and as projects like Greg Kroah-Hartman’s driver development effort (http://www.kroah.com/log/linux/free_drivers.html) gain traction. Further, Linux usability among newer users will increase as the desktop experience becomes more intuitive. Also, the compatibility layers between Linux and Microsoft formats, fueled by OpenOffice and online solutions such as the Google Documents project, will reduce the need for pure Microsoft Office applications. These advances will continue to push Linux into the hands of the masses. On balance, this will be good not only for Linux but also for the quality of software produced by for-profit vendors. Linux will increasingly be seen as a real competitor, and this will translate into higher quality proprietary software across the globe.
The arguments against using Linux are shrinking every day. My vision for Linux in the future is one where computer users question the benefits of Windows more often than they question the benefits of Linux. In the end, may the best OS win.