Application Security Matters: Deploying Enterprise Software Securely
One of the most interesting aspects of being an information security consultant is the exposure to an enormous variety of industries and organizations. From health care to government, nonprofits to small private companies, the consultant must be versed in dozens of technologies, regulations, and business practices to be effective. Today, any security consultant worth their salt recognizes at least one common weakness across the board: vendor-developed applications, often industry specific, are the Pandora's box of the information security program.
Commercial off-the-shelf (COTS) software companies have developed applications for every conceivable nook and cranny of the business, digitizing and automating processes, storing the lifeblood of the organization in a database, and making it accessible through a GUI front end. Increasingly, these applications are web-based, migrating IT back toward the thin-client environment of the 1980s. Because these applications are the window to the very information that keeps organizations alive, it is essential that they be protected with every tool in the infosec arsenal. Unfortunately, the tired "features now, security later" adage still rules.
Consider a new application deployment at ACME Corporation, a medium-sized, industry-generic company computerizing a key business process for reporting and efficiency purposes. ACME might take the following approach to the project:
1) ACME management initiates the project and issues a request for proposals (RFP) to select a vendor.
2) After choosing the vendor and signing a contract, ACME assembles the internal implementation resources to work with the vendor. The team consists of a system administrator, a database administrator, a network administrator, a project manager, and a business representative.
3) The project team engages the vendor to plan first steps and schedule an implementation date. A checklist of installation steps may be provided, and most often a vendor-supplied service professional will be on-site for the installation.
4) During an installation phase, test and production servers are configured, the application is installed and configured for ACME’s unique business needs, and data is scanned from paper into the application’s database.
5) The application is tested and accepted by the business representative.
6) End users are trained, and the application is promoted to production.
7) The maintenance and support phase begins.
This familiar IT project life cycle leaves a security guru feeling understandably queasy. How could this seemingly straightforward installation increase risk to the organization? Here are just a few of the vulnerabilities that were overlooked.
- Because the application was developed about a year ago, the vendor certified it against the operating system patches available at that time. Too busy developing the next version to certify new patches, the vendor explicitly disallows more recent patches in the installation checklist.
- After the database software installation, the well-known database administrator account is left with its default password: blank.
- Several application administrator accounts with simple passwords were created during the testing phase and are not removed prior to the production deployment.
- The application has auditing capabilities, but the required module was not purchased during the contracting phase and subsequently was not installed.
- The vendor supports only Telnet (an unencrypted protocol) for remote support and was given a generic account with a shared password for ongoing maintenance.
- The web front end is deployed without the use of SSL encryption.
This outcome is extremely common in application deployments at organizations such as ACME. Most vendors simply do not have the time, resources, or expertise to develop applications securely, and customers may not see security as a key requirement of the project. What can be done to increase security when deploying new applications or migrating from old software?
First, security should be part of the process from the beginning. If all customers wrote security requirements into the RFP, for example, vendors would start to take security more seriously. At the very least, management would be aware of some of the risks inherent in selecting a particular vendor simply by reviewing responses to the requirements. Measures could then be taken to mitigate those risks during the implementation, or the risks could be accepted and documented.
In step 2 of the project cycle, an important resource was left off the team: a security team representative. This individual watches for, and ideally mitigates, exactly the sort of weaknesses that were discovered after the fact. The security team should have a template (discussed below) for securing applications, but its members must also think outside the box to resolve non-standard problems proactively.
It’s a rare organization that has documented security findings for each application in the environment. Adding a security sign-off in addition to the more common business acceptance procedure will show auditors that the company takes security seriously.
The security representative certainly has her work cut out for her. Convincing the vendor, management, and the rest of the project team that the security changes are implementation requirements is no simple task, and it takes creative technical thinking and attention to detail to resolve many of the technical issues.
To make the job more tangible, the security team should have a checklist of requirements for new application installations. Major version upgrades of existing software should follow a similar, or identical, procedure. The checklist can be broken into categories such as authentication, logging and auditing, encryption, networking, and so on. It may also make sense to include items such as backups and monitoring that are not solely security related.
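As a rough sketch of what such a template might look like (the category names, fields, and status values here are illustrative, not mandated by any standard), the checklist can be represented as structured data so results are tracked consistently per deployment:

```python
# Hypothetical checklist template; categories mirror those discussed below.
# Each item is marked "pass", "fail", or "n/a" during certification.
CHECKLIST = {
    "identification_and_authentication": [
        {"req": "Standards-based strong encryption for authentication", "status": None},
        {"req": "Password complexity enforced", "status": None},
    ],
    "user_account_management": [
        {"req": "Unique user IDs", "status": None},
        {"req": "Centralized (LDAP) authentication", "status": None},
    ],
    "audit": [
        {"req": "Security events logged with user, time, origin, outcome", "status": None},
    ],
}

def failures(checklist: dict) -> list[str]:
    """Return requirements marked as failures, for escalation or risk acceptance."""
    return [item["req"]
            for items in checklist.values()
            for item in items
            if item["status"] == "fail"]
```

Keeping the template machine-readable makes it easy to report failures for escalation or risk acceptance at sign-off time.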
To simplify the process, a standard, universally accepted checklist can be used as the basis for the certification process. One such guide is the DISA Application Security Checklist, available at iase.disa.mil/stigs. It provides an excellent, if overly wordy, guide for application security requirements. Although the document is aimed primarily at U.S. Department of Defense entities, it is easily adapted to any organization.
Using the DISA document as a template, we can quickly formulate our own set of application security requirements. For convenience, we’ll split them into logical sections, just as is done in the checklist.
Identification and authentication
This covers how applications process and authenticate user identities. Several lengthy requirements are listed in the DISA checklist, but they boil down to the following:
- The application must use valid, standards-based strong encryption for authentication. For most organizations, this means that the application uses a certificate signed by an approved certificate authority. The certificate must not be expired, revoked, or otherwise invalid.
- An adequate client authentication process must be supported. This might take shape in a variety of ways. An obvious example would be a simple login form, but a less common case could be a web server becoming a client when connecting to a database server on the back end. Authentication processes may include a password, a certificate or key, and/or a biometric. If passwords are used, the application must support a minimum set of complexity requirements (for example, at least 9 characters of mixed alphanumeric and special characters and a set expiration). An application that allows access with only a username, does not support password complexity, or does not properly enforce controls that it claims to support would fail this requirement.
- If applicable, the client should authenticate the server. For example, a web browser connecting to an SSL-enabled web server would validate the SSL certificate. In this case, it should validate that the certificate was signed by a trusted certificate authority, is not expired, and matches the URL of the page.
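As an illustration of this client-side check, the following minimal Python sketch (the hostname is hypothetical) uses the standard library's ssl module; its default context enforces exactly these validations (trusted CA chain, validity dates, hostname match):

```python
import socket
import ssl

def check_server_certificate(host: str, port: int = 443) -> None:
    # create_default_context() enables CERT_REQUIRED and hostname checking,
    # so the handshake fails on an untrusted, expired, or mismatched certificate.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("Protocol:", tls.version())
            print("Valid until:", cert["notAfter"])

check_server_certificate("app.example.com")  # hypothetical server
```

An invalid certificate raises ssl.SSLCertVerificationError before any application data is exchanged.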
User account management
The DISA guide contains only one requirement in this section, but there are potentially many more concerns. For example, how does the application manage user accounts? Are administrative accounts carefully protected? A proper application certification thoroughly checks the user account protection mechanisms.
Requirements:
- User IDs should be unique. Duplicate user IDs can lead to overlooked privileges or weak passwords.
- The application must support authentication against a centralized authentication system. Most organizations have a centralized user account directory, such as Active Directory, OpenLDAP, or Red Hat Directory Server. To minimize the number of accounts and passwords that users must remember, the application should support at least LDAP authentication (see the sketch after this list).
- Shared accounts must be prohibited. This is a central requirement of some regulations, such as HIPAA. Accounts should be tied to an individual – particularly administrative accounts.
- Access requests should follow a standard request procedure. This should be tracked and reported against on a regular basis.
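To make the centralized-authentication requirement concrete, here is a minimal sketch using the third-party Python ldap3 library. The server address and directory layout are assumptions that would need to match the organization's directory:

```python
from ldap3 import Server, Connection, ALL
from ldap3.core.exceptions import LDAPBindError

def ldap_authenticate(username: str, password: str) -> bool:
    # A successful simple bind over LDAPS proves the credentials are valid.
    if not password:
        return False  # guard against unauthenticated (anonymous) binds
    server = Server("ldaps://ldap.example.com", get_info=ALL)   # hypothetical host
    user_dn = f"uid={username},ou=people,dc=example,dc=com"     # adjust to your directory tree
    try:
        with Connection(server, user=user_dn, password=password, auto_bind=True):
            return True
    except LDAPBindError:
        return False
```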
Data protection
Requirements in this area are common in regulations and standards such as the Payment Card Industry Data Security Standard. Permissions and cryptography should be used to protect data when stored on disk and in transit.
- Sensitive data should be protected by file permissions at rest. On disk, files should only be accessible by administrators and by the processes that need access (for example, the operating system, database service, or application processes). If backups or duplicates of the data exist, they should also be examined.
- Authentication credentials should be encrypted at rest; for stored passwords, a salted one-way hash is the typical implementation (see the sketch after this list). Furthermore, non-privileged accounts should not have access to the keys that encrypt the data.
- All sensitive data in transit should be encrypted with FIPS 140-2 validated cryptography. This can be accomplished in a variety of ways, for example with stunnel, SSL-enabled HTTP (HTTPS), or LDAPS.
- All cryptographic modules should be FIPS 140-2 validated. Validation status can be checked at csrc.nist.gov/groups/STM. Be especially wary of applications that use proprietary, in-house developed encryption.
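For stored passwords in particular, the usual way to satisfy the credentials-at-rest requirement is a salted, iterated one-way hash rather than reversible encryption. This sketch uses PBKDF2-HMAC-SHA256 from Python's standard library; whether the underlying crypto module is FIPS 140-2 validated depends on the platform's build:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware

def hash_credential(password: str) -> tuple[bytes, bytes]:
    # Store the salt and digest; the password itself is never written to disk.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_credential(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison
```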
Audit
Auditing is certainly one of the least exciting yet most critical application security features. The application should log events and transactions in a meaningful manner.
- The application should adequately log security-related events. Such events might include startup/shutdown, user authentication, authorization changes, data transfer, and configuration changes. Furthermore, the application should log specific information about each event, such as the user ID, success or failure, the date and time, and the origin of the request.
- The application should include a method to notify administrators when the logs are near full.
- The audit logs must not be vulnerable to unauthorized deletion, modification, or disclosure. Filesystem permissions should be reviewed, but the application interface might also be vulnerable. Integrity is perhaps the most important element of audit logs, particularly if they are to be used in court.
- Centralized logging should be supported, whether by native syslog support, by a scheduled database export and copy, or by sending logs to the operating system's log facility (such as the Windows Event Log) and forwarding them with a commercial tool. The benefits of centralized logging are widely known, and they apply to application logs as much as to operating system logs.
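A minimal sketch of syslog-based centralized logging with Python's standard logging module follows; the collector hostname is hypothetical, and every log call must supply the extra fields named in the format string:

```python
import logging
import logging.handlers

# Send security events to a central collector in addition to any local logs.
# "loghost.example.com" is a hypothetical syslog server (UDP port 514 by default).
logger = logging.getLogger("acme.app.security")
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
handler.setFormatter(logging.Formatter(
    "%(name)s: user=%(user)s outcome=%(outcome)s event=%(message)s"))
logger.addHandler(handler)

# Each event should capture who, what, and outcome;
# syslog itself stamps the date/time and source host.
logger.info("user authentication", extra={"user": "jsmith", "outcome": "success"})
```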
Application operation
Certain aspects of the application’s operation have an impact on the overall security. This section looks at a variety of operational concerns.
- The application must support role-based access control. Administrative accounts should be able to perform system maintenance, manage user accounts, and review audit logs. Regular user accounts should have significant restrictions.
- Actions should be authorized prior to execution. For example, an attempt by a non-privileged user to delete a user account should be denied (see the sketch after this list).
- The application should run with only the necessary privileges for operation. A Windows application, for example, should not run with domain administrator privileges. Similarly, a Linux application should not run as the root account.
- Session limits should exist. User sessions should time out after a period of inactivity, and a cap on the number of simultaneous sessions per user may also be appropriate.
- Users should not be able to circumvent the user interface to access resources in the supporting infrastructure. A user may be limited by privileges in the application, for example, but could use SSH or NFS to access data directly.
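Here is a toy sketch of role-based authorization, enforced before the action executes (all names are illustrative):

```python
from functools import wraps

class AuthorizationError(Exception):
    """Raised when a user attempts an action outside their role."""

def requires_role(role: str):
    # Deny the action before it runs unless the caller holds the required role.
    def decorator(func):
        @wraps(func)
        def wrapper(user: dict, *args, **kwargs):
            if role not in user.get("roles", set()):
                raise AuthorizationError(f"{user.get('id')} lacks role {role!r}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("admin")
def delete_user_account(user: dict, target: str) -> None:
    print(f"{user['id']} deleted account {target}")

delete_user_account({"id": "alice", "roles": {"admin"}}, "bob")    # permitted
# delete_user_account({"id": "mallory", "roles": {"user"}}, "bob")  # raises AuthorizationError
```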
Enclave impact
The most important consideration here is the logical separation of servers at the network level. Application servers should be properly restricted by firewall access control lists (ACLs), and externally accessible servers should be located in a DMZ. The DISA checklist also recommends several methods for determining which ports are in use, but most of these don't make sense in the context of vendor-supplied applications. The vendor should be able and willing to document which ports are needed for proper operation; if not, the ports can be determined easily with packet captures and network scans, as in the sketch below.
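As a sketch, a quick TCP connect probe can confirm what the vendor reports (the hostname is hypothetical; run it only against systems you are authorized to scan):

```python
import socket

def probe_ports(host: str, ports: range) -> list[int]:
    """Quick TCP connect scan to confirm which ports an application actually listens on."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)  # short timeout keeps the scan reasonably fast
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

print(probe_ports("app-server.example.com", range(1, 1025)))  # hypothetical host
```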
Application configuration and authorization
A variety of client-facing requirements for applications are discussed in this section.
- A warning banner should be displayed at user logon. This can contain text about a user’s consent to monitoring, no reasonable expectation of privacy, and other standard organizational user agreement text.
- Authentication credentials should not be stored on a client after the session terminates. Cookies are the most common place where credentials persist between sessions, so they deserve particular scrutiny (see the sketch after this list).
- Users should be able to explicitly terminate a session via a logout button or link. It should be easy to find and obvious to most users.
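To illustrate the credential-storage concern, here is a minimal sketch of a session cookie built with Python's standard http.cookies module; omitting Expires/Max-Age makes it non-persistent, and the Secure and HttpOnly flags limit exposure (the token value is illustrative):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "opaque-random-token"  # a random session ID, never the credential itself
cookie["session_id"]["secure"] = True         # only sent over SSL/TLS connections
cookie["session_id"]["httponly"] = True       # not readable by client-side scripts
cookie["session_id"]["path"] = "/"
# No Expires/Max-Age attribute: the browser discards the cookie when the session ends.
print(cookie.output())  # e.g. Set-Cookie: session_id=opaque-random-token; HttpOnly; Path=/; Secure
```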
Summary
This laundry list of security requirements is a lot to consider for every application deployment, but vigilance in this area can drastically improve an organization's security posture. The requirements can be put into a standardized template, and at the end of the process each requirement should be marked pass, fail, or not applicable. Anything marked as a failure should be documented and either escalated or accepted as a risk.