Proactive or reactive: Should that be the question?
For a number of years, digital forensics has been defined as 'the application of computer investigation and analysis techniques to gather evidence suitable for presentation in a court of law'. While collecting this digital evidence, to be used retrospectively in subsequent litigation, is a valid activity, there is growing support for a more proactive proposition.
Organisations need all the help they can get if they're to adequately fight back against malware proliferation and malicious activity. We're about to witness a new dawn for digital forensics.
We’re all familiar with the risks our enterprises face from rogue or untrained IT administrators gaining access to the corporate servers and wreaking havoc. This can be anything from accidental and/or unwanted changes and bad IT practices to corporate espionage and malicious revenge attacks.
This has been a key driver for organisations to develop and store an audit trail of privileged activity across the network, providing clear visibility of what's taking place and who is performing it. More recently, this trail has also been critical to verify an organisation's compliance with legislation.
These activity logs, often touted to auditors as irrefutable evidence of the organisation's regulatory stance, are, to all intents and purposes, examples of digital forensics in action.
Digital forensics can be split into two practices – proactive and reactive forensics. Let’s look at the evidence:
Reactive forensics
As the name suggests, reactive forensics looks at something that has already happened and then, retrospectively, conducts a post mortem, analysing the witnessed behaviour to glean what can be learned to prevent it happening again. Often considered the more traditional approach to security, it forms the bedrock of a number of security applications, such as firewalls and anti-virus software.
Proactive forensics
Conversely, proactive forensics is the practice of looking for something in advance, based on high-level, forward-looking rules. Rather than responding to a situation, proactive forensics can be used as an early warning system, using key characteristics to identify behavioural changes in applications, detect anomalies in network traffic or spot unexpected alterations to system configurations. It requires a very high-level view of everything that's going on across the entire network. However, to be truly effective it must also be capable of issuing timely alerts when something anomalous occurs.
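To make that concrete, here's a minimal sketch of what rule-driven proactive monitoring might look like. It's written in Python purely for illustration; the event fields, rule names and allow-lists are hypothetical stand-ins, not a reference to any particular product.

from dataclasses import dataclass

# Hypothetical event record as it might arrive from an endpoint or network sensor.
@dataclass
class Event:
    source: str   # e.g. "process", "network", "config"
    name: str     # e.g. "chrome.exe", "sshd_config"
    detail: dict  # sensor-specific attributes

def unexpected_privilege_request(event):
    """Flag applications requesting admin rights that normally never need them."""
    normally_unprivileged = {"chrome.exe", "firefox.exe", "winword.exe"}  # assumed baseline
    if event.source == "process" and event.detail.get("requested_admin") \
            and event.name in normally_unprivileged:
        return f"{event.name} unexpectedly requested admin rights"
    return None

def config_drift(event):
    """Flag configuration changes made outside an approved change process."""
    if event.source == "config" and not event.detail.get("change_approved", False):
        return f"unapproved change to {event.name}"
    return None

RULES = [unexpected_privilege_request, config_drift]

def evaluate(event):
    """Run every proactive rule against an incoming event and collect any alerts."""
    return [alert for rule in RULES if (alert := rule(event))]

if __name__ == "__main__":
    sample = Event("process", "chrome.exe", {"requested_admin": True})
    for alert in evaluate(sample):
        print("ALERT:", alert)

The point isn't the code itself but the shape of it: the rules encode what you expect, and anything falling outside them is surfaced early rather than reconstructed after the fact.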
The way I see it, the two go hand in hand. You can't build good proactive monitoring systems without first knowing what to look for. However, that's only part of the picture: any monitoring system is only as strong as the rules you use to analyse the information coming back.
And therein lies the problem: they're both based on rules. Unfortunately, malicious code writers and insider attackers don't play by the rules, so it's always going to be an ongoing struggle.
Ultimately, it boils down to the organisation's ability to create and effectively use an intelligent set of rules to filter the evidence that digital forensics correlates, looking for pre-determined behaviour or system configuration changes that it is not expecting.
For example, the use of a privileged identity can be a key indicator of suspicious activity, especially in applications that would not normally require admin rights to run. Take a web browser, for instance: if it were to ask for admin rights, any early warning system should flag that something untoward may be about to occur.
From this proactive position, it should then reactively qualify the request to determine its legitimacy. It could be something benign, such as installing a trusted ActiveX control, or it could be sinister, such as a drive-by download trying to gain admin rights to take control of the system.
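A rough sketch of that 'flag first, then qualify' flow might look like the following. Again this is Python for illustration only, with a hypothetical publisher allow-list standing in for whatever signing or reputation checks a real system would rely on.

# Hypothetical allow-list of software publishers the organisation trusts.
TRUSTED_PUBLISHERS = {"Example Corp"}

def qualify_elevation(process_name, publisher, initiated_by_user):
    """Return a rough verdict for an unexpected admin-rights request."""
    if publisher in TRUSTED_PUBLISHERS and initiated_by_user:
        return f"{process_name}: likely benign, signed component installed by the user"
    if not initiated_by_user:
        return f"{process_name}: suspicious, elevation requested without user action (possible drive-by)"
    return f"{process_name}: unknown, escalate to an analyst for review"

print(qualify_elevation("browser_plugin.dll", "Example Corp", initiated_by_user=True))
print(qualify_elevation("browser_plugin.dll", "Unknown Ltd", initiated_by_user=False))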
A further complication for organisations is making timely use of the information being generated by the disparate security systems in use across the enterprise. If you don't have the ability to process and make sense of all that information, then ultimately it's just more data taking up room.
Instead, the data needs to be fed into a single repository capable of processing this constant, high-bandwidth flow of information and alerting those responsible when something anomalous occurs.
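As a toy illustration, not a production design, the idea of funnelling events from disparate systems into one repository and alerting as they arrive could be sketched like this in Python; the sensor names and event strings are invented for the example.

import queue
import threading
import time

# Shared repository that every security system feeds into.
repository = queue.Queue()

def sensor(name, events):
    """Simulate a security system pushing its events into the shared repository."""
    for event in events:
        repository.put((name, event))
        time.sleep(0.01)

def correlate(expected_events):
    """Single consumer that applies the rule set and raises alerts as events arrive."""
    for _ in range(expected_events):
        source, event = repository.get()
        if "admin" in event or "unapproved" in event:  # stand-in for real correlation rules
            print(f"ALERT from {source}: {event}")

sensors = [
    threading.Thread(target=sensor, args=("firewall", ["normal flow", "unapproved port opened"])),
    threading.Thread(target=sensor, args=("endpoint", ["chrome.exe requested admin rights"])),
]
for t in sensors:
    t.start()
correlate(expected_events=3)
for t in sensors:
    t.join()

In reality, of course, the hard part is the volume and the quality of the rules rather than the plumbing.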
For an organisation to be able to identify the one little nugget that might suggest something bad has happened, or is about to happen, it needs good rules. Otherwise it risks the clues being missed and the alert not sounding or, if the rules are too sensitive, the alert being buried amongst all the generated 'noise'.
As you can see, this balancing act is exceptionally complex. Organisations need to build, or deploy, intelligent tools capable of dealing with the volume of information. It's about understanding what to look for and using powerful tools to accurately identify something truly malicious that requires intervention.
If this expertise exists in-house, then that's fantastic. Alternatively, solutions are available that deliver the necessary intelligence.
While some might argue that prevention is better than cure, even the best antidote will need an initial injection of venom to stimulate the production of antibodies.
Digital forensics will become increasingly important as part of any security programme. Can you afford to let the clues slip through your virtual fingers?