How to prevent insider threats in your organization
Time and again, organizations of all sizes and in all industries fall victim to insider threats. Sometimes the culprit is a disgruntled, malicious insider – an employee, former employee, contractor or business associate – who wants to hurt the company or make money; more often, it is a careless or indifferent employee who accidentally puts sensitive company information at risk.
“Insider threats aren’t always malicious; there are instances where they are unintentional, and therefore training has a very important role to play in reducing the risk of these unintentional threats,” notes Greg Day, VP & CTO, EMEA, FireEye.
“The key to getting the training right is making it relevant. Focus on behaviours and aspects that you wish your employees to be aware of – typically companies will include aspects like recognising social engineering in phishing emails, and awareness of what information they share about themselves and the company online.”
This type of training has the additional benefit of acting as a deterrent to the malicious insider, Day points out, by showing that the business has a strong security focus and outlining the repercussions of intentional acts.
When it comes to preventing malicious insiders from hurting the company, it’s important to understand their psychology.
“Insiders are not impulsive. They can move along a continuum from idea to action and therefore demonstrate a discernible pattern of behavior that can be proactively detected,” Dr. Michael Gelles, a Director with Deloitte Consulting LLP’s Federal practice, points out.
“Through the use of analytics, anomaly detection through employee monitoring can proactively identify potential risks. Identifying behaviors that are potential risk indicators such as performance, physical access, compliance, sites visited, size of downloads, printing large quantities of data or emailing large files outside the organization – when correlated using technology and analytics – can identify activity that warrants further inquiry in order to determine if an insider may be moving towards action,” he adds.
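As a rough illustration of the kind of correlation Dr. Gelles describes, the following Python sketch compares a few hypothetical indicators (download volume, pages printed, data emailed externally) against a user’s own historical baseline and combines the deviations into a single score. The indicators, weights and threshold are invented for the example and would need tuning in any real environment.

```python
# Illustrative sketch only: correlate several hypothetical risk indicators
# by comparing today's activity against the user's historical baseline
# (z-scores) and combining them into one weighted score.
from statistics import mean, stdev

def zscore(value, history):
    """How far today's value sits from the user's historical norm."""
    if len(history) < 2 or stdev(history) == 0:
        return 0.0
    return (value - mean(history)) / stdev(history)

def risk_score(today, baseline, weights):
    """Weighted sum of per-indicator z-scores; higher means more anomalous."""
    return sum(weights[k] * zscore(today[k], baseline[k]) for k in today)

# Hypothetical 30-day baselines and today's observed activity for one user.
baseline = {
    "mb_downloaded": [120, 95, 130, 110, 105] * 6,
    "pages_printed": [10, 5, 8, 12, 7] * 6,
    "mb_emailed_external": [2, 3, 1, 4, 2] * 6,
}
today = {"mb_downloaded": 900, "pages_printed": 250, "mb_emailed_external": 60}
weights = {"mb_downloaded": 0.4, "pages_printed": 0.3, "mb_emailed_external": 0.3}

score = risk_score(today, baseline, weights)
if score > 3.0:  # arbitrary threshold, for illustration only
    print(f"Flag for further inquiry (score={score:.1f})")
```

The point is not the particular numbers but the pattern: individually unremarkable events become meaningful once they are measured against the user’s own norms and viewed together.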
But if a person has legitimate access to a certain piece of information, how can any technology prevent the person from leaking the data?
“The first aspect to recognise is that leaking information doesn’t cause business impact, it’s how it’s used once it has been leaked. As such, being able to audit behaviour both in real time and post leak can often allow the recovery of information before it is used,” notes Day.
“Typically, if a user is looking to steal information, they are often detected by an increase in data being accessed; both DLP and network monitoring tools can identify such spikes away from the norms of the user’s behaviour. Depending on the business’s perceived value of the information, DLP tools can also be used to control who accesses information and how. The most critical may be confined to limited use on internal-only systems.”
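The access-control side of this can be pictured as a simple policy check. The sketch below assumes made-up classification labels and destination scopes, and blocks transfers of the most critical material to anything other than internal systems; real DLP products implement far richer policies than this.

```python
# Minimal sketch, not a real DLP engine: block transfers of classified
# files to destinations outside the scope allowed for their label.
# Labels, scopes and the example files are illustrative assumptions.
ALLOWED_DESTINATIONS = {
    "public":       {"internal", "partner", "external"},
    "confidential": {"internal", "partner"},
    "restricted":   {"internal"},   # most critical: internal-only systems
}

def is_transfer_allowed(classification: str, destination: str) -> bool:
    """Return True only if the destination is permitted for this label."""
    return destination in ALLOWED_DESTINATIONS.get(classification, set())

transfers = [
    ("q3-roadmap.docx", "restricted", "external"),
    ("press-release.pdf", "public", "external"),
    ("supplier-list.xlsx", "confidential", "partner"),
]

for name, label, dest in transfers:
    verdict = "ALLOW" if is_transfer_allowed(label, dest) else "BLOCK"
    print(f"{verdict}: {name} ({label}) -> {dest}")
```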
Dr. Gelles says that technology is just one half of the equation when looking to prevent, detect and respond to insider threats.
“Today, organizations must develop a holistic approach to mitigating the insider threat that looks at the whole person and specifically at ‘what a person does’ in the virtual space as well as ‘what a person does’ in the non-virtual space,” he explains.
“An insider threat program is not just about the use of technology to detect anomalous behavior, but also to examine the way an organization does business to include: policies; the employee lifecycle from vetting and hiring to managing and separation procedures; and communications and training – all are critical elements that are beyond just the technology focus of an insider threat program.”
Charles Foley, Chairman and CEO of Watchful Software, says that there are two things CISOs should keep in mind when trying to address the problem of insider threats within their organization:
- “Go back to common sense,” and
- “Carbon’s out, Ether and Silicon are in.”
“For a hundred years, common sense ruled the flow of information. If it was sensitive, there were very real controls applied to it,” he notes.
In the ’50s, ’60s and ’70s, government agencies, large banks and companies kept firm control over information by classifying it, stamping or marking it, and allowing only certain trusted people to access it.
“Then came PCs, email, file servers, and smartphones – not to mention ‘the cloud’ – and everything fell apart,” says Foley.
“Once it became difficult to ‘stamp’ the words CONFIDENTIAL across a document (or more importantly, it became too easy to create one without it), and you couldn’t control information by locking a file room or filing cabinet, people entered the realm of the ‘trust paradigm’. Companies began to ‘trust’ their employees to do the right thing. And this has led to 90% of companies reporting that they’ve been breached in the last 12 months, over half from insiders either malicious or accidental.”
“Going back to common sense” means using today’s technology (which got us into this trouble in the first place) to dynamically identify sensitive and confidential information, automatically mark and tag it, and encrypt it so that only employees with the right level of clearance can open it, regardless of whether they get their hands on the file. “This is how we apply the hundred-year-old term of ‘data classification’ with today’s technologies,” he explains.
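A bare-bones illustration of that idea, using placeholder patterns rather than a real classification policy, might look like the sketch below: scan a document for sensitive markers, assign a label, and hand that label to whatever tagging, rights-management or encryption step the organization uses (not shown).

```python
# Rough sketch of automated classification with simplistic, made-up rules.
# A downstream tagging/encryption step would enforce the assigned label.
import re

RULES = [
    ("restricted",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),          # SSN-like pattern
    ("confidential", re.compile(r"\b(salary|merger|acquisition)\b", re.I)),
    ("internal",     re.compile(r"\binternal use only\b", re.I)),
]

def classify(text: str) -> str:
    """Return the first (most sensitive) matching label, else 'public'."""
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return "public"

doc = "Draft terms for the merger; internal use only."
print(classify(doc))  # -> confidential (first matching rule wins)
```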
“Carbon is out, Ether and Silicon are in” refers to the fact that, as much as you’d like your 10,000 employees to know and enforce your security policy, it’s not going to happen.
“Honestly, it’s not their data, and it’s not their job,” Foley points out. “Read your own Employee Handbook; it’s a good bet that it clearly states that all data is the property of the COMPANY. And it’s likely not in the job description of the salesman, or clerk, or R&D associate to classify/mark and tag/secure data – it’s the company’s job.”
“Consider this: the average 5,000-person company generates half a million emails and over 25,000 files and documents daily, of which about 20%, or over 100,000 items, could cause significant loss or damage to the company. Do you really want to trust that to people who have other jobs?” he asks, and advises companies to rely on “Silicon and Ether”, i.e. technology and software.
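A quick back-of-the-envelope check of those figures, taking the quoted volumes at face value:

```python
# Sanity check of Foley's estimate, using the volumes given in the quote:
# ~500,000 emails plus ~25,000 documents per day, of which roughly 20%
# are assumed to be potentially damaging.
emails_per_day = 500_000
documents_per_day = 25_000
sensitive_share = 0.20

sensitive_items = (emails_per_day + documents_per_day) * sensitive_share
print(f"{sensitive_items:,.0f} potentially sensitive items per day")  # 105,000
```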
Malicious insiders working in a critical infrastructure environment are a particular worry because of the devastating damage they can cause.
“In looking at insider threat we must look at activity driven behavior that could result in the exploitation of information, damage to material, sabotage to facilities or targeted violence, not just information in any circumstance,” notes Dr. Gelles. “Insider programs should look to mitigate risk surrounding the loss of information and data as well as sabotage and workplace violence.”
Insider threats in government and law enforcement are also exceptionally scary scenarios.
“Not only can they leak/disclose massive amounts of harmful information, but they also have a much higher likelihood of access to non-informational, operational systems. Think critical infrastructure, nuclear energy plants, traffic control, waste processing systems, power grids, etc,” says Foley.
“For this reason, government and law enforcement are two of the verticals that are not only embracing the ‘Go back to common sense’ and ‘Carbon is out…’ mantras, they are actively pursuing a third, which is: It’s not WHAT you know, or even what you HAVE, but WHO you ARE.”
Consequently, they are increasingly turning to biometrics to ensure that the person requesting access is who they claim to be.
“Today’s state of the art is eBiometrics, or types of biometrics that don’t require hardware – more Ether, less Silicon,” Foley explains. “Today’s systems know who you are because of how you interact with the system, your interface patterns, your geolocation, or a combination of these things. It could be facial recognition married with behavioral metrics, or geolocation cross-referenced with language patterns. Only in this manner can you, with any degree of scale, ensure that the people using critical infrastructure systems are who they SAY they are and who they are SUPPOSED to be – and that’s how we’re going to be safe in an increasingly dangerous world.”
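As a loose illustration of how such signals might be combined, the sketch below averages a few hypothetical identity signals (typing pattern, geolocation, facial match) into a single confidence score and falls back to step-up authentication when the score is low. The signal names, weights and threshold are assumptions for the example only; real systems rely on trained models and live telemetry.

```python
# Illustrative sketch: fuse several hypothetical identity signals into one
# confidence score. All values and weights below are made up.
def identity_confidence(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-signal scores, each in the range 0..1."""
    total_weight = sum(weights.values())
    return sum(signals[k] * weights[k] for k in signals) / total_weight

signals = {
    "typing_pattern_match": 0.92,   # behavioral biometric
    "geolocation_match":    1.00,   # logging in from a usual location
    "facial_recognition":   0.88,
}
weights = {"typing_pattern_match": 0.4, "geolocation_match": 0.2, "facial_recognition": 0.4}

confidence = identity_confidence(signals, weights)
print("step-up authentication required" if confidence < 0.8 else "access granted")
```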
Things are obviously changing, and organizations are aware that they have to address insider threats. According to the results of a survey published earlier this year, 56% of IT professionals in the US say their organization already has an insider threat program in place, and 78% of the rest, or 34% of the total, plan to put one in place this year.
Most of them also understand that they have to combine technology, policies, and organization-wide security training and awareness to mitigate insider threats.