In 2017, the digital will get physical when machines start to lie
In a memorable scene from a 2012 episode of the series Homeland, the Vice President is murdered by hackers who tamper with his pacemaker. Although the plot was reportedly inspired by precautions taken to protect a real vice president, at the time it still seemed like the stuff of fiction.
Then came 2015 and 2016, when cybercrime, cyberattacks, and cyberterrorism went fully mainstream. Digital-to-digital attacks – in which attackers use digital means of varying sophistication to access sensitive information, steal digital funds, as in the recent attack on the Russian Central Bank, or ransom digital Personally Identifiable Information (PII) – are now simply a fact of life.
Yet today, radically more aspects of our physical lives are controlled by connected systems and devices. From IoT sensors at critical infrastructure facilities like dams and power plants, to medical devices in hospitals and in our bodies themselves, to the cars we drive, the planes we fly in, and our home appliances – our lives are, in a very real way, in the hands of connected technology.
It is this seam between the digital and the physical that cyber attackers will increasingly exploit in 2017 and beyond. And, as with all vulnerabilities, the weakest links in the chain are often the most innocuous. Because in 2017, digital threats will get physical when machines start to lie.
2017: When machines lie
The key to our connected existence is accurate data. Accurate altitude and position data allow instrument landing systems to land aircraft in fog. Accurate turbine data enables power generation facilities to tweak performance and maximize output. Accurate vital-signs data enables surgical monitors to keep patients alive during complex procedures. The brilliance of these and a million other computer systems is their ability to process data and reach decisions far faster and more accurately than we humans ever could.
Yet they are all entirely data-dependent.
What happens when decisions, whether made by humans or computers, are based on incorrect data? What happens when the sensor data that drives digital control systems and life-or-death decision-making is maliciously manipulated? This is when the machines on which we rely begin, in a very real way, to lie.
Data forgery is on the rise, and in 2017 we expect to see and feel its effects more dramatically. For example:
Data-dependent critical infrastructure
Industrial control systems (ICS) have been running the show at industrial and critical infrastructure facilities for over 30 years. Though not originally designed with security in mind, ICS have been steadily hardened and isolated over the years, and their security is now considered on par with that of other networked systems.
But what about the data on which ICS systems rely?
Decision-making at large-scale industrial and critical infrastructure facilities is based on data fed to the ICS system from thousands of sensors. These sensors range from legacy devices to brand-new IoT monitors – and all remain notoriously vulnerable to cyberattack.
What happens, for example, when the SCADA systems controlling a hydroelectric dam, with thousands of people living downstream, believe the reservoir level is X when it is actually 10X – and the dam was built to contain only 5X?
The dark side of medical technology
Much has been written about the vulnerability of healthcare systems and devices. In 2016, we saw a number of high-profile breaches at major healthcare providers. In 2015, according to the U.S. Department of Health and Human Services, the sensitive medical data of over 110 million people – almost half of the U.S. adult population – was compromised.
Yet data forgery in the healthcare field represents a new type of threat, and one that will likely intensify in 2017. Data from clinical sensors, medical records, or other protected health information (PHI) is vulnerable not just to theft, but also to malicious forgery. When critical medical systems and staff are misled by inaccurate data, lives are at stake.
What happens, for example, when a clinical database of blood glucose and insulin intake records for a population of diabetes patients is altered, even by just a fraction of a percent? This is data patients use hourly to adjust dosage and sugar intake – what happens if it can no longer be trusted?
IoT: Everywhere and vulnerable
According to Gartner, by the end of 2016, 6.4 billion “connected things” will be in use worldwide – almost one device for every human being on the planet. By 2020, Gartner predicts, IoT devices will far outnumber us at over 20 billion.
The data generated by consumer IoT devices drives our daily lives and impacts our health. Data from industrial and infrastructure IoT devices, as mentioned earlier, can directly impact our safety. Yet IoT, for numerous reasons, is an inherently insecure paradigm, leaving data at risk of malicious manipulation and forgery.
As more and more IoT devices permeate our daily lives in 2017, our heavy reliance on sensor data will have an ever-greater impact on our physical well-being. What happens, for example, when crucial data driving the decision-making of an autonomous Tesla is forged?
Wanted: A polygraph for process data
“Trust but verify,” Ronald Reagan said regarding U.S.-Soviet relations in the 1980s. If we are to trust that the computer systems effectively running our lives are making decisions based on accurate data, we need to be able to actually verify that this data has not been manipulated.
Thankfully, even as the threat of data forgery grows dramatically in 2017, recent breakthroughs can mitigate the risks. New technology enables intelligent detection of artificial manipulations of process data for mission-critical systems and infrastructures. These systems act like a polygraph for process data – detecting and alerting when our machines are lying to us. And in 2017, with data forgery going mainstream, machines that lie may indeed become the scary new norm.
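To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of what such a data polygraph might do at its simplest: cross-check redundant sensors and reject readings that violate physical rate-of-change limits. The sensor names, units, and thresholds are illustrative assumptions, not a description of any particular product or vendor technology.

```python
# Hypothetical sketch of a simple "polygraph" for process data.
# It cross-checks redundant sensor readings and physically plausible
# rates of change, flagging values that look manipulated or forged.
# All names, units, and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    timestamp: float        # seconds since start of observation
    primary: float          # e.g., reservoir level from primary sensor (meters)
    redundant: float        # same quantity from an independent sensor (meters)

MAX_DISAGREEMENT = 0.5      # meters: independent sensors should roughly agree
MAX_RATE_OF_CHANGE = 0.05   # meters per second: physical limit for this process

def audit(readings: List[Reading]) -> List[str]:
    """Return human-readable alerts for readings that fail plausibility checks."""
    alerts = []
    for prev, curr in zip(readings, readings[1:]):
        # Check 1: two independent sensors measuring the same quantity should agree.
        if abs(curr.primary - curr.redundant) > MAX_DISAGREEMENT:
            alerts.append(
                f"t={curr.timestamp:.0f}s: primary ({curr.primary:.2f} m) and "
                f"redundant ({curr.redundant:.2f} m) sensors disagree"
            )
        # Check 2: the value cannot change faster than the physics allows.
        dt = curr.timestamp - prev.timestamp
        if dt > 0 and abs(curr.primary - prev.primary) / dt > MAX_RATE_OF_CHANGE:
            alerts.append(
                f"t={curr.timestamp:.0f}s: level jumped from {prev.primary:.2f} m "
                f"to {curr.primary:.2f} m in {dt:.0f}s, which is physically implausible"
            )
    return alerts

if __name__ == "__main__":
    # A forged final reading: the primary sensor suddenly reports a much lower level.
    data = [
        Reading(0, 12.00, 12.05),
        Reading(60, 12.02, 12.04),
        Reading(120, 12.03, 12.06),
        Reading(180, 6.00, 12.05),   # manipulated value
    ]
    for alert in audit(data):
        print("ALERT:", alert)
```

Real-world systems go far beyond these two checks, but even this toy example illustrates the principle: a lie is easiest to catch when the data can be tested against independent sources and against the physics of the process it claims to describe.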