Let’s be realistic about our expectations of AI
Pop culture contains no shortage of intelligent robots. When AI became viable and widely available in real life, people brought a number of enthusiastic but unrealistic expectations to the table. Unfortunately, Amazon’s Alexa isn’t as smart as HAL 9000, and a Roomba can’t clean your home like the Jetsons’ metallic maid, Rosie.
Perception of AI
Media narratives often shape public perceptions. This is especially true for technological topics because the general public doesn’t have an in-depth understanding of the science and technology behind them. People rely on stories, both fictional and nonfictional, to inform their perceptions.
A study by the Royal Society outlines how narratives — or the way things are portrayed and perceived — have affected public discourse around scientific areas such as nuclear power, genetic modification, and climate change. We can use these lessons from history to inform how we shape the narratives around artificial intelligence.
Media misrepresentations of AI may seem harmless when we think about animated robots on The Jetsons, but portrayals in both entertainment and news reporting can have significant consequences for AI research and development. Many nontechnical observers today think of AI as a sci-fi technology with human-level performance, but AI is still only as smart as the data that scientists feed it.
For example, running is an action that can be clearly defined. Scientists can train machines to detect that someone is running by feeding them datasets that clearly represent the motion. The same can’t be said for something like suspicious behavior: whether someone is acting suspiciously can’t be clearly defined even by a human, so it would be impossible for scientists to train a machine to detect it.
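To make that distinction concrete, here’s a minimal sketch of the kind of supervised training the running example describes. It uses scikit-learn on synthetic data; the motion features (average speed, stride cadence, vertical oscillation) and their values are illustrative assumptions, not a real surveillance dataset. The point it shows is that a model can only learn a label that humans can define and annotate consistently:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical motion features extracted from video tracks:
# [average_speed_m_per_s, stride_cadence_hz, vertical_oscillation_cm].
# "Running" has a crisp physical definition, so humans can label it reliably.
rng = np.random.default_rng(42)
walking = rng.normal([1.4, 1.8, 4.0], [0.3, 0.2, 1.0], size=(200, 3))
running = rng.normal([3.5, 2.8, 9.0], [0.5, 0.3, 1.5], size=(200, 3))

X = np.vstack([walking, running])
y = np.array([0] * 200 + [1] * 200)  # 0 = walking, 1 = running

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")

# There is no analogous recipe for "suspicious": there is no feature set
# whose labels human annotators would even agree on.
```

Swap “running” for “suspicious” and the very first step, collecting consistently labeled examples, falls apart.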
Instead of treating AI’s inability to complete such tasks as a failure of the technology, consumers need to recognize it as simply the reality of AI. The public needs to shift its expectations away from sensationalized accounts and toward the true capabilities of machines.
As enterprises and individuals flock to AI, they must realize that effective, and often impressive, technologies aren’t always capable in the ways they expect.
Leveraging AI in security for realistic results
In security, specifically, providers have a lot to gain from the advent and implementation of AI. Automating the work of monitoring surveillance cameras and identifying potential threats promises to make the industry far more efficient and cost-effective while also improving security. Contrary to what some may believe, however, security can’t be fully automated. The human element will always be essential.
Consider our hypothetical suspicious person. How can today’s (or even tomorrow’s) AI tell the difference between an intruder with real malicious intent and a technician authorized to be on-site? Or between someone walking confidently and someone walking suspiciously?
These determinations depend on reading subjective context out of superficial data, a skill that is, so far, uniquely human. It will take years before AI achieves that level of understanding, not because today’s systems are underpowered, but because weighing thousands of contextual variables in real time demands enormous computing power.
That’s important to keep in mind when considering emotion recognition algorithms developed by tech titans. Each of them might offer impressive capabilities that can strategically supplement the work security teams already do — but none of them can replace the human members of those teams.
Worse than overestimating what AI can do is seeing it as a full replacement for human labor. This idea feeds the perception of a binary humans-versus-robots scenario. But that perspective misses the fundamental difference between the two: AI offers high levels of precision and specialization, while humans bring common sense and contextual knowledge. Seen that way, human and machine must complement one another to work effectively.
Security companies hoping to leverage AI for real impact need to acknowledge the realistic capabilities of their technology. Current AI can detect specific actions and items, but no AI in development comes close to making the judgment calls that security decisions require.
If an AI-enabled camera sees someone loitering outside, for instance, a human will still need to evaluate the situation to determine whether that person is a threat. AI can say that something looks like a weapon or that someone is loitering, but it can’t understand the context around a situation. Maybe that weapon is only a realistic toy, or perhaps the “loiterer” is actually a contractor who’s on-site for the day.
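As a sketch of what that human-in-the-loop division of labor can look like in software, here’s a minimal, hypothetical routing function: the model’s only job is to flag and queue detections, while the decision about whether something is an actual threat stays with a human operator. The Detection class, the threshold value, and the labels are all assumptions made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "weapon" or "loitering"
    confidence: float  # model score in [0, 1]
    camera_id: str

# Hypothetical tuning knob, not a vendor default.
REVIEW_THRESHOLD = 0.6

def route(detection: Detection, review_queue: list) -> None:
    """Flag, never decide: the model escalates, a human evaluates."""
    if detection.confidence >= REVIEW_THRESHOLD:
        # Escalate for immediate human review; only the operator can supply
        # the context (a realistic toy? a contractor on-site for the day?).
        review_queue.append(detection)
    # Below the threshold, the detection is simply logged; the system never
    # dispatches a response on its own.

queue = []
route(Detection("loitering", 0.82, "cam-07"), queue)
route(Detection("weapon", 0.41, "cam-02"), queue)
print([d.label for d in queue])  # ['loitering'] -- awaiting a human decision
```

The design choice to note is that even the highest-confidence detection only ever reaches a queue; a person, not the model, closes the loop.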
These are simple scenarios, but AI isn’t yet capable of resolving them; doing so requires the contextual knowledge only humans possess. For the technology to be effective, both vendors and consumers must think realistically about AI’s capabilities. It can be a helpful security tool, but every piece of tech still needs highly trained humans to evaluate potential threats.