Man vs. machine: Striking the perfect balance in threat intelligence
In this Help Net Security interview, Aaron Roberts, Director at Perspective Intelligence, discusses how automation is reshaping threat intelligence. He explains that while AI tools can process massive data sets, the nuanced judgment of experienced analysts remains critical.
Roberts also offers insights on best practices for integrating automated systems, ensuring explainability, and addressing ethical challenges in cybersecurity.
Many organizations are increasing their reliance on AI-driven security tools. In your experience, where does automation provide the most value, and where is human expertise still essential?
I think the biggest advantage we can leverage through automation is a combination of data collection and initial data processing. In threat intelligence, being able to ingest a large dataset and get some context on its content, along with any potential indications of key findings, can be a huge time saver. Take February 2025, when the BlackBasta ransomware group had some of their internal chats leaked: within 24 hours, you could access a custom GPT and interrogate that dataset. Previously we would have had to rely on translations and work through it manually, but today you could feasibly build your understanding of a ransomware group's capability, methodology and victimology in minutes by leveraging technology in this way.
However, we can’t consider that a silver bullet. As an intelligence practitioner, I would still need to verify and confirm those findings. I don’t think we’re anywhere near a point where we could consider AI reliable enough to provide the capabilities of a dedicated intelligence analyst. But I do think it’s increasingly becoming an amazing tool in the fight against cybercrime and can be a force multiplier for analyst teams to identify potential leads quickly.
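As a rough illustration of that kind of first-pass triage, the sketch below chunks a leaked chat dump and asks a model for leads an analyst can then verify. It assumes the OpenAI Python SDK, and the file name, model and prompts are placeholders rather than a recommendation.

```python
# A minimal sketch of LLM-assisted triage of a leaked chat dump.
# The file path, model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_chunk(chunk: str) -> str:
    """Ask the model for a first-pass summary of one slice of the leak."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model
        messages=[
            {"role": "system",
             "content": "You are assisting a threat intelligence analyst. "
                        "Summarise capability, methodology and victimology, "
                        "and quote the original lines that support each finding."},
            {"role": "user", "content": chunk},
        ],
    )
    return response.choices[0].message.content

with open("blackbasta_leak.txt", encoding="utf-8", errors="replace") as f:
    text = f.read()

# Naive fixed-size chunking keeps each request within the context window.
chunks = [text[i:i + 12000] for i in range(0, len(text), 12000)]
leads = [summarise_chunk(c) for c in chunks[:5]]  # triage a sample first

for lead in leads:
    print(lead, "\n---")
```

The output here is only a set of candidate leads; each one still needs an analyst to confirm it against the source material.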
AI-driven security solutions can respond to threats autonomously, but human judgment is often necessary for critical decisions. What are some best practices for integrating automated response systems with human decision-making?
I believe the best approach to implementing AI-driven responses is similar to how we handled automated responses before these tools existed. Having worked in those environments, it’s unlikely a defence team would add an automated response without verifying its impact or whether it’s working. The same goes for AI-led intervention. Take your time, implement it within a sandbox or non-production environment, and see how it acts and responds to what it perceives as threats. When we consider specific scenarios and their likely impact on the business, we can be pragmatic and introduce automated responses gradually. Maybe you start with lower-priority alerts and keep an eye on how those are handled before you increase your trust and hand over some of that control to the AI agent.
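One hedged way to picture that graduated trust is a simple routing rule that only lets the automation act on an allowlisted set of severities; the alert structure and thresholds below are purely illustrative.

```python
# A minimal sketch of graduated trust for automated response.
# The Alert shape, severity labels and actions are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    severity: str   # "low", "medium", "high", "critical"
    source: str

# Only severities in this set are eligible for autonomous action today;
# widen it gradually as confidence in the AI-driven response grows.
AUTO_REMEDIATE = {"low"}

def route(alert: Alert) -> str:
    if alert.severity in AUTO_REMEDIATE:
        # e.g. expire a session or quarantine a single file, then log it
        return f"auto-remediated {alert.id} (logged for later human review)"
    # Everything else still goes to an analyst before any action is taken.
    return f"queued {alert.id} for human decision"

print(route(Alert("A-101", "low", "EDR")))
print(route(Alert("A-102", "high", "EDR")))
```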
As with the last example, I think having human oversight of the AI’s actions or recommendations is paramount. We know there are opportunities for it to misunderstand or misinterpret something, so I think adopting a considered and structured approach is vital for doing this well. The power of AI is in cutting through the noise and surfacing the things that are likely most relevant, but you still need the confidence and verification of a human to really understand the full context and potential impact of a recommendation or suggestion.
How critical is the concept of “explainable AI” in cybersecurity, and how can organizations ensure that AI-generated security insights are understandable to human analysts?
This is a great question and something I think is currently missing from platforms like ChatGPT. You can see the thought process and output, but often I find you can’t really see why something was changed. Being able to follow that process flow and see the decision-making is key to ensuring the right decisions are being made regarding cybersecurity.
With the growing adoption of AI agents, I think this is something that will improve. As you train a model to focus on its role, you can ensure that it’s following checks and balances, and also reviewing itself against those checks and balances. I think the potential of these smaller, lightweight agents could be a real game-changer: small, focused tasks that can assess whether the initial output makes sense and review it before a human ever sees it. That said, while it’s an exciting development in the space, getting these agents to a reliable level is probably going to take a bit of time. Maybe with GPT-5, Grok 3 and the other incoming models we’ll see significant improvements, but time will tell.
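A minimal sketch of that reviewer idea, assuming the OpenAI Python SDK; the prompts and model are illustrative, and in practice this step would sit inside whatever agent framework is already in use.

```python
# A minimal sketch of a second-pass "reviewer" agent.
# Prompts and model name are illustrative assumptions, not a recommendation.
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = (
    "You are a review agent. Check the draft analysis below against the "
    "source excerpt. Flag any claim not supported by the excerpt and explain "
    "why, so a human analyst can see the reasoning behind each finding."
)

def review(source_excerpt: str, draft_analysis: str) -> str:
    """Return a reviewer's assessment of the first-pass analysis."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user",
             "content": f"SOURCE:\n{source_excerpt}\n\nDRAFT:\n{draft_analysis}"},
        ],
    )
    return response.choices[0].message.content
```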
AI models can sometimes reflect biases present in training data. How can human analysts help ensure that automated cybersecurity tools don’t reinforce biases or generate misleading security insights?
This will come down to training. As an intelligence practitioner, one of the key things you must be aware of is your unconscious biases, because we all have them. Being able to understand that and implement practices that challenge your assumptions, analysis and hypotheses is key to producing the best intelligence product. I think it’s a fascinating problem, particularly as it’s not necessarily something a SOC analyst or a vulnerability manager would consider, because it’s not really part of their job to think that way, right?
Fortunately, when working with AI on data, we can apply things like system prompts, be explicit about what we want to see in the output, and ask the model to demonstrate where and why findings were identified and what their possible impact is. Alongside that, I think the question also demonstrates why we as humans can’t forgo things like training or maintaining our skills. The risk of AI making mistakes is probably relatively low, particularly on a trained, specific dataset, but it’s still not going to be zero. You need a human who can understand and interpret those findings and who is capable of asking the right questions.
I see it particularly with local LLMs, where they will hallucinate URLs or statements, and while I’m usually testing a model for a specific reason without really training or prompt engineering it, I still see it creating things that are simply not correct. Most of the time this is entirely harmless in that context, but as part of an incident response to a ransomware attack? It could be catastrophic. Ensuring that we as human analysts can challenge, question and correctly interpret the findings presented is, in my opinion, the best way to prevent biases from impacting the results.
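As a simple illustration of catching that kind of hallucination, a small check like the one below can flag URLs in a model’s output that never appear in the underlying data; the regex and sample strings are purely illustrative, and it is no substitute for analyst review.

```python
# A minimal sketch of sanity-checking model output against the source data.
# The sample source and output strings are made-up placeholders.
import re

URL_RE = re.compile(r"https?://\S+")

def unverified_urls(model_output: str, source_text: str) -> list[str]:
    """Return URLs the model cited that never appear in the source data."""
    cited = set(URL_RE.findall(model_output))
    return sorted(u for u in cited if u not in source_text)

source = "victim contact channel listed at https://example-leak.onion/chat"
output = ("The group uses https://example-leak.onion/chat "
          "and https://made-up.example/payments")

for url in unverified_urls(output, source):
    print(f"Flag for analyst review, not present in source: {url}")
```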
Automated security tools can take autonomous actions, such as blocking access or mitigating threats. What ethical concerns should organizations consider when implementing these technologies?
Like all things in security, we always need to be mindful of the ethics behind what we’re doing and why. There are grey areas in this space without question, and it’s important that we apply rigorous standards and procedures to our operations. If we outsource our blocking and mitigations entirely to an AI model owned by a giant technology company, do we risk the ethics and the moral compass we attach to our work? What if one provider declared war on a second provider, and suddenly the AI blocks all access to that company’s infrastructure? It sounds like science fiction, but I’m not convinced it’s not at least a tiny bit plausible.
It’s also important that security continues to be a business enabler. There are times we interact with websites in countries that may have questionable points of view or human rights records. Does the AI block those countries because the training data indicates it shouldn’t support or provide access? Some organisations already take domain blocking to an extreme level and require processes and approvals to access a website, which is archaic and ridiculous in my opinion. Can AI help in that space? Almost certainly. But we must ensure that the guardrails for AI intervention are tightly controlled and rigorous.
Real-time analysis and blocking of a website because it looks like a phishing site would be an incredible asset. But if it’s not a phishing site, and it’s actually a news site the AI believes to be harmful because the owner of the company has an opposing worldview to the organisation? Then we have an issue.
I think we are living in an extraordinary time, where technology has the potential to exponentially increase our understanding of the universe and potentially improve our lives. But it’s very early days in that journey, and I’m not convinced that rushing to add AI into the security stack, just because it’s a constant talking point at the moment, is necessarily the right move. We need to be considered and responsible, and we need to ensure that the tools are working for us, improving our workflows and ultimately making our businesses more secure.