Shaping effective AI governance is about balancing innovation with humanity

In this Help Net Security interview, Ben de Bont, CISO at ServiceNow, discusses AI governance, focusing on how to foster innovation while ensuring responsible oversight. He emphasizes the need for collaboration between technologists, policymakers, and ethicists to create ethical and effective frameworks.


How do we balance innovation in AI with the need for stringent oversight?

The best innovation happens within clear boundaries. Governance doesn’t stifle innovation; it gives it purpose and direction. It’s like building a bridge—creativity designs the structure, and oversight ensures it’s built to last. In AI, this means embedding transparency, accountability, and human oversight into every stage of development.

In terms of oversight, a human-in-the-loop approach is particularly powerful, ensuring that AI outputs are not just accurate but meaningful. When that oversight is coupled with governance frameworks that prioritize varied datasets to reduce bias, and with robust feedback mechanisms that let end users refine models over time, organizations can innovate boldly while staying grounded in accountability and compliance. The key is recognizing that responsible AI is the foundation for groundbreaking progress, not a barrier to it.
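
To make the idea concrete, a human-in-the-loop gate can be as simple as a confidence threshold that decides whether an AI output is acted on automatically or routed to a person. The sketch below is illustrative only; the Prediction type, threshold value, and review queue are hypothetical placeholders, not a description of any particular product.

```python
# Minimal human-in-the-loop sketch: route low-confidence AI outputs to a
# human reviewer instead of acting on them automatically.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # 0.0-1.0, as reported by a hypothetical model

REVIEW_THRESHOLD = 0.85  # illustrative cutoff; tune per use case

def route(prediction: Prediction, human_queue: list) -> str:
    """Auto-approve confident outputs; escalate the rest to a person."""
    if prediction.confidence >= REVIEW_THRESHOLD:
        return f"auto-approved: {prediction.label}"
    human_queue.append(prediction)  # a reviewer makes the final call
    return "escalated to human review"

queue: list[Prediction] = []
print(route(Prediction("loan_approved", 0.97), queue))  # auto-approved
print(route(Prediction("loan_denied", 0.61), queue))    # escalated
```

In practice the threshold would be tuned per use case and the queue backed by a real review workflow rather than an in-memory list.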

How do cultural and regional differences influence approaches to AI governance?

AI governance often reflects the unique priorities, values, and regulations of the regions implementing it, and organizations need to adapt to the various markets in which they operate. Privacy, innovation, and accountability may be emphasized differently depending on cultural and regulatory contexts, but the core challenge remains the same: ensuring AI systems are trustworthy, ethical, and aligned with societal needs.

Transparency is a universal cornerstone of effective governance. Practices like clear labeling of AI-generated content and detailed documentation of AI models foster trust across regions. Similarly, inclusivity—ensuring AI is trained on diverse datasets and shaped by a range of perspectives—helps systems meet the needs of global users. By combining strong governance principles with tools that can adapt to local contexts, organizations can foster trust and innovation no matter where AI is deployed.
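
As one concrete reading of "clear labeling," AI-generated content can carry machine-readable provenance metadata that travels with it. The schema below is a minimal sketch under assumed field names; it is not a published labeling standard.

```python
# Illustrative sketch of labeling AI-generated content: attach provenance
# metadata so downstream consumers can see that, and how, content was
# machine-generated. Field names are assumptions, not a standard.
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str, model_version: str) -> str:
    """Wrap generated text in a machine-readable provenance record."""
    record = {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "model": model_name,          # hypothetical model identifier
            "model_version": model_version,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record, indent=2)

print(label_ai_content("Quarterly summary draft...", "summarizer", "1.4.2"))
```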

How can interdisciplinary collaboration among technologists, policymakers, and ethicists enhance AI governance?

AI governance is as much about people as it is about technology. Each of these groups brings unique expertise: technologists focus on the “how,” policymakers on the “should,” and ethicists on the “why.” When they come together, they create a critical feedback loop that strengthens accountability and builds trust.

Take bias, for example. Technologists can design algorithms and use bias detection tools to proactively identify and address inequities, while ethicists ensure these systems align with societal values and policymakers create frameworks that promote fairness and accountability. Similarly, policymakers advocating for clear labeling of AI-generated content can collaborate with technologists to make transparency accessible and user-friendly. The future of AI depends on this cross-pollination of ideas.
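
To illustrate the kind of check a bias detection tool performs, the toy example below computes a demographic parity gap: the difference in positive-outcome rates between groups. Real audits use richer metrics and real data; the records, group names, and tolerance here are assumptions for illustration only.

```python
# Toy bias check: demographic parity difference, i.e. the gap in
# positive-outcome rates between groups.
from collections import defaultdict

def positive_rates(records):
    """records: iterable of (group, outcome), where outcome 1 = positive."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
if gap > 0.2:  # illustrative tolerance
    print("flag for review: outcome rates diverge across groups")
```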

How should governments and companies address “black-box” AI models in terms of accountability?

Black-box AI is a trust issue, plain and simple. If users can’t understand how AI reaches its decisions, they won’t trust it—and rightfully so. The solution lies in transparency: explainable AI models, clear documentation, and consistent human oversight. Companies can accomplish this in a few ways:

  • Explainability standards: Encourage or require AI developers to create models that offer clear explanations for their decisions, ensuring stakeholders understand the rationale behind AI outcomes.
  • Clear accountability: Assign roles and responsibilities for monitoring AI systems’ performance and compliance.
  • Adopt AI governance frameworks: Use governance practices that align with ethical standards and regulations, keeping fairness, safety, and reliability at the forefront.
  • Audit and monitor AI models: Regular reviews and audits of AI models can help detect potential risks, biases, or unintended outcomes (a minimal monitoring sketch follows this list).
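
As a minimal sketch of the audit-and-monitor point above, a recurring job can compare live model scores against a baseline window and raise a flag when the distribution drifts. The data, threshold, and alerting here are illustrative assumptions, not a prescribed method.

```python
# Sketch of a recurring audit check: compare live model scores against a
# baseline window and flag drift for human investigation.
from statistics import mean, stdev

def drift_alert(baseline: list[float], live: list[float],
                z_limit: float = 3.0) -> bool:
    """Flag when the live mean drifts beyond z_limit standard errors."""
    base_mu, base_sigma = mean(baseline), stdev(baseline)
    stderr = base_sigma / (len(live) ** 0.5)
    z = abs(mean(live) - base_mu) / stderr
    return z > z_limit

baseline_scores = [0.62, 0.58, 0.65, 0.60, 0.61, 0.63, 0.59, 0.64]
live_scores = [0.81, 0.78, 0.85, 0.80, 0.83]
if drift_alert(baseline_scores, live_scores):
    print("audit flag: score distribution has drifted; investigate model")
```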

By combining these efforts, AI can become a force for innovation that drives meaningful progress while upholding public trust and organizational integrity.

How can professionals in technology and policy contribute to shaping effective AI governance?

AI governance is a team sport. Technologists need to prioritize building systems with human-centricity and transparency baked in from the start. This means adopting practices like using varied or targeted datasets, ensuring human oversight for critical decisions, and creating clear labeling for AI-generated content to help users make informed choices.

Policy professionals, on the other hand, can drive forward standards that require these elements to be consistent and enforceable. Collaboration is where the magic happens—working together to align on shared principles can set the bar for what good governance looks like. Ultimately, shaping AI governance is about balancing innovation with humanity, and that requires everyone—technologists, policymakers, and ethicists—pulling in the same direction.
