Tailoring responsible AI: Defining ethical guidelines for industry-specific use

In this Help Net Security interview, Chris Peake, CISO & SVP at Smartsheet, explains how responsible AI should be defined by each organization to guide its AI development and usage.

Peake emphasizes that implementing responsible AI requires balancing ethical considerations, industry regulations, and proactive risk assessment to ensure that AI is used transparently.


How should businesses and governments implement responsible AI to ensure ethical alignment, particularly in industries heavily reliant on AI?

Responsible AI can mean different things depending on your industry and how you’re leveraging AI. Therefore, step one is to define what responsible AI means for your business. Consider the risks your business faces, the regulatory and industry standards you must comply with, and whether you are a provider of AI, a consumer of AI, or both. For example, a healthcare organization’s definition of responsible AI will likely prioritize data privacy and HIPAA compliance.

From there, use your definition of responsible AI to determine a set of principles that will guide your AI development and use. Then, determine how you will put your AI principles into practice. For example, if one of your principles is to ensure your AI systems are understandable, your team could build your AI tools to show their work and make it easy for customers to understand exactly how an AI tool arrived at an answer or insight.
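As a concrete illustration of that “show their work” principle, here is a minimal sketch in Python of an AI feature that returns its supporting sources and a plain-language note alongside the answer, so a customer can see how an insight was produced. The `ExplainedAnswer` structure and `answer_with_sources` helper are illustrative assumptions, not a description of any specific product.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExplainedAnswer:
    """An AI response packaged with the evidence behind it."""
    answer: str
    sources: List[str] = field(default_factory=list)  # inputs the answer was derived from
    reasoning: str = ""                                # plain-language note on how it was derived


def answer_with_sources(question: str, retrieved_docs: List[str]) -> ExplainedAnswer:
    """Toy stand-in for an AI feature that shows its work.

    Instead of returning only a bare answer, it also returns the documents it
    consulted and a short note on how they were used, so a reviewer can audit
    the result. A real system would call a model here; this sketch just
    summarizes its inputs to stay self-contained.
    """
    answer = f"Based on {len(retrieved_docs)} source(s), here is a summary for: {question}"
    reasoning = "The answer was generated only from the listed sources; none were excluded."
    return ExplainedAnswer(answer=answer, sources=retrieved_docs, reasoning=reasoning)


if __name__ == "__main__":
    result = answer_with_sources(
        "Which projects are at risk this quarter?",
        ["status_report_q2.csv", "risk_register.xlsx"],
    )
    print(result.answer)
    print("Sources:", ", ".join(result.sources))
    print("How:", result.reasoning)
```

The key design choice is that the answer never travels alone: whatever object the AI feature returns also carries the evidence and rationale, which makes the later transparency and accountability steps much easier.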

Throughout this process, it’s a best practice to be transparent with your employees, customers, and partners. This includes:

  • Publicly sharing your AI principles
  • Acknowledging the challenges you expect to encounter as you develop your AI systems
  • Training your employees on how to comply with the principles and use AI responsibly
  • Publicly sharing exactly how your company’s AI systems work

Once you’ve taken these steps, you can start aligning AI with products and services to drive better results responsibly.

Considering the increasing integration of AI in cybersecurity, what are the most effective strategies for enhancing digital security against AI-driven threats?

This is a moving target right now because AI is developing at such a rapid pace. For example, when generative AI tools like ChatGPT launched, phishing emails suddenly became much more sophisticated. We started seeing tailored phishing emails with perfect grammar that didn’t have some of the telltale signs we train our employees to look for.

The evolving nature of AI-driven threats makes them challenging to defend against. It’s also important to be aware that while AI can be used to benefit organizations, others will seek to abuse it. Security, IT, and governance teams, in particular, must anticipate how AI abuse can impact their organizations.

One effective strategy for defending against AI-driven threats is ongoing upskilling and training so employees can better recognize and report new security threats. When my team noticed phishing emails were becoming more sophisticated, we notified our employees and evolved our phishing simulation tests accordingly. We also added a phish reporting button to the sidebar of our employees’ inboxes so they can report potential phishing emails with just two clicks.

What role does AI play in crisis management, and how can organizations better prepare for AI-related failures or breaches?

As AI becomes increasingly embedded in business operations, organizations must ask themselves how to prepare for and prevent AI-related failures, such as AI-powered data breaches. AI tools are enabling hackers to develop highly effective social engineering attacks. Right now, having a strong foundation in place to protect customer data is a good place to start. Ensuring third-party AI model providers don’t use your customers’ data also adds protection and control.

There are also opportunities for AI to help strengthen crisis management. The first relates to security crises, such as outages and failures, where AI can identify the root of an issue faster. AI can quickly sift through a ton of data to find the “needle in the haystack” that points to the source of the attack or the service that failed. It can also surface relevant data for you much faster using conversational prompts. In the future, an analyst might be able to ask an AI chatbot embedded in their security framework questions about suspicious activity, such as, “What can you tell me about where this traffic originated?” Or, “What kind of host was this on?”
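To make that “needle in the haystack” idea concrete, here is a small Python sketch of the kind of log-sifting a conversational security assistant would do under the hood: filter raw connection records for one flagged address and summarize where the traffic came from and what it touched. The log fields and the `summarize_traffic_origin` helper are hypothetical, and the summarization is deterministic rather than model-driven, to keep the example self-contained.

```python
from collections import Counter
from typing import Dict, List


def summarize_traffic_origin(logs: List[Dict], flagged_ip: str) -> str:
    """Answer a "where did this traffic originate?" style question by
    sifting raw connection logs for a single flagged source address."""
    matches = [entry for entry in logs if entry["src_ip"] == flagged_ip]
    if not matches:
        return f"No connections from {flagged_ip} found in this window."

    regions = Counter(entry["geo"] for entry in matches)       # where the traffic came from
    hosts = Counter(entry["dst_host"] for entry in matches)    # which internal hosts it touched
    return (
        f"{len(matches)} connections from {flagged_ip}; "
        f"top origin regions: {regions.most_common(3)}; "
        f"most-contacted hosts: {hosts.most_common(3)}"
    )


if __name__ == "__main__":
    sample_logs = [
        {"src_ip": "203.0.113.7", "geo": "NL", "dst_host": "vpn-gw-01"},
        {"src_ip": "203.0.113.7", "geo": "NL", "dst_host": "mail-01"},
        {"src_ip": "198.51.100.2", "geo": "US", "dst_host": "web-01"},
    ]
    print(summarize_traffic_origin(sample_logs, "203.0.113.7"))
```

In practice, a conversational layer would translate the analyst’s question into this kind of filter-and-summarize query and return the result in natural language.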

There is also an opportunity for AI to help manage public crises. For example, during a natural disaster, AI could help response teams more easily coordinate with each other and manage requests for help.

With AI’s decision-making processes often opaque, what steps can organizations take to increase transparency and ensure accountability when AI systems go wrong?

One of the most important steps is publicly documenting how your AI systems work, including how you use public models and protect data. Encourage your team to get as specific as possible and go the extra mile to explain how everything works in detail. Since generative AI is still relatively new and evolving, I like to take a scientific approach to this process: documenting the facts we know today, what we expect in the future, and the outcomes we actually observe.
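One lightweight way to keep that documentation structured is a model-card-style record that captures the same three buckets described above: known facts, expectations, and observed outcomes. The sketch below is a generic illustration of that structure; the field names are assumptions, not a prescribed schema.

```python
# A minimal, model-card-style record for one AI feature.
# The fields mirror the "facts / expectations / outcomes" framing above.
ai_system_record = {
    "feature": "summarization assistant",
    "models_used": ["third-party foundation model (API)"],
    "data_handling": "customer data is not used to train external models",
    "known_facts": [
        "answers are generated only from documents the user can already access",
    ],
    "expectations": [
        "occasional inaccuracies; outputs should be reviewed before publishing",
    ],
    "observed_outcomes": [
        # appended over time as real-world behavior is measured
    ],
}

if __name__ == "__main__":
    for key, value in ai_system_record.items():
        print(f"{key}: {value}")
```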

If you’re developing AI systems, it’s also important that the tools you’re building show their work so your customers understand the recommendations and insights AI provides. As AI becomes more integrated into our everyday work, explaining its reasoning will be crucial for maintaining customer trust.

How do you anticipate AI governance evolving, and what are the key challenges and opportunities you foresee?

AI has huge potential to strengthen data security and add an extra layer of protection. Nobody can manually monitor all the data flowing through their business; intelligent systems must take on that burden. AI can grow to “understand” what’s normal and flag anything that isn’t. This has the potential to greatly increase response rates and standardize processes.
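A common way to approximate that “learn what’s normal, flag what isn’t” behavior is unsupervised anomaly detection. The sketch below uses scikit-learn’s IsolationForest on synthetic activity data; it illustrates the general approach rather than any specific product’s implementation, and the feature choices (hourly event counts and bytes transferred) are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "normal" activity: hourly event counts and megabytes transferred.
rng = np.random.default_rng(42)
normal_activity = rng.normal(loc=[200.0, 50.0], scale=[20.0, 5.0], size=(500, 2))

# Fit on normal traffic so the model learns what "usual" looks like.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)

# New observations: two routine hours and one unusual spike.
new_points = np.array([
    [205.0, 52.0],   # typical
    [190.0, 48.0],   # typical
    [900.0, 400.0],  # large outlier worth flagging for review
])

# predict() returns 1 for inliers and -1 for anomalies.
for point, label in zip(new_points, model.predict(new_points)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{point} -> {status}")
```

The flagged events would then feed a human review or automated response workflow, which is where the standardization benefit comes from.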

One of the biggest governance challenges is the pace at which AI is being adopted and implemented. Organizations are moving quickly to adopt AI, and some are skipping the important steps of informing their customers how they’re integrating AI and allowing them to opt out. This puts an extra burden on security teams to ensure their vendors are not using AI without their knowledge.
