OpenAI ChatGPT application on a smartphone, illustrating AI governance

Your Company (And Your Teams) Are Using AI Whether You Know It or Not; Here Are Five Steps to Lay a Foundation for Better Policies

The most pressing challenge most businesses face today is how to deploy AI safely. Businesses want to move fast, but they also need to feel confident and secure about AI use. Using AI tools can expose businesses to risks around confidentiality, data privacy, consumer safety, brand trustworthiness, intellectual property, and more. Leaders are asking themselves questions like, “How will I know if I have a data breach on my hands?” and “How will I know if an AI chatbot is sending inappropriate information to an end user?” At their core, these concerns are all about the data going into AI and the data coming out. In fact, a recent Workday survey found that nearly half (48%) of respondents cited security and privacy concerns as the main barriers to AI implementation.

To mitigate business risks, all of the “talk” (i.e., the data that goes in and out) between large language models (LLMs) and your business should flow through a single layer of governance. Without it, chaos can ensue: sensitive data leaks, shadow IT, inappropriate or harmful brand outputs, and a host of other risks.

Setting up the right AI governance is a crucial foundation in these early days of AI. Companies that get governance right will be able to move faster and more confidently, likely outperforming companies that lack the safeguards to mobilize AI effectively. As AI adoption deepens, this performance gap will likely widen.

Here are five ways leaders can lay the groundwork for best-in-class AI governance, help move their business forward quickly, and ensure that great governance isn’t a distant memory of a quaint, pre-AI world.

Align tone from the top

Step one is aligning the foundations of governance to the tone of your company’s AI strategy—even if it’s solely determining a “tone from the top.”

Is the strategy to encourage rapid AI deployment, to focus AI efforts solely on one part of the business, or to exercise pragmatic caution? We see most companies breaking their AI efforts down into two buckets. The first is “internal usage”: deploying AI tools to enhance internal processes – for example, marketing’s use of generative AI tools like Jasper, or AI-based internal business tooling for specific business actions. The second is “product/feature development”: building a product on top of AI for use by end users or customers.

To set the right tone, I’ve heard of some companies launching AI listening tours, meeting with internal department heads to collate the ways they’re thinking about AI use in the near future. Listening tours are an effective way of understanding potential applications and gauging general sentiment about AI usage across the business.

Establish an AI Committee & nail down its scope

A common step I’m seeing companies take is to set up an AI Committee to handle the fast-moving nature of all things AI, including governance. The cadence ranges from weekly to bi-weekly to ad hoc depending on the corporate tone on AI usage. Committee members often include executives from Security, Legal/Privacy, Product, and Engineering.

The scope of the committee may change depending on members and the overall corporate strategy. But common responsibilities include:

  • Setting, launching, and revising AI business policies.
  • Evaluating and approving all usage of AI within the company – whether from internal business teams or within user-facing product experiences.
  • Continually monitoring risks and opportunities in AI usage.

Establish and launch an AI policy

This one might feel like a no-brainer, and yet many companies are still behind: if your company plans to use AI in any capacity – from the marketing team to the customer service team to engineers writing code or building AI-powered features – define a usage policy. For ease, we are seeing companies define “Approved Usage” and “Not Approved Usage” categories.
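
To make that split concrete, here is a minimal sketch of how an “Approved Usage” / “Not Approved Usage” list might be encoded in machine-readable form so tooling can check against it. The categories, tools, and helper function below are illustrative assumptions, not a recommended policy.

```python
# A hypothetical, minimal encoding of an AI usage policy.
# The use-case names and tools are placeholders for illustration only.
AI_USAGE_POLICY = {
    "approved": {
        "marketing-copy-drafts": ["Jasper"],          # internal-usage bucket
        "code-assist-nonsensitive-repos": ["assumed internal tool"],
    },
    "not_approved": {
        "customer-pii-processing": "No customer PII may be sent to external LLMs",
        "legal-contract-review": "Requires AI Committee sign-off first",
    },
}

def is_approved(use_case: str) -> bool:
    """Quick check a team (or a tooling gate) can run before adopting an AI tool."""
    return use_case in AI_USAGE_POLICY["approved"]

print(is_approved("marketing-copy-drafts"))    # True
print(is_approved("customer-pii-processing"))  # False
```

Even a simple structure like this keeps the policy versionable and lets engineering gate new AI integrations against it, rather than leaving the document to gather dust on a wiki.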

Find a framework

Ideally, any framework you use will align with the definitions and recommendations in the IAPP AI certification body of knowledge, the proposed EU AI Act, the NIST AI Risk Management Framework, what we know about the CCPA’s automated decision-making requirements, and more. It should also be flexible enough to let your organization collect additional information, evaluate risks against its internal risk-scoring criteria, and evolve with new requirements.

From there, you can move on to establishing other key risk frameworks, including security reviews of proposed vendors and AI governance and observability across your technology applications.
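
As a rough illustration of the “internal risk scoring criteria” mentioned above, here is a hypothetical sketch of a rubric an AI Committee might encode. The dimensions, weights, and threshold are assumptions chosen for the example, not drawn from any of the frameworks named.

```python
# A hypothetical risk-scoring rubric for proposed AI use cases.
# Dimensions, weights, and the review threshold are illustrative assumptions.
WEIGHTS = {
    "handles_personal_data": 3,   # data privacy exposure
    "customer_facing": 2,         # brand and consumer-safety exposure
    "external_vendor": 2,         # third-party / shadow-IT exposure
    "automated_decisions": 3,     # e.g., automated decision-making concerns
}
REVIEW_THRESHOLD = 5  # scores at or above this go to the AI Committee

def risk_score(use_case: dict) -> int:
    """Sum the weights of every risk dimension the use case triggers."""
    return sum(w for key, w in WEIGHTS.items() if use_case.get(key))

proposal = {"handles_personal_data": True, "external_vendor": True}
score = risk_score(proposal)
print(score, "-> committee review" if score >= REVIEW_THRESHOLD else "-> standard review")
```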

Build technical moats

Last, but certainly not least, it’s critical to establish a technical way of ensuring AI governance. Much like privacy or security, AI governance is ultimately a data problem. It’s only truly solvable in partnership with engineers and with a technical layer to govern the “talk” between your company and LLMs.

From an architectural point of view, an AI middleware layer is elegant, effective, and easy to scale with. It’s also easiest to adopt early in the age of AI: as more and more products are built with direct connections to AI models, switching to middleware becomes an increasingly complex migration project.
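
To make the middleware idea concrete, here is a minimal Python sketch of a single governed choke point for LLM traffic. The redaction patterns, blocked output terms, and the send_to_llm callable are all hypothetical placeholders for whatever model client and policies your stack actually uses.

```python
# A minimal sketch of an AI middleware ("governance") layer.
# send_to_llm is a stand-in for your real model client; patterns and
# blocked terms are illustrative assumptions only.
import re
from typing import Callable

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
BLOCKED_OUTPUT_TERMS = {"internal use only", "confidential"}  # placeholder list

def redact(text: str) -> str:
    """Mask common sensitive patterns before the prompt leaves your network."""
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    return SSN.sub("[REDACTED_SSN]", text)

def governed_call(prompt: str, send_to_llm: Callable[[str], str]) -> str:
    """Single choke point for LLM traffic: sanitize input, screen output, log both."""
    safe_prompt = redact(prompt)
    response = send_to_llm(safe_prompt)
    if any(term in response.lower() for term in BLOCKED_OUTPUT_TERMS):
        response = "[Response withheld: flagged by output policy]"
    print(f"audit: prompt={safe_prompt!r} response={response!r}")  # swap for real logging
    return response

if __name__ == "__main__":
    echo_model = lambda p: f"Echo: {p}"  # stubbed model client for the example
    print(governed_call("Contact jane.doe@example.com about the Q3 numbers", echo_model))
```

The design point is less about these specific checks and more about the fact that every prompt and response passes through one auditable place, which is exactly the “single layer of governance” described earlier.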

AI has the potential to radically change how we work, but regulations, rights, and brand safety remain critical foundations of a healthy business marketplace. Studying AI applications within businesses makes it clear that the biggest issue is what flows into and out of AI. So it’s worth reiterating: the foundational steps above are important, but they don’t solve the root issue of governing the technology itself. Policies and frameworks are necessary but not sufficient – governance has to happen at the code layer too.