By Lauren Bean, Sales Lead at Vault Platform

Virtually every day we’re seeing news about incredible advances in Artificial Intelligence (AI), with a wide range of new applications emerging across different industries.

Yet, so far, we’re not seeing much adoption of AI among Ethics and Compliance professionals.

That’s despite the fact that peers in other departments are already using AI for a variety of tasks – from detecting cybersecurity anomalies to hiring talent and automating employee relations in HR.

Does that mean E&C professionals are more skeptical? And what do they really think about AI?

AI receives the thumbs up

I had the unique opportunity to learn directly from Chief Compliance Officers at the Consero Chief Ethics and Compliance Officer forum on 26 March when Vault hosted an expert panel discussion on ‘How AI Can Help Compliance Officers’. As the moderator, I had the chance to ask our panelists – Compliance Leaders from TikTok, Aura, Kayak, and Match Group – and an audience of 100 E&C professionals from the biggest global brands what they think about the advances in AI.

More excitement than the average business leader

There were a handful of skeptics, naturally. Yet, the overwhelming reaction we saw from the room was excitement. About 90% expressed enthusiasm for AI coming into the day-to-day of the compliance function. That’s compared to just 62% of business leaders who responded to a similar recent poll.

It makes sense from a practical perspective. Compliance teams are lean regardless of company size – small teams wearing many hats, and, like everyone at this moment, expected to do more with less. So they’re searching for solutions, and AI could be the answer.

Humans or AI – who’s the final decision maker?

As the discussion unfolded, one of the big questions on everyone’s mind – panelists and audience alike – was: what are the boundaries of how we use AI in decision making?

We talked a lot about Large Language Models (LLMs), such as ChatGPT, how you vet the information you put in, and analyze the information you get out. How confident can you feel about the outputs? Which decisions will you let the machine make for you?

We heard from one large business that has developed their own internal LLM. They’ve been using it as a data source to present all of their internal information to employees, avoiding the problem of putting company data into third-party channels.

The panel discussed how you can use LLMs for decision making around investigation outcomes, with some of the key questions being:

  • In terms of disciplinary actions, can you use AI to summarize an investigation and then use that data in order to make a decision?
  • At which point do you need human intervention?
  • Could you use AI to generate a decision without human input?
  • Would you want to use AI to make that final decision for you?

And the consensus on the answer? In short, no! The experts agreed they would not want to use AI to make the decision for them. They want humans to continue doing that.

AI in disciplinary proceedings

That led us on to talk about the Blueprint for an AI Bill of Rights published by The White House. Within it, there’s a provision that essentially gives employees the right to appeal any decision that AI makes about them.

Applying this to a workplace context: if a disciplinary action were decided by AI and an employee appealed, the company would have no way to prove how that decision was made. ‘The chatbot recommended it’ is not good enough to withstand an appeal from an employment law perspective.

The unclear implications around the appeals process were one of the reasons the panel agreed they would not want to use AI to make those decisions.

AI translations popular among delegates

There were a lot of questions about the benefits of AI in compliance teams’ day-to-day work. A number of speakers said they were using AI translations to remove language barriers in real time when working with colleagues in other countries, both within conferencing tools and within speak-up channels.

It was good to hear how powerful this has become, particularly given the AI-powered Dynamic Translations service we already offer – and that our panelist at Aura is leveraging.

Advice for peers

The panelists’ advice to peers boiled down to one message: ‘Give AI a try’. If you’re curious, go to ChatGPT, play with it, and you’ll begin to discover the potential value – for example, drafting a policy or writing a message to employees.

Want to learn more about AI and Vault’s applications?

New, innovative AI tools are emerging to assist Ethics and Compliance programs. And at Vault, we’re leading the way. Interested in learning more about AI and how Vault’s AI applications can help your organization? Drop me a line. I’m here on LinkedIn, and you can reach me by requesting more info about Vault!