
Businesses can’t escape the AI revolution—so here’s how to build a culture of safe and responsible use


In November 2023, the estates of two deceased policyholders sued US health insurer UnitedHealthcare for deploying what they allege is a flawed artificial intelligence (AI) system to systematically deny patient claims.

The issue, they claim, wasn’t just how the AI was designed. It was that the company also allegedly limited the ability of staff to override the system’s decisions, even when they believed the system was wrong.

They allege the company went so far as to punish staff who failed to act in accordance with the model’s predictions.

Whatever the eventual outcome of the case, which remains before the US courts, the claims in the suit highlight a critical challenge facing organizations.

While artificial intelligence offers tremendous opportunities, its safe and responsible use depends on having the right people, skills and culture to govern it properly.

Getting on the front foot

AI is pervading businesses whether they like it or not. Many Australian organizations are moving quickly on the technology. Far too few are focused on proactively managing its risks.

According to the Australian Responsible AI Index 2024, 78% of surveyed organizations claim their use of AI is in line with the principles of responsible AI.

Yet only 29% said they had implemented practices to ensure it was.

Sometimes visible, sometimes not

In some cases, AI is a well-publicized selling point for new products, and organizations are making positive decisions to adopt it.

At the same time, these systems are increasingly hidden from view. They may be used by an upstream supplier, embedded as a subcomponent of a new product, or inserted into an existing product via an automatic software update.

Sometimes, they’re even used by staff on a “shadow” basis—out of sight of management.

The pervasiveness—and often hidden nature—of AI adoption means that organizations can’t treat AI governance as merely a compliance exercise or technical challenge.

Instead, leaders need to focus on building the right internal capability and culture to support safe and responsible AI use across their operations.

What to get right

Research from the University of Technology Sydney’s Human Technology Institute points to three critical elements that organizations must get right.

First, it’s critical that boards and senior executives understand AI well enough to provide meaningful oversight.

This doesn’t mean they have to become technical experts. But directors need to have what we call a “minimum viable understanding” of AI. They need to be able to spot the strategic opportunities and risks of the technology, and to ask the right questions of management.

If they don’t have this expertise, they can seek training, recruit new members who have it or establish an AI expert advisory committee.

Clear accountability

Second, organizations need to create clear lines of accountability for AI governance. These should place clear duties on specific people with appropriate levels of authority.

A number of leading companies are already doing this by nominating a senior executive with explicitly defined responsibilities. This is primarily a governance role, and it requires a unique blend of skills: strong leadership capabilities, some technical literacy and the ability to work across departments.

Third, organizations need to create a governance framework with simple and efficient processes to review their uses of AI, identify risks and find ways to manage them.

Above all, building the right culture

Perhaps most importantly, organizations need to cultivate a critically supportive culture around AI use.

What does that mean? It’s an environment where staff—at all levels—understand both the potential and the risks of AI and feel empowered to raise concerns.

Telstra’s “Responsible AI Policy” is one case study of good practice in a complex corporate environment.

To ensure the board and senior management would have a good view of AI activities and risks, Telstra established an oversight committee dedicated to reviewing high-impact AI systems.

The committee brings together experts and representatives from legal, data, cyber security, privacy, risk and other teams to assess potential risks and make recommendations.

Importantly, the company has also invested in training all staff on AI risks and governance.

Bringing everyone along

The cultural element is particularly crucial because of how AI adoption typically unfolds.

Our previous research suggests many Australian workers feel AI is being imposed on them without adequate consultation or training.

This doesn’t just create pushback. It can also mean organizations miss out on important feedback on how their staff actually use AI to create value and solve problems.

Ultimately, our collective success with AI depends not so much on the technology itself, but on the human systems we build around it.

This is important whether you lead an organization or work for one. So, the next time your colleagues start discussing an opportunity to buy or use AI in a new way, don’t just focus on the technology.

Ask: “What needs to be true about our people, skills and culture to make this succeed?”

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

