
The potential of AI to transform people’s lives in areas ranging from health care to customer service is enormous. But as the technology advances, we must adopt policies to make sure the risks don’t overwhelm and stifle those benefits.
Importantly, we need to be on alert for algorithmic bias that could perpetuate inequality and marginalization of communities around the world.
Algorithmic bias occurs when systems—often based on machine learning or AI—deliver biased outcomes or decisions because the data they have been given is incomplete, imbalanced or not fully representative.
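To make the mechanism concrete, here is a minimal sketch in Python with scikit-learn. It is not from the article: the dataset is synthetic, and the group sizes and variable names are invented purely for illustration. It shows how a model trained on data that underrepresents one group can systematically fail that group:

```python
# Illustrative sketch only: synthetic data, hypothetical group sizes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    # One predictive score per person; the score-label relationship
    # runs in the opposite direction for the underrepresented group.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y if flip else y)

# Training data: 950 majority-group examples, only 50 minority-group examples.
x_maj, y_maj = make_group(950, flip=False)
x_min, y_min = make_group(50, flip=True)
model = LogisticRegression().fit(
    np.vstack([x_maj, x_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on fresh, equal-sized samples from each group.
for name, flip in [("majority", False), ("minority", True)]:
    x_test, y_test = make_group(1000, flip)
    print(f"{name} accuracy: {model.score(x_test, y_test):.2f}")
# The model simply learns the majority pattern, so it is accurate for
# the majority group and badly wrong for the underrepresented one.
```

The same failure pattern appears whenever training data is incomplete, imbalanced or not fully representative, whether the system is screening job applicants or analyzing patient records.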
My colleagues here in Cambridge and at Warwick Business School and I have proposed a new way of thinking about the issue, which we call a “relational risk perspective.” This approach looks not just at how AI is being used now, but at how it may be used in the future and across different geographies, helping to avoid what we call “the dark side of AI.” The goal is to safeguard the benefits of AI for everyone while minimizing the harm.
We look at the workplace as one example. AI is already having a huge impact on jobs, affecting both routine and creative tasks, including activities we have long thought of as uniquely human, such as creating art or writing film scripts.
As businesses use the technology more, and perhaps become over-dependent on it, we risk undermining professional expertise and critical thinking, leaving workers demotivated and expected to defer to machine-generated decisions.
This will affect not just tasks but also the social fabric of the workplace, by influencing how workers relate to each other and to organizations. When AI is used in recruitment, a lack of representation in its training datasets can reinforce inequalities in decisions about hiring and promotions.
We also explore how this billion-dollar industry is often underpinned by largely ‘invisible’ workers in the Global South, who clean data and refine algorithms for users who are predominantly in the Global North. This ‘data colonialism’ not only reflects global inequalities but also reinforces marginalization: the people whose labor enables AI to thrive are the same people who are largely excluded from its benefits.
Health care data is in particular danger from such data-driven bias, so we need to ensure that the health-related information used to train the large language models behind AI tools reflects a diverse population. Basing health policy on data from selected, and perhaps more privileged, communities can lead to a vicious cycle in which disparity becomes ever more deeply entrenched.
Achieving its potential
I believe that we can counter these threats, but time is of the essence as AI quickly becomes embedded in society. We should remember that generative AI is still an emerging technology, and that it is progressing faster than the ethical and regulatory landscape can adapt.
Our relational risk perspective does not present AI as inherently good or bad. Rather, AI is seen as having potential for benefit and harm depending on how it is developed and experienced across different social contexts. We also recognize that the risks are not static, as they evolve with the changing relationships between technology, its users and broader societal structures.
Policymakers and technologists should anticipate, rather than react to, the ways in which AI can entrench or challenge existing inequities. They should also consider that some countries may develop AI maturity more quickly than others.
Finally, let’s draw on stakeholders far and wide in setting AI risk policy. A multidisciplinary approach will help avoid bias, while at the same time demonstrating to the public that AI policy really does reflect varied and diverse interests and communities.