Digital tech’s rapid pace outstrips safety research, say researchers

Scientific research on the harms of digital technology is stuck in a “failing cycle” that moves too slowly to allow governments and society to hold tech companies to account, according to two leading researchers in a new report published in the journal Science.

Dr. Amy Orben from the University of Cambridge and Dr. J. Nathan Matias from Cornell University say the pace at which new technology is deployed to billions of people has put unbearable strain on the scientific systems trying to evaluate its effects.

They argue that big tech companies effectively outsource research on the safety of their products to independent scientists at universities and charities who work with a fraction of the resources—while firms also obstruct access to essential data and information. This is in contrast to other industries where safety testing is largely done “in house.”

Orben and Matias call for an overhaul of “evidence production” assessing the impact of technology on everything from mental health to discrimination.

Their recommendations include accelerating the research process, so that policy interventions and safer designs are tested in parallel with initial evidence gathering, and creating registries of tech-related harms informed by the public.

“Big technology companies increasingly act with perceived impunity, while trust in their regard for public safety is fading,” said Orben, of Cambridge’s MRC Cognition and Brain Sciences Unit. “Policymakers and the public are turning to independent scientists as arbiters of technology safety.

“Scientists like ourselves are committed to the public good, but we are asked to hold to account a billion-dollar industry without appropriate support for our research or the basic tools to produce good quality evidence quickly. We must urgently fix this science and policy ecosystem so we can better understand and manage the potential risks posed by our evolving digital society.”

‘Negative feedback cycle’

In the Science paper, the researchers point out that technology companies often follow policies of rapidly deploying products first and then looking to “debug” potential harms afterwards. This includes distributing generative AI products to millions before completing basic safety tests, for example.

When tasked with understanding potential harms of new technologies, researchers rely on “routine science” which—having driven societal progress for decades—now lags the rate of technological change to the extent that it is becoming at times “unusable.”

With many citizens pressuring politicians to act on digital safety, Orben and Matias argue that technology companies use the slow pace of science and lack of hard evidence to resist policy interventions and “minimize their own responsibility.”

Even if research were appropriately resourced, they note, researchers would still face the challenge of understanding products that evolve at an unprecedented rate.

“Technology products change on a daily or weekly basis, and adapt to individuals. Even company staff may not fully understand the product at any one time, and scientific research can be out of date by the time it is completed, let alone published,” said Matias, who leads Cornell’s Citizens and Technology (CAT) Lab.

“At the same time, claims about the inadequacy of science can become a source of delay in technology safety when science plays the role of gatekeeper to policy interventions. Just as oil and chemical industries have leveraged the slow pace of science to deflect the evidence that informs responsibility, executives in technology companies have followed a similar pattern. Some have even allegedly refused to commit substantial resources to safety research without certain kinds of causal evidence, which they also decline to fund.”

The researchers lay out the current “negative feedback cycle”: tech companies do not adequately resource safety research, shifting the burden onto independent scientists who lack data and funding. As a result, high-quality causal evidence is not produced within the required timeframes, which weakens governments’ ability to regulate and further disincentivises safety research, as companies are let off the hook.

Orben and Matias argue that this cycle must be redesigned, and offer ways to do it.

Reporting digital harms

To speed up the identification of harms caused by online technologies, policymakers or civil society could construct registries for incident reporting, and encourage the public to contribute evidence when they experience harms.

Similar methods are already used elsewhere: in environmental toxicology, for example, the public reports on polluted waterways, while vehicle crash reporting programs help inform automotive safety.
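
The paper proposes registries as a policy mechanism rather than a piece of software, but a small sketch can make the idea concrete. The classes and field names below are hypothetical assumptions for illustration only, not a design from the Science paper:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class HarmReport:
    """One public report of a suspected technology-related harm.

    All field names here are illustrative assumptions, not a schema
    proposed by the paper's authors.
    """
    product: str        # e.g. a social platform or generative AI product
    harm_category: str  # e.g. "mental health", "discrimination"
    description: str    # the reporter's account of what happened
    reported_on: date

@dataclass
class HarmRegistry:
    """A minimal incident registry that the public can submit to."""
    reports: List[HarmReport] = field(default_factory=list)

    def submit(self, report: HarmReport) -> None:
        self.reports.append(report)

    def count_by_category(self, category: str) -> int:
        # A simple tally like this is the kind of early signal
        # researchers and policymakers could draw on.
        return sum(1 for r in self.reports if r.harm_category == category)

# Usage: a member of the public files a report; researchers tally signals.
registry = HarmRegistry()
registry.submit(HarmReport(
    product="ExampleApp",
    harm_category="mental health",
    description="Recommender repeatedly surfaced distressing content",
    reported_on=date(2025, 4, 10),
))
print(registry.count_by_category("mental health"))  # -> 1
```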

“We gain nothing when people are told to mistrust their lived experience due to an absence of evidence when that evidence is not being compiled,” said Matias.

Existing registries, from mortality records to domestic violence databases, could also be augmented to include information on the involvement of digital technologies such as AI.

The paper’s authors also outline a “minimum viable evidence” system, in which policymakers and researchers adjust the “evidence threshold” required to show potential technological harms before starting to test interventions.

These evidence thresholds could be set by panels made up of affected communities, the public, or “science courts,” expert groups assembled to make rapid assessments.

“Causal evidence of technological harms is often required before designers and scientists are allowed to test interventions to build a safer digital society,” said Orben.

“Yet intervention testing can be used to scope ways to help individuals and society, and pinpoint potential harms in the process. We need to move from a sequential system to an agile, parallelized one.”

Under a minimum viable evidence system, if a company obstructs or fails to support independent research, and is not transparent about its own internal safety testing, the amount of evidence needed to begin testing potential interventions would be lowered.
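
To make the mechanics concrete, here is a minimal sketch of how such a sliding evidence threshold could behave. The baseline value and the penalty factors are invented for illustration; the paper proposes the principle, not these numbers:

```python
def evidence_threshold(baseline: float,
                       obstructs_research: bool,
                       transparent_testing: bool) -> float:
    """Evidence level required before intervention testing may begin.

    The baseline and the 0.5 penalty factors are illustrative
    assumptions, not figures from the Science paper.
    """
    threshold = baseline
    if obstructs_research:
        threshold *= 0.5  # less evidence demanded of outside scientists
    if not transparent_testing:
        threshold *= 0.5  # opacity about internal testing lowers the bar too
    return threshold

# A cooperative, transparent firm faces the full baseline ...
print(evidence_threshold(1.0, obstructs_research=False,
                         transparent_testing=True))   # -> 1.0
# ... while an obstructive, opaque one faces a quarter of it,
# so intervention testing can begin much sooner.
print(evidence_threshold(1.0, obstructs_research=True,
                         transparent_testing=False))  # -> 0.25
```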

Orben and Matias also suggest learning from the success of “green chemistry,” in which an independent body maintains lists of chemical products ranked by their potential for harm, helping to incentivize markets to develop safer alternatives.

“The scientific methods and resources we have for evidence creation at the moment simply cannot deal with the pace of digital technology development,” Orben said. “Scientists and policymakers must acknowledge the failures of this system and help craft a better one before the age of AI further exposes society to the risks of unchecked technological change.”

Matias added, “When science about the impacts of new technologies is too slow, everyone loses.”

More information:
Amy Orben et al., “Fixing the science of digital technology harms,” Science (2025). DOI: 10.1126/science.adt6807. www.science.org/doi/10.1126/science.adt6807

Provided by
University of Cambridge


Citation:
Digital tech’s rapid pace outstrips safety research, say researchers (2025, April 10), retrieved 10 April 2025

