
Study explores how workers are using large language models and what it means for science organizations

Figure: Monthly unique Argo users in Operations and Science since initial deployment. Monthly usage is less than 10% of all lab employees. Note: the plot does not capture employee use of external LLMs and is based on auto-collected telemetry data. Credit: arXiv (2025). DOI: 10.48550/arXiv.2501.16577

Researchers investigated Argonne employees’ use of Argo, an internal generative artificial intelligence chatbot.

Generative artificial intelligence (AI) is becoming an increasingly powerful tool in the workplace. At science organizations like national laboratories, its use has the potential to accelerate scientific discovery in critical areas.

But with new tools come new questions. How can science organizations implement generative AI responsibly? And how are employees across different roles using generative AI in their daily work?

A recent study by the University of Chicago and the U.S. Department of Energy’s Argonne National Laboratory provides one of the first real-world examinations of how generative AI tools—specifically large language models (LLMs)—are being used within a national lab setting.

The study not only highlights AI’s potential to enhance productivity, but also emphasizes the need for thoughtful integration to address concerns in areas such as privacy, security and transparency. The paper is published on the arXiv preprint server.

Through surveys and interviews, the researchers studied how Argonne employees are already using LLMs—and how they envision using them in the future—to generate content and automate workflows. The study also tracked the early adoption of Argo, the lab’s internal LLM interface released in 2024. Based on their analysis, the researchers recommend ways organizations can support effective use of generative AI while addressing associated risks.

On April 26, the team presented their results at the 2025 Association for Computing Machinery (ACM) CHI Conference on Human Factors in Computing Systems in Japan.

Argonne and Argo—A case study

Argonne's organizational structure, paired with the timely release of Argo, made the lab an ideal environment for the study. Its workforce includes science and engineering workers as well as operations workers in areas like human resources, facilities and finance.

“Science is an area where human-machine collaboration can lead to significant breakthroughs for society,” said Kelly Wagman, a Ph.D. student in computer science at the University of Chicago and lead author on the study. “Both science and operations workers are crucial to the success of a laboratory, so we wanted to explore how each group engages with AI and where their needs align and diverge.”

While the study focused on a national laboratory, some of the findings can extend to other organizations like universities, law firms and banks, which have varied user needs and similar cybersecurity challenges.

Argonne employees regularly work with sensitive data, including unpublished scientific results, controlled unclassified documents and proprietary information. In 2024, the lab launched Argo, which gives employees secure access to LLMs from OpenAI through an internal interface. Argo doesn’t store or share user data, which makes it a more secure alternative to ChatGPT and other commercial tools.
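
The article does not document Argo's internals, but the general pattern it describes, an internal gateway that forwards requests to a hosted model without retaining user data, can be sketched in a few lines. Everything below, from the route path to the placeholder credential, is a hypothetical illustration, not Argo's actual implementation:

```python
# Hypothetical sketch of an internal LLM gateway that forwards chat
# requests to a hosted model API without persisting prompts or replies.
# This illustrates the general pattern only; it is not Argo's code.
import httpx
from fastapi import FastAPI, Request

app = FastAPI()
UPSTREAM = "https://api.openai.com/v1/chat/completions"
API_KEY = "org-managed-secret"  # placeholder; a real service would use a vault

@app.post("/v1/chat/completions")
async def proxy(request: Request) -> dict:
    payload = await request.json()
    async with httpx.AsyncClient(timeout=60.0) as client:
        upstream = await client.post(
            UPSTREAM,
            json=payload,
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
    # The response is relayed directly; no prompt or completion is
    # written to logs or disk, which is the data-handling property
    # the article attributes to Argo.
    return upstream.json()
```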

Argo was the first internal generative AI interface to be deployed at a national laboratory. For several months after Argo’s launch, the researchers tracked how it was used across different parts of the lab. Analysis revealed a small but growing user base of both science and operations workers.
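
The figure above plots monthly unique users by division. A minimal sketch of that kind of telemetry rollup, assuming a simple log schema (the column names timestamp, user_id and division are invented for illustration; the study's actual schema is not published here):

```python
# Minimal sketch: monthly unique Argo users by division from telemetry
# logs. Column names are assumed for illustration only.
import pandas as pd

logs = pd.read_csv("argo_telemetry.csv", parse_dates=["timestamp"])
logs["month"] = logs["timestamp"].dt.to_period("M")

# Count distinct users per month within each division (e.g., Science
# vs. Operations), mirroring the plot described in the figure caption.
monthly_users = (
    logs.groupby(["month", "division"])["user_id"]
    .nunique()
    .unstack("division")
)
print(monthly_users)
```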

“Generative AI technology is new and rapidly evolving, so it’s hard to anticipate exactly how people will incorporate it into their work until they start using it. This study provided valuable feedback that is informing the next iterations in Argo’s development,” said Argonne software engineer Matthew Dearing, whose team develops AI tools to support the laboratory’s mission.

Dearing, who holds a joint appointment at UChicago, collaborated on the study with Wagman and Marshini Chetty, a professor of computer science and leader of the Amyoli Internet Research Lab at the university.

Collaborating and automating with AI

The researchers found that employees used generative AI in two main ways: as a copilot and as a workflow agent. As a copilot, the AI works alongside the user, helping with tasks like writing code, structuring text or tweaking the tone of an email. For now, employees are mostly sticking to tasks where they can easily check the AI's work. Looking ahead, they envision using copilots to extract insights from large amounts of text, such as scientific literature or survey data.
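
As a concrete illustration of the copilot pattern, here is a minimal sketch using the openai Python client pointed at a hypothetical internal endpoint. The base URL, model name and credential are assumptions, since the article does not document Argo's API:

```python
# Copilot-style use: ask an LLM to adjust the tone of an email draft,
# with a human reviewing the result. The endpoint, model name and token
# below are hypothetical placeholders, not Argo's real API.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.example-lab.internal/v1",  # hypothetical gateway
    api_key="internal-token",                        # placeholder credential
)

draft = "The report is late again. Send it ASAP."

response = client.chat.completions.create(
    model="gpt-4o",  # whichever model the gateway exposes
    messages=[
        {"role": "system",
         "content": "Rewrite the user's email in a polite, professional tone."},
        {"role": "user", "content": draft},
    ],
)

# The employee reviews the suggestion before sending it, in line with
# the easy-to-verify tasks the study says people currently favor.
print(response.choices[0].message.content)
```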

As a workflow agent, AI is used to automate complex tasks, which it performs mostly on its own. Around a quarter of the survey’s open-ended responses—split evenly between operations and science workers—mentioned workflow automation, but the types of workflows differed between the two groups. For example, operations workers used AI to automate processes like searching databases or tracking projects. Scientists reported automating workflows for processing, analyzing and visualizing data.

“Science often involves very bespoke workflows with many steps. People are finding that with LLMs, they can create the glue to link these processes together,” said Wagman. “This is just the beginning of more complicated automated workflows for science.”
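
Wagman's "glue" point can be made concrete with a small sketch: an LLM turns free-text instrument notes into structured JSON that a conventional, non-LLM step then acts on. The endpoint, model and field names are illustrative assumptions, not details from the study:

```python
# Workflow-agent sketch: use an LLM as "glue" that converts free-text
# run notes into structured JSON for a downstream, non-LLM analysis step.
import json
from openai import OpenAI

client = OpenAI(base_url="https://llm.example-lab.internal/v1",
                api_key="internal-token")  # hypothetical gateway

notes = "Run 42: beam energy 7.0 GeV, sample temp 295 K, detector saturated after 12 min."

resp = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # request machine-readable output
    messages=[{
        "role": "user",
        "content": ("Extract JSON with keys run, beam_energy_gev, "
                    "sample_temp_k, issue from: " + notes),
    }],
)

record = json.loads(resp.choices[0].message.content)

# Ordinary code takes over from here: flag problem runs for human review.
if "saturat" in str(record.get("issue", "")).lower():
    print(f"Run {record['run']}: flagged for manual QA")
```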

Expanding possibilities while mitigating risks

While generative AI presents exciting opportunities, the researchers also emphasize the importance of thoughtful integration of these tools to manage organizational risks and address employee concerns.

The study found that employees had significant concerns about generative AI's reliability and its tendency to hallucinate. Other concerns included data privacy and security, overreliance on AI, potential impacts on hiring and implications for scientific publishing and citation.

To promote the appropriate use of generative AI, the researchers recommend that organizations proactively manage security risks, set clear policies and offer employee training.

“Without clear guidelines, there will be a lot of variability in what people think is acceptable,” said Chetty. “Organizations can also reduce security risks by helping people understand what happens with their data when they use both internal and external tools—Who can access the data? What is the tool doing with it?”

At Argonne, almost 1,600 employees have attended the laboratory’s generative AI training sessions. These sessions introduce employees to Argo and generative AI and provide guidance for appropriate use.

“We knew that if people were going to get comfortable with Argo, it wasn’t going to happen on its own,” said Dearing. “Argonne is leading the way in providing generative AI tools and shaping how they are integrated responsibly at national laboratories.”

More information:
Kelly B. Wagman et al, Generative AI Uses and Risks for Knowledge Workers in a Science Organization, arXiv (2025). DOI: 10.48550/arXiv.2501.16577

Journal information:
arXiv


Provided by
Argonne National Laboratory

