Researchers train AI to detect foreign interference online


Modern technologies like social media are making it easier than ever for enemies of the United States to emotionally manipulate U.S. citizens.

U.S. officials warn that foreign adversaries are producing enormous amounts of false and misleading information online to sway American public opinion. Just this July, the Department of Justice announced it had disrupted a Kremlin-backed campaign that used nearly a thousand fake social media accounts to spread disinformation.

While AI is commonly used on offense in disinformation wars to generate large amounts of content, it is now playing an important role in defense, too.

Mark Finlayson, a professor at FIU’s College of Engineering and Computing, is an expert in training AI to understand stories, a subject he has studied for more than two decades.

Persuasive—but false—stories

Storytelling is central to spreading disinformation.

“A heartfelt narrative or a personal anecdote is often more compelling to an audience than the facts,” says Finlayson. “Stories are particularly effective in overcoming resistance to an idea.”

For example, a climate activist may be more successful in convincing an audience about plastic pollution by sharing a personal story of a rescued sea turtle with a straw lodged in its nose, rather than only citing statistics, Finlayson says. The story makes the problem relatable.

“We are exploring the different ways stories are used to drive an argument,” he explains. “It’s a challenging problem, as stories in social media posts can be as brief as a single sentence, and sometimes, these posts may only allude to well-known stories without explicitly retelling them.”
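To illustrate one small piece of that problem, consider the surface cues a detector might look for in a short post. The Python sketch below is a toy, with invented cue lists and scoring, and is not the FIU team’s method; it only shows why even crude signals like past-tense verbs and temporal connectives can help separate a story from a statistic.

```python
import re

# Toy illustration only: surface cues for story-like posts. The cue list
# and scoring below are invented for demonstration purposes; they are
# not the FIU team's published method.
TEMPORAL_CUES = {"once", "then", "after", "before", "yesterday", "when"}
PAST_TENSE = re.compile(r"\b\w+ed\b")  # crude past-tense heuristic

def narrative_score(post: str) -> float:
    """Return a rough 0-1 score for how story-like a short post reads."""
    tokens = [t.strip(".,!?") for t in post.lower().split()]
    if not tokens:
        return 0.0
    cues = sum(t in TEMPORAL_CUES for t in tokens)
    cues += len(PAST_TENSE.findall(post.lower()))
    # Normalize cue counts by post length, capped at 1.0.
    return min(1.0, 3 * cues / len(tokens))

print(narrative_score("Yesterday a rescued sea turtle had a straw lodged in its nose."))
print(narrative_score("Plastic waste statistics: 8 million metric tons enter oceans yearly."))
```

A real system would replace these hand-written cues with a trained classifier, but the contrast between the two example posts captures the intuition Finlayson describes.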

Suspicious handles

Finlayson’s team is also exploring how AI can analyze usernames, or handles, in social media profiles. Azwad Islam, a Ph.D. student who co-authored a recent paper with Finlayson, explains that usernames often contain significant clues about a user’s identity and intentions.

The paper was published in the Proceedings of the International AAAI Conference on Web and Social Media (ICWSM), an artificial intelligence conference.

“Handles reveal much about users and how they want to be perceived,” Islam explains. “For example, a person claiming to be a New York journalist might choose the handle ‘@AlexBurnsNYT’ rather than ‘@NewYorkBoy’ because it sounds more credible. Both handles, however, suggest the user is a male with an affiliation to New York.”

The FIU team demonstrated a tool that can analyze a user handle, reliably revealing a person’s claimed name, gender, location and even personality (if that information is hinted at in the handle).
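As a rough sketch of the idea, and not the published tool itself, the snippet below splits a handle into segments and checks them against tiny hand-made lexicons. The FIRST_NAMES and PLACE_TAGS tables are hypothetical stand-ins for the far richer resources a real interpreter would need.

```python
import re

# Illustrative sketch of handle interpretation. The lexicons here are
# hypothetical toys; the FIU system is far more sophisticated.
FIRST_NAMES = {"alex": "male-or-neutral", "maria": "female"}
PLACE_TAGS = {"nyt": "New York (NYT affiliation)", "ny": "New York"}

def interpret_handle(handle: str) -> dict:
    """Split a handle into segments and look each up in the lexicons."""
    body = handle.lstrip("@")
    # Split on camel case, all-caps runs, and digit boundaries.
    segments = re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+", body)
    clues = {"handle": handle, "name": None, "location": None}
    for seg in (s.lower() for s in segments):
        if seg in FIRST_NAMES and clues["name"] is None:
            clues["name"] = seg.capitalize()
        if seg in PLACE_TAGS and clues["location"] is None:
            clues["location"] = PLACE_TAGS[seg]
    return clues

print(interpret_handle("@AlexBurnsNYT"))
# {'handle': '@AlexBurnsNYT', 'name': 'Alex', 'location': 'New York (NYT affiliation)'}
```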

Although a user handle alone can’t confirm whether an account is fake, it can be crucial in analyzing an account’s overall authenticity—especially as AI’s ability to understand stories evolves.

“By interpreting handles as part of the larger narrative an account presents, we believe usernames could become a critical tool in identifying sources of disinformation,” Islam says.

Questionable cultural cachet

Objects and symbols can carry different meanings across cultures. If an AI model is unaware of the differences, it can make a grave mistake in how it interprets a story. Foreign adversaries can also use these symbols to make their messages more persuasive to a target audience.

Anurag Acharya, a former Ph.D. student of Finlayson’s, worked on this problem. He found that training AI on diverse cultural perspectives improves its story comprehension.

“A story may say, ‘The woman was overjoyed in her white dress.’ An AI model trained exclusively on weddings from Western stories might read that and say, ‘That’s great!’ But if my mother saw this sentence, she would take great offense, because we only wear white to funerals,” says Acharya, who comes from a family of Hindu heritage.

It is critical that AI understand these nuances so it can detect when foreign adversaries exploit cultural messages and symbols for greater malicious impact.
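One simple way to picture the problem is a connotation table conditioned on culture. The entries below are invented for demonstration; a real model would learn such associations from culturally diverse training data rather than a hand-written lookup.

```python
# Illustrative sketch: a culture-conditioned connotation table.
# These entries are assumptions for demonstration; real systems learn
# such associations from culturally diverse training text.
MOTIF_CONNOTATIONS = {
    ("white dress", "western"): "wedding; joyful occasion",
    ("white dress", "hindu"):   "funeral attire; mourning",
}

def interpret_motif(motif: str, culture: str) -> str:
    """Look up what a symbol conveys within a given cultural context."""
    return MOTIF_CONNOTATIONS.get(
        (motif, culture), "unknown: flag for human review"
    )

# The same sentence reads very differently across cultures.
for culture in ("western", "hindu"):
    print(culture, "->", interpret_motif("white dress", culture))
```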

Acharya and Finlayson recently published a paper on this topic, presented at a workshop held at the Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), a natural language processing conference.

Helping AI find order in chaos

Another difficulty in understanding stories is that narratives rarely lay out their events neatly and in order. Instead, events arrive in pieces, intertwined with other storylines. For human readers this adds dramatic effect, but for AI models such complex interrelations can create confusion.

Finlayson’s research on timeline extraction has significantly advanced AI’s understanding of event sequences within narratives.

“In a story, you can have inversions and rearrangements of events in many different, complex ways. This is one of the key things that we have worked on with AI. We have helped AI understand how to map out different events that happen in the real world, and how they might affect each other,” Finlayson says.

“This is a good example of something that people find easy to understand but is challenging for machines. An AI model must be able to order the events in a story accurately. This is important not only to identify disinformation, but also to support many other applications.”
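To give a flavor of what ordering events involves, here is a minimal sketch that assumes the hard part, extracting pairwise before/after relations from out-of-order prose, has already been done. The events below are invented; a standard topological sort then recovers one consistent chronology.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Minimal sketch of one step in timeline extraction. We assume a model
# has already pulled "X happened before Y" relations out of a story told
# out of order; the relations below are invented for illustration.

# Map each event to the set of events that must precede it.
happens_after = {
    "hero returns home":  {"battle is fought"},
    "battle is fought":   {"army marches", "alliance is formed"},
    "army marches":       {"alliance is formed"},
    "alliance is formed": set(),
}

# A topological sort yields an order consistent with every relation.
order = list(TopologicalSorter(happens_after).static_order())
print(order)
# ['alliance is formed', 'army marches', 'battle is fought', 'hero returns home']
```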

The FIU team’s advancements in helping AI understand stories are positioned to help intelligence analysts fight disinformation with new levels of efficiency and accuracy.

More information:
Azwad Anjum Islam et al, A Semantic Interpreter for Social Media Handles, Proceedings of the International AAAI Conference on Web and Social Media (2024). DOI: 10.1609/icwsm.v18i1.31343

Anurag Acharya et al, Discovering Implicit Meanings of Cultural Motifs from Text, Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024) (2024). DOI: 10.18653/v1/2024.nlpcss-1.4

Provided by Florida International University

Citation: Researchers train AI to detect foreign interference online (2024, November 21), retrieved 21 November 2024 from

