
Explainable AI can enhance deepfake detection transparency

Figure: Explainable artificial intelligence (XAI) approach for deepfake detection. Credit: Applied Sciences (2025). DOI: 10.3390/app15020725

A new study by SRH University emphasizes the benefits of explainable AI systems for the reliable and transparent detection of deepfakes. AI decisions can be presented in a comprehensible way through feature analyses and visualizations, thus promoting trust in AI technologies.

A research team led by Prof. Dr. Alexander I. Iliev of SRH University, with key contributions from researcher Nazneen Mansoor, has developed an innovative method for detecting deepfakes. In the study, recently published in the journal Applied Sciences, the scientists present the use of explainable artificial intelligence (XAI) to increase transparency and reliability in identifying manipulated media content.

Deepfakes, i.e., fake media content such as videos or audio files created using artificial intelligence, pose an increasing threat to society as they can be used to spread misinformation and undermine public trust. Conventional detection methods often reach their limits, especially when it comes to making the decision-making processes of AI models comprehensible.

In the study, the SRH University team carried out extensive experiments evaluating different AI models on their ability to reliably identify deepfakes. Particular attention was paid to explainable AI, which makes it possible to present the basis for the models' decisions in a transparent and comprehensible manner.

This is done, for example, using visualization techniques such as “heat maps,” which highlight in color which image areas the AI has identified as relevant for its decision. In addition, the explainable models analyze specific features such as textures or movement patterns that indicate manipulation.
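The article does not specify which attribution technique or model architecture produced these heat maps. As an illustration only, the sketch below shows one common way to generate such a map: a Grad-CAM-style visualization on a generic pretrained CNN standing in for a deepfake classifier. The backbone, layer choice, and file name are assumptions, not details from the study.

```python
# Minimal Grad-CAM-style sketch (assumption: the study's exact XAI method and
# detector model are not specified in the article). It produces a heat map
# showing which image regions most influenced the classifier's decision.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# A generic ImageNet-pretrained backbone stands in for a deepfake detector.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4  # last conv block: spatially resolved features

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Cache the feature maps produced by the target layer.
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Cache the gradient of the score with respect to those feature maps.
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def gradcam_heatmap(image_path: str) -> torch.Tensor:
    """Return a [224, 224] heat map in [0, 1] for the model's top prediction."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    logits = model(img)
    logits[0, logits.argmax()].backward()

    # Weight each feature map by its average gradient, then combine.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=(224, 224), mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]

# Usage (hypothetical file name):
# heatmap = gradcam_heatmap("suspect_frame.png")
```

Overlaying such a map on the input frame highlights the regions, for instance around the eyes or mouth, that drove the model's verdict, which is the kind of transparency the study argues for.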

Prof Dr. Iliev, Head of the Computer Science—Big Data & Artificial Intelligence Master’s program, explains the importance of these approaches: “Our aim was to create technologies that are not only effective, but also trustworthy. The ability to make the decision-making process of AI transparent is becoming increasingly important—be it in law enforcement, the media industry or in science.”

The study shows that explainable AI not only improves recognition accuracy, but also promotes understanding and trust in AI technologies. By showing how the decisions were made, weaknesses in the models can be identified and future systems can be optimized in a targeted manner. This is a crucial step in strengthening the responsible use of AI in society.

More information:
Nazneen Mansoor et al., Explainable AI for DeepFake Detection, Applied Sciences (2025). DOI: 10.3390/app15020725

Provided by
SRH University

Citation:
Explainable AI can enhance deepfake detection transparency (2025, February 6), retrieved 6 February 2025


