
Abstract

Advancements in image generation and manipulation technology offer both opportunities and risks. Detecting manipulated media is crucial to combat misinformation. This report, part of the 'Anomaly Detection' project within generative AI pilot initiatives, aims to understand the strengths and weaknesses of existing detection algorithms. By doing so, the BBC can make informed decisions regarding the integration of these algorithms into the journalistic process.

Our evaluation dataset comprises three image types: fully generated, partially manipulated, and face-altered. The images are augmented to simulate real-world conditions (compression, resizing, social media processing), and our study focuses on precision, recall, and F1-score metrics, addressing both false positives and false negatives. According to the evaluations conducted for this report in early 2024, the BBC's assessment is that none of the deepfake detectors tested perform effectively in detecting all types of deepfake instances.
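As a point of reference for these metrics, the sketch below shows how precision, recall, and F1-score relate to false positives and false negatives for a binary "manipulated vs. authentic" detector. The counts used are purely hypothetical and are not results from this study.

```python
# Illustrative only: how precision, recall and F1-score are derived from
# confusion-matrix counts for a binary deepfake detector.

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall and F1-score from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # lowered by false positives (authentic images flagged)
    recall = tp / (tp + fn) if (tp + fn) else 0.0      # lowered by false negatives (manipulated images missed)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical example: 80 manipulated images correctly flagged,
# 10 authentic images wrongly flagged, 20 manipulated images missed.
p, r, f1 = precision_recall_f1(tp=80, fp=10, fn=20)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```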

This paper is authored by BBC Research & Development's Marc Gorriz-Blanch, Woody Bayliss, Sinead O'Brien, Danijela Horak and Juil Sock, with Maryam Ahmed and Blathnaid Healy, our partners from BBC News.

Image: © BBC / Mirror B
