Multi-strategy adversarial learning for robust face forgery detection under heterogeneous and composite attacks
Face forgery detection has progressed rapidly to counter the threat posed by image synthesis technology, yet robust detection under heterogeneous attacks remains challenging. When forgers apply image post-processing techniques to manipulated photos, recent detection methods suffer significant performance degradation. In this work, we propose a novel multi-strategy adversarial learning (MAL) method that extracts salient features to achieve more reliable forgery detection under attack. Specifically, our MAL framework creates a large number of positive and negative sample pairs through a composite attack generation module and trains with a supervised contrastive objective to ensure attack robustness. In addition, we exploit two intuitive strategies, hard sample selection and region consistency, to enhance the contrastive losses and further strengthen feature reliability. Extensive experiments demonstrate that the proposed method outperforms recent state-of-the-art face forgery detection methods in overall accuracy under various single and composite attacks.
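The abstract outlines the core training recipe: attack-augmented positive/negative pairs optimized with a supervised contrastive loss. The sketch below illustrates that idea in PyTorch; the specific attack transforms, temperature value, and pairing scheme are illustrative assumptions, not the paper's actual implementation (which additionally uses hard sample selection and region-consistency terms not shown here).

```python
# Minimal sketch: supervised contrastive training with composite attacks.
# The attack composition and hyper-parameters below are assumptions.
import torch
import torch.nn.functional as F

def composite_attack(x: torch.Tensor) -> torch.Tensor:
    """Apply a random composite of simple post-processing attacks
    (additive noise, downsample/upsample). Hypothetical stand-in for the
    paper's composite attack generation module."""
    if torch.rand(1).item() < 0.5:
        x = x + 0.05 * torch.randn_like(x)           # noise attack
    if torch.rand(1).item() < 0.5:
        h, w = x.shape[-2:]
        x = F.interpolate(x, scale_factor=0.5, mode="bilinear",
                          align_corners=False)        # resolution attack
        x = F.interpolate(x, size=(h, w), mode="bilinear",
                          align_corners=False)
    return x.clamp(0.0, 1.0)

def supcon_loss(features: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss (Khosla et al., 2020): samples sharing
    the same real/fake label are positives, all others negatives."""
    z = F.normalize(features, dim=1)                  # unit-norm embeddings
    sim = z @ z.t() / temperature                     # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))   # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)   # avoid -inf * 0 = NaN
    pos_count = pos_mask.sum(1).clamp(min=1)          # guard empty rows
    return -(log_prob * pos_mask.float()).sum(1).div(pos_count).mean()
```

In use, one would typically concatenate clean and attacked views of a batch `x` with real/fake labels `y`, e.g. `feats = encoder(torch.cat([x, composite_attack(x)]))` followed by `loss = supcon_loss(feats, y.repeat(2))`, so that clean and attacked versions of same-label samples are pulled together in feature space, which is the pair-construction idea the abstract describes.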
Funding
Natural Science Foundation of Hunan Province
National Natural Science Foundation of China
Central South University
History
School
- Science
Department
- Computer Science
Published in
- 2024 IEEE International Conference on Multimedia and Expo (ICME)
Source
- 2024 IEEE International Conference on Multimedia and Expo (ICME 2024)
Publisher
- Institute of Electrical and Electronics Engineers (IEEE)
Version
- AM (Accepted Manuscript)
Rights holder
- © IEEE
Publisher statement
- Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Acceptance date
- 2024-03-12
Publication date
- 2024-09-30
Copyright date
- 2024
ISBN
- 9798350390155
ISSN
- 1945-7871
eISSN
- 1945-788X
Publisher version
Language
- en