Patching the Cracks: Detecting and Addressing Adversarial Examples in Real-World Applications

Author: Bunzel, Niklas
Date: 2024
Type: Conference Paper
Abstract: Neural networks, essential for high-security tasks such as autonomous vehicles and facial recognition, are vulnerable to attacks that alter model predictions through small input perturbations. This paper outlines current and future research on detecting real-world adversarial attacks. We present a framework for detecting transferred black-box attacks and a novel method for identifying adversarial patches without prior training, focusing on high-entropy regions. In addition, we investigate the effectiveness and resilience of 3D adversarial attacks to environmental factors.
Conference: International Conference on Dependable Systems and Networks 2024
URL: https://publica.fraunhofer.de/handle/publica/475678
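The abstract mentions flagging adversarial patches by locating high-entropy image regions. The paper's actual detection pipeline is not given here, so the following is only a minimal illustrative sketch of the general idea: compute the Shannon entropy of pixel intensities over local windows and flag windows whose entropy exceeds a threshold. All function names, the window size, and the threshold value are assumptions for illustration, not the authors' method.

```python
import numpy as np

def local_entropy_map(gray, win=16):
    """Shannon entropy (bits) of pixel intensities in each
    non-overlapping win x win window of a 2D uint8 image."""
    h, w = gray.shape
    rows, cols = h // win, w // win
    ent = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = gray[i * win:(i + 1) * win, j * win:(j + 1) * win]
            counts = np.bincount(block.ravel(), minlength=256)
            p = counts / counts.sum()
            p = p[p > 0]  # drop empty bins so log2 is defined
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

def flag_high_entropy(gray, win=16, thresh=6.0):
    """Boolean mask of windows whose entropy exceeds thresh.
    The threshold is a hypothetical value, not taken from the paper."""
    return local_entropy_map(gray, win) > thresh
```

A noisy adversarial patch tends to use many distinct intensity values in a small area, pushing local entropy toward the 8-bit maximum, while smooth natural regions score much lower; the sketch exploits exactly that gap.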