Abstract: Neural networks have proven to be extremely effective at tasks such as image classification and object detection. However, their security and robustness remain controversial: even state-of-the-art object detectors can be fooled by localized patch attacks, which may lead to safety-critical incidents. In these attacks, an adversary places an adversarial patch in an image, causing detectors to either miss real objects or detect phantom objects, and such patches often force state-of-the-art detectors to make highly confident but incorrect predictions. The practicality of these attacks in real-world settings further increases the concern. This paper presents a novel method for detecting real-world adversarial patches using entropy-sensitive depth estimation. To this end, we exploit the fact that adversarial patches typically introduce high local entropy and are located in front of an object. We fine-tune a monocular depth estimation neural network to leverage these two features and extract adversarial patches from an image. Using this approach, we achieve a true positive detection rate of 77.5% on the APRICOT test set.
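The abstract points to the high local entropy that adversarial patches tend to introduce. As a rough, hypothetical illustration of that cue only (not the paper's actual detection pipeline, which relies on a fine-tuned monocular depth estimator), the sketch below computes Shannon entropy over non-overlapping windows of a grayscale image; the window size is an arbitrary assumption.

```python
import numpy as np

def local_entropy_map(gray: np.ndarray, window: int = 16) -> np.ndarray:
    """Shannon entropy of pixel intensities in non-overlapping windows.

    `gray` is a 2-D uint8 image array. The 16-px window is an
    illustrative choice, not a value taken from the paper. Regions
    covered by an adversarial patch would typically show higher
    entropy than smooth natural surfaces.
    """
    h, w = gray.shape
    rows, cols = h // window, w // window
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = gray[i * window:(i + 1) * window,
                         j * window:(j + 1) * window]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]  # drop empty bins before taking the log
            out[i, j] = -(p * np.log2(p)).sum()
    return out
```

Thresholding such an entropy map, or combining it with an estimated depth map, could flag candidate patch regions; the exact combination used by the authors is described in the paper itself, not here.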