Abstract

Face recognition systems are susceptible to performance differences across demographic and non-demographic groups. However, understanding of how face recognition models behave in the presence of such biases is still very limited, based mainly on observing performance indicators as the training or testing data is varied. Meanwhile, face recognition explainability has recently gained increasing attention, enabling spatial explanations of the matching process between two face images. This overcame the inapplicability of existing visual explainability methods, which are designed for pure classification tasks, to face matching decisions. In this paper, and for the first time, we investigate the inner behavior of face recognition models with respect to bias using face recognition explainability tools. Using two state-of-the-art explainability tools, five models with different bias patterns, and a set of visualization tools, our investigation yields a set of interesting observations. Among them, when considering the most discriminated-against demographic group, more biased models tend to distribute their attention across the facial image, whereas less biased models focus on the main facial features.