Author | Emeršić, Ž.; Ohki, T.; Akasaka, M.; Arakawa, T.; Maeda, S.; Okano, M.; Sato, Y.; George, A.; Marcel, S.; Ganapathi, I. I.; Ali, S. S.; Javed, S.; Werghi, N.; Işık, S. G.; Sarıtaş, E.; Ekenel, H. K.; Sharma, G.; Kolf, J. N.; Boutros, F.; Damer, N.; Kamboj, A.; Nigam, A.; Hudovernik, V.; Jain, D. K.; Cámara-Chávez, G.; Peer, P.; Štruc, V. |
Abstract | The paper provides a summary of the 2023 Unconstrained Ear Recognition Challenge (UERC), a benchmarking effort focused on ear recognition from images acquired in uncontrolled environments. The objective of the challenge was to evaluate the effectiveness of current ear recognition techniques on a challenging ear dataset, analyzing the techniques from two distinct aspects: verification performance and bias with respect to specific demographic factors, namely gender and ethnicity. Seven research groups participated in the challenge and submitted seven distinct recognition approaches that ranged from descriptor-based methods and deep-learning models to ensemble techniques that relied on multiple data representations to maximize performance and minimize bias. A comprehensive investigation into the performance of the submitted models is presented, as well as an in-depth analysis of bias and the associated performance differentials due to differences in gender and ethnicity. The results of the challenge suggest that a wide variety of models (e.g., transformers, convolutional neural networks, ensemble models) is capable of achieving competitive recognition results, but also that all of the models still exhibit considerable performance differentials with respect to both gender and ethnicity. To promote further development of unbiased and effective ear recognition models, the UERC 2023 starter kit, together with the baseline model and the training and test data, is made available at: http://ears.fri.uni-lj.si/ |