Abstract

Deep learning-based systems for periocular recognition benefit from the high recognition performance of neural networks, which, however, comes at the cost of high computational demands and large memory footprints. This can hinder deployability, especially on mobile devices and embedded systems. A few previous works have strived to build lighter models, yet these still rely on floating-point representations and the computational and memory costs associated with them. In this paper, we propose to adapt model quantization to periocular recognition. Within the proposed scheme, this reduces the memory footprint of the periocular recognition network by up to five-fold while maintaining high recognition performance. We present a comprehensive analysis over three backbones and diverse experimental protocols to demonstrate the consistency of our conclusions, along with a comparison against a wide set of baselines that proves the optimal trade-off between performance and model size achieved by our proposed solution. The code and pre-trained models are available at https://github.com/jankolf/ijcb-periocular-quantization.
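To illustrate the memory argument behind quantization, the following is a minimal sketch, not the paper's actual quantization scheme or backbones: it applies generic post-training dynamic quantization in PyTorch to a small stand-in network and compares the serialized model sizes, showing the roughly 4x shrinkage obtained when float32 weights are stored as int8 (the up-to-five-fold reduction reported in the paper comes from its own scheme). The network definition and helper function here are purely illustrative.

```python
import os
import torch
import torch.nn as nn

# Small stand-in network; the paper's backbones are different and larger.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(112 * 112, 512),
    nn.ReLU(),
    nn.Linear(512, 512),
)

# Post-training dynamic quantization: Linear-layer weights stored as int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_size_mb(m: nn.Module) -> float:
    """Illustrative helper: size of the serialized state dict in MB."""
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"float32 model: {serialized_size_mb(model):.2f} MB")
print(f"int8 model:    {serialized_size_mb(quantized):.2f} MB")
```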