IEEE International Joint Conference on Biometrics (IJCB 2023)

Competitions

4th International Competition on Human Identification at a Distance 2023

Welcome to the 4th International Competition on Human Identification at a Distance (HID 2023)!
The competition focuses on human identification at a distance (HID) in videos. The dataset for the competition is SUSTech-Competition, a new dataset collected in 2022 and released to the scientific community for the first time; it contains 859 subjects. After the success of the previous three competitions, we are confident that this edition will also succeed and further promote research on HID. Our sponsor, Watrix Technology, will provide 6 awards (19,000 CNY in total, ~2,850 USD) to the top 6 teams in the second phase. We are grateful to Watrix Technology for sponsoring the competition.


First Prize (1 team): 10,000 CNY (~1,500 USD)
Second Prize (2 teams): 3,000 CNY (~450 USD)
Third Prize (3 teams): 1,000 CNY (~150 USD)

Organizers: Shiqi Yu, Md Atiqur Rahman Ahad, Yongzhen Huang, Liang Wang, Yasushi Makihara

Advisory committee: Mark Nixon, Tieniu Tan, Yasushi Yagi

Website: https://hid2023.iapr-tc4.org/

DeepFake Game Competition on Visual Realism Assessment (DFGC-VRA)

Deep-learning based face-swap videos, also known as deepfakes, are becoming more and more realistic and deceptive. The malicious use of these face-swap videos has caused wide concern. There is an ongoing deepfake game between creators and detectors, with humans in the loop. The research community has been focusing on the automatic detection of these fake videos, but the assessment of their visual realism, as perceived by human eyes, is still an unexplored dimension. Visual realism assessment, or VRA, is essential for assessing the potential impact that a specific face-swap video may have, and it is also useful as a quality metric for comparing different face-swap methods. An automatic VRA method should take a deepfake video as input and output a predicted realism score. The main research question to be answered by this competition is to what extent automatic VRA methods can accurately predict the subjective realism scores of deepfake videos. The expected outcomes are a comprehensive study of the state-of-the-art performance of deepfake VRA methods and the promotion of research on deepfake realism assessment. Top teams will be awarded certificates, and all teams will have the opportunity to collaborate on the summary paper to be submitted to IJCB 2023. To help participants start their training easily, we also provide a starter-kit containing example code for some baseline methods.
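The task format described above (a deepfake video in, a scalar realism score out) can be illustrated with a minimal frame-feature regression model. The sketch below is purely illustrative and is not the competition's starter-kit or an official baseline; it assumes PyTorch and torchvision are available, that frames have already been decoded into a tensor, and that the regression head would be trained on videos labeled with subjective realism scores (training is omitted). Such a predictor would typically be evaluated by comparing its outputs with human scores, for example via rank correlation.

```python
# A minimal, illustrative VRA baseline: average per-frame CNN features,
# then regress a single realism score. Purely a sketch; not the official
# starter-kit. Assumes frames are already decoded into a tensor.
import torch
import torch.nn as nn
import torchvision.models as models


class FrameAverageVRA(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # use pretrained weights in practice
        backbone.fc = nn.Identity()               # 512-d per-frame features
        self.backbone = backbone
        self.head = nn.Linear(512, 1)             # scalar realism score

    def forward(self, frames):                    # frames: (T, 3, 224, 224)
        feats = self.backbone(frames)             # (T, 512)
        video_feat = feats.mean(dim=0)            # temporal average pooling
        return self.head(video_feat)              # (1,) predicted realism


if __name__ == "__main__":
    video = torch.rand(16, 3, 224, 224)           # stand-in for 16 decoded frames
    model = FrameAverageVRA().eval()
    with torch.no_grad():
        score = model(video)
    print(f"predicted realism score: {score.item():.3f}")
```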

Organizers: Bo Peng, Jing Dong, Caiyong Wang, Wei Wang, Zhenan Sun

Website: https://codalab.lisn.upsaclay.fr/competitions/10754

8th Edition of International Fingerprint Liveness Detection Competition

In developing biometric systems, studying the techniques needed to preserve a system's integrity, and thus guarantee its security, is a crucial problem. Fingerprint authentication systems are highly vulnerable to artificial reproductions of fingerprints made of materials such as silicone, gelatine or latex, also called fake fingerprints or presentation attacks. To counteract this possibility, fingerprint liveness (presentation attack) detection is a discipline aimed at designing pattern recognition-based algorithms for distinguishing between live and fake fingerprints.
The International Fingerprint Liveness Detection competition is at its 8th edition. Its relevance is acknowledged by the scientific community, and it is of great interest to many private companies involved in fingerprint liveness detection. Two basic challenges are investigated in this edition: the integration of presentation attack detection algorithms into matchers, and the ability of the proposed solutions to provide compact and effective embeddings for direct use on mobile devices. A third challenge presents competitors with two “unknown” sensors, for which only live samples are provided in the training set. A novel dataset is provided for the final test, whilst participants can train their solutions on the provided training set. The LivDet competition aims to evaluate the robustness and limits of fingerprint recognition and presentation attack detection algorithms against presentation attacks and to identify new attack methods in order to prevent them.

Organizers: Giulia Orrù, Gian Luca Marcialis, Roberto Casula, Sara Concas, Simone Maurizio La Cava, Marco Micheletto, Andrea Panzino

Website: https://livdet.diee.unica.it/

Liveness Detection Competition – Iris (LivDet-Iris 2023)

LivDet-Iris 2023 will be the fifth competition in the LivDet-Iris series, offering (a) an independent assessment of the current state of the art in iris Presentation Attack Detection algorithms and (b) an evaluation protocol, including publicly available datasets of spoof and live iris images, that researchers can follow after the competition closes to compare their solutions with the LivDet-Iris winners and baselines.
This competition will have three parts. Competitors can participate in one, two or all parts. For each part, a separate winner demonstrating the best performance will be announced.
Part 1: Algorithms-Self-Tested will involve self-evaluation (that is, evaluation carried out by the competitors) on a sequestered, never-before-published test dataset incorporating a large number of ISO-compliant live and fake samples, including samples synthesized by modern Generative Adversarial Network-based models (StyleGAN2 and StyleGAN3) and near-infrared pictures of various artefacts simulating physical iris presentation attacks.
Part 2: Algorithms-Independently-Tested will involve the evaluation, performed by the organizers, of software solutions submitted by the competitors, on a sequestered dataset similar in terms of attack types to the one used in Part 1.
Part 3: Systems will involve the systematic testing of submitted iris recognition systems based on physical artifacts presented to the sensors.

Organizers: Adam Czajka, Patrick Tinsley, Mahsa Mitcheff, Patrick Flynn, Kevin Bowyer, Stephanie Schuckers, Masudul Haider Imtiaz, Sandip Purnapatra, Surendra Singh, Naveenkumar Venkataswamy

Website: https://livdetiris23.github.io/

The Unconstrained Ear Recognition Challenge 2023 – Maximizing Performance and Minimizing Bias (UERC)

The Unconstrained Ear Recognition Challenge 2023 aims to promote research on bias-aware ear recognition, addressing demographic bias and fostering the development of fair and unbiased recognition techniques that can be deployed in practice. The competition will evaluate submitted recognition models on both recognition performance and demographic bias, and will encourage the development of explicit bias mitigation mechanisms. The competition will consist of two tracks, with a dataset, evaluation tool, and baseline models provided for each. The first track will evaluate ear recognition models in unconstrained environments, with both recognition and bias scores contributing to the overall ranking. The second track will address bias mitigation strategies: a baseline ResNet model is provided, and participants are tasked with designing schemes that reduce its initial bias without adversely affecting performance. The testing data will be sequestered and consist of six groups covering all ethnicity-gender combinations. The expected outcomes of the competition include more accurate and unbiased ear recognition models and increased awareness and understanding of demographic bias in the field. The competition will be the third in the series of Unconstrained Ear Recognition Challenges and will provide a novel benchmark for comparison among researchers in the field.
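As a concrete, purely illustrative example of how recognition performance and demographic bias can be summarized jointly, the sketch below computes per-group accuracy and reports the mean across groups as a performance score and the standard deviation across groups as a simple bias indicator. The group names and numbers are made up; the actual UERC 2023 metrics are those defined by the organizers' evaluation tool.

```python
# Illustrative only: per-group accuracy and a simple dispersion-based bias
# indicator. The actual UERC 2023 metrics are defined by the organizers'
# evaluation tool; group names and scores here are made up.
from statistics import mean, pstdev

# hypothetical per-group recognition accuracies (ethnicity-gender groups)
group_accuracy = {
    "group_1": 0.91, "group_2": 0.88, "group_3": 0.93,
    "group_4": 0.84, "group_5": 0.90, "group_6": 0.86,
}

recognition_score = mean(group_accuracy.values())    # overall performance
bias_score = pstdev(group_accuracy.values())         # spread across groups

print(f"mean accuracy across groups: {recognition_score:.3f}")
print(f"bias (std. dev. across groups): {bias_score:.3f}")
```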

Organizers: Žiga Emeršič, Vitomir Štruc, Peter Peer, Hazım Kemal Ekenel, Guillermo Cámara Chávez

Website: http://uerc.fri.uni-lj.si/

Sclera Segmentation and Recognition Benchmarking Competition (SSRBC 2023)

Sclera biometrics have gained significant popularity among emerging ocular traits in the last few years. To evaluate the potential of this trait, a considerable amount of research has been presented in the literature, employing the sclera both individually and in combination with the iris. In spite of these initiatives, sclera biometrics need to be studied more extensively to ascertain their usefulness. Moreover, the sclera segmentation task still requires significant attention, due to challenges with the performance of existing techniques when sclera recognition is performed in cross-sensor and cross-resolution scenarios. In order to investigate these challenges, document recent developments and attract the interest of researchers, we are planning to host the next Sclera Segmentation and Recognition Benchmarking Competition, SSRBC 2023. SSRBC 2023 will be the 7th in the series of sclera (segmentation and recognition) benchmarking competitions, following SSBC 2015, SSRBC 2016, SSERBC 2017, SSBC 2018, SSBC 2019 and SSBC 2020, held in conjunction with BTAS 2015, ICB 2016, IJCB 2017, ICB 2018, ICB 2019 and IJCB 2020, respectively. Due to the overwhelming success of these editions, we plan to organize the proposed competition to benchmark sclera segmentation and recognition jointly, with both cross-sensor and low- and high-resolution images.

Organizers: Abhijit Das, Aritra Mukherjee, Umapada Pal, Peter Peer, Vitomir Štruc

Website: https://sites.google.com/hyderabad.bits-pilani.ac.in/ssrbc2023/home?pli=1

Competition on Efficient Face Recognition (IJCB: EFaR-2023)

Biometric authentication systems have become standard, especially in mobile devices. Facial recognition in particular has established itself as a key feature for unlocking smartphones or encrypted data on the device. However, current face recognition systems use deep neural networks with many parameters to ensure good recognition performance for a globally deployed system with as little loss of usability as possible. These systems are limited in their application by the high computational effort required, whether because of the shared resource environment of the mobile phone, in which computing capacity and battery are shared with other programs, or because of an overly long execution time, which reduces usability for the user. Therefore, there is a great need for face recognition systems that offer high performance at the lowest possible computational cost. The aim of this competition is to evaluate the latest state-of-the-art approaches for efficient and lightweight face recognition and to motivate the development of novel techniques.
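To make the notion of computational cost concrete, the sketch below reports two quantities commonly used to characterize lightweight recognition models: the parameter count and a rough single-image CPU inference time. This is an illustration only and not the competition's measurement protocol; MobileNetV3-Small is used merely as a stand-in for a compact face embedding backbone, and PyTorch/torchvision are assumed to be available.

```python
# Illustrative footprint check for a lightweight embedding network.
# MobileNetV3-Small stands in for a compact face recognition backbone;
# the competition's own efficiency metrics may differ.
import time
import torch
import torchvision.models as models

model = models.mobilenet_v3_small(weights=None).eval()
params_m = sum(p.numel() for p in model.parameters()) / 1e6

face = torch.rand(1, 3, 112, 112)          # a single aligned face crop
with torch.no_grad():
    start = time.perf_counter()
    _ = model(face)
    elapsed_ms = (time.perf_counter() - start) * 1e3

print(f"parameters: {params_m:.2f} M")
print(f"single-image CPU inference: {elapsed_ms:.1f} ms")
```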

Organizers: Jan Niklas Kolf, Fadi Boutros, Naser Damer

Website: https://sites.google.com/view/ijcb-2023-efar/

Liveness Detection Competition – 2023 for Noncontact Fingerprint Algorithms and Systems (LivDet – 2023 Noncontact Fingerprint)

In recent years, demand for contactless fingerprint systems has been rising because they can turn any device with a camera into a fingerprint reader, so smartphones can easily be used as the sensor. Yet, before the benefits of contactless methods can be fully utilized, the biometric community needs to resolve a few concerns, such as the resiliency of these systems against presentation attacks. One of the major obstacles to testing contactless fingerprint systems is the limited availability of public datasets with an adequate variety of spoof and live data. The lack of data and of opportunities to test system resilience can leave these systems vulnerable, which can lead to serious security breaches. The LivDet-2023 noncontact fingerprint competition will serve as an important benchmark for presentation attack detection (PAD, also called liveness detection) for noncontact-based fingerprint systems and will offer (a) an independent assessment of the current state of the art in noncontact fingerprint PAD, and (b) an evaluation protocol and a shared state-of-the-art PAD dataset that researchers can easily access after the competition closes to compare their solutions with the LivDet-2023 noncontact fingerprint algorithm and system competition winners.

Organizers: Sandip Purnapatra, Stephanie Schuckers, Srirangaraj Setlur, Bhavin Jawade, Soumyabrata Dey

Website: https://noncontactfingerprint2023.livdet.org/index.php

Face Presentation Attack Detection Based on Privacy-aware Synthetic Training Data (IJCB: SynFacePAD-2023)

Significant progress has been made in face presentation attack detection (PAD), which aims to secure face recognition systems against presentation attacks, owing to the availability of several face PAD datasets. However, all available datasets are based on privacy- and legally-sensitive authentic biometric data with a limited number of subjects. The need for face PAD datasets that prioritize the privacy of individuals, promote data sharing within the research community, and ensure the reproducibility and continuity of face PAD research motivated us to hold the SynFacePAD competition. This was also enabled by the recent release of a synthetic PAD development dataset, SynthASpoof [1], which demonstrated the soundness of such a solution. The SynFacePAD competition aims to attract and showcase technical solutions that improve the accuracy of face PAD, and it is the first biometric PAD competition to be based completely and restrictively on synthetic training data. The intended impact of the competition is to promote the development of privacy-aware biometric solutions and to advance PAD solutions in terms of detection performance.

[1] Meiling Fang, Marco Huber, and Naser Damer. SynthASpoof: Developing Face Presentation Attack Detection Based on Privacy-friendly Synthetic Data. 2023.

Organizers: Meiling Fang, Marco Huber, Julian Fierrez, Raghavendra Ramachandra, Naser Damer

Website: https://sites.google.com/view/ijcb-synfacepad-2023

AG-ReID2023: Aerial-Ground Person ReID Challenge

The AG-ReID2023 competition is an exciting opportunity for participants to showcase their expertise in person re-identification, particularly across aerial-ground environments. This novel challenge requires participants to develop effective algorithms to re-identify specific individuals across aerial and ground imagery. What sets this competition apart is the use of the new large-scale AG-ReID dataset, which comprises 100,502 frames of 1,615 identities collected using a UAV flying at multiple altitudes ranging from 15 to 45 meters, a ground-based CCTV camera, and a wearable camera on smart glasses on a university campus. When combined with other challenges in camera resolutions, occlusion, and lighting, the differences between the elevated view of aerial cameras and the horizontal view of ground cameras provide a new research area to develop practical and robust person re-identification systems.
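At its core, aerial-ground re-identification matches a query image from one view against a gallery from the other view by comparing appearance embeddings. The sketch below is a generic illustration of that matching step, not the challenge's baseline or evaluation code; it assumes the embeddings have already been produced by some trained feature network and uses random vectors as stand-ins.

```python
# Generic cross-view ReID matching step: rank gallery entries by cosine
# similarity to a query embedding. Embeddings are random stand-ins for
# features produced by a trained network.
import numpy as np

rng = np.random.default_rng(0)
query = rng.standard_normal(256)                 # aerial-view query embedding
gallery = rng.standard_normal((1000, 256))       # ground-view gallery embeddings

# cosine similarity between the query and every gallery embedding
gallery_norm = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
query_norm = query / np.linalg.norm(query)
similarity = gallery_norm @ query_norm

ranking = np.argsort(-similarity)                # best matches first
print("top-5 gallery indices:", ranking[:5])
```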

Organizers: Kien Nguyen Thanh, Thanh Nhat Huy Nguyen, Clinton Fookes, Sridha Sridharan, Feng Liu, Xiaoming Liu, Arun Ross, Dana Michalski

Website: https://www.kaggle.com/t/2018fa71974d4143a0306fd833bbaeaa