We propose selective debiasing – an inference-time safety mechanism designed to enhance the overall model quality in terms of prediction performance and fairness, especially in scenarios where retraining the model is impractical. The method draws inspiration from selective classification, where low-quality predictions, as indicated by their uncertainty scores, are discarded at inference time. In our approach, we identify potentially biased model predictions and, instead of discarding them, remove bias from them using LEACE – a post-processing debiasing method. To select problematic predictions, we propose a bias quantification approach based on KL divergence, which achieves better results than standard uncertainty quantification methods. Experiments on text classification datasets with encoder-based classification models demonstrate that selective debiasing helps reduce the performance gap between post-processing methods and debiasing techniques from the at-training and pre-processing categories.
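The following is a minimal sketch of how the selection-and-replacement step described above could look, assuming class logits have already been computed twice: once from the original representations and once from LEACE-erased representations. The function name `selective_debiasing`, the `rejection_rate` parameter, and the use of KL divergence between the original and debiased predictive distributions as the bias score reflect a reading of the abstract, not the authors' released code.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # Per-example KL(p || q) between two categorical distributions.
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def selective_debiasing(orig_logits, debiased_logits, rejection_rate=0.2):
    """Keep the original predictions, but swap in the debiased ones for the
    examples whose bias score (KL divergence between the original and
    debiased class distributions) is highest.

    orig_logits:     [n_examples, n_classes], from the original classifier head
    debiased_logits: [n_examples, n_classes], from the same head applied to
                     LEACE-erased representations (computed elsewhere)
    rejection_rate:  fraction of examples treated as potentially biased
                     (an illustrative hyperparameter, not from the paper)
    """
    p_orig = softmax(orig_logits)
    p_debiased = softmax(debiased_logits)

    # Bias score: how much the predictive distribution shifts under erasure.
    bias_scores = kl_divergence(p_orig, p_debiased)

    # Select the most biased examples and replace their predictions.
    n_select = int(np.ceil(rejection_rate * len(bias_scores)))
    biased_idx = np.argsort(-bias_scores)[:n_select]

    final_probs = p_orig.copy()
    final_probs[biased_idx] = p_debiased[biased_idx]
    return final_probs.argmax(axis=-1), bias_scores, biased_idx
```

In practice, the rejection rate (or a threshold on the bias score) would be tuned on a validation set to balance prediction performance against fairness.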
DOI: 10.18653/v1/2025.naacl-short.9
Download the paper (PDF) from the conference website (in English): https://aclanthology.org/2025.naacl-short.9/
ResearchGate: https://www.researchgate.net/publication/382654704_Inference-Time_Selective_Debiasing
Gleb Kuzmin, Neemesh Yadav, Ivan Smirnov, Timothy Baldwin, and Artem Shelmanov. 2025. Inference-Time Selective Debiasing to Enhance Fairness in Text Classification Models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 95–107, Albuquerque, New Mexico.