Applying temperature scaling to reduce DNN miscalibration

Date

2022

Abstract

Deep Neural Networks (DNNs) achieve state-of-the-art performance in a wide variety of fields. However, with modern architectural trends and increased network capacity, DNNs tend to be poorly calibrated. Only well-calibrated models can make probabilistic predictions that reflect real-world probabilities, so miscalibration limits the use of DNN-based models in critical decision-making scenarios. To obtain reliable probability scores from pre-trained models, several post-hoc techniques have been introduced, including histogram binning, Bayesian Binning into Quantiles, isotonic regression, and Platt scaling. Temperature Scaling (TS), introduced more recently, achieves superior performance compared to these techniques. However, TS does not work well with small validation datasets, and it struggles with both highly accurate and less accurate networks. DNN calibration therefore remains a research challenge. This research focuses on improving DNN calibration by modifying the state-of-the-art post-hoc calibration technique, Temperature Scaling. To overcome its limitations, a novel Class-wise Temperature Scaling technique is proposed: the optimum temperature for each class is found by binning the validation predictions by predicted class, and the confidence values are then rescaled using the class-wise temperature values. Class-wise Temperature Scaling can further reduce the calibration error of classification models, by up to 10% compared to the standard Temperature Scaling technique.
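
As a rough illustration of the idea described in the abstract, the following Python sketch shows how class-wise temperatures might be fitted on a validation set and then applied at prediction time. This is not the thesis's implementation: the helper names, the minimum bin size of 20, and the SciPy-based optimization are assumptions made for illustration, with a single global temperature (standard TS) used as the fallback for sparse classes.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the true labels after scaling logits by 1/T."""
    probs = softmax(logits / T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels):
    """Standard TS: one temperature minimizing validation NLL."""
    res = minimize_scalar(lambda T: nll(logits, labels, T),
                          bounds=(0.05, 10.0), method="bounded")
    return res.x

def fit_classwise_temperatures(val_logits, val_labels, num_classes):
    """Class-wise TS: bin validation samples by predicted class and fit a
    temperature per bin, falling back to the global temperature when a bin
    is too small (the threshold of 20 is an assumption)."""
    global_T = fit_temperature(val_logits, val_labels)
    preds = val_logits.argmax(axis=1)
    temps = np.full(num_classes, global_T)
    for c in range(num_classes):
        mask = preds == c
        if mask.sum() >= 20:
            temps[c] = fit_temperature(val_logits[mask], val_labels[mask])
    return temps

def calibrate(logits, temps):
    """Rescale each prediction with the temperature of its predicted class."""
    T = temps[logits.argmax(axis=1)]        # per-sample temperature, shape (N,)
    return softmax(logits / T[:, None])
```

In this sketch, standard Temperature Scaling corresponds to calling fit_temperature alone; the class-wise variant reduces to it whenever a class's prediction bin is too small to fit a reliable temperature.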

Citation

Hasantha, K. M. K. (2022). Applying temperature scaling to reduce DNN miscalibration [Master's thesis, University of Moratuwa]. Institutional Repository, University of Moratuwa. https://dl.lib.uom.lk/handle/123/23991
