%0 Journal Article
%T Using Cross Entropy as a Performance Metric for Quantifying Uncertainty in DNN Image Classifiers: An Application to Classification of Lung Cancer on CT Images
%A Eri Matsuyama
%A Masayuki Nishiki
%A Noriyuki Takahashi
%A Haruyuki Watanabe
%J Journal of Biomedical Science and Engineering
%P 1-12
%@ 1937-688X
%D 2024
%I Scientific Research Publishing
%R 10.4236/jbise.2024.171001
%X Cross entropy is a measure in machine learning and deep learning that assesses the difference between predicted and actual probability distributions. In this study, we propose cross entropy as a performance evaluation metric for image classifier models and apply it to the CT image classification of lung cancer. A convolutional neural network is employed as the deep neural network (DNN) image classifier, with the residual network (ResNet) 50 chosen as the DNN architecture. The image data used comprise a lung CT image set. Two classification models are built from datasets with varying amounts of data, and lung cancer is categorized into four classes using 10-fold cross-validation. Furthermore, we employ t-distributed stochastic neighbor embedding to visually explain the data distribution after classification. Experimental results demonstrate that cross entropy is a highly useful metric for evaluating the reliability of image classifier models. For a more comprehensive evaluation of model performance, however, cross entropy should be combined with other evaluation metrics.

%K Cross Entropy
%K Performance Metrics
%K DNN Image Classifiers
%K Lung Cancer
%K Prediction Uncertainty
%U http://www.scirp.org/journal/PaperInformation.aspx?PaperID=130521
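
For quick reference, the cross entropy the abstract refers to is conventionally defined as below; the notation is ours and the paper's exact formulation may differ.

\[
H(p, q) = -\sum_{i=1}^{C} p_i \log q_i
\]

Here \(p\) is the true (e.g., one-hot) label distribution, \(q\) is the classifier's predicted softmax distribution, and \(C\) is the number of classes (four lung cancer classes in this study); lower values indicate predictions that are both correct and confidently assigned.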