https://doi.org/10.65770/PKQL4153
ABSTRACT
The differentiation of benign and malignant skin lesions is an essential problem in medical image analysis, as it facilitates the diagnosis and treatment of skin cancer. Traditional deep learning approaches usually rely on pre-trained models that are not tailored to the specific dataset or task. To tackle this challenge, this work proposes a Custom Residual-based Depth-wise Separable Lightweight Inception Model with a Custom Attention Mechanism, combined with Explainable AI (XAI) techniques. The model is built from scratch using only bottleneck convolutions (1×1) and spatially separable convolutions (1×3, 3×1), which makes it lightweight and efficient. The approach develops a new architecture based on residual connections and multi-scale (Inception-style) feature extraction, with a novel attention mechanism that guides the network toward the most informative image features. Furthermore, XAI methods such as Grad-CAM, LIME, and SHAP are employed to analyze and explain the model’s output, facilitating transparency and reliability in the medical domain. The model is trained and tested on a publicly accessible skin lesion dataset, using extensive data preprocessing, data augmentation, and class balancing. The results show that the proposed model outperforms the baselines, achieving the highest accuracy, precision, recall, and F1-score, together with strong ROC-AUC and PR-AUC. The XAI visualizations further confirm that the model attends to clinically relevant regions of the images, making it more interpretable. In conclusion, a lightweight, interpretable, and efficient deep learning model for skin lesion classification is proposed. The combination of the lightweight architecture and XAI methodologies ensures both strong performance and transparency, making it an effective tool for medical image analysis.
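To make the architectural description concrete, below is a minimal PyTorch sketch of one residual, Inception-style block built only from 1×1 bottlenecks and spatially separable 1×3/3×1 convolutions, with a squeeze-and-excitation-style channel attention standing in for the paper's custom attention mechanism. The class names, two-branch layout, and reduction factor are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (a stand-in
    for the paper's custom attention mechanism)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels by learned importance

class LightweightInceptionBlock(nn.Module):
    """Residual Inception-style block using only 1x1 bottlenecks
    and spatially separable 1x3 / 3x1 convolutions."""
    def __init__(self, in_ch, out_ch):  # out_ch must be even here
        super().__init__()
        mid = out_ch // 2
        # Branch 1: 1x1 bottleneck only.
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
        )
        # Branch 2: 1x1 bottleneck, then a 3x3 receptive field
        # factorized into 1x3 followed by 3x1.
        self.branch2 = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, (1, 3), padding=(0, 1)),
            nn.Conv2d(mid, mid, (3, 1), padding=(1, 0)),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
        )
        self.attn = ChannelAttention(out_ch)
        # 1x1 projection on the skip path when channel counts differ.
        self.skip = (nn.Conv2d(in_ch, out_ch, 1)
                     if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = torch.cat([self.branch1(x), self.branch2(x)], dim=1)
        out = self.attn(out)
        return torch.relu(out + self.skip(x))  # residual connection

# Usage: block = LightweightInceptionBlock(32, 64)
#        y = block(torch.randn(1, 32, 224, 224))  # -> (1, 64, 224, 224)
```

Factorizing a 3×3 convolution into 1×3 and 3×1 cuts its parameter count by roughly a third while preserving the receptive field, which is what makes the block lightweight.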
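Of the XAI methods mentioned, Grad-CAM depends most directly on the network architecture. The sketch below shows the standard Grad-CAM computation (gradient-weighted activation maps) against an arbitrary convolutional layer; the function name and hook-based implementation are illustrative, as the abstract does not specify the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Minimal Grad-CAM: weight the target layer's activations by the
    spatial average of their gradients w.r.t. the class score."""
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    try:
        logits = model(image)                       # image: (1, C, H, W)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()
        a, g = acts[0], grads[0]                    # each (1, K, h, w)
        weights = g.mean(dim=(2, 3), keepdim=True)  # GAP over space
        cam = F.relu((weights * a).sum(dim=1))      # (1, h, w)
        return (cam / (cam.max() + 1e-8)).detach()  # normalize to [0, 1]
    finally:
        h1.remove()
        h2.remove()
```

Upsampling the returned map to the input resolution and overlaying it on the dermoscopic image is what produces the heatmaps used to check that the model attends to clinically relevant lesion regions.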