A Comprehensive Review of Knowledge Distillation for Lightweight Medical Image Segmentation
DOI: https://doi.org/10.59247/jahir.v2i2.294

Keywords: Knowledge Distillation, Medical Image Segmentation, Model Compression, Lightweight Deep Learning, Comprehensive Review

Abstract
Medical image segmentation plays a crucial role in computer-aided diagnosis by enabling precise identification of anatomical and pathological structures. While deep learning models have significantly improved segmentation accuracy, their high computational complexity limits deployment in resource-constrained environments, such as mobile healthcare and edge computing. Knowledge Distillation (KD) has emerged as an effective model compression technique, allowing a lightweight student model to inherit knowledge from a complex teacher model while maintaining high segmentation performance. This review systematically examines key KD techniques, including Response-Based, Feature-Based, and Relation-Based Distillation, and analyzes their advantages and limitations. Major challenges in KD, such as boundary preservation, domain generalization, and computational trade-offs, are explored in the context of lightweight model development. Additionally, emerging trends, including the integration of KD with Transformers, Federated Learning, and Self-Supervised Learning, are discussed to highlight future directions in efficient medical image segmentation. By providing a comprehensive analysis of KD for lightweight segmentation models, this review aims to guide the development of deep learning solutions that balance accuracy, efficiency, and real-world applicability in medical imaging.
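To make the Response-Based Distillation mentioned above concrete, the sketch below implements the classic soft-target loss (Hinton et al., 2015) in plain NumPy: the teacher's per-pixel class logits are softened with a temperature T, and the student is penalized by the KL divergence from those soft targets, scaled by T². This is a minimal illustration of the general technique, not the specific training recipe of any surveyed paper; the array shapes and the epsilon constant are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last (class) axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def response_kd_loss(student_logits, teacher_logits, T=4.0):
    """Response-based KD loss: KL(teacher || student) on
    temperature-softened class probabilities, averaged over pixels
    and scaled by T^2 so gradients stay comparable across T.

    Both inputs are arrays of shape (num_pixels, num_classes)."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    eps = 1e-12                     # guard against log(0)
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    return (T ** 2) * kl.mean()
```

In practice this term is combined with the ordinary supervised segmentation loss (e.g. cross-entropy or Dice against ground-truth masks); feature-based and relation-based variants instead match intermediate activations or pairwise similarity structures between teacher and student.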
References
A. M. Breesam, S. R. Adnan, and S. M. Ali, “Segmentation and Classification of Medical Images Using Artificial Intelligence: A Review,” Al-Furat Journal of Innovations in Electronics and Computer Engineering, vol. 3, no. 2, pp. 299–320, Jul. 2024, doi: 10.46649/fjiece.v3.2.20a.29.5.2024.
S. Mishra, H. K. Tripathy, and B. Acharya, “A Precise Analysis of Deep Learning for Medical Image Processing,” 2021, pp. 25–41. doi: 10.1007/978-981-15-5495-7_2.
Md. B. Hossain, N. Gong, and M. Shaban, “Computational Complexity Reduction Techniques for Deep Neural Networks: A Survey,” in 2023 IEEE International Conference on Artificial Intelligence, Blockchain, and Internet of Things (AIBThings), IEEE, Sep. 2023, pp. 1–6. doi: 10.1109/AIBThings58340.2023.10292477.
J. Wong and Q. Zhang, “Deep Knowledge Distillation Learning for Efficient Wearable Data Mining on the Edge,” in 2023 IEEE International Conference on Consumer Electronics (ICCE), IEEE, Jan. 2023, pp. 1–3. doi: 10.1109/ICCE56470.2023.10043546.
J. Gou, B. Yu, S. J. Maybank, and D. Tao, “Knowledge Distillation: A Survey,” Int J Comput Vis, vol. 129, no. 6, pp. 1789–1819, Jun. 2021, doi: 10.1007/s11263-021-01453-z.
S. Zhang, C. Chen, Q. Xie, H. Sun, F. Dong, and S. Peng, “Distribution Unified and Probability Space Aligned Teacher-Student Learning for Imbalanced Visual Recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 4, pp. 2414–2425, Apr. 2024, doi: 10.1109/TCSVT.2023.3311142.
J. Kim, S. Chang, and N. Kwak, “PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation,” in Interspeech 2021, ISCA: ISCA, Aug. 2021, pp. 4568–4572. doi: 10.21437/Interspeech.2021-248.
J. Kim, “Quantization Robust Pruning With Knowledge Distillation,” IEEE Access, vol. 11, pp. 26419–26426, 2023, doi: 10.1109/ACCESS.2023.3257864.
Y. Qi, W. Zhang, X. Wang, X. You, S. Hu, and J. Chen, “Efficient Knowledge Distillation for Brain Tumor Segmentation,” Applied Sciences, vol. 12, no. 23, p. 11980, Nov. 2022, doi: 10.3390/app122311980.
Z. Zheng and G. Kang, “Model Compression with NAS and Knowledge Distillation for Medical Image Segmentation,” in 2021 4th International Conference on Data Science and Information Technology, New York, NY, USA: ACM, Jul. 2021, pp. 173–176. doi: 10.1145/3478905.3478940.
X. Zeng et al., “DSP-KD: Dual-Stage Progressive Knowledge Distillation for Skin Disease Classification,” Bioengineering, vol. 11, no. 1, p. 70, Jan. 2024, doi: 10.3390/bioengineering11010070.
V. Gorade, S. Mittal, D. Jha, and U. Bagci, “Rethinking Intermediate Layers Design in Knowledge Distillation for Kidney and Liver Tumor Segmentation,” in 2024 IEEE International Symposium on Biomedical Imaging (ISBI), IEEE, May 2024, pp. 1–6. doi: 10.1109/ISBI56570.2024.10635141.
X. Shi, Y. Li, J. Cheng, J. Bai, G. Zhao, and Y.-W. Chen, “Knowledge Distillation Using Segment Anything to U-Net Model for Lightweight High-Accuracy Medical Image Segmentation,” in 2024 IEEE 13th Global Conference on Consumer Electronics (GCCE), IEEE, Oct. 2024, pp. 1073–1076. doi: 10.1109/GCCE62371.2024.10760506.
Q. Chen and X. Yu, “Feature Denoising Distillation for Medical Image Segmentation,” in Fourth International Conference on Image Processing and Intelligent Control (IPIC 2024), K. Du and A. bin Mohd Zain, Eds., SPIE, Aug. 2024, p. 42. doi: 10.1117/12.3038513.
Y. Wen, L. Chen, S. Xi, Y. Deng, X. Tang, and C. Zhou, “Towards Efficient Medical Image Segmentation Via Boundary-Guided Knowledge Distillation,” in 2021 IEEE International Conference on Multimedia and Expo (ICME), IEEE, Jul. 2021, pp. 1–6. doi: 10.1109/ICME51207.2021.9428395.
L. Lin et al., “FedLPPA: Learning Personalized Prompt and Aggregation for Federated Weakly-supervised Medical Image Segmentation,” IEEE Trans Med Imaging, pp. 1–1, 2024, doi: 10.1109/TMI.2024.3483221.
O. S. EL-Assiouti, G. Hamed, D. Khattab, and H. M. Ebied, “HDKD: Hybrid data-efficient knowledge distillation network for medical image classification,” Eng Appl Artif Intell, vol. 138, p. 109430, Dec. 2024, doi: 10.1016/j.engappai.2024.109430.
L. Xu, Z. Wang, W. Song, Y. Ji, and C. Liu, “SPARK: Cross-Guided Knowledge Distillation with Spatial Position Augmentation for Medical Image Segmentation,” 2025, pp. 431–445. doi: 10.1007/978-981-97-8496-7_30.
X. Qi et al., “Exploring Generalizable Distillation for Efficient Medical Image Segmentation,” IEEE J Biomed Health Inform, vol. 28, no. 7, pp. 4170–4183, Jul. 2024, doi: 10.1109/JBHI.2024.3385098.
Y. Zhang, S. Li, and X. Yang, “Knowledge Distillation with Active Exploration and Self-Attention Based Inter-Class Variation Transfer for Image Segmentation,” in ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, Jun. 2023, pp. 1–5. doi: 10.1109/ICASSP49357.2023.10097262.
P. Liang, J. Chen, Q. Chang, and L. Yao, “RSKD: Enhanced medical image segmentation via multi-layer, rank-sensitive knowledge distillation in Vision Transformer models,” Knowl Based Syst, vol. 293, p. 111664, Jun. 2024, doi: 10.1016/j.knosys.2024.111664.
X. Qi, G. Yang, Y. He, W. Liu, A. Islam, and S. Li, “Contrastive Re-localization and History Distillation in Federated CMR Segmentation,” 2022, pp. 256–265. doi: 10.1007/978-3-031-16443-9_25.
D. Qing and L. Qi, “Research on Lightweight Spine X-ray Image Segmentation Algorithm Based on Knowledge Distillation,” in Proceedings of the 2024 4th International Conference on Bioinformatics and Intelligent Computing, New York, NY, USA: ACM, Jan. 2024, pp. 142–146. doi: 10.1145/3665689.3665713.
L. Serrador, F. P. Villani, S. Moccia, and C. P. Santos, “Knowledge distillation on individual vertebrae segmentation exploiting 3D U-Net,” Computerized Medical Imaging and Graphics, vol. 113, p. 102350, Apr. 2024, doi: 10.1016/j.compmedimag.2024.102350.
License
Copyright (c) 2024 Asmat Burhan, Purwono Purwono

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
All articles published in the JAHIR Journal are licensed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. This license grants the following permissions and obligations:
1. Permitted Uses:
- Sharing – You may copy and redistribute the material in any medium or format.
- Adaptation – You may remix, transform, and build upon the material for any purpose, including commercial use.
2. Conditions of Use:
- Attribution – You must give appropriate credit to the original author(s), provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in a way that suggests the licensor endorses you or your use.
- ShareAlike – If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original (CC BY-SA 4.0).
- No Additional Restrictions – You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
3. Disclaimer:
- The JAHIR Journal and the authors are not responsible for any modifications, interpretations, or derivative works made by third parties using the published content.
- This license does not affect the ownership of copyrights, and authors retain full rights to their work.
For further details, please refer to the official Creative Commons Attribution-ShareAlike 4.0 International License.