A Bibliometric Analysis of Knowledge Distillation in Medical Image Segmentation
DOI: https://doi.org/10.59247/jahir.v2i3.297

Keywords: Knowledge Distillation, Medical Image Segmentation

Abstract
This study conducts a bibliometric analysis and systematic review of research trends in the application of knowledge distillation to medical image segmentation. A total of 806 studies from 343 distinct sources, published between 2019 and 2023, were analyzed using Publish or Perish and VOSviewer, with data retrieved from Scopus and Google Scholar. The findings indicate a rising trend in publications indexed in Scopus, whereas a decline was observed in Google Scholar. Citation analysis revealed that the United States and China are the leading contributors in terms of both publication volume and citation impact. Previous research predominantly focused on optimizing knowledge distillation techniques and deploying them on resource-constrained devices. Keyword analysis showed that "medical image segmentation" appeared most frequently, with 144 occurrences, followed by "medical imaging" with 110 occurrences. This study highlights emerging research opportunities, particularly in leveraging knowledge distillation for U-Net architectures with large-scale datasets and integrating transformer models to enhance medical image segmentation performance.
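For readers unfamiliar with the technique surveyed here, the sketch below illustrates how knowledge distillation is typically applied to a segmentation network: a compact student is trained to match both the ground-truth masks and the softened per-pixel predictions of a larger teacher. This is a minimal PyTorch illustration using a toy U-Net-style encoder-decoder; the architectures, temperature, and loss weighting are illustrative assumptions and are not taken from any of the reviewed studies.

# Minimal, illustrative knowledge distillation for medical image segmentation.
# The teacher/student sizes, temperature T, and weight alpha are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, a basic U-Net-style building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinySegNet(nn.Module):
    """A small encoder-decoder segmentation network (stand-in for U-Net)."""

    def __init__(self, channels: int, num_classes: int = 2):
        super().__init__()
        self.enc = conv_block(1, channels)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(channels, channels * 2)
        self.up = nn.ConvTranspose2d(channels * 2, channels, 2, stride=2)
        self.dec = conv_block(channels * 2, channels)
        self.head = nn.Conv2d(channels, num_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.head(d)  # per-pixel class logits


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Pixel-wise KD: soft-target KL divergence plus ordinary cross-entropy."""
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce


if __name__ == "__main__":
    teacher = TinySegNet(channels=64).eval()   # larger model, pretrained in practice
    student = TinySegNet(channels=16)          # compact model for edge deployment
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    images = torch.randn(2, 1, 64, 64)          # dummy grayscale scans
    labels = torch.randint(0, 2, (2, 64, 64))   # dummy segmentation masks

    with torch.no_grad():                       # teacher provides soft targets only
        t_logits = teacher(images)
    optimizer.zero_grad()
    s_logits = student(images)
    loss = distillation_loss(s_logits, t_logits, labels)
    loss.backward()
    optimizer.step()
    print(f"distillation loss: {loss.item():.4f}")

The KL term is scaled by T squared, the standard correction that keeps the soft-target gradient magnitude comparable to the cross-entropy term as the temperature changes.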
License
Copyright (c) 2024 Novita Ranti Muntiari, Purwono Purwono, Rania Majdoubi

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
All articles published in the JAHIR Journal are licensed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. This license grants the following permissions and imposes the following conditions:
1. Permitted Uses:
- Sharing – You may copy and redistribute the material in any medium or format.
- Adaptation – You may remix, transform, and build upon the material for any purpose, including commercial use.
2. Conditions of Use:
- Attribution – You must give appropriate credit to the original author(s), provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in a way that suggests the licensor endorses you or your use.
- ShareAlike – If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original (CC BY-SA 4.0).
- No Additional Restrictions – You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
3. Disclaimer:
- The JAHIR Journal and the authors are not responsible for any modifications, interpretations, or derivative works made by third parties using the published content.
- This license does not affect the ownership of copyrights, and authors retain full rights to their work.
For further details, please refer to the official Creative Commons Attribution-ShareAlike 4.0 International License.