Intelligent navigation systems: Adaptive endoscopic navigation strategies for complex intestinal environments incorporating biomechanics
Abstract
The advancement of endoscopic visual navigation is critical for the accurate diagnosis and treatment of colonic disease. Traditional techniques, however, adapt poorly to complex colonic conditions: their limited ability to respond to different situations results in less accurate navigation planning. This study proposes a navigation system that overcomes this limitation and improves navigation adaptability. An adaptive strategy based on a multidimensional discrimination method guides the endoscope through complex colonic environments. The strategy accounts for the biomechanical properties of the colon, such as tissue flexibility and kinematic characteristics, enhancing navigation accuracy. In addition, dedicated navigation-point calculation strategies were developed for colonic collapses and tumors to ensure effective navigation under varied biomechanical conditions. In simulated tests on a colon model spanning multiple scenarios, the system achieved an overall success rate of 92.5%, with average deviations of 3.15 mm horizontally and 2.51 mm vertically. The system is also easy to operate, reducing reliance on operator experience and protracted training. By incorporating biomechanical principles, this study not only improves the accuracy of endoscopic navigation but also offers a new perspective on the treatment of colonic diseases and underscores the importance of biomechanics in clinical applications.
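The two-stage pattern described above, a multidimensional discriminator that routes each frame to a scenario-specific navigation-point rule, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all feature names (`dark_region_area`, `texture_contrast`, centroids, `fold_convergence`), thresholds, and the image-to-millimetre scale are hypothetical assumptions.

```python
def classify_scene(features):
    """Route a frame by simple multidimensional thresholds (illustrative values)."""
    if features["dark_region_area"] > 0.05:
        return "lumen"      # open lumen: the dark region marks the path ahead
    if features["texture_contrast"] > 0.6:
        return "tumor"      # strong local texture taken as a lesion cue
    return "collapse"       # otherwise treat the segment as collapsed

def navigation_point(scene, features, frame_center=(0.5, 0.5)):
    """Pick a 2D navigation target in normalized image coordinates per scenario."""
    if scene == "lumen":
        return features["dark_region_centroid"]
    if scene == "tumor":
        # Steer away from the lesion: reflect its centroid about the frame center
        cx, cy = features["lesion_centroid"]
        return (2 * frame_center[0] - cx, 2 * frame_center[1] - cy)
    # Collapse: head toward the convergence point of the mucosal fold lines
    return features["fold_convergence"]

def deviation_mm(pred, truth, mm_per_unit=50.0):
    """Horizontal/vertical deviation in mm, given an assumed image-to-mm scale."""
    return (abs(pred[0] - truth[0]) * mm_per_unit,
            abs(pred[1] - truth[1]) * mm_per_unit)
```

The `deviation_mm` helper mirrors how the reported horizontal and vertical deviations (3.15 mm and 2.51 mm on average) could be measured against ground-truth targets; the 50 mm-per-unit scale is purely an assumption for the sketch.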
Copyright (c) 2025 Author(s)

This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright on all articles published in this journal is retained by the author(s), who grant the publisher the right of first publication. Articles are licensed under the Creative Commons Attribution 4.0 International License, which permits sharing, adaptation, and distribution provided that the original published version is cited.