Mechanical mechanisms and optimization strategies for the interaction between robotic-arm motion precision and biological tissues in ex vivo medical device diagnostics
Abstract
This paper designs a control system for a medical robotic arm that combines deep reinforcement learning (DRL) with compliant control. Through continuous interaction between the robotic arm and its environment, the system collects data via a trial-and-error mechanism and incrementally optimizes its control policy. To limit hardware cost, training time, and safety risks, the model is trained in a physics-engine-based simulator, and the trained model is then transferred to the physical robot for verification. To ensure seamless communication between software components, the Robot Operating System (ROS) was chosen as the development platform, yielding a modular, distributed system that is easy to test and modify. Experimental results show that both the maximum distance error and the repeated positioning accuracy improve markedly after Modified Denavit-Hartenberg (MDH) parameter calibration.
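The MDH calibration mentioned above rests on chaining one homogeneous transform per joint. As a minimal illustration of how corrected MDH parameters propagate to end-effector position, the sketch below implements the per-link transform of the Modified DH (Craig) convention and a forward-kinematics chain in plain Python; the two-link planar arm in the usage example is a hypothetical parameter set, not the arm from the paper.

```python
import math

def mdh_transform(alpha, a, d, theta):
    """Homogeneous transform from frame i-1 to frame i, Modified DH (Craig) convention.

    alpha, a: twist and link length of the previous link; d, theta: offset and angle of joint i.
    """
    ca, sa = math.cos(alpha), math.sin(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    return [
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0, 1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(mdh_rows):
    """Chain MDH transforms; mdh_rows is a list of (alpha, a, d, theta) tuples."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in mdh_rows:
        T = mat_mul(T, mdh_transform(*row))
    return T

# Hypothetical two-link planar arm (unit link lengths), second joint at 90 degrees;
# the final row is a fixed tool-frame offset along x.
T = forward_kinematics([(0, 0, 0, 0.0),
                        (0, 1, 0, math.pi / 2),
                        (0, 1, 0, 0.0)])
print(T[0][3], T[1][3], T[2][3])  # end-effector position: 1.0, 1.0, 0.0
```

Calibration then amounts to adjusting the (alpha, a, d, theta) rows so the predicted end-effector position matches measurements, which is why small MDH corrections directly reduce the distance error reported in the experiments.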
Copyright (c) 2025 Author(s)

This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright on all articles published in this journal is retained by the author(s), while the author(s) grant the publisher, as the original publisher, the right to publish the article.
Articles published in this journal are licensed under a Creative Commons Attribution 4.0 International License, which means they may be shared, adapted, and distributed provided that the original published version is cited.