Deep reinforcement learning and biomechanical modeling are integrated to optimize the scheduling problem of intelligent logistics and warehousing robots

  • Shuzhao Dong, Institute of Logistics Science and Engineering, Shanghai Maritime University, Shanghai 201306, China
Keywords: deep reinforcement learning; biomechanical modeling; intelligent logistics; warehousing robots; scheduling optimization; energy efficiency
Article ID: 1507

Abstract

This study introduces an approach to optimizing the scheduling of intelligent logistics and warehousing robots by integrating deep reinforcement learning (DRL) with biomechanical modeling. Using a comprehensive dataset from a large-scale logistics company, the research formulates the scheduling problem as a Markov Decision Process (MDP) and incorporates biomechanical principles to model robot energy consumption accurately. A deep Q-network (DQN) is employed to learn the scheduling policy, which is further refined using policy gradient optimization. The integrated framework aims to maximize task completion efficiency while minimizing energy usage, addressing the difficulty of balancing these competing objectives. Extensive simulations validate the proposed approach, demonstrating significant improvements in task completion rate, average travel distance, and energy consumption over baselines such as random and greedy scheduling. The methodology offers a robust and efficient way to improve operational efficiency in intelligent logistics and warehousing systems.
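The abstract outlines an MDP formulation of the scheduling problem, a biomechanically informed energy model, and a DQN-based scheduling policy. As a rough, hypothetical sketch of that kind of pipeline (not the paper's implementation; the dataset, energy model, and policy-gradient refinement are not reproduced here), the following PyTorch snippet trains a small Q-network with a reward that penalizes travel distance and a modeled energy cost. All state/action dimensions, reward weights, and the reward form are assumptions for illustration only.

```python
# Illustrative sketch only (assumption, not the authors' code): a minimal
# PyTorch DQN update for an energy-aware task-assignment MDP of the kind the
# abstract describes. Sizes, weights, and the energy term are placeholders.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 16   # hypothetical features: robot positions, battery, queued tasks
N_ACTIONS = 8    # hypothetical actions: which pending task to assign next


class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


def reward(completed, distance, energy, w_dist=0.1, w_energy=0.05):
    # Trade off task completion against travel distance and a (biomechanically
    # modeled) energy cost; the weights here are assumed, not from the paper.
    return float(completed) - w_dist * distance - w_energy * energy


q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # stores (state, action, reward, next_state, done)
gamma, eps = 0.99, 0.1


def act(state):
    # Epsilon-greedy selection over task-assignment actions.
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax())


def train_step(batch_size=64):
    # One temporal-difference update against a frozen target network.
    if len(replay) < batch_size:
        return
    s, a, r, s2, d = zip(*random.sample(replay, batch_size))
    s, s2 = torch.stack(s), torch.stack(s2)
    a = torch.tensor(a)
    r = torch.tensor(r, dtype=torch.float32)
    d = torch.tensor(d, dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(1).values * (1.0 - d)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```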

Published: 2025-03-24
How to Cite: Dong, S. (2025). Deep reinforcement learning and biomechanical modeling are integrated to optimize the scheduling problem of intelligent logistics and warehousing robots. Molecular & Cellular Biomechanics, 22(5), 1507. https://doi.org/10.62617/mcb1507
Section: Article