Research and Development of Omnidirectional Mobile Robot Tracking Control Based on Artificial Intelligence
International Journal of Electronics and Communication Engineering
© 2023 by SSRG - IJECE Journal
Volume 10, Issue 3
Year of Publication: 2023
Authors: Chau Thanh Phuong
How to Cite?
Chau Thanh Phuong, "Research and Development of Omnidirectional Mobile Robot Tracking Control Based on Artificial Intelligence," SSRG International Journal of Electronics and Communication Engineering, vol. 10, no. 3, pp. 15-22, 2023. Crossref, https://doi.org/10.14445/23488549/IJECE-V10I3P103
Abstract:
This paper presents the research and construction of a motion tracking control system for omnidirectional mobile robots based on reinforcement learning techniques in automatic control. The robot is controlled in a flat environment containing both known and unknown obstacles, with nonlinear disturbance factors taken into account. The Robot Operating System (ROS) is researched and applied as the programming platform for the mobile robot. Updated information on the map, the operating environment, the robot's position, and identified obstacles, obtained through simultaneous localization and mapping (SLAM), is used to calculate the movement trajectory of a three-wheeled omnidirectional mobile robot. The positioning system computes the trajectory tracking for the robot based on the Q-learning algorithm. Simulation results in the Gazebo environment and tests on real TurtleBot mobile robots have demonstrated the practical effectiveness of the proposed motion tracking and intelligent navigation methods for mobile robots.
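As context for the tracking method named in the abstract, the following is a minimal sketch of tabular Q-learning applied to a toy grid-world navigation task. It is illustrative only: the 5x5 grid, the obstacle cells, the reward values, and the hyperparameters (learning rate alpha, discount gamma, exploration rate epsilon) are assumptions made for this example, not the configuration used in the paper.

import numpy as np

# Tabular Q-learning on an assumed 5x5 grid world with static obstacles.
# All task parameters below are illustrative assumptions, not the paper's setup.
GRID = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
GOAL = (4, 4)                                  # assumed goal cell
OBSTACLES = {(2, 2), (3, 1)}                   # assumed obstacle cells

alpha, gamma, epsilon = 0.1, 0.9, 0.2          # learning rate, discount, exploration
Q = np.zeros((GRID, GRID, len(ACTIONS)))       # Q(s, a) table

def step(state, a):
    """Apply action a in state; return (next_state, reward)."""
    r, c = state
    dr, dc = ACTIONS[a]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < GRID and 0 <= nc < GRID) or (nr, nc) in OBSTACLES:
        return state, -5.0        # blocked by wall or obstacle: penalty, stay put
    if (nr, nc) == GOAL:
        return (nr, nc), 10.0     # goal reached: positive reward
    return (nr, nc), -1.0         # step cost encourages short trajectories

rng = np.random.default_rng(0)
for episode in range(500):
    state = (0, 0)                # assumed start cell
    while state != GOAL:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[state[0], state[1]]))
        nxt, reward = step(state, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state[0], state[1], a] += alpha * (
            reward + gamma * np.max(Q[nxt[0], nxt[1]]) - Q[state[0], state[1], a])
        state = nxt

After training, following argmax over Q from the start cell yields a collision-free path to the goal. In the paper's setting, the state would instead come from the SLAM-based localization and the action space from the omnidirectional drive; this sketch only conveys the update rule.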
Keywords:
Three-wheeled mobile robot, Self-propelled robot, Automatic system, ROS, Artificial intelligence, Q-learning algorithm, Reinforcement learning.
References:
[1] Andrea Bacciotti, Stability and Control of Linear Systems, Springer Nature Switzerland AG, 2019.
[2] N. T. Tuan, Base Deep Learning, The Legrand Orange Book, Version 2, last updated 2020.
[3] Mohit Sewak, Deep Reinforcement Learning: Frontiers of Artificial Intelligence, Springer Nature, 2019.
[4] V. T. T. Nga, O. X. Loc, and T. H. Nam, Enhanced Learning in Automatic Control with Matlab Simulink, Hanoi Polytechnic Publishing House, 2020.
[5] N. T. Luy, Textbook of Machine Learning and Intelligent Control Application, Publishing House of Industrial University of Ho Chi Minh City, Ho Chi Minh, 2019.
[6] Xiaogang Ruan et al., “Mobile Robot Navigation Based on Deep Reinforcement Learning,” Chinese Control and Decision Conference, pp. 6174-6178, 2019.
[7] Rajesh Kannan Megalingam et al., “ROS Based Autonomous Indoor Navigation Simulation Using SLAM Algorithm,” International Journal of Pure and Applied Mathematics, vol. 118, no. 7, pp. 199-205, 2018.
[8] Charu C. Aggarwal, Neural Networks and Deep Learning, Springer International Publishing AG, Part of Springer Nature, 2018.
[9] Thanh Tung Pham, Minh Thanh, and Chi-Ngon Nguyen, “Omnidirectional Mobile Robot Trajectory Tracking Control with Diversity of Inputs,” International Journal of Mechanical Engineering and Robotics Research, vol. 10, no. 11, 2021.
[10] Hiep Do Quang et al., “Design a Nonlinear MPC Controller for Autonomous Mobile Robot Navigation System Based on ROS,” International Journal of Mechanical Engineering and Robotics Research, vol. 11, no. 6, pp. 379-388, 2022.
[11] Hiep Do Quang et al., “Mapping and Navigation with Four-Wheeled Omnidirectional Mobile Robot Based on Robot Operating System,” 2019 International Conference on Mechatronics, Robotics and Systems Engineering (MoRSE), pp. 54-59, 2019.
[12] Yuankai Wu et al., “Deep Reinforcement Learning of Energy Management with Continuous Control Strategy and Traffic Information for a Series-Parallel Plug-in Hybrid Electric Bus,” Applied Energy, vol. 247, pp. 454-466, 2019.
[13] Shixiang Gu et al., “Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates,” 2017 IEEE International Conference on Robotics and Automation, pp. 3389–3396, 2017.
[14] L. T. T. Nga, and L. H. Lan, “Controlling the Swarm Robot to Avoid Obstacles and Search for Targets,” 2015.
[15] Evan Prianto et al., “Path Planning for Multi-Arm Manipulators Using Deep Reinforcement Learning: Soft Actor–Critic with Hindsight Experience Replay,” Sensors, vol. 20, no. 20, pp. 1-22, 2020.
[16] H. T. K. Duyen et al., “Controlling Self-Propelled Robot Object Tracking by Exponential Sliding Control Algorithm,” Journal of Military Science and Technology Research, ACME Special Issue, 2017.
[17] Van Nguyen Thi Thanh et al., “Autonomous Navigation for Omnidirectional Robot Based on Deep Reinforcement Learning,” International Journal of Mechanical Engineering and Robotics Research, vol. 9, no. 8, pp. 1134-1139, 2020. Crossref, https://doi.org/10.18178/ijmerr.9.8.1134-1139
[18] D. N. Thang, P. T. Dung, and N. Q. Hung, “Research on Obstacle Avoidance Problems for Self-Propelled Robots on the Basis of Enhanced Deep Learning DQN,” Journal of Military Science and Technology Research, Special Issue of FEE National Conference, pp. 48-56, 2020.
[19] Wen-Kung Tseng, and Hou-Yu Chen, “The Study of Tracking Control for Autonomous Vehicle,” SSRG International Journal of Mechanical Engineering, vol. 7, no. 11, pp. 57-62, 2020.
[20] Avi Singh et al., “End-to-End Robotic Reinforcement Learning without Reward Engineering,” University of California, Berkeley, 2019.
[21] Yuda Irawan, Hendry Fonda, Yulisman, and Mardeni, “Garbage Collecting Ship Robot Using Arduino Uno Microcontroller Based on Android Smartphone,” International Journal of Engineering Trends and Technology, vol. 69, no. 6, pp. 25-30, 2021.
[22] Rajesh Kannan Megalingam, Jeeba M Varghese, and Aarsha Anil S, “Distance Estimation and Direction Finding Using I2C Protocol for an Auto-Navigation Platform,” International Conference on VLSI Systems, Architectures, Technology and Applications (VLSI-SATA), pp. 1-4, 2016.
[23] Sandeep B.S., and Supriya P, “Analysis of Fuzzy Rules for Robot Path Planning,” Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 309-314, 2016.
[24] Giada Giorgi, Guglielmo Frigo, and Claudio Narduzzi, “Dead Reckoning in Structured Environments for Human Indoor Navigation,” IEEE Sensors Journal, vol. 17, no. 23, pp. 7794-7802, 2017.
[25] ROS Wiki Tutorials. [Online]. Available: http://wiki.ros.org/ros/tutorials (accessed Feb. 2023).
[26] ROBOTIS e-Manual. [Online]. Available: https://emanual.robotis.com/
[27] Justin Fu et al., “Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition,” 32nd Conference on Neural Information Processing Systems, 2018.