PID vs. Reinforcement Learning: A Comparative Study on Autonomous Driving in the Gymnasium Car Racing Environment

Authors

  • Ali Roshandelzade, MSc student at Babol Noshirvani University of Technology, Shariati Av., Babol, Mazandaran, Iran
  • Behrooz Rezaie, Department of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Shariati Av., Babol, Mazandaran, Iran

Keywords:

reinforcement learning, control systems, pid, ppo, proximal policy optimization

Abstract

In this paper, we investigate two distinct control strategies for autonomous vehicles navigating tracks: Proportional-Integral-Derivative (PID) control and Proximal Policy Optimization (PPO). We compare their feasibility and computational efficiency and introduce a novel approach for longitudinal and lateral control within the CarRacing environment of Gymnasium. While deep reinforcement learning methods such as PPO have demonstrated significant potential in the control domain, they often require substantial computational resources and training time due to the inherent exploration-exploitation trade-off. Our findings suggest that, in certain scenarios, classical control techniques such as PID offer greater reliability and ease of implementation.
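The classical side of such a comparison can be sketched with a standard discrete-time PID loop. The snippet below is a minimal illustration, not the authors' implementation: the gain values, the 50 Hz time step, and the use of a cross-track error as the steering error signal are all assumptions. In Gymnasium's CarRacing environment the steering command lies in [-1, 1], so the controller output is clipped to that range.

```python
class PID:
    """Discrete-time PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt=1.0 / 50.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # running integral of the error
        self.prev_error = None    # previous error, for the derivative term

    def step(self, error):
        """Advance one time step and return the control output."""
        self.integral += error * self.dt
        derivative = (
            0.0 if self.prev_error is None
            else (error - self.prev_error) / self.dt
        )
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def steering_command(pid, cross_track_error):
    """Map the PID output to CarRacing's steering range [-1, 1]."""
    return max(-1.0, min(1.0, pid.step(cross_track_error)))
```

In a control loop over the environment, one would estimate a cross-track error from the observation at each step, pass it through `steering_command`, and feed the result as the steering component of the action; the gas and brake channels would need their own (e.g. longitudinal PID) logic.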





Published

2025-05-21

How to Cite

PID vs. Reinforcement Learning: A Comparative Study on Autonomous Driving in the Gymnasium Car Racing Environment. (2025). Development Engineering Conferences Center Articles Database, 2(7). https://pubs.bcnf.ir/index.php/Articles/article/view/609
