Exploring Deep Q-Network for Autonomous Driving Simulation Across Different Driving Modes
DOI: https://doi.org/10.62411/faith.3048-3719-31

Keywords: Autonomous Vehicle, Deep Q-Network, Driving Modes, Highway-env, MLP, Reinforcement Learning, Traffic Simulation

Abstract
The rapid growth in vehicle ownership has increased traffic congestion, making autonomous driving solutions more urgent. Autonomous Vehicles (AVs) offer a promising way to improve road safety and reduce traffic accidents by adapting to varying driving conditions without human intervention. This research implements a Deep Q-Network (DQN) to enhance AV performance in three driving modes: safe, normal, and aggressive. DQN was selected for its ability to handle complex, dynamic environments through experience replay, asynchronous training, and epsilon-greedy exploration. We designed a simulation environment on the Highway-env platform and evaluated the DQN model under varying traffic densities. AV performance was assessed with two key metrics: success rate and total reward. Our findings show that the DQN model achieved success rates of 90.75%, 94.625%, and 95.875% in safe, normal, and aggressive modes, respectively. Although the success rate increased with traffic intensity, the total reward remained lower in aggressive driving scenarios, indicating room to optimize decision-making under highly dynamic conditions. This study demonstrates that DQN can adapt effectively to different driving needs, but further optimization is needed to improve performance in more challenging environments. Future work will focus on improving the DQN algorithm to maximize both success rate and reward in high-traffic scenarios, and on testing the model in more diverse and complex environments.
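The two DQN mechanisms the abstract highlights, experience replay and epsilon-greedy exploration, can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the tabular Q-update stands in for the MLP Q-network trained on Highway-env, and all function and variable names here are hypothetical.

```python
import random
from collections import deque, defaultdict

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) transitions."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted first

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between consecutive steps
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a random action; otherwise the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def q_learning_step(q, batch, alpha=0.1, gamma=0.99):
    """One-step TD update on a sampled minibatch.

    A real DQN replaces the table `q` with an MLP and this loop with a
    gradient step toward the same bootstrapped target.
    """
    for state, action, reward, next_state, done in batch:
        target = reward if done else reward + gamma * max(q[next_state])
        q[state][action] += alpha * (target - q[state][action])
```

In a Highway-env loop, each environment step would push its transition into the buffer and periodically call the update on a sampled minibatch, with epsilon annealed toward a small floor as training progresses.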
License
Copyright (c) 2024 Journal of Future Artificial Intelligence and Technologies
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.