Reinforcement Learning Algorithm Optimizes Multi-Sensor Energy Management Strategy for New Energy Vehicles
DOI:
https://doi.org/10.52152/4375
Keywords:
Energy management strategy, Multi-sensor fusion, Plug-in hybrid electric vehicle, Double Deep Q-Network, Prioritized experience replay
Abstract
Current multi-sensor Energy Management Strategies (EMS) for new energy vehicles adapt poorly to abrupt sensor-data changes and complex road conditions and provide insufficient system stability. Taking the Plug-in Hybrid Electric Vehicle (PHEV) as the research object, this paper integrates a Reinforcement Learning (RL) algorithm to optimize the multi-sensor EMS, with the aim of improving energy-consumption control and system robustness under non-steady-state conditions and sensor interference. First, a state observation module that fuses multi-sensor data is constructed to provide decision-making input through refined perception of the vehicle's operating environment. Second, a dual-network structure is introduced on the basis of the Deep Q-Network (DQN): by separating action selection from value evaluation, it alleviates Q-value overestimation and improves strategy stability. Finally, Prioritized Experience Replay (PER) is incorporated, dynamically adjusting the training priority of experience samples according to the Temporal Difference (TD) error to improve learning efficiency on key states and the generalization ability of the strategy. The results show that the EMS obtained with the proposed method exhibits strong dynamic load stability and adaptability under complex operating conditions with disturbances, offering a new and more engineering-ready approach to PHEV energy-efficiency optimization.
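As a concrete reading of the abstract's second and third steps, the sketch below illustrates how a Double DQN update separates action selection (online network) from value evaluation (target network), and how a proportional prioritized replay buffer re-weights samples by their TD error. It is a minimal illustration, not the paper's implementation: the fused-sensor state dimension, the discretized power-split action set, the network sizes, and all hyperparameters are placeholder assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): Double DQN target
# computation plus proportional prioritized experience replay.
import numpy as np
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 5          # assumed fused-sensor features / power-split levels
GAMMA, LR = 0.99, 1e-3               # placeholder hyperparameters

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                               nn.Linear(64, 64), nn.ReLU(),
                               nn.Linear(64, N_ACTIONS))
    def forward(self, s):
        return self.f(s)

class PERBuffer:
    """Proportional prioritized replay: P(i) proportional to priority_i ** alpha."""
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prio, self.pos = [], [], 0

    def add(self, transition):
        p = max(self.prio, default=1.0)          # new samples get current max priority
        if len(self.data) < self.capacity:
            self.data.append(transition); self.prio.append(p)
        else:
            self.data[self.pos] = transition; self.prio[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch, beta=0.4):
        p = np.array(self.prio) ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch, p=probs)
        w = (len(self.data) * probs[idx]) ** (-beta)   # importance-sampling weights
        w /= w.max()
        s, a, r, s2, d = map(np.array, zip(*[self.data[i] for i in idx]))
        return idx, (s, a, r, s2, d), w

    def update(self, idx, td_err, eps=1e-6):
        for i, e in zip(idx, td_err):            # priority tracks |TD error|
            self.prio[i] = abs(float(e)) + eps

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=LR)

def train_step(buf, batch_size=32):
    idx, (s, a, r, s2, d), w = buf.sample(batch_size)
    s = torch.as_tensor(s, dtype=torch.float32)
    s2 = torch.as_tensor(s2, dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64)
    r = torch.as_tensor(r, dtype=torch.float32)
    d = torch.as_tensor(d, dtype=torch.float32)
    q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_star = online(s2).argmax(dim=1, keepdim=True)    # online net selects the action
        q_next = target(s2).gather(1, a_star).squeeze(1)   # target net evaluates it
        y = r + GAMMA * (1.0 - d) * q_next
    td = y - q_sa
    loss = (torch.as_tensor(w, dtype=torch.float32) * td.pow(2)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    buf.update(idx, td.detach().numpy())                   # refresh sample priorities
```

In a full agent loop, transitions built from the fused multi-sensor state observation would be pushed with buf.add(...) and the target network would be periodically synchronized with the online network; both steps are omitted here for brevity.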
License
Copyright (c) 2025 Shanshan Li (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.