TY - GEN
T1 - A Reinforcement Learning Approach for Adaptive Covariance Tuning in the Kalman Filter
AU - Gu, Jiajun
AU - Li, Jialong
AU - Tei, Kenji
N1 - Funding Information:
The research was partially supported by JSPS KAKENHI and by JSPS Research Fellowships for Young Scientists.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - State estimation and localization for autonomous vehicles are essential for accurate navigation and safe maneuvers. The most commonly used method is Kalman filtering, but its performance depends on the noise covariance: an inappropriate setting will decrease estimation accuracy and may even make the filter diverge. Noise covariance estimation has long been considered a difficult problem because the sources of noise are highly uncertain, making them hard to model systematically. In recent years, Deep Reinforcement Learning (DRL) has made remarkable progress and is an excellent choice for tackling problems that conventional techniques cannot solve, such as parameter estimation. By abstracting the problem as a Markov Decision Process (MDP), DRL methods can be applied without strong prior assumptions. We propose an adaptive covariance tuning method for the Error-State Extended Kalman Filter that takes advantage of DRL, called Reinforcement Learning Aided Covariance Tuning. Preliminary experimental results indicate that our method achieves a 14.73% estimation accuracy improvement on average compared with the vanilla fixed-covariance method and bounds the estimation error within 0.4 m.
AB - State estimation and localization for autonomous vehicles are essential for accurate navigation and safe maneuvers. The most commonly used method is Kalman filtering, but its performance depends on the noise covariance: an inappropriate setting will decrease estimation accuracy and may even make the filter diverge. Noise covariance estimation has long been considered a difficult problem because the sources of noise are highly uncertain, making them hard to model systematically. In recent years, Deep Reinforcement Learning (DRL) has made remarkable progress and is an excellent choice for tackling problems that conventional techniques cannot solve, such as parameter estimation. By abstracting the problem as a Markov Decision Process (MDP), DRL methods can be applied without strong prior assumptions. We propose an adaptive covariance tuning method for the Error-State Extended Kalman Filter that takes advantage of DRL, called Reinforcement Learning Aided Covariance Tuning. Preliminary experimental results indicate that our method achieves a 14.73% estimation accuracy improvement on average compared with the vanilla fixed-covariance method and bounds the estimation error within 0.4 m.
KW - Autonomous driving
KW - Kalman filter
KW - Reinforcement learning
KW - State estimation
UR - http://www.scopus.com/inward/record.url?scp=85147672002&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85147672002&partnerID=8YFLogxK
U2 - 10.1109/IMCEC55388.2022.10020019
DO - 10.1109/IMCEC55388.2022.10020019
M3 - Conference contribution
AN - SCOPUS:85147672002
T3 - IMCEC 2022 - IEEE 5th Advanced Information Management, Communicates, Electronic and Automation Control Conference
SP - 1569
EP - 1574
BT - IMCEC 2022 - IEEE 5th Advanced Information Management, Communicates, Electronic and Automation Control Conference
A2 - Xu, Bing
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 5th IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference, IMCEC 2022
Y2 - 16 December 2022 through 18 December 2022
ER -