TY - GEN
T1 - A runtime monitoring framework to enforce invariants on reinforcement learning agents exploring complex environments
AU - Mallozzi, Piergiuseppe
AU - Castellano, Ezequiel
AU - Pelliccione, Patrizio
AU - Schneider, Gerardo
AU - Tei, Kenji
N1 - Funding Information:
This work was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP).
PY - 2019/5
Y1 - 2019/5
N2 - Without prior knowledge of the environment, a software agent can learn to achieve a goal using machine learning. Model-free Reinforcement Learning (RL) can be used to make the agent explore the environment and learn to achieve its goal by trial and error. Discovering effective policies to achieve the goal in a complex environment is a major challenge for RL. Furthermore, in safety-critical applications such as robotics, an unsafe action may cause catastrophic consequences for the agent or the environment. In this paper, we present an approach that uses runtime monitoring to prevent the reinforcement learning agent from performing 'wrong' actions and to exploit prior knowledge to explore the environment intelligently. Each monitor is defined by a property that we want to enforce on the agent and a context. The monitors are orchestrated by a meta-monitor that activates and deactivates them dynamically according to the context in which the agent is learning. We have evaluated our approach by training the agent in randomly generated learning environments. Our results show that our approach blocks the agent from performing dangerous and safety-critical actions in all the generated environments. Moreover, our approach helps the agent to achieve its goal faster by providing feedback and shaping its reward during learning.
AB - Without prior knowledge of the environment, a software agent can learn to achieve a goal using machine learning. Model-free Reinforcement Learning (RL) can be used to make the agent explore the environment and learn to achieve its goal by trial and error. Discovering effective policies to achieve the goal in a complex environment is a major challenge for RL. Furthermore, in safety-critical applications such as robotics, an unsafe action may cause catastrophic consequences for the agent or the environment. In this paper, we present an approach that uses runtime monitoring to prevent the reinforcement learning agent from performing 'wrong' actions and to exploit prior knowledge to explore the environment intelligently. Each monitor is defined by a property that we want to enforce on the agent and a context. The monitors are orchestrated by a meta-monitor that activates and deactivates them dynamically according to the context in which the agent is learning. We have evaluated our approach by training the agent in randomly generated learning environments. Our results show that our approach blocks the agent from performing dangerous and safety-critical actions in all the generated environments. Moreover, our approach helps the agent to achieve its goal faster by providing feedback and shaping its reward during learning.
KW - LTL invariants
KW - Reinforcement learning
KW - Reward shaping
KW - Runtime monitoring
UR - http://www.scopus.com/inward/record.url?scp=85073167935&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85073167935&partnerID=8YFLogxK
U2 - 10.1109/RoSE.2019.00011
DO - 10.1109/RoSE.2019.00011
M3 - Conference contribution
AN - SCOPUS:85073167935
T3 - Proceedings - 2019 IEEE/ACM 2nd International Workshop on Robotics Software Engineering, RoSE 2019
SP - 5
EP - 12
BT - Proceedings - 2019 IEEE/ACM 2nd International Workshop on Robotics Software Engineering, RoSE 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2nd IEEE/ACM International Workshop on Robotics Software Engineering, RoSE 2019
Y2 - 27 May 2019
ER -