Abstract
Real-time search provides an attractive framework for intelligent autonomous agents, as it allows us to model an agent's ability to improve its performance through experience. However, the behavior of real-time search agents is far from rational during the learning (convergence) process, in that they fail to balance the effort spent on a short-term goal (safely reaching a goal state in the current problem-solving trial) against a long-term goal (finding better solutions through repeated trials). As a remedy, we introduce two techniques for controlling the amount of exploration, both overall and per trial. Weighted real-time search reduces the overall amount of exploration and accelerates convergence; it sacrifices admissibility but provides a nontrivial bound on the converged solution cost. Real-time search with upper bounds ensures solution quality in each trial when the state space is undirected. These techniques yield a more stable convergence process than that of the Learning Real-Time A* (LRTA*) algorithm.
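To make the weighting idea concrete, here is a minimal sketch of one trial of an LRTA*-style agent with a weight `w` applied to the heuristic term in the lookahead. With `w = 1` this reduces to plain LRTA*; `w > 1` biases the agent toward states the inflated heuristic already favors, trading admissibility for less exploration. All names (`weighted_lrta_trial`, `successors`, `cost`, the placement of `w`) are illustrative assumptions, not the paper's exact formulation.

```python
import random

def weighted_lrta_trial(start, goal, successors, cost, h, w=1.0):
    """Run one trial from start to goal, updating the heuristic table h in place.

    Illustrative sketch: scores successors by edge cost plus w times the
    stored heuristic, raises h(s) to the best lookahead value, and moves
    greedily. The exact weighting scheme in the paper may differ.
    """
    s = start
    trajectory_cost = 0.0
    while s != goal:
        # Score each successor by edge cost plus the weighted heuristic estimate.
        scores = {t: cost(s, t) + w * h.get(t, 0.0) for t in successors(s)}
        best = min(scores.values())
        # Learning step: raise h(s) toward the best lookahead value.
        h[s] = max(h.get(s, 0.0), best)
        # Move to a best-scoring successor, breaking ties at random.
        nxt = random.choice([t for t, v in scores.items() if v == best])
        trajectory_cost += cost(s, nxt)
        s = nxt
    return trajectory_cost, h
```

A convergence experiment in this sketch would simply call `weighted_lrta_trial` repeatedly with the same `h` table and record how the per-trial trajectory cost evolves across trials.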
Original language | English |
---|---|
Pages (from-to) | 1-41 |
Number of pages | 41 |
Journal | Artificial Intelligence |
Volume | 146 |
Issue number | 1 |
DOIs | |
Publication status | Published - May 2003 |
Externally published | Yes |
Keywords
- Adaptive learning
- Convergence process
- Rational agent
- Real-time heuristic search
- Resource-boundedness
ASJC Scopus subject areas
- Language and Linguistics
- Linguistics and Language
- Artificial Intelligence