Recent driver assistance technologies such as Electronic Stability Control (ESC) and automatic braking systems relieve drivers of complicated driving tasks. On the other hand, there is concern that such systems may reduce a driver's pleasure if their behavior differs from the driver's intention. To avoid this problem, it is important to evaluate the driver's intention and decision-making process, and to design the assistance system to fit them. In this research, we propose an unsupervised reinforcement learning driver model based on human cognitive mechanisms and human brain architecture. Because the objective of this study is to analyze the process of driving decision making, we adopt a simple actor-critic model as the driver model. We set the learning parameters from the driver's decision-making characteristics, which are derived from the task execution process of the human brain, and set the state space from the driver's sensory characteristics. The proposed driver model predicts lane-change decisions adequately and shows high accuracy (ACC = 94%) in verification tests with real driving data. This result is comparable to unpublished results of a deep neural network driver model trained on the same data. From these results, we consider that the proposed reward function and learned state space represent the driver's decision-making characteristics.
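To make the actor-critic formulation concrete, the sketch below shows a minimal tabular actor-critic applied to a toy lane-change decision task. Everything here is an illustrative assumption rather than the paper's actual design: the discretized state space, the two actions (keep lane / change lane), the toy reward, and the learning rates stand in for the state space and reward function that the paper derives from the driver's sensory and decision-making characteristics.

```python
import numpy as np

# Minimal tabular actor-critic sketch for a lane-change decision task.
# All quantities (state bins, actions, reward, learning rates) are
# illustrative assumptions, not the paper's actual model.

N_STATES = 5         # e.g. discretized gap-to-lead-vehicle bins (assumed)
ACTIONS = [0, 1]     # 0 = keep lane, 1 = change lane (assumed)
ALPHA, BETA, GAMMA = 0.1, 0.1, 0.95  # critic lr, actor lr, discount

V = np.zeros(N_STATES)                      # critic: state values
theta = np.zeros((N_STATES, len(ACTIONS)))  # actor: action preferences

def policy(s):
    """Softmax over action preferences for state s."""
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

def step(s, a, rng):
    """Toy environment (assumed): a small gap (low s) rewards a lane change."""
    r = 1.0 if (s < 2) == (a == 1) else -1.0
    s_next = rng.integers(N_STATES)
    return r, s_next

rng = np.random.default_rng(0)
s = rng.integers(N_STATES)
for _ in range(5000):
    a = rng.choice(ACTIONS, p=policy(s))
    r, s_next = step(s, a, rng)
    td_error = r + GAMMA * V[s_next] - V[s]  # TD error drives both updates
    V[s] += ALPHA * td_error                 # critic update
    theta[s, a] += BETA * td_error           # actor update
    s = s_next
```

After learning, the policy should prefer the lane change in small-gap states and keeping the lane otherwise; the same TD-error signal updates both the critic and the actor, which is the property that makes the decision process easy to inspect state by state.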