TY - JOUR
T1 - Reinforcement learning accounts for moody conditional cooperation behavior
T2 - experimental results
AU - Horita, Yutaka
AU - Takezawa, Masanori
AU - Inukai, Keigo
AU - Kita, Toshimasa
AU - Masuda, Naoki
N1 - Funding Information:
This research was supported by JST, ERATO, Kawarabayashi Large Graph Project. M.T. acknowledges financial support provided through Grant-in-Aid for Scientific Research (15K13111, 24653166) from the Japan Society for the Promotion of Science.
Publisher Copyright:
© The Author(s) 2017.
PY - 2017/1/10
Y1 - 2017/1/10
N2 - In social dilemma games, human participants often show conditional cooperation (CC) behavior or its variant called moody conditional cooperation (MCC), with which they basically tend to cooperate when many other peers have previously cooperated. Recent computational studies showed that CC and MCC behavioral patterns could be explained by reinforcement learning. In the present study, we use a repeated multiplayer prisoner's dilemma game and the repeated public goods game played by human participants to examine whether MCC is observed across different types of games and the possibility that reinforcement learning explains observed behavior. We observed MCC behavior in both games, but the MCC that we observed was different from that observed in past experiments. In the present study, whether or not a focal participant cooperated previously affected the overall level of cooperation, instead of changing the tendency of cooperation in response to cooperation of other participants in the previous time step. We found that, across different conditions, reinforcement learning models were approximately as accurate as an MCC model in describing the experimental results. Consistent with the previous computational studies, the present results suggest that reinforcement learning may be a major proximate mechanism governing MCC behavior.
AB - In social dilemma games, human participants often show conditional cooperation (CC) behavior or its variant called moody conditional cooperation (MCC), with which they basically tend to cooperate when many other peers have previously cooperated. Recent computational studies showed that CC and MCC behavioral patterns could be explained by reinforcement learning. In the present study, we use a repeated multiplayer prisoner's dilemma game and the repeated public goods game played by human participants to examine whether MCC is observed across different types of games and the possibility that reinforcement learning explains observed behavior. We observed MCC behavior in both games, but the MCC that we observed was different from that observed in past experiments. In the present study, whether or not a focal participant cooperated previously affected the overall level of cooperation, instead of changing the tendency of cooperation in response to cooperation of other participants in the previous time step. We found that, across different conditions, reinforcement learning models were approximately as accurate as an MCC model in describing the experimental results. Consistent with the previous computational studies, the present results suggest that reinforcement learning may be a major proximate mechanism governing MCC behavior.
UR - http://www.scopus.com/inward/record.url?scp=85009102985&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85009102985&partnerID=8YFLogxK
U2 - 10.1038/srep39275
DO - 10.1038/srep39275
M3 - Article
C2 - 28071646
AN - SCOPUS:85009102985
SN - 2045-2322
VL - 7
JO - Scientific Reports
JF - Scientific Reports
M1 - 39275
ER -