We propose a learning method that decides the period of activity according to environmental characteristics and behavioral strategies in the multi-agent continuous cooperative patrol problem. With recent advances in computer and sensor technologies, agents, which are intelligent control programs running on computers and robots, have gained high autonomy, enabling them to operate in various fields without pre-defined knowledge. However, cooperation and coordination between agents remain sophisticated and difficult to implement. We focus on the activity cycle length (ACL), which is the time from when an agent starts a patrol to when it returns to a charging base, in the context of a cooperative patrol in which agents, such as robots, have batteries with limited capacity. A long ACL enables an agent to visit distant locations, but the agent then requires a long rest time to recharge. The basic idea of our method is that even if agents have long-life batteries, they can deliberately shorten their ACLs and thus visit important locations at short intervals by recharging frequently. However, the appropriate ACL depends on many factors, such as the size of the environment, the number of agents, the workload in the environment, and the behaviors and ACLs of other agents. Therefore, we propose a method in which agents autonomously learn the appropriate ACL on the basis of the number of events detected per cycle. We experimentally show that our agents learn appropriate ACLs in accordance with the spatial divisional cooperation they establish. We also report a detailed analysis of the experimental results to understand the behaviors of agents with different ACLs.
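The learning mechanism named above, adapting the ACL on the basis of the number of events detected per cycle, could be sketched as a simple value-estimation scheme over a discrete set of candidate cycle lengths. This is an illustrative assumption only: the epsilon-greedy selection, the per-time-unit normalization, the class and parameter names, and the candidate set are not taken from the paper.

```python
import random

class ACLLearner:
    """Hypothetical learner that estimates, for each candidate activity
    cycle length (ACL), the rate of events detected per unit time, and
    greedily prefers the best estimate while occasionally exploring.
    The bandit formulation and all parameters are assumptions."""

    def __init__(self, candidate_acls, epsilon=0.1, alpha=0.2, seed=0):
        self.candidates = list(candidate_acls)
        self.epsilon = epsilon  # exploration rate (assumed)
        self.alpha = alpha      # learning rate (assumed)
        # Estimated events detected per unit time for each candidate ACL.
        self.value = {acl: 0.0 for acl in self.candidates}
        self.rng = random.Random(seed)

    def choose_acl(self):
        """Pick the next cycle length: mostly the best estimate so far,
        sometimes a random candidate to keep exploring."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.candidates)
        return max(self.candidates, key=lambda a: self.value[a])

    def update(self, acl, events_detected):
        """After finishing one patrol cycle of length `acl`, update the
        estimate from the number of events detected in that cycle.
        Normalizing by the cycle length makes long and short cycles
        comparable (an assumed design choice)."""
        reward = events_detected / acl
        self.value[acl] += self.alpha * (reward - self.value[acl])
```

As a usage sketch, an agent would repeatedly call `choose_acl()`, patrol for that many time steps, count the events it detects, and feed the count back through `update()`; over many cycles the estimates should settle on the ACL that yields the most detections per unit time for that agent's region.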