We propose a learning method that decides the appropriate activity cycle length (ACL) according to environmental characteristics and other agents' behavior in the (multi-agent) continuous cooperative patrol problem. With recent advances in computer and sensor technologies, agents, which are intelligent control programs running on computers and robots, have gained enough autonomy to operate in various fields without pre-defined knowledge. However, cooperation and coordination between agents remain sophisticated and complicated to implement. We focus on the ACL, which is the length of time from starting a patrol to returning to the charging base, in cooperative patrolling by agents such as robots whose batteries have limited capacity. A long ACL enables an agent to visit distant locations, but it also requires a long rest for recharging. The basic idea of our method is that even if agents have long-life batteries, they can appropriately shorten their ACLs by recharging frequently. The appropriate ACL depends on many factors, such as the size of the environment, the number of agents, and the workload in the environment. Therefore, we propose a method in which agents autonomously learn the appropriate ACL on the basis of the number of events detected per cycle. We experimentally show that our agents are able to learn appropriate ACLs depending on the spatial divisional cooperation they establish.
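To make the idea concrete, the following is a minimal sketch of how an agent might learn an ACL from the number of events detected per cycle. The abstract does not specify the learning rule, so the epsilon-greedy selection over discrete candidate ACLs, the running-average value estimate, and the reward (events detected per unit of cycle time) are illustrative assumptions, not the authors' algorithm; `ACLLearner` and `detected_events` are hypothetical names.

```python
import random

class ACLLearner:
    """Hypothetical sketch: epsilon-greedy learning over candidate ACLs.

    The reward definition and update rule below are assumptions for
    illustration; the paper only states that agents learn the ACL from
    the number of events detected per cycle.
    """

    def __init__(self, candidate_acls, epsilon=0.1, alpha=0.2, seed=0):
        self.candidates = list(candidate_acls)
        self.epsilon = epsilon   # exploration rate
        self.alpha = alpha       # learning rate for the running average
        self.values = {acl: 0.0 for acl in self.candidates}
        self.rng = random.Random(seed)

    def choose_acl(self):
        # Occasionally explore a random ACL; otherwise exploit the best one.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.candidates)
        return max(self.candidates, key=lambda a: self.values[a])

    def update(self, acl, events_detected):
        # Reward is events detected per unit of cycle time, so a long ACL
        # is preferred only if it finds proportionally more events.
        reward = events_detected / acl
        self.values[acl] += self.alpha * (reward - self.values[acl])


def detected_events(acl, rng):
    """Toy environment (an assumption): detections grow sublinearly with
    the ACL, so short cycles are more time-efficient here."""
    return (acl ** 0.5) * (1.0 + 0.1 * rng.random())


learner = ACLLearner(candidate_acls=[10, 20, 40, 80], seed=1)
env_rng = random.Random(2)
for _ in range(2000):
    acl = learner.choose_acl()
    learner.update(acl, detected_events(acl, env_rng))

best = max(learner.values, key=learner.values.get)
print(best)  # in this toy environment, the shortest ACL wins
```

In this toy setting the learner settles on the shortest candidate ACL, because detections per cycle grow sublinearly with cycle length; in an environment where distant locations hold many events, the same rule would favor longer ACLs.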