We propose a learning and negotiation method that enhances divisional cooperation in multi-agent cooperative problems and demonstrate its robustness to environmental changes. With ongoing advances in information and communication technology, we now have access to vast amounts of information, and everything has become more closely connected through innovations such as the Internet of Things. However, this also makes the tasks and problems in these environments more complicated; in particular, fast decision making and flexible responses are often required to adapt to environmental changes. Multi-agent systems have attracted interest as a way to meet these requirements, but how multiple agents should cooperate with one another remains a challenging issue because of the computational cost, the complexity of the environment, and the sophisticated interaction required between agents. In this work, we address the continuous cooperative patrol problem, which requires high autonomy, and propose an autonomous learning method with simple negotiation that enhances divisional cooperation for efficient work. We also investigate how the system achieves high robustness, which is one of the key properties of an autonomous distributed system. We experimentally show that agents using our method generate role sharing in a bottom-up manner for effective divisional cooperation. The results also show that two roles, specialist and generalist, emerged in a bottom-up manner, and that these roles enhanced both the overall efficiency and the robustness to environmental change.