We investigated whether a group of agents could learn strategic policies from inputs of different sizes via deep Q-learning in a simulated takeout platform environment. Agents are often required to cooperate and/or coordinate with one another to achieve their goals, but making appropriate sequential decisions for coordinated behavior in dynamic and complex states remains a challenging issue in the study of multi-agent systems. Although prior work has shown that intelligent agents can learn coordinated strategies with deep Q-learning to efficiently execute simple one-step tasks, agents are also expected to develop coordination regimes for more complex tasks, such as multi-step coordinated ones, in dynamic environments. To address this problem, we introduced a deep reinforcement learning framework with two distributions of the neural networks: centralized and decentralized deep Q-networks (DQNs). We examined and compared the performance of these two DQN distributions with various sizes of the agents' views. The experimental results showed that both networks could learn coordinated policies from local-view inputs and thereby improve overall performance. However, we also found that the behaviors of the agents differed considerably depending on the network distribution.
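For illustration, the sketch below shows the kind of local-view Q-network such a framework implies, assuming a PyTorch implementation; the view size, observation channels, action count, and the class name LocalViewDQN are hypothetical placeholders, not the paper's actual architecture. Under one common reading, the centralized configuration shares a single set of network parameters across all agents, while the decentralized configuration gives each agent its own independent copy.

```python
# Minimal sketch (not the paper's implementation) of a DQN mapping a
# square local-view observation to Q-values. All sizes are illustrative.
import torch
import torch.nn as nn


class LocalViewDQN(nn.Module):
    """Q-network over an agent's local view of shape (channels, V, V)."""

    def __init__(self, view_size: int = 5, channels: int = 3, n_actions: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * view_size * view_size, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),  # one Q-value per action
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(obs))


# Centralized variant: every agent's local view is fed through the same
# shared network. Decentralized variant: instantiate one LocalViewDQN per
# agent instead of sharing parameters.
if __name__ == "__main__":
    shared_net = LocalViewDQN(view_size=5, channels=3, n_actions=5)
    batch_of_views = torch.randn(4, 3, 5, 5)  # 4 agents' local views
    q_values = shared_net(batch_of_views)     # shape: (4, 5)
    greedy_actions = q_values.argmax(dim=1)   # one action per agent
    print(greedy_actions)
```

In practice such a network would be trained with standard DQN machinery, such as an experience replay buffer, an epsilon-greedy exploration schedule, and a target network; those components are omitted here for brevity.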