We investigate the coordination structures generated by deep Q-networks (DQNs) with various types of input, using a distributed task execution game. Although cooperation and coordination are essential for efficiency in multi-agent systems (MAS), they require sophisticated structures or regimes for effective behavior. Recently, deep Q-learning has been applied to multi-agent systems to facilitate coordinated behavior, but the characteristics of the learned results have not yet been fully clarified. We investigate how the information input to DQNs affects the resulting coordination and cooperation structures. We examine inputs generated from local observations, with and without the agent's estimated location in the environment. Experimental results show that the agents form two types of coordination structures, namely the division of labor and the targeting of nearby tasks while avoiding conflicts, and that the latter is more efficient in our game. We also clarify the mechanism behind, and the characteristics of, the generated coordination behaviors.