Cooperation and coordination are sophisticated behaviors and remain central issues in multi-agent systems research, because how agents cooperate and coordinate depends not only on environmental characteristics but also on the agents' behaviors and strategies, which closely affect one another. Meanwhile, multi-agent deep reinforcement learning (MADRL) has recently received much attention because of its potential to learn and facilitate coordinated behaviors. However, the characteristics of socially learned coordination structures have not been sufficiently clarified. In this paper, focusing on MADRL in which each agent has its own deep Q-network (DQN), we show that different types of input to the network lead to different coordination structures, using the pickup and floor laying problem, which is an abstraction of our target problem. We also show that the generated coordination structures affect the overall performance of the multi-agent system.