The objective of this study is to understand the learned strategies of neural-network-controlled robots in relation to their physical learning environments by visualizing the internal layer of the neural network. In recent years, neural-network-controlled robots that learn in physical environments have become more common. While such robots can acquire strategies autonomously, without human supervision, it has become difficult to understand those strategies, especially when the robots, their environments, and their tasks are complex. In safety-critical fields such as self-driving vehicles or medical robotics, it is important for humans to understand the strategies of the robots. In this preliminary study, we propose a hierarchical neural network with a two-dimensional topographical internal representation for training robots in physical environments. This 2D representation can be visualized and analyzed, allowing us to intuitively understand the input-output strategy of a robot in the context of its learning environment. In this paper, we describe the learning dynamics of the neural network and present a visual analysis of several physical experiments.
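The general idea of a 2D topographical internal representation can be illustrated with a minimal sketch. The code below is hypothetical and is not the authors' architecture: it assumes a toy controller whose hidden units are arranged on a small 2D grid (in the spirit of a self-organizing sheet), so that the activation pattern for any sensor input can be reshaped into a map and inspected as a heatmap. All names, sizes, and the random weights are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch, not the paper's exact method: a controller whose
# hidden layer lives on a 2D grid so its activations form a topographic map.

rng = np.random.default_rng(0)

GRID = 8                       # hidden layer arranged as an 8x8 sheet (assumed size)
N_IN, N_OUT = 4, 2             # e.g. 4 sensor inputs, 2 motor outputs (assumed)

W_in = rng.normal(scale=0.5, size=(GRID * GRID, N_IN))    # input -> sheet
W_out = rng.normal(scale=0.5, size=(N_OUT, GRID * GRID))  # sheet -> output

def forward(x):
    """Forward pass; returns the motor command and the 2D activation map."""
    h = np.tanh(W_in @ x)             # hidden activations, flat vector
    y = np.tanh(W_out @ h)            # motor outputs
    return y, h.reshape(GRID, GRID)   # reshape for topographic display

x = rng.normal(size=N_IN)             # one simulated sensor reading
y, act_map = forward(x)

# act_map can now be rendered as a heatmap (e.g. matplotlib's imshow) to
# relate the robot's internal state to the sensory situation it is in.
print(act_map.shape)   # (8, 8)
print(y.shape)         # (2,)
```

In a real setup one would log such maps across an episode and compare them against the robot's position in the environment; the sketch only shows how a grid-shaped hidden layer makes that kind of visual analysis straightforward.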