We describe a means of human-robot interaction based not on natural language but on "quasi-symbols," which represent sensory-motor dynamics in the task and/or environment. This approach overcomes a key problem of using natural language for human-robot interaction: the need to understand the dynamic context. The quasi-symbols used are motion primitives corresponding to the attractor dynamics of the sensory-motor flow. These primitives are extracted from observed data using the recurrent neural network with parametric bias (RNNPB) model. Binary representations based on the model parameters were implemented as quasi-symbols in a humanoid robot, Robovie. The experimental task was robot-arm operation on a table. The quasi-symbols acquired through learning enabled the robot to perform novel motions, and a person was able to control the arm through speech interaction using these quasi-symbols. The quasi-symbols formed a hierarchical structure corresponding to the number of nodes in the model. The meaning of some quasi-symbols depended on the context, indicating that they are useful for human-robot interaction.
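The core mechanism can be illustrated with a minimal sketch of the parametric-bias idea: a single recurrent network with shared weights predicts the next sensory-motor state, while a small parametric-bias (PB) vector, held constant over a sequence, selects which dynamics the network reproduces; binarizing the PB vector yields a discrete quasi-symbol. All names, dimensions, and the random (untrained) weights below are illustrative assumptions, not the paper's actual model or values.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, HIDDEN_DIM, PB_DIM = 2, 8, 2  # assumed toy sizes

# Shared weights (random here; in RNNPB these are trained across sequences)
W_in = rng.normal(0, 0.5, (HIDDEN_DIM, STATE_DIM))
W_pb = rng.normal(0, 0.5, (HIDDEN_DIM, PB_DIM))
W_rec = rng.normal(0, 0.5, (HIDDEN_DIM, HIDDEN_DIM))
W_out = rng.normal(0, 0.5, (STATE_DIM, HIDDEN_DIM))

def step(x, h, pb):
    """One forward step: next-state prediction conditioned on the PB vector."""
    h = np.tanh(W_in @ x + W_rec @ h + W_pb @ pb)
    return W_out @ h, h

def rollout(x0, pb, steps=50):
    """Generate a trajectory by feeding predictions back in as inputs."""
    x, h = x0, np.zeros(HIDDEN_DIM)
    traj = [x]
    for _ in range(steps):
        x, h = step(x, h, pb)
        traj.append(x)
    return np.array(traj)

def quasi_symbol(pb):
    """Binarize the PB vector into a discrete quasi-symbol label."""
    return tuple(int(v > 0) for v in pb)

# Two PB settings produce two different trajectories from the same
# weights, each labeled by its binary quasi-symbol.
traj_a = rollout(np.array([0.1, 0.0]), np.array([1.0, -1.0]))
traj_b = rollout(np.array([0.1, 0.0]), np.array([-1.0, 1.0]))
print(quasi_symbol(np.array([1.0, -1.0])))   # (1, 0)
print(quasi_symbol(np.array([-1.0, 1.0])))   # (0, 1)
```

In the actual RNNPB model the PB values are obtained by backpropagating prediction error for each observed sequence, so similar motion primitives converge to nearby PB values; the binary labels above only sketch how a continuous PB space can be discretized into quasi-symbols.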