We present an autonomous, goal-directed strategy for learning to control a redundant robot in the task space. We discuss the advantages of exploring the state space through goal-directed actions defined in the task space (i.e., learning by trying to do) rather than performing motor babbling in the joint space, and we stress the importance of carrying out this learning online, without any separation between training and execution. Our solution relies on learning the forward model and then inverting it for control; different approaches to learning the forward model are described and compared. Experimental results on a simulated humanoid robot support our claims: the robot autonomously learns to perform reaching actions toward 3D targets in the task space using arm and waist motion, without relying on any prior knowledge or initial motor babbling. To test the system's ability to adapt to sudden changes in both the robot structure and the perceived environment, we artificially introduce two kinds of kinematic perturbation: a modification of the length of one link and a rotation of the task-space reference frame. Results demonstrate that the online update of the model allows the robot to cope with both situations.
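As a rough illustration of the forward-model-inversion idea mentioned above (not the paper's actual learning algorithm), the following is a minimal sketch assuming a hypothetical 3-link planar arm, with an analytic forward model and a numerical Jacobian standing in for the learned model; reaching is done by resolved-rate control through a damped pseudoinverse, which handles the arm's redundancy:

```python
import numpy as np

# Hypothetical 3-link planar arm (redundant for a 2-D task space);
# link lengths are illustrative, not taken from the paper.
L = np.array([0.3, 0.25, 0.15])

def forward_model(q):
    """Task-space position of the end effector (stand-in for a learned forward model)."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q, eps=1e-6):
    """Central-difference Jacobian of the forward model."""
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (forward_model(q + dq) - forward_model(q - dq)) / (2 * eps)
    return J

def reach(target, q0, steps=200, gain=0.5, damping=1e-3):
    """Invert the forward model iteratively: damped least-squares
    pseudoinverse of the Jacobian maps task-space error to joint motion."""
    q = q0.copy()
    for _ in range(steps):
        err = target - forward_model(q)
        J = jacobian(q)
        # damped inverse copes with redundancy and near-singular poses
        J_pinv = J.T @ np.linalg.inv(J @ J.T + damping * np.eye(2))
        q = q + gain * (J_pinv @ err)
    return q

target = np.array([0.3, 0.4])
q_final = reach(target, np.array([0.1, 0.1, 0.1]))
print(np.linalg.norm(forward_model(q_final) - target))
```

In the approach the abstract describes, the analytic `forward_model` would instead be learned and updated online from the robot's own goal-directed actions, which is what lets the controller track the kinematic perturbations.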