Changing the grasping posture of an object within a robot hand is difficult to achieve, especially when the objects vary in shape and size. In this paper we use a neural network to learn such manipulation with objects of various sizes and shapes. The TWENDY-ONE hand possesses several properties that are effective for in-hand manipulation: a high number of actuated joints, passive degrees of freedom, soft skin, a six-axis force/torque (F/T) sensor in each fingertip, and tactile sensors distributed over the soft skin. The object size information is extracted from the initial grasping posture. The training data include both tactile information and object information. After training the neural network, the robot is able to manipulate objects not only of trained but also of untrained sizes and shapes. The results show the importance of size and tactile information. Importantly, the features extracted by a stacked autoencoder (trained with a larger dataset) could reduce the number of training samples required for supervised learning of in-hand manipulation.
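The two-stage scheme mentioned at the end, unsupervised autoencoder pretraining on a larger unlabeled set followed by supervised learning on a smaller labeled set, can be illustrated with a minimal sketch. This is not the authors' implementation: the data, dimensions, and single-layer architecture below are all hypothetical stand-ins (e.g. 16-dimensional "tactile" vectors, 4-dimensional output targets), chosen only to show the structure of the approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical stand-in data: 200 unlabeled "tactile" vectors (16-dim),
# of which only 20 carry supervised targets (e.g. 4-dim motor commands).
X_unlabeled = rng.normal(size=(200, 16))
X_labeled = X_unlabeled[:20]
Y = rng.normal(size=(20, 4))

# --- Stage 1: unsupervised autoencoder pretraining on the larger set ---
W1 = rng.normal(scale=0.1, size=(16, 8))  # encoder weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 16))  # decoder weights
b2 = np.zeros(16)
lr = 0.01
for _ in range(500):
    H = sigmoid(X_unlabeled @ W1 + b1)    # hidden features
    R = H @ W2 + b2                       # linear reconstruction
    err = R - X_unlabeled
    # Gradient descent on mean-squared reconstruction error
    gW2 = H.T @ err / len(X_unlabeled)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1.0 - H)
    gW1 = X_unlabeled.T @ dH / len(X_unlabeled)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# --- Stage 2: supervised output layer on the pretrained features ---
H_lab = sigmoid(X_labeled @ W1 + b1)      # frozen encoder features
# Ridge-regularized least squares for the output weights
A = H_lab.T @ H_lab + 1e-3 * np.eye(8)
W_out = np.linalg.solve(A, H_lab.T @ Y)
pred = H_lab @ W_out
print("supervised fit MSE:", float(((pred - Y) ** 2).mean()))
```

The point of the sketch is the division of labor: the encoder is fit with only unlabeled reconstruction data, so the supervised stage has far fewer parameters to estimate and therefore needs fewer labeled manipulation samples.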