In this paper, the methodology for automatically generating an expressive performance on the anthropomorphic flutist robot is detailed. A feed-forward network trained with the error back-propagation algorithm was implemented to model the expressiveness of a professional flutist's performance. In particular, note duration and vibrato were considered as performance rules (sources of variation) to enhance the expressiveness of the robot's performance. From the mechanical point of view, the vibrato and lung systems were re-designed to effectively control the proposed music performance rules. An experimental setup was designed to verify the effectiveness of generating a new expressive score from a model built on the performance of a professional flutist. As a result, the flutist robot was able to automatically produce, from a nominal score, an expressive performance similar to that of the human player.
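As a rough illustration of the kind of model described above, the following is a minimal sketch of a one-hidden-layer feed-forward network trained with error back-propagation, mapping nominal note features to two expressive parameters (here, a duration deviation and a vibrato depth). The feature set, network size, learning rate, and training targets are all illustrative assumptions, not the configuration actually used with the flutist robot.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ExpressionMLP:
    """Sketch of a feed-forward net: note features -> expressive parameters."""

    def __init__(self, n_in=3, n_hidden=8, n_out=2, lr=0.3):
        # Hypothetical sizes: 3 note features in, 2 expression values out.
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        # Hidden and output activations (sigmoid units throughout).
        self.h = sigmoid(X @ self.W1 + self.b1)
        self.y = sigmoid(self.h @ self.W2 + self.b2)
        return self.y

    def backward(self, X, target):
        # Error back-propagation for a squared-error loss.
        delta2 = (self.y - target) * self.y * (1.0 - self.y)
        delta1 = (delta2 @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * self.h.T @ delta2
        self.b2 -= self.lr * delta2.sum(axis=0)
        self.W1 -= self.lr * X.T @ delta1
        self.b1 -= self.lr * delta1.sum(axis=0)

# Synthetic stand-in for a professional flutist's measured expression:
# each row of X is a note (e.g. pitch, nominal duration, metrical
# position, all normalized), each row of T is (duration deviation,
# vibrato depth). Purely illustrative data.
X = rng.uniform(0.0, 1.0, (40, 3))
T = sigmoid(X @ rng.normal(0.0, 1.0, (3, 2)))

net = ExpressionMLP()
net.forward(X)
loss_before = np.mean((net.y - T) ** 2)
for _ in range(500):
    net.forward(X)
    net.backward(X, T)
loss_after = np.mean((net.forward(X) - T) ** 2)
```

Once trained on aligned (nominal score, measured performance) pairs, such a network can be applied note by note to a new nominal score to predict duration and vibrato values for the robot's controllers.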