Communication between humans and robots is a critical step in the integration of social robots into society, and the expression of emotion through a robotic face is one of its key components. Despite recent efforts, no matter how much expressive capability improves, facial expression recognition is often hampered by cultural differences among survey participants. The purpose of this work is to take advantage of the 24-degrees-of-freedom head of the humanoid social robot KOBIAN-R to make it capable of displaying, through face and neck motion, different versions of the same expressions that are easily understood by both Japanese and Western subjects. We present a system based on relevant studies of human communication and facial anatomy, as well as on the work of illustrators and cartoonists. The expression generator we developed can be adapted to specific cultures. Results confirmed the in-group advantage: the recognition rate of this system is higher when the nationality of the subjects matches the cultural characterisation of the displayed expressions. We conclude that this system could be used, in the future, on robots that must interact in social environments with people of different cultural backgrounds.