Displaying emotion through facial expressions is an important channel of communication. However, humans assign meaning to facial cues differently depending on their cultural background, which leads to a gap in recognition rates of expressions. The same problem arises with robotic faces: recognition of a robot's facial expressions is often hampered by this cultural divide, and poor recognition rates may lead to poor acceptance and interaction. It would therefore be desirable if robots could flexibly switch their facial configuration to adapt to different cultural backgrounds. To achieve this, we developed a generation system that produces facial expressions and applied it to the 24-degrees-of-freedom head of the humanoid social robot KOBIAN-R. Drawing on the work of illustrators and cartoonists, the system can generate two versions of the same expression, each easily recognisable by either Japanese or Western subjects. As an additional aid to recognition, the display of Japanese comic symbols on the robotic face was also introduced and evaluated. In this work, we conducted a cross-cultural study aimed at assessing this recognition gap and finding solutions for it. The investigation was also extended to Egyptian subjects, as a sample of a further culture. The results confirmed the differences in recognition rates, the effectiveness of customising expressions, and the usefulness of symbol display, suggesting that this approach may be valuable for robots that will interact in multi-cultural environments in the future.
ASJC Scopus subject areas
- Computer Science (general)