Multi-level speech emotion recognition based on Fisher criterion and SVM

Li Jiang Chen*, Xia Mao, Mitsuru Ishizuka

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)


To address the speaker-independent emotion recognition problem, a multi-level speech emotion recognition system is proposed that classifies six speech emotions (sadness, anger, surprise, fear, happiness, and disgust) from coarse to fine. The key idea is that the emotions separated at each level are closely tied to the emotional features of speech. For each level, appropriate features are selected from 288 candidate features by the Fisher ratio, which also serves as an input parameter for training the support vector machine (SVM). On the Beihang emotional speech database and the Berlin emotional speech database, principal component analysis (PCA) for dimensionality reduction and an artificial neural network (ANN) for classification are adopted to design four comparative experiments: Fisher+SVM, PCA+SVM, Fisher+ANN, and PCA+ANN. The experimental results show that the Fisher criterion outperforms PCA for dimensionality reduction, and that SVM generalizes better than ANN for speaker-independent speech emotion recognition. The similar results obtained on the two different databases suggest good cross-cultural adaptability.
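The feature-selection step described above can be sketched in code. The following is a minimal illustration, not the paper's exact pipeline: it computes a per-feature Fisher ratio (between-class variance of the class means divided by the summed within-class variances), keeps the highest-scoring features, and trains an SVM on them. The synthetic 288-dimensional data, the number of retained features, and the SVM kernel are all assumptions for the sake of a runnable example.

```python
import numpy as np
from sklearn.svm import SVC

def fisher_ratio(X, y):
    """Per-feature Fisher ratio: between-class variance of the class
    means divided by the summed within-class variances."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += (Xc.mean(axis=0) - overall_mean) ** 2
        within += Xc.var(axis=0)
    return between / (within + 1e-12)  # guard against zero variance

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 288 acoustic candidate features:
# only the first 5 dimensions actually carry class information.
X = rng.normal(size=(200, 288))
y = rng.integers(0, 2, size=200)
X[:, :5] += y[:, None] * 2.0  # shift class-1 means on features 0..4

ratios = fisher_ratio(X, y)
top = np.argsort(ratios)[::-1][:10]   # keep the 10 best-ranked features
clf = SVC(kernel="rbf").fit(X[:, top], y)
print(clf.score(X[:, top], y))
```

In a multi-level system, a selection like this would be repeated per level, since each coarse-to-fine split is discriminated best by a different feature subset.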

Original language: English
Pages (from-to): 604-609
Number of pages: 6
Journal: Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence
Issue number: 4
Publication status: Published - Aug 2012
Externally published: Yes


Keywords

  • Fisher criterion
  • Speaker independent
  • Speech emotion recognition
  • Support vector machine

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Software


