GIF-SP: GA-based informative feature for noisy speech recognition

Satoshi Tamura, Yoji Tagami, Satoru Hayamizu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

This paper proposes a novel discriminative feature extraction method. The method consists of two stages. In the first stage, a classifier is built for each class, which categorizes an input vector as belonging to that class or not. From all the parameters of these classifiers, a first transformation can be formed. In the second stage, another transformation that generates a feature vector is subsequently obtained to reduce the dimension and enhance recognition ability. These transformations are computed by applying a genetic algorithm. In order to evaluate the performance of the proposed feature, speech recognition experiments were conducted. Results in the clean training condition show that GIF greatly improves recognition accuracy compared to conventional MFCC in noisy environments. Multi-condition results also clarify that our proposed scheme is robust against differences of conditions.
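The two-stage scheme in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: stage 1 builds one-vs-rest linear classifiers (here hypothetical least-squares classifiers, since the abstract does not specify the classifier type) whose weights form the first transformation; stage 2 uses a simple genetic algorithm (selection plus Gaussian mutation) to search a dimension-reducing projection that maximizes a discriminative fitness (here nearest-centroid accuracy on synthetic data, standing in for speech features).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-D data with 3 classes (a stand-in for MFCC-like speech features).
means = [[0, 0, 0, 0], [2, 0, 1, 0], [0, 2, 0, 1]]
X = np.vstack([rng.normal(m, 0.5, size=(30, 4)) for m in means])
y = np.repeat([0, 1, 2], 30)

# Stage 1: one-vs-rest linear classifiers; their stacked weight vectors
# form the first transformation A. (Least-squares fit is an assumption.)
def ovr_weights(X, y, n_classes):
    Xb = np.hstack([X, np.ones((len(X), 1))])          # append bias term
    W = []
    for c in range(n_classes):
        t = (y == c).astype(float) * 2 - 1             # +1 / -1 targets
        w, *_ = np.linalg.lstsq(Xb, t, rcond=None)
        W.append(w)
    return np.array(W)                                  # (n_classes, dim+1)

A = ovr_weights(X, y, 3)
Xb = np.hstack([X, np.ones((len(X), 1))])
F1 = Xb @ A.T                                           # first-stage feature

# Fitness: nearest-centroid classification accuracy of the projected feature.
def fitness(B, F, y):
    Z = F @ B.T
    cents = np.array([Z[y == c].mean(axis=0) for c in np.unique(y)])
    dists = ((Z[:, None, :] - cents[None]) ** 2).sum(-1)
    return (np.argmin(dists, axis=1) == y).mean()

# Stage 2: a tiny GA searches a 2-D projection B of the stage-1 feature.
def ga_search(F, y, out_dim=2, pop=20, gens=30):
    d = F.shape[1]
    popu = rng.normal(size=(pop, out_dim, d))
    for _ in range(gens):
        fit = np.array([fitness(B, F, y) for B in popu])
        elite = popu[np.argsort(-fit)[: pop // 2]]      # selection
        children = elite + rng.normal(scale=0.1, size=elite.shape)  # mutation
        popu = np.vstack([elite, children])
    fit = np.array([fitness(B, F, y) for B in popu])
    return popu[np.argmax(fit)], fit.max()

B, acc = ga_search(F1, y)
print(f"GA-selected 2-D projection accuracy: {acc:.2f}")
```

On this separable toy data the GA quickly finds a projection with high nearest-centroid accuracy; the paper instead evaluates the learned features with a full speech recognizer under clean and multi-condition training.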

Original language: English
Title of host publication: 2012 Conference Handbook - Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012
Publication status: Published - 2012
Externally published: Yes
Event: 2012 4th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012 - Hollywood, CA, United States
Duration: 2012 Dec 3 - 2012 Dec 6

Publication series

Name: 2012 Conference Handbook - Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012

Other

Other: 2012 4th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012
Country/Territory: United States
City: Hollywood, CA
Period: 12/12/3 - 12/12/6

ASJC Scopus subject areas

  • Information Systems
