In this paper we introduce a novel approach to robust automatic recognition of emotion from speech features, using a classification model called a decision forest. The 13th-order Mel-frequency cepstral coefficient (MFCC) vector is treated as multivariate data and imported into our classifier. To draw out the underlying, inductive information behind the MFCC features, our decision forest classifier performs classification in two stages: a supervised-clustering-based pattern extraction stage and a soft-discretization-based decision forest stage. Finally, the Japanese emotion corpus used for training and evaluation is described in detail. In recognizing six discrete emotions, the results exceeded a mean recognition rate of 81%.
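The 13th-order MFCC feature mentioned above can be illustrated with a minimal sketch of the standard extraction pipeline (framing, windowing, power spectrum, mel filterbank, log, DCT). This is a generic, numpy-only illustration of MFCC computation, not the paper's implementation; all parameter values (sample rate, frame size, hop, filter count) are assumptions chosen for the example.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    """Compute a 13th-order MFCC vector per frame (textbook recipe)."""
    # Frame the signal and apply a Hamming window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank spanning 0 .. sr/2.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log mel energies, then a DCT-II to decorrelate -> cepstral coefficients.
    mel_energy = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_mels)))
    return mel_energy @ dct.T  # shape: (n_frames, 13)

# Example: 1 s of a 440 Hz tone yields one 13-dimensional vector per frame,
# the multivariate input assumed by the classifier described above.
sr = 16000
t = np.arange(sr) / sr
coeffs = mfcc(np.sin(2 * np.pi * 440.0 * t), sr=sr)
```

Each row of `coeffs` is one 13-dimensional MFCC observation; a sequence of such rows forms the multivariate data that a downstream classifier such as the paper's decision forest would consume.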