Multilinear models successfully extend the linear model to nonlinear problems. However, conventional multilinear models fail to learn the global structure of a training data set because the local linear models are independent of one another. Moreover, since the local linear transformations are learned in the original space, the performance of multilinear methods depends strongly on the quality of the partition. This paper presents a kernel approach to local linear discriminant analysis for face recognition. In the original space, we use a set of local linear transformations with interpolation to approximate an optimal global nonlinear transformation. Based on these local linear models, we derive an explicit kernel mapping that maps the training data into a high-dimensional transformed space, where the optimal transformation is learned globally. Experimental results show that the proposed method is more robust to the partition than conventional multilinear methods. Compared with general nonlinear kernels, which rely on a black-box mapping, the proposed method reduces the negative effects of potential overfitting.
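The general idea of combining partition-local linear maps through interpolation into one explicit nonlinear mapping can be sketched as follows. This is an illustrative toy in NumPy, not the paper's actual algorithm: the k-means partition, the local PCA bases, and the softmax-style interpolation weights (parameter `tau`) are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data on two separated blobs (a stand-in for face features).
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# --- Step 1: partition the original space (assumption: plain k-means). ---
def kmeans(X, k, iters=20, seed=0):
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Keep the old center if a cluster happens to go empty.
        centers = np.array([X[labels == c].mean(0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return centers

centers = kmeans(X, k=2)

# --- Step 2: one linear map per partition (assumption: local PCA basis). ---
def local_bases(X, centers, dim=2):
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    bases = []
    for c in range(len(centers)):
        Xc = X[labels == c] - centers[c]
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        bases.append(Vt[:dim].T)  # (d, dim) local projection matrix
    return bases

bases = local_bases(X, centers)

# --- Step 3: explicit mapping with soft interpolation weights. ---
def phi(X, centers, bases, tau=1.0):
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
    w = np.exp(-d2 / tau)
    w /= w.sum(1, keepdims=True)  # interpolation weights sum to 1
    # Weighted local projections, concatenated into one global feature vector.
    parts = [w[:, [c]] * ((X - centers[c]) @ bases[c])
             for c in range(len(centers))]
    return np.hstack(parts)

Z = phi(X, centers, bases)
print(Z.shape)  # (100, 4): 2 partitions x 2 local dimensions
```

Because the mapping `phi` is explicit, a global discriminant transformation (e.g., Fisher LDA) can then be fitted on `Z` rather than treating the kernel as a black box.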
|Journal||IEEJ Transactions on Electrical and Electronic Engineering|
|Publication status||Published - 1 Jan 2017|