A mixture of multiple linear classifiers is known for its efficiency and effectiveness in tackling nonlinear classification problems. Each classifier consists of a linear function multiplied by a gating function, which restricts the classifier to a local region. Previous research has mainly focused on the partitioning of local regions, since its quality directly determines the performance of mixture models. However, real-world data sets frequently suffer from imbalanced class distributions and insufficient labeled data, two problems that also strongly affect the performance of learned classifiers but are seldom considered or explored in the context of mixture models. In this paper, these missing components are introduced into the original formulation of mixture models: a sample weighting scheme for imbalanced data distributions and a manifold regularization term to leverage unlabeled data. Two closed-form solutions are then provided for parameter optimization. Experimental results demonstrate the significance of the added components. As a result, a mixture of multiple linear classifiers can be extended to imbalanced and semi-supervised learning problems.
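The three ingredients named above can be illustrated with a minimal sketch. The snippet below assumes standard instantiations — inverse-frequency sample weights for imbalance, a graph-Laplacian penalty for manifold regularization, and softmax gating over local linear experts — which may differ from the paper's exact formulation; all function names are hypothetical.

```python
import numpy as np

def class_balanced_weights(y):
    # Inverse-frequency sample weights: rarer classes receive larger weights,
    # so the loss is not dominated by the majority class.
    classes, counts = np.unique(y, return_counts=True)
    freq = dict(zip(classes, counts))
    n, k = len(y), len(classes)
    return np.array([n / (k * freq[label]) for label in y])

def laplacian_penalty(F, W):
    # Manifold regularizer: penalizes predictions F that differ across
    # edges of the similarity graph W, i.e. tr(F^T L F) with L = D - W.
    D = np.diag(W.sum(axis=1))
    L = D - W
    return np.trace(F.T @ L @ F)

def mixture_predict(X, Wg, Wc):
    # Gated mixture of linear classifiers: softmax gates weight the
    # per-expert linear scores, localizing each expert to a region.
    G = np.exp(X @ Wg)
    G /= G.sum(axis=1, keepdims=True)   # gates, shape (n, m)
    S = X @ Wc                          # expert scores, shape (n, m)
    return (G * S).sum(axis=1)
```

For example, on labels `[0, 0, 0, 1]` the minority sample receives weight 2.0 versus 2/3 for each majority sample, and the Laplacian penalty is zero whenever connected samples receive identical predictions.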