Linear regression for hidden Markov model (HMM) parameters is widely used in adaptive training for time-series pattern analysis, especially in speech processing. This paper presents a fully Bayesian treatment of linear regression for HMMs based on variational techniques, analytically deriving a variational lower bound on the marginalized log-likelihood of the linear regression. Using this lower bound as an objective function, we can optimize the model topology and hyper-parameters of the linear regression rather than controlling them as hand-tuned parameters; linear regression for HMM parameters is thus realized in a non-parametric Bayesian manner. Experiments on large-vocabulary continuous speech recognition confirm the generalizability of the proposed approach, especially for small quantities of adaptation data.
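As a sketch of the quantity the abstract refers to, a variational lower bound on the marginalized log-likelihood typically takes the following standard form (the symbols are assumptions, since the abstract fixes no notation: $O$ for the observations, $\Theta$ for the linear-regression parameters being marginalized, $M$ for the model topology, and $q$ for the variational posterior):

```latex
\log p(O \mid M)
  \;\ge\;
  \mathcal{F}(M)
  \;=\;
  \mathbb{E}_{q(\Theta)}\!\left[ \log \frac{p(O, \Theta \mid M)}{q(\Theta)} \right]
```

The gap between the two sides is the KL divergence $\mathrm{KL}\!\left(q(\Theta)\,\|\,p(\Theta \mid O, M)\right) \ge 0$, so maximizing $\mathcal{F}(M)$ over $M$ and the hyper-parameters serves as the tractable surrogate for marginal-likelihood-based model selection described in the abstract.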