The performance of automatic speech recognition degrades severely in the presence of noise or reverberation. A conventional approach to handling such acoustic distortions is to apply a speech enhancement technique prior to recognition. However, most speech enhancement techniques introduce artifacts that create a mismatch between the enhanced speech features and the acoustic model used for recognition, thereby limiting the improvement in recognition performance. Recently, there has been increased interest in methods that compensate for this mismatch by accounting for the feature variance during decoding. In this paper, we propose to estimate the feature variance using an adaptation technique based on a discriminative criterion. In experiments on the Aurora2 database, the proposed method achieved a significant digit error rate reduction compared with a spectral subtraction pre-processor, and using a discriminative criterion for adaptation provided a further improvement over maximum likelihood estimation.
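As background, the variance-compensated decoding that the abstract refers to is commonly realized by adding an estimate of the enhanced-feature variance to the acoustic-model variance when computing each Gaussian state likelihood, so that unreliable feature dimensions contribute less to the score. The following is a minimal sketch of that idea, assuming diagonal covariances; the function names and the toy values are illustrative, not from the paper.

```python
import numpy as np

def gaussian_loglik(y, mean, var):
    # Log-likelihood of y under a diagonal-covariance Gaussian.
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (y - mean) ** 2 / var)

def compensated_loglik(y, mean, model_var, feat_var):
    # Variance compensation: the estimated feature variance from the
    # enhancement front end is added to the acoustic-model variance,
    # which down-weights dimensions the enhancer is uncertain about.
    return gaussian_loglik(y, mean, model_var + feat_var)

# Toy example: one enhanced feature vector scored against one HMM state.
y = np.array([1.2, -0.3, 0.8])          # enhanced feature vector
mean = np.array([1.0, 0.0, 1.0])        # state mean
model_var = np.array([0.5, 0.5, 0.5])   # state (model) variance
feat_var = np.array([0.2, 1.0, 0.1])    # estimated enhancement uncertainty

print(compensated_loglik(y, mean, model_var, feat_var))
```

The proposed method concerns how `feat_var` is estimated (discriminatively rather than by maximum likelihood); the decoding rule itself is unchanged.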