This paper discusses a deep neural network (DNN)-based minimum variance (MV) beamformer suited to cases where the target sound source moves slightly in front of the microphones. In practical speech enhancement applications, such as a guidance terminal installed in a train station, the target sound source can be assumed to be located approximately in front of the microphone array, although it may move slightly. Speech enhancement techniques used under such conditions fall into two types: one enhances the sound source while adaptively estimating its location, and the other enhances the area in front of the microphone array. The former requires localization of the target source but gives the beamformer a high degree of freedom, which can yield high noise suppression performance; the latter requires no source localization but leaves the beamformer with a low degree of freedom. Speech enhancement experiments comparing these approaches demonstrated that the MV beamformer based on adaptive sound source localization provides more accurate enhancement than the one based on area enhancement, even when the sound source is moving.
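As background for the MV beamformer the abstract refers to, the classical minimum variance distortionless response (MVDR) solution can be sketched as follows. This is a minimal NumPy illustration of the standard closed-form weights, not the paper's DNN-based method; the steering vector `d`, covariance `R`, and the broadside-source setup are illustrative assumptions.

```python
import numpy as np

def mvdr_weights(R, d):
    """Classical MVDR weights: w = R^{-1} d / (d^H R^{-1} d).

    R: (M, M) noise spatial covariance matrix (Hermitian, positive definite).
    d: (M,) steering vector toward the (estimated or assumed) source direction.
    In the paper's setting these would come from adaptive localization or a
    fixed front-of-array assumption; here they are toy values.
    """
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy example: 4-mic array, one frequency bin.
M = 4
rng = np.random.default_rng(0)
# Steering vector for a source at broadside (directly in front): all ones.
d = np.ones(M, dtype=complex)
# Noise covariance: identity plus a small Hermitian perturbation.
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = np.eye(M) + 0.1 * (A @ A.conj().T) / M

w = mvdr_weights(R, d)
# The distortionless constraint holds: w^H d = 1,
# while noise power w^H R w is minimized.
print(np.allclose(w.conj() @ d, 1.0))  # True
```

The two approaches compared in the paper differ mainly in how `d` is chosen: the localization-based variant re-estimates the source direction as the speaker moves, while the area-based variant keeps a fixed look direction covering the front of the array.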