Many studies have investigated the use of DNNs to generate motion-compensated predictive frames, one of the core processes in video coding, without relying on motion vectors. Conventional DNN-based methods generate predictive frames from source frames in the forward direction only. However, in video coding standards to date, it has been confirmed that bi-directional prediction, e.g., B-pictures, improves coding efficiency. We therefore propose to apply PredNet, a DNN-based future-frame generation model inspired by the predictive processing of visual stimuli in the brain, bidirectionally to generate motion-compensated predictive frames for video coding. In this paper, the accuracy of the frames predicted by the proposed method is evaluated with MSE and SSIM and compared against that of forward-only PredNet. In addition, we investigate whether prediction accuracy can be improved by increasing the number of training frames taken from videos in YouTube-8M. The results show the effectiveness of the proposed method in terms of lower prediction error than forward-only PredNet, as well as improved performance with more training data.
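As a point of reference for the evaluation metrics mentioned above, the sketch below shows how MSE and a simplified SSIM can be computed between a reference frame and a predicted frame. This is an illustrative assumption on our part, not the paper's evaluation code: standard SSIM is computed over local sliding windows, whereas this version uses global image statistics for brevity.

```python
import numpy as np

def mse(ref, pred):
    # Mean squared error between reference and predicted frame.
    ref = ref.astype(np.float64)
    pred = pred.astype(np.float64)
    return float(np.mean((ref - pred) ** 2))

def global_ssim(ref, pred, data_range=255.0):
    # Simplified SSIM from global statistics (assumption: the standard
    # metric averages SSIM over local windows; this global form is a
    # shortcut for illustration only).
    ref = ref.astype(np.float64)
    pred = pred.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), pred.mean()
    var_x, var_y = ref.var(), pred.var()
    cov = ((ref - mu_x) * (pred - mu_y)).mean()
    return float(
        (2 * mu_x * mu_y + c1) * (2 * cov + c2)
        / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    )
```

Identical frames give an MSE of 0 and an SSIM of 1; larger MSE and smaller SSIM indicate a worse prediction.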