A method is proposed for reducing the time required to train a deep neural network (DNN) via deep reinforcement learning (DRL) so that a robot can perform anchor-bolt insertion, a peg-in-hole task for holes in concrete. The method is also intended to reduce task execution time, and it consists of two steps. In the first step, a map of state observations and search results is created for holes in a concrete wall, and this map is used to train the DNN via DRL in an offline manner. In the second step, the DNN is trained with a curriculum that gradually increases the step-size options the DNN can output to command the robot. Experimental evaluations demonstrate that the offline training reduces DNN training time by about 87.5% while achieving success rates and execution times similar to those obtained with a DNN trained online. Moreover, the evaluations show that curriculum training reduces task execution time and enables execution of the peg-in-hole task for unknown holes with a success rate of 97.5% and an execution time of 7.77 s, which represent a 12.8% higher success rate and a 4.71 s shorter execution time than those obtained with a DNN trained online. These results demonstrate the effectiveness of the proposed method and its applicability to the construction industry. Although the method was applied to anchor-bolt insertion, it can be extended to other peg-in-hole tasks conducted in discrete steps.
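The two training steps might be sketched as follows. This is a minimal illustration only: the observation encoding, the map format, the step-size values, and the curriculum schedule are all hypothetical assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of the two-step training scheme (all values hypothetical).

# Step 1: a map from discretized state observations (here, lateral peg
# offsets) to search results, collected once from real holes and then
# replayed so the DNN can be trained by DRL offline, without robot motion.
observation_map = {
    (0, 0): "success",     # peg aligned with the hole (hypothetical encoding)
    (1, 0): "contact",     # small lateral offset
    (0, 1): "contact",
    (2, 2): "no_contact",
}

def simulated_step(offset):
    """Replay a stored search result instead of moving the real robot."""
    return observation_map.get(offset, "no_contact")

# Step 2: a curriculum over the step-size options the DNN may command.
# Early stages expose only small steps; later stages add larger options
# so the trained policy can also cover distance quickly.
FULL_STEP_SIZES_MM = [0.1, 0.5, 1.0, 2.0]  # hypothetical option set

def allowed_step_sizes(stage):
    """Return the step-size options available at a given curriculum stage."""
    return FULL_STEP_SIZES_MM[: stage + 1]
```

During offline training, each environment interaction is answered by `simulated_step`, and the action space presented to the DNN at stage `k` is `allowed_step_sizes(k)`.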