In this paper, two methods for one-dimensional reduction of data by hyperplane fitting are proposed. One is least α-percentile of squares, which is an extension of least median of squares estimation and minimizes the α-percentile of the squared Euclidean distances. The other is least k-th power deviation, which is an extension of least squares estimation and minimizes the sum of the k-th powers of the squared Euclidean distances. In particular, for least k-th power deviation with 0 < k ≤ 1, it is proved that a useful property, called the optimal sampling property, holds for one-dimensional reduction of data by hyperplane fitting. The optimal sampling property states that the global optimum for affine hyperplane fitting passes through N of the data points when an (N−1)-dimensional hyperplane is fitted to N-dimensional data. The performance of the proposed methods is evaluated by line fitting to artificial data and to a real image.
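The following is a minimal sketch, not the authors' implementation, of how the two objectives and the optimal sampling property can be used for line fitting in two dimensions: since the global optimum passes through N = 2 data points, one may enumerate all point pairs, take the line through each pair, and keep the line with the best score. The exact objective forms (`lkpd_score`, `laps_score`), parameter values, and the exhaustive pair search are assumptions made for illustration.

```python
# Hypothetical sketch of robust 2-D line fitting via the two objectives described above.
# Assumption: the optimal sampling property lets us restrict the search to lines
# passing through pairs of data points.
import itertools
import numpy as np


def point_line_distances(points, p, q):
    """Euclidean distances from each point to the line through p and q."""
    d = q - p
    n = np.array([-d[1], d[0]])            # normal vector of the line
    n = n / np.linalg.norm(n)
    return np.abs((points - p) @ n)


def lkpd_score(dist, k=0.5):
    """Assumed least k-th power deviation: sum of k-th powers of squared distances (k = 1 gives least squares)."""
    return np.sum((dist ** 2) ** k)


def laps_score(dist, alpha=0.5):
    """Assumed least alpha-percentile of squares: alpha-percentile of squared distances (alpha = 0.5 gives least median of squares)."""
    return np.percentile(dist ** 2, 100 * alpha)


def fit_line_by_sampling(points, score):
    """Search lines through every pair of data points; return the best pair and its score."""
    best, best_pair = np.inf, None
    for i, j in itertools.combinations(range(len(points)), 2):
        p, q = points[i], points[j]
        if np.allclose(p, q):
            continue
        s = score(point_line_distances(points, p, q))
        if s < best:
            best, best_pair = s, (p, q)
    return best_pair, best


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 80)
    inliers = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.02, 80)])
    outliers = rng.uniform(-3, 3, (20, 2))             # 20% gross outliers
    data = np.vstack([inliers, outliers])

    (p, q), s = fit_line_by_sampling(data, lambda d: lkpd_score(d, k=0.5))
    print("LkPD (k=0.5): line through", p, "and", q, "score", s)
    (p, q), s = fit_line_by_sampling(data, lambda d: laps_score(d, alpha=0.5))
    print("LaPS (alpha=0.5): line through", p, "and", q, "score", s)
```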