An appropriate dimension reduction of raw data helps to reduce computational cost and to reveal the intrinsic structure of complex data. In this paper, a dimension reduction method for regression is proposed. The method combines the well-known sliced inverse regression with conditional entropy minimization. Using entropy as a measure of the dispersion of the data distribution, the dimension-reduction subspace is estimated without assuming any particular form of the regression function or the data distribution, unlike conventional sliced inverse regression. Experiments on both artificial and real-world data sets show that the proposed method performs well compared with several conventional methods.
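As background, the conventional sliced inverse regression (SIR) that the proposed method builds on can be sketched as follows. This is a minimal illustrative implementation of standard SIR only, not the authors' entropy-based method; the function name and parameters are hypothetical.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Estimate effective dimension-reduction directions via sliced
    inverse regression: slice the response, average the standardized
    predictors within each slice, and eigen-decompose the weighted
    covariance of those slice means."""
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) Sigma^{-1/2}
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # Partition observations into slices by the order of y
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Weighted covariance of within-slice means of Z
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale
    vals, vecs = np.linalg.eigh(M)
    dirs = Sigma_inv_sqrt @ vecs[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)
```

Note that plain SIR relies on the slice means of the inverse regression being informative, which fails for links symmetric about the mean of the projected predictor (e.g. a pure quadratic); this kind of limitation motivates alternatives such as the entropy-based criterion discussed in the paper.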