Finding the rare information hidden in big data is important and meaningful for outlier detection. However, finding such rare information is extremely difficult in high-dimensional space because of the notorious curse of dimensionality: most existing methods fail to obtain good results since the Euclidean distance no longer works well there. In this paper, we first perform a grid division of the data along each attribute and compare the density ratio of every point in each dimension. We then project the points that fall into the same grid cell onto the other dimensions and calculate their dispersion using a defined cluster density value. Finally, we sum up the weight values of each point over the two calculation steps; the points with the largest weights are reported as outliers. Experimental results show that the proposed algorithm achieves high precision and recall on synthetic datasets with the dimensionality varying from 100 to 10000.
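To make the per-dimension grid-and-density idea concrete, the following is a minimal illustrative sketch, not the paper's actual two-step algorithm: each attribute is split into equal-width grid cells, each point receives a weight inversely proportional to the occupancy of its cell in that dimension, and the weights are summed across dimensions so that the largest totals indicate outliers. The function name, the equal-width binning, and the inverse-density weighting are all assumptions made for illustration.

```python
import numpy as np

def grid_outlier_scores(X, n_bins=10):
    """Sum per-dimension rarity weights for each point.

    Illustrative sketch only: for each attribute, the value range is
    split into `n_bins` equal-width grid cells; a point's weight in that
    dimension is the inverse of its cell's occupancy ratio, so points in
    sparse cells accumulate large weights.
    """
    n, d = X.shape
    scores = np.zeros(n)
    for j in range(d):
        col = X[:, j]
        edges = np.linspace(col.min(), col.max(), n_bins + 1)
        # np.digitize sends the maximum value past the last bin; clip it back.
        cells = np.clip(np.digitize(col, edges) - 1, 0, n_bins - 1)
        counts = np.bincount(cells, minlength=n_bins)
        density = counts[cells] / n   # density ratio of each point's cell
        scores += 1.0 / density       # rarer cell -> larger weight
    return scores

# Usage: the points scoring the largest weights are flagged as outliers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X[0] += 8.0                           # inject one obvious outlier
scores = grid_outlier_scores(X)
print(int(np.argmax(scores)))         # index of the highest-weight point
```

In this toy run the injected point sits alone in an extreme cell in nearly every dimension, so its summed weight dominates and it ranks first; the real method additionally examines how the points of a cell disperse when projected onto the other dimensions.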