Long documents typically contain many sections and segments. In Wikipedia, an article can be divided into sections, and each section can be further divided into segments. Even after this division, a single segment can still be too long to read, so we argue that each segment should have a short summary that gives readers a quick overview of its content. This paper applies neural summarization models, including the Seq2Seq model and the pointer-generator network, to segment summarization. These models conventionally take the target segment as their only input. In our setting, however, the remaining segments of the same article are likely to contain descriptions related to the target segment. We therefore propose several methods to extract an additional sequence from the whole article and combine it with the target segment to form the input for summarization, and we compare the results against the original models without additional sequences. Furthermore, we propose a new model that uses two encoders to process the target segment and the additional sequence separately. Our results show that the two-encoder model outperforms the original models in terms of ROUGE and METEOR scores.