Various language services exist to support the listening comprehension of non-native speakers (NNSs). One important service provides NNSs with real-time transcripts generated by automatic speech recognition (ASR) technologies. The goal of our research is to explore the effects of ASR transcripts on the listening comprehension of NNSs and to consider how ASR transcripts can support NNSs more effectively. To reach this goal, we conducted three studies. The first study investigates the comprehension problems faced by NNSs, and the second examines how ASR transcripts affect their listening comprehension, e.g., which types of comprehension problems could and could not be solved by reading ASR transcripts. Finally, the third study explores the potential of using eye-tracking data to detect their comprehension problems. Our data analysis identified thirteen types of listening comprehension problems. ASR transcripts helped the NNSs solve certain problems, e.g., “failed to recognize words they know.” However, the transcripts did not solve problems such as “lack of vocabulary,” and in fact increased the NNSs’ burden. Results also show that eye-tracking data allow reasonably accurate predictions (83.8%) of the types of problems encountered by NNSs. Our findings provide insight into the design of real-time adaptive support systems for NNSs.