Multilingual communities that use machine translation to overcome language barriers are becoming increasingly common. However, when many translation errors creep into conversations, users have difficulty fully understanding each other. In this paper, we focus on the misconceptions that occur frequently in actual online conversations using machine translation. We first examine response patterns in machine-translation-mediated communication and associate them with misconceptions. The analysis indicates that response messages posted via machine translation that include misconceptions tend to be incoherent, often focusing only on short phrases of the original message. Next, based on these results, we propose a method that automatically predicts the occurrence of misconceptions in each dialogue. The proposed method assesses each dialogue's tendency to include misconceptions by calculating the gap between the regular discussion thread (syntactic thread) and a discussion thread based on lexical cohesion (semantic thread). Verification shows a significant positive correlation between actual misconception frequency and the gap between the syntactic and semantic threads, which indicates the validity of the method.
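The core idea of comparing syntactic and semantic threads can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the dialogue data is invented, the syntactic thread is taken from explicit reply-to links, and the semantic thread is approximated by simple word overlap rather than a full lexical-cohesion model.

```python
# Hypothetical sketch: estimate the gap between a dialogue's syntactic
# thread (explicit reply-to structure) and its semantic thread (links
# inferred here from crude word overlap, standing in for lexical cohesion).

def tokenize(text):
    return set(text.lower().split())

def semantic_parent(messages, i):
    """Pick the earlier message sharing the most words with message i."""
    best, best_overlap = None, 0
    words = tokenize(messages[i]["text"])
    for j in range(i):
        overlap = len(words & tokenize(messages[j]["text"]))
        if overlap > best_overlap:
            best, best_overlap = j, overlap
    return best

def thread_gap(messages):
    """Fraction of replies whose semantic parent differs from the message
    they syntactically reply to; a higher gap suggests the dialogue is
    more likely to contain misconceptions, per the paper's hypothesis."""
    mismatches, total = 0, 0
    for i, msg in enumerate(messages):
        parent = msg.get("reply_to")
        if parent is None:
            continue
        total += 1
        if semantic_parent(messages, i) != parent:
            mismatches += 1
    return mismatches / total if total else 0.0

# Invented three-message dialogue: the last reply is lexically unrelated
# to the message it answers, simulating an incoherent response.
dialogue = [
    {"text": "The shipping cost depends on package weight", "reply_to": None},
    {"text": "What is the weight limit for shipping", "reply_to": 0},
    {"text": "I like the color blue", "reply_to": 1},
]
print(thread_gap(dialogue))  # → 0.5 (one of two replies mismatches)
```

In practice, the word-overlap heuristic would be replaced by a proper lexical-cohesion measure (e.g., lexical chains over content words), but the gap computation itself follows the same structure.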