Online surveys conducted via crowdsourcing services have been widely adopted in academic research on human perception and behavior. Because such surveys may include dishonest or careless responses from crowdworkers who perform large numbers of tasks, or automated responses from bots, several screening methods have been proposed to discard low-quality responses. In security research, however, particularly in phishing studies where participant attention is considered to influence the results, eliminating careless responses may remove participants who should be included. In this study, we address the following research question: "Does the adoption of existing screening methods bias the results of security surveys?" Using Amazon Mechanical Turk and Prolific Academic, two popular crowdsourcing platforms for online surveys, we conducted online user studies (N = 600) on security knowledge, security behavior, and phishing email detection performance to elucidate the influence of screening methods on the results. The results indicate that adopting the instructional manipulation check (IMC) screening method introduces bias in participant demographics and produces differences in phishing email detection performance. Moreover, the degree of these differences depends on the crowdsourcing platform. We also demonstrate that it is non-trivial to determine the correlation between screening methods and factors that can influence the results of a survey on security behavior. These findings suggest that caution should be exercised when applying screening methods such as attention checks and the IMC in studies where the extent of user attention could significantly affect the results.