Though we have recently witnessed the “exponential production of digital data to measure, analyze, and predict educational performance” (Salajan & Jules, this volume), insufficient attention has been paid to the quantitative methods used to process and transform these data in order to arrive at findings about “what works.” This chapter addresses that gap by discussing a range of constraints that affect the main methods used for this purpose, collectively known as “impact evaluation.” Specifically, the chapter makes explicit the methodological assumptions, technical weaknesses, and practical shortcomings of the two main forms of impact evaluation: regression analysis and randomized controlled trials. Although Big Data, and the capacity to process it, is receiving growing attention, the underlying point here is that these new initiatives and advances in data collection still depend on methods with serious limitations. Moreover, proponents of Big Data not only avoid or downplay discussion of the methodological pitfalls of impact evaluation; they also fail to acknowledge the political and organizational dynamics that affect the collection of data. To the extent that such methods will increasingly be used to guide public policy around the globe, it is essential that stakeholders inside and outside education systems be informed of their weaknesses, both methodological and in terms of their inability to take the politics out of policymaking. While the promises of Big Data are seductive, they have not replaced the human element of decision making.