Massive Open Online Courses (MOOCs) have the potential to revolutionize higher education through their wide outreach and accessibility. One of the key challenges in MOOCs is student evaluation: the large number of students makes it infeasible for instructors or teaching assistants (TAs) to grade all assignments. Peer grading, in which students assess each other's work, is a promising approach to evaluation at scale. The peer evaluations are then used directly or aggregated into a consensus value. Without an incentive scheme, however, students have no motive to put effort into their evaluations and may instead provide inaccurate grades. To address these issues, we propose and implement a peer grading scheme, RankwithTA. Observing that a student's quality determines both her performance on the assignment and her grading ability, RankwithTA makes the grade each student receives depend on both the quality of the solution she submitted and the quality of her review and grading work, thereby incentivizing accurate grading. Furthermore, RankwithTA incorporates ground truth through external calibration: a subset of students is graded by instructors or TAs, providing a basis for measuring grading accuracy. Simulation results show that RankwithTA outperforms existing schemes.
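The core idea described above can be sketched in a few lines: a student's final grade combines the quality of her own submission with how accurately she grades peers, where accuracy is measured against TA-provided ground truth on a calibration subset. This is a minimal illustrative simulation, not the paper's actual mechanism; the weight `ALPHA`, the noise model, and all variable names are assumptions introduced here.

```python
import random

random.seed(0)

# Hypothetical sketch of the RankwithTA idea: the grade depends on both
# submission quality and grading accuracy. All parameters below are
# assumptions for illustration, not the scheme's real values.
ALPHA = 0.7          # assumed weight on submission quality
N_STUDENTS = 50
N_TA_GRADED = 10     # calibration subset graded by instructors/TAs

# A student's latent quality drives both her submission score and how
# noisily she grades others (higher quality -> less grading noise).
quality = [random.uniform(0.3, 1.0) for _ in range(N_STUDENTS)]

def peer_grade(grader, target):
    """Grader's noisy estimate of the target's submission score."""
    noise = random.gauss(0.0, 0.3 * (1.0 - quality[grader]))
    return min(1.0, max(0.0, quality[target] + noise))

def grading_accuracy(grader):
    """1 minus the grader's mean absolute error on TA-graded items."""
    errors = [abs(peer_grade(grader, t) - quality[t])
              for t in range(N_TA_GRADED) if t != grader]
    return 1.0 - sum(errors) / len(errors)

# Final grade combines submission quality with grading accuracy,
# as the abstract describes.
final_grade = [ALPHA * quality[i] + (1 - ALPHA) * grading_accuracy(i)
               for i in range(N_STUDENTS)]
```

Under this sketch, a student who submits good work but grades carelessly loses part of her grade through the accuracy term, which is the incentive the scheme relies on.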