%0 Journal Article
%T RD or Not RD: Using Experimental Studies to Assess the Performance of the Regression Discontinuity Approach
%A Alexandra Resch
%A Jillian Berk
%A Philip Gleason
%J Evaluation Review
%@ 1552-3926
%D 2018
%R 10.1177/0193841X18787267
%X This article explores the performance of regression discontinuity (RD) designs for measuring program impacts using a synthetic within-study comparison design. We generate synthetic RD data sets from experimental data sets from two recent evaluations of educational interventions, the Educational Technology Study and the Teach for America Study, and compare the RD impact estimates to the experimental estimates of the same intervention. This article examines the performance of the RD estimator when the design is well implemented and also examines the extent of bias introduced by manipulation of the assignment variable in an RD design. We simulate RD analysis files by selectively dropping observations from the original experimental data files. We then compare impact estimates based on this RD design with those from the original experimental study. Finally, we simulate a situation in which some students manipulate the value of the assignment variable to receive treatment and compare RD estimates with and without manipulation. RD and experimental estimators produce impact estimates that are not significantly different from one another and have a similar magnitude, on average. Manipulation of the assignment variable can substantially influence RD impact estimates, particularly if manipulation is related to the outcome and occurs close to the assignment variable's cutoff value.
%K methodological development
%K content area
%K quasi-experimental design
%K methodology (if appropriate)
%K outcome evaluation (other than economic evaluation)
%K design and evaluation of programs and policies
%K education
%U https://journals.sagepub.com/doi/full/10.1177/0193841X18787267