Experimental design is the method of choice for establishing whether social interventions have the intended effects on the populations they are presumed to benefit. Experience with field experiments, however, has revealed significant limitations relating chiefly to (a) practical problems implementing random assignment, (b) important uncontrolled sources of variability occurring after assignment, and (c) a low yield of information for explaining why certain effects were or were not found. In response, it is increasingly common for outcome evaluation to draw on some form of program theory and extend data collection to include descriptive information about program implementation, client characteristics, and patterns of change. These supplements often cannot be readily incorporated into standard experimental design, especially statistical analysis. An important advance in outcome evaluation is the recent development of statistical models that are able to represent individual-level change, correlates of change, and program effects in an integrated and informative manner.
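The kind of statistical model alluded to in the last sentence can be illustrated with a minimal growth-model sketch. This is not the article's own model: the simulated data, variable names, and the simple least-squares fit are all assumptions made for illustration. The idea is that each client has an individual trajectory (intercept and rate of change), and the program effect appears as a treatment-by-time interaction, so change, correlates of change, and the program effect are estimated in one integrated model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical longitudinal data: 200 clients observed at 4 time points.
n, waves = 200, 4
treated = rng.integers(0, 2, n)                 # program vs. control group
intercepts = 50 + rng.normal(0, 5, n)           # client-level starting points
# Assumed true effect: the program raises each client's rate of change by 2.0.
slopes = 1.0 + 2.0 * treated + rng.normal(0, 0.5, n)

time = np.tile(np.arange(waves), n)
person = np.repeat(np.arange(n), waves)
y = intercepts[person] + slopes[person] * time + rng.normal(0, 2.0, n * waves)

# Growth model: y ~ 1 + time + treated + time*treated.
# The time*treated coefficient is the program effect on individual change.
X = np.column_stack([
    np.ones(n * waves),
    time,
    treated[person],
    time * treated[person],
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated effect on rate of change: {beta[3]:.2f}")
```

A full treatment would use a multilevel (mixed-effects) model with random intercepts and slopes rather than plain least squares, which is what allows the between-client variability in change to be modeled explicitly; the sketch above only shows the structure of the fixed-effects part.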