From Shanker Blog:

A recent Mathematica report on the performance of KIPP charter schools expands and elaborates on its prior analyses of these schools’ (estimated) effects on average test scores and other outcomes (also here). These findings are important and interesting, and they were covered extensively elsewhere.

As is usually the case with KIPP, the results stirred the full spectrum of reactions. To over-generalize a bit, critics sometimes seem unwilling to acknowledge that KIPP’s results are real no matter how well-documented they might be, whereas some proponents are quick to use KIPP to proclaim a triumph for the charter movement, one that can justify the expansion of charter sectors nationwide.

Despite all this controversy, there may be more opportunity for agreement here than meets the eye. So, let’s try to lay out a few reasonable conclusions and see if we might find some of that common ground.

When KIPP works, it works very well, but it’s not a sure bet. Regardless of what you think of KIPP’s approach, the evidence thus far suggests that students who attend these schools tend to improve more quickly on tests compared with similar regular public school students. They show meaningfully large relative gains in all major subjects and on multiple assessments, as well as in other types of outcomes, such as student and parent satisfaction (as is often the case, longer-term outcomes remain an open question). We can and should discuss the possibility of unmeasured factors such as peer effects, but it seems unlikely that these factors would come close to explaining away the estimated impacts. In general, KIPP schools are well-run and they do a good job with their students. At this point, arguing otherwise is unsupportable. At the same time, it’s worth noting that Mathematica’s analysis found that roughly one in three KIPP middle schools did not produce significantly better results in reading, and one in four failed to do so in math (though, in both subjects, virtually none showed significant negative results). Even for KIPP, some “failure” is part of the game. Running schools is difficult.

KIPP’s approach only works for some students, but these are among the students who need the most help to catch up. Through 8- to 9-hour days and summer school, KIPP adds the equivalent of about 3-4 months to the regular public school calendar. In addition, exceedingly rigid disciplinary policies and parental contracts are used to enforce high-bar academic and behavioral standards. You might characterize such practices as the careful implementation of a “blunt force” approach to education. It’s very difficult to put a number on this, but it’s safe to say that this model is not a good fit for a very large proportion of students, regardless of background. Most of these students won’t apply to KIPP in the first place. Those who are admitted, but don’t thrive in this environment, can (and probably should) seek out alternatives; and, indeed, many do. So, KIPP’s high-intensity approach won’t work for most students, but that doesn’t change the fact that the students who apply and remain are predominantly from lower-income families living in urban areas, and many of them are well behind their peers elsewhere. For this sub-subgroup, KIPP’s approach is often very effective in helping them catch up.

KIPP is very expensive, but they may get a worthwhile return on that additional investment. The existing evidence, though still a bit scarce (due mostly to data availability), suggests that the average KIPP school spends substantially more money than comparable regular public schools. For instance, in our analysis of CMO spending in New York, Ohio and Texas (co-sponsored by NEPC and the Great Lakes Center), the authors – Bruce Baker, Ken Libby and Katy Wiley – find that KIPP spends roughly 30-35 percent more, and in a few cases up to 50 percent more (also see this article). Accordingly, KIPP cannot exist (or expand) without ample private donations. These schools seem like they might be getting a worthwhile return on that considerable investment, but it’s difficult to draw such conclusions at this point. In any case, KIPP doesn’t do “more with less.” They do more with more (and sometimes not even that).

Whether or not KIPP is “scalable” depends on how you define that term. KIPP’s detractors often argue that KIPP cannot be scaled up. This is not quite true. If successfully scaling up means capturing majority market share in a large school district, then KIPP falls well short: it serves only a tiny fraction of students in the districts where it operates. And, due to reasons such as those discussed above (e.g., cost, students, etc.), as well as others (e.g., the teacher labor supply), it’s questionable at best to argue that it could expand very much beyond this super-minority share in any given area without running into significantly diminished returns. At the same time, however, a single charter chain “taking over” most or all of a district was never on the table; indeed, it’s antithetical to the charter school concept. New KIPP schools open every year – the network started in 1994 and currently operates about 125 schools. Most of them seem to be successful in following the model and maintaining a consistent level of quality. There is almost certainly a ceiling, probably a low one, in any given location, but we shouldn’t act as if KIPP hasn’t shown that it can be replicated successfully.

Using KIPP’s results to draw conclusions about charter schools in general is no more supportable than insisting KIPP doesn’t work at all. Charter school sectors, whether in any one place or nationally, are quite diverse. That’s baked into the charter school concept. Charter advocates are clearly justified in holding up KIPP as an example of a successful model, but it’s a whole different ballgame to imply, however subtly, that the 40 or so KIPP middle schools in this report speak to the success or failure of charter schools in general, or justify their expansion. Frankly, if anything, KIPP highlights the disparity between their test-based results and those of the vast majority of charters (including schools run by other established CMOs), which do no better (and often worse) than comparable public schools.

In summary, then, the points above suggest that part of the controversy surrounding KIPP stems from some supporters making too much of it, and some opponents making too little. KIPP’s success is not a fluke, but it’s not a miracle either.

Moreover, the important policy question here is why a small handful of approaches, most prominently KIPP’s, get results, whereas most do not. Charter supporters’ responses to this question tend to focus on innovation, flexibility and personnel policies. That’s all worth considering, and identifying the reasons why some schools work and some don’t is extraordinarily difficult. However, the research thus far suggests that what sets these models apart may have more to do with their reliance on higher spending, massive amounts of extended time, rigid disciplinary policies and high-dosage tutoring.

So, if KIPP and similar models offer any lessons for regular public schools (and other charters), they might speak to the need for smartly increasing investment and services in the schools that our most disadvantaged students attend. And this is an agenda that even the most adamant charter critics can support.