Measurement for Better Philanthropy and Better Practice

“…a specter is haunting the nonprofit world, and it is the specter of measurement.”

A recent article in the Chronicle of Philanthropy makes a case against the latest push for a more “data-driven” philanthropy. The article, “Measurement is a Futile Way to Approach Grant Making,” traces the history of philanthropic and government efforts to use measures of investment outcomes to guide funding decisions. The author, William Schambra, makes some important and provocative points. He argues that the considerable effort it takes to collect, analyze and report data from these measures, especially on the part of practitioners, simply isn’t worth it: (a) because funders rarely use the findings from these studies, and (b) because the measures used tend to be idiosyncratic and transient and therefore can’t contribute useful information to the broader field. We agree on these points and, like the author, are disturbed by them.

Where we differ from Mr. Schambra is in how to remedy this situation. He calls for abandoning or radically curtailing efforts to measure the outcomes of education investments. We say: change the way it’s done. Measure things that really matter to practitioners and investors; report results in more timely and consistent ways; and build the capacity of both practitioners and investors to use these data more effectively.

Most of IRRE’s work has been with struggling high schools trying to transform their practices to produce better results for their students. Our partner investors and practitioners in this work care about results, but which results? Two come to mind: students making timely progress toward successful graduation, and students learning what they need to move on successfully to post-secondary training or college. So let’s measure these things, and let’s make sure the data we’re collecting can be used to help students, educators and investors make decisions before the students we’re measuring and the money we’re investing are gone. Let’s also make sure we all measure these outcomes and their leading indicators similarly enough across initiatives that we can learn something from each other’s results.

Some Examples

IRRE and our partners use two leading indicators of graduation and post-secondary success: 1) progress toward graduation (the number of credits a student has earned toward graduation divided by the number of credits the student should have accumulated by that point in high school); and 2) college and career readiness, measured by successful completion of gateway courses to post-secondary opportunities (i.e., passing or better grades in courses required for entry into two-year and four-year colleges and/or into defined post-secondary employment and training opportunities). These indicators are neither exhaustive nor optimally precise, but they illustrate how we can measure something meaningful to practitioners and investors (and students themselves) on a regular basis; they are available from existing student records (low burden on practitioners); and they can be calculated similarly enough across education initiatives for results to be compared for knowledge-building purposes.
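
To make the arithmetic concrete, here is a minimal sketch in Python of how these two indicators could be computed from a student-records extract. The field names, grade set and example values are our illustrative assumptions, not any district’s actual data model.

    # Illustrative sketch (not IRRE's actual system): computing the two
    # leading indicators from a student-records extract.

    PASSING_GRADES = {"A", "B", "C", "D"}  # assumed local meaning of "passing or better"

    def progress_toward_graduation(credits_earned, credits_expected):
        """Credits earned toward graduation divided by the credits a student
        should have accumulated by this point; 1.0 means on pace."""
        return credits_earned / credits_expected if credits_expected else 0.0

    def gateway_courses_complete(grades, gateway_courses):
        """True if the student has a passing-or-better grade in every course
        required for entry into post-secondary opportunities."""
        return all(grades.get(course) in PASSING_GRADES for course in gateway_courses)

    # Example: a sophomore expected to hold 12 credits at this point.
    print(progress_toward_graduation(credits_earned=10.5, credits_expected=12.0))  # 0.875
    print(gateway_courses_complete(
        grades={"Algebra I": "B", "English 9": "C"},
        gateway_courses={"Algebra I", "English 9"},
    ))  # True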

We have seen the utility of these kinds of measures enhanced by adding indicators of key practices known to affect these outcomes; in particular, measures of classroom teaching and learning that provide a common definition of, and focus for, instructional improvement. Admittedly, adding these measures to an initiative’s outcomes “dashboard” increases the burden on practitioners (instructional leaders and classroom teachers). But the payoff can be significant: to practitioners, to investors and, if these implementation measures are used across multiple initiatives, ultimately to the field.

IRRE and its school and district partners use a classroom visit protocol (loaded on PDAs, iPads and smartphones) that measures levels of engagement, alignment and rigor (EAR) in the teaching and learning observed during 20-minute classroom visits. With this system, school and district leaders gain immediate access to summary and individual classroom reports, and they are using these data to enrich and focus data-driven dialogue around student learning, to design professional development and to evaluate its effectiveness. Data collected with this protocol in our rural, urban and ex-urban partner schools and districts are generating findings on how teaching and learning can be improved in these diverse contexts.
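
As a rough illustration of how visit data like these might roll up into summary reports, consider the sketch below. The EAR protocol’s actual rubric and scales are not detailed in this post, so the 1–4 ratings, field names and aggregation shown here are assumptions for the example.

    # Illustrative sketch: aggregating 20-minute classroom-visit ratings
    # into school-level and per-teacher EAR summaries.

    from statistics import mean

    # Each visit records assumed 1-4 ratings for engagement, alignment, rigor.
    visits = [
        {"teacher": "T01", "engagement": 3, "alignment": 4, "rigor": 2},
        {"teacher": "T01", "engagement": 4, "alignment": 3, "rigor": 3},
        {"teacher": "T02", "engagement": 2, "alignment": 2, "rigor": 2},
    ]

    def summarize(records, dimensions=("engagement", "alignment", "rigor")):
        """School-wide mean rating per EAR dimension, plus per-teacher means."""
        school = {d: round(mean(v[d] for v in records), 2) for d in dimensions}
        by_teacher = {}
        for v in records:
            by_teacher.setdefault(v["teacher"], []).append(v)
        teacher_means = {
            t: {d: round(mean(v[d] for v in recs), 2) for d in dimensions}
            for t, recs in by_teacher.items()
        }
        return school, teacher_means

    school, teacher_means = summarize(visits)
    print(school)         # {'engagement': 3.0, 'alignment': 3.0, 'rigor': 2.33}
    print(teacher_means)  # per-teacher summaries for follow-up conversations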

With actionable data on student outcomes and effective practices available, the next step is to use them, and to use them effectively. In IRRE’s partnerships with schools and districts across the country, we have learned how important it is to provide intensive supports for effective use of data at all levels of educational practice: teachers, administrators and district staff. Building this capacity at all levels of the system has dramatically increased the perceived value of collecting the data in the first place. By extension, similar kinds of professional development supports can and should be made available to investors as well.

In response to Mr. Schambra’s legitimate concerns about the state of outcomes-driven philanthropy, we argue that the benefits of collecting certain kinds of data, reporting them in a timely fashion and supporting practitioners and investors in using them to guide educational practice and funding outweigh the burden on practitioners. These data include: a common and limited set of long-term student outcomes of clear import to students, education practitioners and policy makers; leading indicators of these outcomes available from most student information systems; and assessments of a few critical practices contributing to those outcomes. All three sets of measures can use common metrics, so that results can be used in comparative studies across different initiatives and time points.
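
As a sketch of what “common metrics” could look like in practice, a shared record layout is what lets results from different initiatives be pooled and compared. Every field name and value below is invented for illustration; no initiative’s actual reporting format is implied.

    # Illustrative sketch of a common record format spanning all three sets
    # of measures: long-term outcomes, leading indicators, practice measures.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class OutcomeRecord:
        initiative: str                    # which initiative reported the record
        school_year: str
        graduated_on_time: Optional[bool]  # long-term outcome (None until known)
        progress_ratio: float              # leading indicator: credits earned / expected
        gateway_complete: bool             # leading indicator: gateway courses passed
        ear_engagement: float              # practice measures: mean EAR ratings
        ear_alignment: float
        ear_rigor: float

    # Because every initiative reports the same fields on the same scales,
    # records from different sites and time points can be pooled directly.
    records = [
        OutcomeRecord("Initiative A", "2011-12", None, 0.92, True, 3.1, 3.4, 2.8),
        OutcomeRecord("Initiative B", "2011-12", None, 0.78, False, 2.6, 2.9, 2.4),
    ]
    on_pace = [r for r in records if r.progress_ratio >= 1.0]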

Resources

Mr. Schambra correctly warns readers about “experts” who too often claim, “… all those earlier efforts failed, because they didn’t do it my way!” The strategies we propose here are meant to illustrate a more general point: taking a small number of meaningful, common measures of progress in a timely fashion, with support provided to those who use the results, can be done and, we argue, should be done for the benefit of practitioners and investors alike. Below is a list of other sources describing different strategies for reaching this same goal.

“Getting Ideas Into Action: Building Networked Improvement Communities in Education,” a Carnegie Foundation-supported report, available online at: http://www.carnegiefoundation.org/sites/default/files/bryk-gomez_building-nics-education.pdf

The Annenberg Institute for School Reform’s “Leading Indicator Series,” available at:  http://www.annenberginstitute.org/Products/LeadingIndicatorsSeries.php

The Consortium on Chicago School Research’s work identifying “On-Track” indicators for youth, available at: http://ccsr.uchicago.edu/publications/07%20What%20Matters%20Final.pdf, and its publication “A New Model for the Role of Research in Supporting Urban School Reform,” available at: http://ccsr.uchicago.edu/publications/CCSR%20Model%20Report-final.pdf

The FSG report “Breakthroughs in Shared Measurement and Social Impact” available at: http://www.fsg.org/tabid/191/ArticleId/87/Default.aspx?srpush=true

Researchers Without Borders’ work on measuring implementation fidelity and program enactment: http://www.researcherswithoutborders.org/projects/measuring-enactment
