How Do We Scale Up the Good Things that We Do?

Five Critical Conditions for Scaling up High Quality Educational Practices

How do we scale up high quality educational practices? This basic question has challenged educators, policy makers, and technical assistance providers (like us) for decades. Today, the transition to the Common Core State Standards (CCSS) gives this question even more urgency. Recently, we contributed to a series of discussion pieces called “Implementing the Common Core State Standards” that focused on sharing professional development approaches and ideas designed to support the implementation of the CCSS in mathematics. The series was sponsored by the Carnegie Corporation and the Institute for Advanced Study and is hosted by The Opportunity Equation. You can access our piece (authored by Drs. James Connell and Julie Broom) and the entire series here.

Of course, the question of how to scale up isn’t original to education. In every sector there are examples of organizations that take a high quality product or service and achieve significant and sustained growth in its sale and use. In 1987 Starbucks was a single coffee shop in Seattle, WA – and today it boasts more than 16,000 locations across 50 countries. What enabled Starbucks to consistently deliver a high quality product and service at such an immense scale? Let’s be clear – we aren’t comparing the coffee roasting industry to the field of education. However, we are asking the same basic question that faced Starbucks in 1987 – how do we scale up the good things that we do?

If you aren’t familiar with our work (see here for background), a significant part of what we do is to provide professional development, technical assistance, and strategic consultation to schools and districts, designed to initiate meaningful and lasting changes in critical educational practices. The cornerstone of this work is providing educators with intensive and sustained supports that offer our partners ample opportunities to experience, observe, practice, and receive consistent feedback on new practices. The challenge we – and any other educational organization working to initiate and support change – continually face is this: how do we bring these kinds of intensive supports to a much greater scale?

We have come to realize that simply expanding our capacity to provide these “thick supports” to more and more districts is not the most efficient or sustainable path to scaling up – either for us or for our school and district partners. Nor will the powerful efficiencies and reach of technology fully address the need to convey the highly complex adult learning processes involved in changing core professional practice across widely diverse individuals and work settings. Instead, we seek a methodology to transfer our support approach and intellectual capital to other entities working with and within these districts – such as curriculum and instruction departments within larger districts, regional intermediaries and service centers, and state departments of education. We think that building this “local” capacity is the key to scaling up and sustaining changes in educational practice. It allows local education agencies – with limited and fleeting access to funding for outside supports such as ours – to become innovation support organizations themselves. In this way, a limited initial investment in capacity building provides a significant return as these agencies become equipped to support, strengthen and sustain effective practices in their own systems.

From our experiences thus far we have seen very clearly (and painfully at times) that five critical conditions will either make or break this kind of capacity building effort. In this blog series we will be outlining these conditions and discussing the important lessons we have learned in our efforts to get them in place and successfully build the capacity of our partners. In this entry (Part One) we will discuss the first two conditions, and the final three will be covered in Part Two.

Condition One: Provide rigorous, meaningful and responsive learning opportunities to participants in the capacity building process; and

Condition Two: Ensure that your staff – including teacher trainers, leadership coaches and strategic consultants – commits the necessary time to deliver these opportunities effectively and thoroughly.

In all of our previous capacity building efforts, the will and skill were there on our part (along with our great modesty) to put these first two conditions in place. We developed and were prepared to offer three complementary learning opportunities to build our partners’ capacity: didactic seminars, field-based observation and reflection, and apprenticeship. In all three cases, we provided some of each type of learning opportunity, but at nowhere near the intensity, frequency or persistence required to meet the ambitious capacity building goals we had set for ourselves and our local partners. Why not? In each case we used a local district we were working with as a “teaching hospital” where our capacity building partners could experience and observe the work firsthand. The challenge we continually ran into was that our focus on supporting our district partners in overcoming implementation challenges eroded our commitment to teaching the work to our capacity building partners.

LESSON LEARNED: If you’re having guests over for dinner, use a recipe you’ve made before. When selecting training sites for capacity building efforts, conduct a thorough needs assessment first to be as sure as possible that the partner site will present familiar implementation challenges. This mitigates the risk of becoming too focused on troubleshooting and supporting implementation at that site and giving short shrift to the time needed to provide high quality learning opportunities for your capacity building partners.

Coming up in Part Two: Identifying and retaining high quality capacity building partners – and more…

Why High School Turnaround Models Will Not Scale Up and What Will

Three of the four federally mandated high school reform models (turnaround, restart and close) seek to change the mix of people surrounding students currently in struggling high schools. With any of these three models implemented, these students will presumably have more contact with more competent and motivated folks than in their previous school setting. I call these re-sorting strategies — they move people around to effect change in practice. The fourth federally mandated model (transformation) relies on the existing school staff and district leaders (and sometimes a new principal) to come up with a better way to do their business with the same people and a lot of federal money. All four of these models — three that re-sort the people and one that doesn’t — are guided by some well-known but rarely implemented principles of effective practice: strong leadership, high quality teaching, data-driven improvement.

My experience supporting and now evaluating these reform efforts is that neither re-sorting commitment and talent within a system nor having struggling schools come up with their own suggestions for how to improve themselves qualifies as a scalable response to underperforming schools and systems – much less to our national failure to ensure universal access to quality education.

It gets worse. Re-sorting strategies — without an immediate, massive influx of highly qualified, student-ready educators well beyond anything currently and temporarily funded by federal dollars — will exacerbate inequities rather than address them. Concentrating more motivated and competent people in fewer places depletes the rest of the system, making it “worse” than it was for those left behind. Likewise, expecting that three years of even lavish funding loosely hitched to research-based principles of good practice will somehow “transform” the thinking and practice of the educators currently involved with struggling schools ignores the meager results of the Annenberg and Gates foundations’ investments in very similar approaches.

Here’s the good news and a way to act on it. We know what educational practices will bring student success — high quality instruction in literacy (within and across content areas) and math; and deep personalization of student learning experience focused on college and work readiness. We have a pretty good idea of what strategies can get these practices to replace our current ones — smaller learning communities for adults and students with certain structural and functional qualities; high quality professional development supports and internal commitment of qualified instructional leaders to work with teachers to get new learning implemented; and good data systems with training for all — students, educators, families — to use these systems to track and intervene to sustain progress in student outcomes and implementation of effective practices.  

What we don’t have yet are mechanisms and the political will to set standards of care for students — what they deserve to learn and how they are treated and taught; and standards of practice and support for educators — what they do and how they are supported to do it. The common core standards move us forward on filling the first of these gaps — what students deserve to know and be able to do — but we haven’t articulated the rest of these standards (of care, practice and supports) or committed to do what it takes to meet them.

We shouldn’t wait, and our students can’t afford to wait, for the inevitable disappointment our re-sorting and minor revisions of business as usual will bring. District, state and federal education leaders, supported by expert practitioners, reform support organizations and researchers, should lead a concerted effort to bring the “standards of care, practice and support” to reality, at least in some representative districts and states.

In my next blog entry, I will lay out how we can learn from other disciplines of practice to move this work forward.

About the author of this blog entry: Dr. James P. Connell is the President of the Institute for Research and Reform in Education.

A New Measure for Classroom Quality

A recent Op-Ed in the New York Times titled “A New Measure for Classroom Quality” (read it here) sparked a dialogue among our colleagues here at IRRE this Monday morning. This blog entry features the email exchange that resulted. We welcome you to join the conversation in the comments section below.

Subject: “A New Measure for Classroom Quality”

Interesting article…how do we respond to this most effectively?

Jim Connell, IRRE President

Re: “A New Measure for Classroom Quality”


I thought it was telling that these discussions about “evaluating” teachers/teaching so often end at diagnosing teachers/teaching as ‘good’ or ‘bad,’ as if that were the real aim of the evaluation. It seems to me (and ‘us,’ I think) that a response should try to deepen this discussion – i.e., that evaluation should accomplish (at least) two goals: 1) evaluating classroom instructional quality; and 2) providing the kind of insight/information about teaching needed to design effective PD to support teachers through the difficult process of changing and improving their practice. I, for one, would like to hear Julie and Anissa’s take on this.

Todd Lacher, IRRE Research Manager

Re: “A New Measure for Classroom Quality”


Todd, you hit on the exact two points that stuck with me, though I’d add a bit of expansion on number two as well as a couple of other ideas.

The most important point Todd brought up – and my greatest pushback on the Instructional Rounds approach – is that our tool is not about diagnosing and then “fixing or healing” a “sick” teacher; it is about growth and support for all teachers. It is a tool that, when used well, provides leaders with the data being demanded by the public while at the same time providing a basis for all additional instructional support to grow.

1) As you have said, the measure should identify the quality of instruction, or state of teaching and learning, in the building, across the district, within departments, by course level, by grade level and individually – making visible the trends that are taking place.

2) Identifying the PD supports needed: topics, types (small group, one-on-one coaching, modeling, peer visits, etc.) and how much.

3) Tracking and monitoring the implementation of the new learning and the effectiveness of the supports provided.

4) Increasing instructional leaders’ repertoire of instructional strategies by learning from those they are observing, and having the opportunity to give credit and share those new learnings across the school.

5) Providing an impetus for teachers to engage in reflective dialogue and work with their coaches and peers to improve their practice.

My thoughts.

Julie Broom, IRRE Director of Instruction

Re: “A New Measure for Classroom Quality”

It struck me as I was reading that the point seemed to be only to determine who was a “good” teacher and who was “bad,” without any thought as to support for growth. Too often, articles focus on teachers teaching rather than students learning. I had a conversation with a literacy coach this past week about how the title “teacher” no longer reflects our true goal, which is to facilitate students’ learning.

I think another piece is that there often seems to be a lack of definition and agreement about what “good teaching” looks like. That’s why it always made sense to me to look more at the learning behaviors of students. While instructional strategies can vary and look different from one class or one teacher to another, learning behaviors are more universal. They are also the true end result. It also seems that teachers react more positively, and can have much more effective and lasting conversations that change their own practice, when the focus is on impacting student learning and what that should look like rather than on saying their own behaviors should look a specific way.

Those are my thoughts.

Anissa Collins, IRRE Associate Director of Instruction

We here at The Forum would love to hear what YOU think.  Come join the conversation.

Measurement for Better Philanthropy and Better Practice

“…a specter is haunting the nonprofit world, and it is the specter of measurement.”

A recent article featured in the Chronicle of Philanthropy makes a case against the latest push for a more ‘data-driven’ philanthropy. The article, titled “Measurement Is a Futile Way to Approach Grant Making,” traces the history of philanthropic and government efforts to use measures of investment outcomes to guide funding decisions. The author, William Schambra, makes some important and provocative points. He argues that the considerable effort it takes to collect, analyze and report data from these measures — especially on the part of practitioners — simply isn’t worth it: a) because findings from these studies are rarely used by their funders; and b) because the measures used in these studies tend to be idiosyncratic and transient and therefore can’t contribute useful information to the broader field. We agree on these points and, like the author, are disturbed by them.

Where we differ from Mr. Schambra is in how to remedy this situation. He calls for abandoning or radically curtailing efforts to measure the outcomes of education investments. We say change the way it’s done: measure things that really matter to practitioners and investors; report results in more timely and consistent ways; and build the capacity of both practitioners and investors to use these data more effectively.

Most of IRRE’s work has been with struggling high schools trying to transform their practices to produce better results for their students.  Our partner investors and practitioners in this work care about results – but what results?  Two come to mind: students making timely progress toward successful graduation and learning what they need to move on successfully to post-secondary training or college.  So let’s measure these things and let’s make sure the data we’re collecting can be used to help students, educators and investors make decisions before the students we’re measuring and the money we’re investing are gone.  Let’s also make sure we all measure these outcomes and their leading indicators similarly enough across initiatives so we can learn something from each others’ results.

Some Examples

IRRE and our partners use several leading indicators of graduation rates and post-secondary success: 1) progress toward graduation (the number of credits earned toward graduation divided by the number of credits the student should have accumulated by that point in their high school career); and 2) for college and career readiness, successful completion of gateway courses to post-secondary opportunities (i.e., passing or better grades in courses required for entry into 2-year and 4-year colleges and/or into defined post-secondary employment and training opportunities). These indicators are neither exhaustive nor optimally precise; but they illustrate how we can measure something meaningful to practitioners and investors (and students themselves) on a regular basis; they are available from existing student records (low burden on practitioners); and they can be calculated similarly enough across education initiatives for results to be compared for knowledge-building purposes.
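As a rough illustration of how these two leading indicators can be computed from existing student records, here is a small sketch. The function names, field values and credit pacing below are our own hypothetical examples for illustration, not a fixed specification used by any district:

```python
# Illustrative sketch: computing two leading indicators from credit records.
# The pacing constant and passing threshold are hypothetical examples.

EXPECTED_CREDITS_PER_SEMESTER = 3.0  # hypothetical district pacing schedule

def progress_toward_graduation(credits_earned, semesters_completed):
    """Credits earned toward graduation divided by the credits the student
    should have accumulated by this point in their high school career."""
    expected = EXPECTED_CREDITS_PER_SEMESTER * semesters_completed
    return credits_earned / expected if expected else 0.0

def gateway_courses_on_track(grades, passing_grade=2.0):
    """Share of gateway (college/career entry) courses completed with a
    passing or better grade; `grades` maps course name -> grade points."""
    if not grades:
        return 0.0
    passed = sum(1 for g in grades.values() if g >= passing_grade)
    return passed / len(grades)

# A student two years (4 semesters) into high school:
p = progress_toward_graduation(credits_earned=10.5, semesters_completed=4)
print(round(p, 2))  # 10.5 of an expected 12.0 credits -> 0.88

g = gateway_courses_on_track({"Algebra II": 3.0, "English 10": 2.0, "Biology": 1.0})
print(round(g, 2))  # 2 of 3 gateway courses passed -> 0.67
```

Because both indicators reduce to simple ratios over data already in student information systems, they can be recalculated each grading period and compared across initiatives, which is the point made above about low burden and common metrics.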

We have seen the utility of these kinds of measures enhanced by adding indicators of key practices known to affect these outcomes: in particular, measures of classroom teaching and learning that provide a common definition and focus for instructional improvement. Admittedly, the burden on practitioners – instructional leaders and classroom teachers – increases when these measures are added to the initiative’s outcomes “dashboard.” But the payoff can be significant — both to practitioners and to investors, and ultimately to the field if these implementation measures are used across multiple initiatives.

IRRE and its school and district partners use a classroom visit protocol (loaded on PDAs, iPads and smartphones) that measures levels of engagement, alignment and rigor (EAR) in the teaching and learning observed during 20-minute classroom visits. With this system, school and district leaders gain immediate access to summary and individual classroom reports on the results, and can use these data to enrich and focus data-driven dialogue around student learning outcomes, design professional development, and evaluate its effectiveness. Data collected with this protocol in our rural, urban and ex-urban partner schools and districts are generating findings on how teaching and learning can be improved in these diverse contexts.
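To make the idea of “summary and individual classroom reports” concrete, here is a minimal sketch of aggregating visit ratings. The rating scale, record layout and report shape below are assumptions of ours for illustration; they are not IRRE’s actual protocol or software:

```python
# Hypothetical sketch: summarizing 20-minute classroom visit ratings on
# engagement, alignment and rigor (EAR). The 1-4 scale and record layout
# are illustrative assumptions only.
from collections import defaultdict
from statistics import mean

visits = [
    {"teacher": "T1", "engagement": 3, "alignment": 4, "rigor": 2},
    {"teacher": "T1", "engagement": 4, "alignment": 3, "rigor": 3},
    {"teacher": "T2", "engagement": 2, "alignment": 2, "rigor": 2},
]

def summarize(visits):
    """Return school-wide and per-teacher mean ratings on each EAR dimension."""
    dims = ("engagement", "alignment", "rigor")
    school = {d: round(mean(v[d] for v in visits), 2) for d in dims}
    by_teacher = defaultdict(list)
    for v in visits:
        by_teacher[v["teacher"]].append(v)
    teachers = {
        t: {d: round(mean(v[d] for v in vs), 2) for d in dims}
        for t, vs in by_teacher.items()
    }
    return school, teachers

school, teachers = summarize(visits)
print(school)          # school-wide means across the three dimensions
print(teachers["T2"])  # one teacher's mean ratings across their visits
```

The design choice worth noting is the two levels of aggregation: the same visit records roll up into a building-level trend report and into individual feedback, which is what lets leaders move between “state of teaching and learning” conversations and targeted PD planning.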

With actionable data on student outcomes and effective practices available, the next step is to use them, and use them effectively. In IRRE’s partnerships with schools and districts across the country, we have learned how important it is to provide intensive supports for effective use of data at all levels of educational practice – teacher, administrator and district staff. Building this capacity at all levels of the system has dramatically increased the perceived value of collecting the data in the first place. By extension, similar kinds of professional development supports can and should be made available to investors as well.

In response to Mr. Schambra’s legitimate concerns with the state of outcomes-driven philanthropy, we argue for the positive benefits (relative to the burden on practitioners) of collecting certain kinds of data, reporting them in a timely fashion and supporting practitioners and investors to use them to guide educational practice and funding.  These data include: a common and limited set of long-term student outcomes of clear import to students, education practitioners and policy makers; leading indicators of these outcomes available from most student information systems; and assessments of a few critical practices contributing to those outcomes.  All three sets of measures can use common metrics so results on these outcomes can be used in comparative studies across different initiatives and time points.

Mr. Schambra correctly warns readers about “experts” who too often claim, “… all those earlier efforts failed, because they didn’t do it my way!” The strategies we propose here are meant to illustrate the more general point that taking a small number of meaningful, common measures of progress in a timely fashion, with support provided to users of their results, actually can be done and, we argue, should be done for the benefit of practitioners and investors alike. Below we provide a list of other sources describing different strategies for reaching this same goal.

“Getting Ideas Into Action: Building Networked Improvement Communities in Education,” a Carnegie Foundation-supported report

The Annenberg Institute for School Reform’s “Leading Indicator Series”

The Consortium on Chicago School Research’s work identifying “On-Track” indicators for youth, and their publication “A New Model for the Role of Research in Supporting Urban School Reform”

The FSG report “Breakthroughs in Shared Measurement and Social Impact”

Researchers Without Borders’ work on measuring implementation fidelity and program enactment

Motivation and Autonomy: Returning to Our Roots

There’s a great conversation going on over at Justin Baeder’s On Performance blog in Education Week. What started as a blog about curriculum-based assessment has transformed into a thought-provoking discussion about the nature and role of professional autonomy and motivation in teaching. As we’ve been following along, it got us thinking about our own “roots” in research on human motivation and autonomy. So we thought we would take this opportunity to revisit the ideas that initially inspired the work that we do. As we take this trip down memory lane, we invite you to join us and to return to your roots as well. Reflect on the ideas, theories, concepts, or research that motivate the work that you do – and take a few minutes to share them with us.

Three Basic Psychological Needs

The intent of IRRE’s work is to create conditions in schools and school systems (for students and adults) that most powerfully support their meeting three fundamental psychological needs:

  • competence,
  • autonomy and
  • relatedness.

These three needs are the cornerstone of Self-Determination Theory (SDT), developed by Edward Deci and Richard Ryan and mentioned in the On Performance discussion.

For students, our core strategies are designed to create conditions that increase their sense that they know what it takes to do well in school and can pull it off (competence); that give them an important role in shaping their own learning – learning aligned with things they see as important and worthwhile (autonomy); and that let them know they are connected to other people in school (peers and educators) and at home (family) in ways that support their learning and development (relatedness).

According to SDT, when these needs are met, students should engage emotionally, behaviorally and cognitively in the work of school and, by doing so, do better, like school more and stay with it even when challenged. When these needs are not met, students become disaffected – they avoid cognitive challenges, withdraw emotionally or act out, and do as little as possible to get by. With disaffection come lower attendance, dropout, and poorer academic performance; with engagement, students are more likely to show up, graduate and perform better academically. These hypotheses drawn from SDT have been supported repeatedly in empirical studies of diverse student populations at elementary, middle and high school levels, using measures specific to SDT as well as instruments and methodologies drawn from other theoretical perspectives.

What Does it Look Like in Schools?

Returning now to the “conditions” in schools and school systems that can promote or undermine students’ motivation and learning, SDT hypothesizes that three aspects of students’ experience in school are critical. Do schools provide their students:

  • High, clear and fair expectations, and effective supports to meet them (structure);
  • Opportunities to make meaningful choices in school and to make meaning of their experiences in school (autonomy support); and
  • People (adults and peers) who know, respect, trust and care about them (involvement).

Again, a significant body of literature confirms that these and closely related dimensions of schooling make a difference in students’ sense of competence, autonomy and relatedness; their engagement; and their doing what it takes to finish school and perform well academically.

Over the past 15 years, IRRE’s reform framework First Things First (FTF) has evolved from these basic tenets. This work has articulated the changes needed in school structures and functions to strengthen these three conditions in support of student motivation, engagement and learning. The theory and research that initially inspired our work focused on students – their experiences in school, their motivation, and their engagement and learning. However, our current work includes processes that support individual-, school- and system-level change in adult behavior. This extension of FTF has been shaped by:

  • What earlier reform frameworks and their results showed were practices tied to improving the structure, autonomy support and involvement experienced by students in school;
  • What motivational theory and research (and other related literatures) had to say about engaging adults to change their own and their systems’ behavior and beliefs about schools and schooling; and
  • What IRRE and our partner schools and districts did that seemed to work or not work to produce desired change in students’ and adults’ motivation, engagement and learning.

An Invitation to Share Your Roots

These are our roots. Motivational theory and research provided the spark that led to bringing educators and researchers together to develop the strategies that became First Things First. We invite practitioners and reform support organization members to share with us – briefly or at length – where you came from in your work to improve schools.

Come join the conversation.

Excerpts from:

Connell, J.P., Klem, A.M., Lacher, T., Leiderman, S., & Moore, W. (2009). First Things First: Theory, Research and Practice. Toms River, NJ: Institute for Research and Reform in Education.

The Active Ingredients of School Improvement

Three Common Strategies for Improvement

Federal policy makers – and, increasingly, the American public, through vehicles such as “Waiting for Superman” – recognize the urgent need to make rapid and meaningful change in the educational experiences of millions of American public school students now attending low-performing schools.

Unfortunately, commonly advocated responses to this need, both old and new, have shown little evidence thus far of being able to address it at scale. The federal government is currently investing billions of dollars in a variety of strategies to improve these schools through School Improvement Grants, the Race to the Top program and the Investing In Innovation (i3) Fund. These huge investments acknowledge both the need to find sustainable and scalable school improvement solutions and the nascent state of our efforts to do so.

Historically, the field has looked to three approaches to meet this need. Targeted approaches focus on slices of the school improvement problem or on small percentages of struggling schools’ students. Even the best of these programs have not, by themselves, turned many struggling schools into thriving ones. More ambitious comprehensive approaches (locally or externally initiated) have recorded some successes, but failures to implement them consistently with fidelity or to sustain them across leadership transitions weaken their case for scalability. The strong wind now filling the sails of new turnaround approaches such as charter schools, replacing principals and staff, and school reconstitution comes in significant part from investor enthusiasm and local success stories rather than consistent or widespread results. Emerging evidence reveals that these turnaround approaches are vulnerable to the same implementation and sustainability challenges as comprehensive reform approaches.


The Active Ingredients of School Improvement


IRRE’s 15‐year history spans this entire evolution of reform thinking. We continue to believe there are ways to meaningfully improve large numbers of chronically underperforming schools; and that the active ingredients of these solutions are known. In IRRE’s view, they are:

  • a rigorous, complete and continuous diagnosis of what educational practices are missing, insufficiently implemented or doing harm;
  • sufficient initial and ongoing capacity to provide educators high quality training and supports for implementation of the educational practices needed; and
  • accountability and management conditions for all involved that bring these two ingredients into large numbers of struggling schools with sufficient focus, force and persistence to generate and sustain meaningful change.

Each of these ingredients shows up in one or more of the prevailing reform approaches. For example, great training and supports for teachers and building leaders characterize the best of all three approaches. Several comprehensive reform models identify and seek to implement the full array of supports needed by struggling schools and their students to be successful. Flexible staff recruitment and strong accountability characterize the best of the new turnaround approaches. What remains elusive is the recipe for combining these ingredients in ways that produce consistent and sustainable results for large numbers of struggling schools.


Combining Ingredients


If there is a modus operandi for this blog, it will be to provoke honest conversations about how schools, districts, foundations, governments, and organizations like ours can work together and be smarter about how these key ingredients get implemented and combined. Too often the work of school improvement remains compartmentalized because of competition, ideology, and politics. We believe common ground exists that can be leveraged to move this critical work forward and improve our young people’s educational experiences and outcomes.

Come join the conversation.