Collective Impact Accelerator: Using Data for Advancing Progress in Collective Impact (Apply by July 26)

Posted 3 months ago at 11:30 am

You are invited to participate in a 12-month action learning cohort focused on using data for advancing progress in collective impact. The Collective Impact Forum, an initiative of FSG and the Aspen Institute Forum for Community Solutions, is developing a “Collective Impact Accelerator” to improve how collective impact funders, backbone teams, and other partners use data to learn and strengthen their work in collaboration with others, ultimately contributing to achieving greater impact in communities.

The goals of the Collective Impact Accelerator are to:

  • Build the capacity of backbone leaders, funders, and other partners to effectively use data as a key strategy in collective impact, contributing to improved results for communities.
  • Create a supportive peer learning community where backbone teams, funders, and/or partners have candid conversations and learn with one another about using data in collective impact.
  • Identify promising practices that will be shared broadly with the field to support backbone leaders, funders, and other practitioners interested in using data in collective impact.

This Collective Impact Accelerator will be limited to participants from 10 separate collaboratives (including one funder and up to two backbone/data partners from each collaborative). Participants will meet for three in-person working sessions from November 2019 to November 2020, receive individual coaching support from the Collective Impact Forum's staff facilitators, and join three peer learning calls during months when there is not a working session. Participants will identify an area of their collaborative where they will focus on using data, and they are expected to commit time between the Accelerator meetings and calls to make progress on their identified action learning project for their collaborative.

The participation fee is $10,000 for each collaborative (for two representatives) or $12,500 for each collaborative (for three representatives). This fee covers the meeting costs and staff time to plan for and facilitate all calls and meetings. Participants will cover their own travel and accommodation costs.

Mark your calendar for these key dates:

  • Applications opened on May 29 and will close on July 26, 2019.
  • Join an informational call from 4-5pm on June 26 to learn more before applying. (Note: you can listen to the recording of the June 18 informational call here; the informational call discussion slides are attached to this post.)
  • We will select 10 participating collaboratives (including one funder and up to two backbone/data partners per collaborative) by August 16, 2019.
  • The first full-day in-person meeting will take place on Tuesday, Nov. 12, 2019 in Chicago, IL (with reception and dinner the night before).
  • The second in-person meeting will take place on Tuesday, May 5, 2020, in Minneapolis, MN (with reception and dinner the night before).
  • The third in-person meeting will take place in October/November 2020 in Washington, D.C. (date will be confirmed by late 2019 based on selected accelerator participants’ availability).

See attached or click on this link to read "frequently asked questions" about the Collective Impact Accelerator.

Webinar

Using Data and Shared Measurement in Collective Impact

Data gathering and shared measurement systems are key elements that help collective impact initiatives better understand and assess their work, but they can also be very challenging to start and sustain. What can we learn from other initiatives about their practices for gathering and sharing data, and what impact have those practices had on their outcomes?

In this virtual coffee, we're talking about gathering and sharing data with Emily Bradley and Michael Nailat, program officers at Home for Good, an initiative that works collaboratively on systems and solutions to end homelessness.

This virtual coffee was held on August 14, 2018 from 3pm – 4pm ET.

Note: For the first 2-3 minutes of the session, the audio goes in and out a bit. After this short period, it evens out and is audible for the rest of the 60-minute session.


Virtual Coffee Resources:

Presentation: Download a copy of the presentation used for this virtual coffee at the link on the right of this page. (Logging in to your Collective Impact Forum account will be necessary to download materials.)

Home for Good was one of 25 sites that participated in the research study When Collective Impact has an Impact. This new study, more than a year in the making, looks at the question of “To what extent and under what conditions does the collective impact approach contribute to systems and population changes?”


Listen to past Collective Impact Virtual Coffee Chats

Virtual Coffee archive

Success Measures Data System

Posted 2 years ago at 11:30 am

Hi everyone,

I'm looking at creating a report on shared measurement for an internship this summer (focus of the CI initiative is reconciliation between Indigenous and non-Indigenous folks in Canada). I've come across the "Success Measures Data System" as a potential paid option for creating a shared measurement system (http://www.successmeasures.org/data-system).

I'm wondering...
a) has anyone had any experience working with this system, and if so, what was it like?
b) does anyone know of similar systems for shared measurement?

Any help is greatly appreciated,

Haw'aa,
(Thank you),

Iloradanon Efimoff 

To Validate or Elevate? Measuring Community Impact in an Actionable Way

Posted Wednesday, March 29, 2017 at 11:49 pm

Last November, Matt Forti and Kim Siegal penned an article titled Actionable Measurement: Getting from “Prove” to “Improve” in the Stanford Social Innovation Review. The article calls upon the social sector to unite around “common questions” that “nonprofits ought to answer about their impact so that they can maximize learning and action around their program models.”

Forti and Siegal depart from ongoing debates in the social sector’s measurement community over the appropriateness of experimental evaluations (i.e., randomized trials)—the industry’s gold standard—to prove a program’s impact. Such large-scale evaluations may be suitable in some instances, but Forti and Siegal thoughtfully argue, instead, that most practitioners would be better served through a more immediate focus on improvement.

We agree. Experimental evaluations are valuable tools to test whether a program works—when programs are applied consistently across similar settings.

But community-level interventions pose significant limitations to experimental evaluation. Ethics aside, providers are quick to point out their community's uniqueness from all others, confounding an apples-to-apples comparison across sites. Moreover, an average study timeline of three to five years, coupled with a price tag in the hundreds of thousands of dollars or more, poses serious hurdles to those who must not only maximize the value to their clients and funders, but also demonstrate that value in short order.

Instead, Forti and Siegal pose a guiding question that closely mirrors our Institute’s approach to community-level evaluation: “what common insights would allow nonprofit leaders to make decisions that generate more social good for clients they serve?”

There is an old Army saying that goes, “what gets checked gets done.” So too, Forti and Siegal’s idea of actionable measurement is to use insights now—in the midst of doing the work itself—to learn, adapt, improve program service delivery, increase social good, and maximize impact over time.

Actionable measurement, or “shared measurement” in collective impact parlance, is a major driver within our AmericaServes initiative, an effort to build local coordinated networks of service organizations that improve how our nation’s military-connected members and their families access a wide range of services and resources in their communities.

Put simply, AmericaServes helps communities create networks of service providers and improve how they operate as a system. Analogous to health care coordination models (e.g., accountable care organizations, patient centered medical homes), AmericaServes strengthens local nonprofit coordination by providing initial funding for a backbone coordination center and the technology to manage—and measure—a referral-based system of care. Accordingly, for both health care and human service delivery, system-level measurement focused on continuous quality improvement is critical to test and implement changes that address the complex or changing needs of the client.

Standard system outcome and satisfaction measures allow AmericaServes communities to monitor and improve their performance. These insights provide the basis for community planning sessions, on-the-ground relationship building, and quarterly in-progress reviews.

As new insights continually emerge, communicating our advances (and setbacks) takes on increasing importance. Additionally, there are new aspects of our work—some we believe followers may have missed—that we want to expand upon to promote a greater awareness and understanding of IVMF’s community-based efforts.

Forti and Siegal, following a comprehensive review of a decade’s worth of their organization’s field studies and research, established “four categories of questions that drove the greatest learning, action, and impact improvement.” We apply the Forti and Siegal framework to the AmericaServes initiative and find that it provides a helpful basis upon which to consider our current outcomes and future actions in the coming years.


1. Impact Drivers: Are there particular conditions or program components that disproportionately drive results?

While there are multiple performance indicators, two stand out above all others: case referral timeliness and appropriateness. As a coordinated network, AmericaServes' theory of change is centered on assisting clients to the right point of service, resource, or care, in the shortest time possible. This is consistent with what the health care field defines as quality of care.

Often, those seeking services present multiple, co-occurring (i.e., comorbid) needs. Consequently, service providers within AmericaServes communities—operating as a comprehensive support network, rather than a fragmented collection of services—are best incentivized to address the specific need(s) presented to their organization. Here, their limited resources are put to their first and best use—a hallmark of superior performance and sustainability.

As human service providers, we all know the disproportionate amount of time and energy spent on attempts to address needs beyond our organization's boundaries. More often than not, these efforts to connect people and their needs beyond our capacity or expertise result not only in organizational failure, but also in extreme client frustration and unmet expectations. Getting the right client to the right point of service in a timely fashion—streamlined access—while critical, is, at times, a herculean feat.

It is often said that communities are not capacity-poor, but rather fragmentation-rich. Additionally, the veteran-serving nonprofit sector is rife with patchy eligibility criteria (each uniquely exclusive or inclusive in its approach), layered on top of membership rules that underpin the very programs put in place to help. To combat these factors, AmericaServes communities work carefully to digitally connect their clients to the most appropriate provider in a timely fashion, mitigating the deep fragmentation across the social sector. If done well enough, we can open the all-too-often locked doors of any community's capacity to serve human needs and drive greater innovation within human services overall.


2. Impact distribution: Does the program generate better results for a particular sub-group?

Apparently so. The greatest early gains appear to be in networks with strong, active coordination centers—the backbone organizations that manage and monitor case referrals between network providers.

We see a pattern emerging in our AmericaServes networks. Those that report the greatest share of positive case outcomes (e.g., client received housing services) and levels of provider engagement (i.e., making and receiving case referrals), also tend to have coordination centers that:

(1) focus on equitable referral distribution across many providers and

(2) have built strong relationships with the local VA.

For example, the PAServes-Greater Pittsburgh coordination center, based within the Pittsburgh Mercy Health System, has a longstanding relationship with the local VA. To date, the Pittsburgh network reports the highest share of providers making and receiving referrals, and of positive overall case outcomes in the first year of operation. Having witnessed the success in Pittsburgh, other networks are actively building and expanding their relationships with local VA offices, and we will be monitoring the resulting provider engagement and outcomes over the coming months.

Strong coordination centers with knowledgeable intake specialists are able to navigate the complex eligibility criteria and make appropriate client referrals. In other words, they generate “smart” referrals to providers, consisting of pre-screened clients who are eligible for the services those providers offer. Accurate referrals also eliminate wasted time and resources and, most importantly, the negative interactions that occur when providers are forced to turn away ineligible clients.


3. Impact persistence: How does a given client’s impact change over time?

While AmericaServes ultimately aims to demonstrate a positive long-term impact on the well-being of each community’s local military-connected population, it is, foremost, a care coordination intervention on a system of human service providers. The initiative’s immediate outcomes—adapted from health care—are centered on the activities and experiences of those coordinating and receiving coordinated services.

Forti and Siegal’s work revealed that clients that experience good outcomes tend to engage with the program more over time.

AmericaServes aims to ensure that clients who access coordinated services see similar benefits. If working as intended, long-term impact at the client level should loosely follow a needs hierarchy. That is, over time, clients should use the network less frequently as needs are met. Moreover, longer-tenured or repeat clients’ needs should resemble a pattern that transitions from basic physiological needs (food and water), to security (housing, employment, healthcare), social (education, relationships, love), and esteem (hobbies, volunteering) needs.

Early data suggests that a select number of program participants return to the network for additional services. While further analysis is underway, early thinking suggests three possible explanations:

(1) the initial provider’s service intervention failed to take root sufficiently, thus creating an opportunity to improve and reattempt to solve the individual’s problem;

(2) a tertiary need (a related aspect of co-occurrence) was discovered after the initial provider’s service intervention was introduced, creating a secondary network demand; or

(3) the client returned to the network for additional services to satisfy higher-order social or esteem needs, following successful resolution of prior basic physiological or security needs.

Regardless of the root cause, one constant is clear: clients are viewing the network as a resource to help address their needs. And as Forti and Siegal found, client impact may be measured and improved upon through a greater emphasis on client retention.


4. Impact externalities: What are the positive and negative effects on the people and communities not directly accessing the program?

While we aim to do so in time, we have yet to explore the unintended consequences—both positive and negative—for the communities and individuals not directly accessing AmericaServes. Consider, for example: does AmericaServes, by addressing the social determinants of health and well-being, generate positive returns for the VA health care system (e.g., improved health markers, reductions in hospitalizations and prescription drug use, cost avoidance)? This is a fantastic research question, notwithstanding that AmericaServes is barely two years old, operating in just a handful of communities, and still evolving.

Learning from what gets measured—“checked” in Army-speak—and from the actions taken in light of that learning may be, as Forti and Siegal concluded, the more important boost in social good needed to serve our veterans and military-connected members better today. Certainly, understanding these externalities is crucial to proving the efficacy of our approach in the long term, and we continue to explore opportunities for an AmericaServes randomized trial or quasi-experiment.

We will get there eventually. For now, however, we remain strongly focused on improving the AmericaServes model to create more social good in these communities today.


What do you think? How have you worked with public, philanthropic, and nonprofit stakeholders to reconcile the tensions and timing of both proving and improving system-level collective impact initiatives? How are you using insights today to drive greater understanding and dialogue around the impact drivers, distribution, persistence, and unintended benefits and consequences of your work?

Article

Culture Matters: Using A Culture of Adaptive Learning to Implement Collective Impact

This article explores how the First 2000 Days Network has used a culture of adaptive learning to implement our Collective Impact initiative. We share how this approach has impacted our structure, governance, leadership capacity, and shared measurement approach. Also included are some key considerations for Collective Impact implementers when supporting cultures that value continuous improvement and adaptive learning.

This article is a supplement to our case study "Establishing Pre-Conditions for Systems Change in Early Childhood Development", linked below.

Your comments and feedback welcome!

How Do Cross-Sector Collaborations for Education Present Data to the Public?

Posted Wednesday, May 18, 2016 at 5:59 pm

The collective impact model of cross-sector collaboration emphasizes the use of shared measurement systems for identifying problems and needs, tracking progress, and measuring results. But to what extent are cross-sector collaborations around the country promoting data as an integral part of their work? With support from The Wallace Foundation, our research team at Teachers College, Columbia University set out to understand the characteristics of a national array of cross-sector collaborations for education, taking an aerial view to analyze information presented on their public websites. What we have learned is that despite the emphasis on data, only 40% of the 182 initiatives identified by our nationwide scan devote a separate section of their websites to data, statistics, or outcomes.


What data are collaborations tracking?

The most common indicators on initiatives’ websites are student performance on standardized tests (43%) and high school graduation rates (35%). Many of the collaborations are “cradle to career” initiatives, designed to support students from pre-kindergarten through college and career entry, so it is not surprising to see that roughly one-quarter track indicators of early childhood care and learning. Post-secondary enrollment (20%) and completion rate (18%) data are also somewhat prevalent on public websites. When it comes to data about student experiences and well-being, far fewer initiatives track such measures. For example, only 5% of the initiatives report some kind of indicator for social and emotional development, which has been recognized as crucial for 21st-century learning and attainment. 

It may be the case that initiatives choose to use certain indicators because they are important markers for academic success and college attainment, but it is also likely that some data are presented because they are fairly easy to obtain from state and/or local data platforms. Common indicators like high school graduation rates can also be aggregated to a city or regional level where separate public, private, and charter school sectors are involved, making it easier to draw points of comparison. Less conventional indicators, such as social-emotional learning, might not be as common due to a lack of agreement on measurement. It seems plausible that convenience, rather than intentionality about program goals or community needs, marks the standard for choosing indicators. While a quarter of the collaborations show data patterns over time, only 17% provide indicators disaggregated by race/ethnicity or social class on their websites. Such disaggregation can help collaborations monitor how well they are ensuring equity in services and outcomes, and it will likely grow as initiatives mature and pay more systematic attention to equity concerns.


Which collaborations promote data the most?

The StriveTogether network, which inspired and continues to rely on the collective impact model of collaboration, places considerable emphasis on the use of data for agenda setting and continuous improvement. The average number of indicators tracked by initiatives in the StriveTogether network is 4.5, more than twice the average number tracked in non-Strive initiatives.

The 2011 article by Kania and Kramer in the Stanford Social Innovation Review introduced collective impact to a broad audience. In our nationwide scan, we found that collaborations established before that article tend to track slightly more indicators than the newer initiatives. This might suggest that the current emphasis on data is either not feasible or not a priority for many newer collaborations. On the other hand, it may be that it takes time to build trust among many partners to share potentially sensitive data, to agree on appropriate indicators, and to locate reliable sources of data for them.


What does this mean for cross-sector collaborations?

Despite the heavy emphasis on data in the collective impact literature and the potential availability of new kinds of data for incorporation into multi-indicator systems, it appears that the data indicators in use by cross-sector collaborations are fairly conventional and limited in scope. Measuring third-grade reading proficiency might not tell us everything we need to know about how children are progressing in their learning. Moreover, outcome measurements like third grade reading often cannot convey an elaborated theory of action for the process steps needed to produce particular outcomes. In addition, most data reports on websites do not illustrate how multiple organizations and agents work together to produce results, so there is often a lack of evidence about how the collaborations themselves are making a difference.

These patterns raise a number of questions that are worth thinking about. How were data indicators selected? Were indicators suggested by national network affiliations or were they decided locally? What are the theories of action by which cross-sector collaborations are expected to meet their goals, and can data be used to monitor interim steps? How do cross-sector collaborations address issues of causality in their data, so it’s clear how they influence and/or take credit for the outcomes that truly matter?

We will be exploring questions like these more deeply in our intensive case studies of three cross-sector collaborations across the country – Say Yes to Education in Buffalo, N.Y., Milwaukee Succeeds in Wisconsin, and All Hands Raised in Portland, Ore. We invite you to contact us with your ideas and perspectives. For those interested in accessing our report, Collective Impact and the New Generation of Cross-Sector Collaborations for Education, you can find it here.


Note: The ongoing study of cross-sector collaborations for education at Teachers College, Columbia University, was commissioned by The Wallace Foundation in 2014. The principal investigators are Jeffrey Henig, Professor of Political Science and Education, and Carolyn Riehl, Associate Professor of Sociology and Education Policy. Iris Daruwala is a graduate research assistant and doctoral candidate in the Sociology and Education Program. The research team also includes Professor Michael Rebell, Jessica Wolff, Melissa Arnold, Constance Clark, and David Houston.


What do you think? Share your comments and questions below.

How Well Does Grantmaking Practice Support Collective Impact?

Posted Tuesday, January 20, 2015 at 7:35 pm

Findings from a national study

The five conditions for collective impact offer guidance not only for collective impact initiatives but for other forms of collaboration as well. In an effort to see more effective collaborations happening among grantmakers and grantees and across communities, Grantmakers for Effective Organizations advocates for changes in grantmaking practice that better support collaboration. Our hope is that more grantmakers will adopt practices conducive to collaboration, and many of the practices GEO advocates are aligned with the five conditions of collective impact — common agenda, shared measurement, mutually reinforcing activities, continuous communication and strong backbone.

Every three years, GEO conducts a national survey of staffed foundations to track progress on practices that both grantmakers and nonprofits agree are critical to achieving better results. We have a keen interest in understanding the extent to which grantmakers embrace the attitudes and practices we know are essential for collaboration because without the right kind of support collaborations won’t have the resources they need to survive and thrive.

The data from GEO’s 2014 field study show more grantmakers are adopting practices that are aligned with some of the conditions for collective impact, but in a couple of areas the data suggest more progress is needed.
 

1. Common Agenda — coming together to collectively define the problem and shape the solution

Let’s face it, foundations don’t typically have a reputation for being open and collaborative when it comes to setting strategy. Adopting a common agenda in partnership with a cross-sector range of organizations would be a radical shift for many foundations. The good news is an increasing number of grantmakers recognize the need for grantee input to inform policies, practices, program areas and strategy. In fact, the majority of funders seek input and advice from grantees, and, as the table below shows, this number has grown significantly in the past three years. This movement is a step in the right direction toward more grantmakers being comfortable with co-creating a common agenda.

2. Shared Measurement — agreeing to track progress in the same way, which allows for continuous improvement

Grantmakers and nonprofits alike want to know if our work is making a difference and how we can improve our work over time. Three-quarters of grantmakers in our survey evaluate their work, an all-time high since our survey began 10 years ago. However, the data suggest that grantmakers for the most part are not using evaluation in a way that is conducive to the shared learning and continuous improvement that is critical for effective collaboration.

From these findings, it is clear that grantmakers are using data primarily for internal purposes, such as informing internal strategy and communicating with the board. Less than half of grantmakers are sharing what they’re learning with others, such as grantees, community members or policymakers. Not only does keeping this data for internal eyes only present a missed opportunity for learning and improvement outside the walls of the foundation, it also suggests the field still has significant work to do to get the majority of foundations to buy into shared measurement.


3. Mutually Reinforcing Activities — coordinating collective efforts to maximize the end result

Grantmakers recognize that far greater impact is achieved by working with others rather than alone. Eighty percent of survey respondents said it was important to coordinate resources with other grantmakers working on similar issues, a practice that can help align funding behind a collaborative’s common agenda. And better yet, most of these grantmakers are walking the talk. The majority (69 percent) developed a strategic relationship with other funders in the past two years, with the primary reason for doing so being to achieve greater impact (99 percent).


4. Continuous Communications — building trust and relationships among all participants

Recognizing that money = power, the onus is on grantmakers to work proactively to build trusting, open relationships with grantees and other stakeholders. Results from GEO’s survey show that funders are increasingly seeking feedback (anonymous or nonanonymous) from grantees — 53 percent report doing so in 2014, up from 44 percent in 2011. This, plus the growth in grantmakers seeking input mentioned above, suggests that more grantmakers are taking deliberate steps to build strong relationships with grantees.

However, data from GEO’s field survey and a Nonprofit Finance Fund study suggest a gap in perception between nonprofits and grantmakers about how open grantmakers really are. Nonprofit Finance Fund, in a recent survey, asked nonprofits if the majority of their funders were willing to engage in open dialogue on a range of key financial issues. In GEO’s survey, we asked grantmakers if they were open to discuss the same issues with their grantees. As the table below shows, we found a sizeable gap between nonprofits’ perception of openness and how open grantmakers say they are.

These findings raise the question: Are grantmakers overly confident about how well they build trust and relationships with grantees and other stakeholders? One way to test this is to solicit anonymous feedback from grantees; our survey found that only about one-third (34 percent) of grantmakers currently do so.


5. Strong Backbone — having a team dedicated to orchestrating the work of the group

Collective impact initiatives have a dedicated backbone, and all forms of collaboration require some level of infrastructure and coordination. These things cost money. A key way grantmakers can support collaboration among nonprofits is by supporting this backbone or infrastructure.

GEO’s survey findings present somewhat of a mixed bag when it comes to grantmaker support for a strong backbone. Unfortunately, the majority of grantmakers (53 percent) say they rarely or never support the costs of collaboration.

Grantmakers could benefit from more education and advocacy about the importance of supporting the costs of collaboration. On the brighter side, among those grantmakers that do support collaboration, 72 percent say they fund the infrastructure or operational costs of collaboration.

How Are We Doing?

So how well do grantmaking practices align with the conditions of collective impact? For the most part, GEO’s survey shows that grantmakers have made significant progress over the years in practices that are aligned with a common agenda, mutually reinforcing activities, and continuous communications. However, there is still room for improvement. The field needs more funding for the backbone and infrastructure required to keep collaborations running. Grantmakers conducting evaluations primarily for internal audiences are missing a great opportunity for field-building learning and improvement. And while grantmakers by and large are making efforts to build strong and trusting relationships with their grantees, grantmakers and nonprofits seem to have mixed perceptions about how well those efforts are working.

Grantmakers are often key catalysts of collective impact efforts, so it is important to see alignment between grantmaker practices and the conditions of success. While GEO’s study shows areas of progress worth celebrating, the data also highlight a need for further education and advocacy for ways grantmakers can both be more collaborative — such as in buying into a common agenda and shared measurement or building trusting relationships with grantees and stakeholders — and better support collaboration by funding the costs of a backbone or infrastructure. Our hope is that as organizations like GEO, Collective Impact Forum, and others continue to reinforce the importance of these practices, we will see further progress from grantmakers in our next field study in 2017.

Question for Forum members: How does your experience compare with our findings above? Please share with us your thoughts in the comments.

To read more about GEO’s 2014 study, Is Grantmaking Getting Smarter?, click here.

 

Want Greater Impact? Have a Conversation First

Posted Wednesday, April 23, 2014 at 1:31 pm

This essay was originally posted to Philanthropy Northwest on April 2, 2014.

Collective Impact, philanthropy’s flavor of the day, has entered its back-biting season — a positive sign, given that pushback is often a signal that a creative disruption is working. In this blog post, I’ll parse why the funder community has so enthusiastically embraced Collective Impact and how it has already produced learnings that we shouldn’t throw out when its season as the shiny new thing inevitably ends.

Above all, Collective Impact has helped us understand that deeply listening together into complex systems is the first step towards understanding our most intractable social problems. This directly challenges two of American philanthropy’s most persistent flaws: a preference for academic theory over front-line engagement and a preference for the straight arrow of “how-to” action planning over “what’s really going on here” iterative inquiry and dialogue.

Collective Impact’s second big contribution is elevating the role of evidence and data in designing strategy. Shared metrics and evidence-based practices have helped us realize that we can aim the buckshot of atomized funders, fragmented nonprofit providers and even personally impacted individuals towards a common solution. Acknowledging shared direction allows us to move to greater impact by creating a shared theory of change and benchmarks that transcend the personal and often-anecdotal frames that a marginalized and undercapitalized third sector has too often allowed itself to lapse into. When a community or a field defines “success” and endorses several promising pathways up the mountain, we establish a common road map that leads to greater alignment around shared direction. This increase in traffic and shared use in turn leads to more explicit, widely distributed rules of the road: evaluation and accountability measures.

This morning I read an article by Nicholas Kristof on the progress the domestic violence movement has made over the last two decades. It upended the opinions and working assumptions I gained while serving as an interim executive director of a large domestic violence service provider. Kristof writes, “Based on victimization surveys, it seems that violence by men against their intimate partners has fallen by two thirds since 1993. Attitudes have changed as well. In 1987, only half of Americans said that it was always wrong for a man to beat his wife with a belt or a stick; a decade later, 86 percent said that it was always wrong.”

Kristof goes on to recommend that “offenders should be required to attend mandatory training programs like the one run by Men Stopping Violence,” a position in direct opposition to the general disbelief in the possibility of offender rehabilitation I have seen reflected by local domestic violence leadership. And Kristof doesn’t even begin to address the recent sea change away from building confidential domestic violence shelters to the new paradigm of keeping families and victims at home and in their communities while removing the offender.

Sometimes the need for service providers to maintain a case for ongoing funding support can muzzle evidence and outcomes. Or perhaps our shared narrative is so entrenched that we write off evidence that it’s wrong as exceptional or non-reproducible. Collective Impact has highlighted the need for us all to ground our shared strategies in evidence and proven intervention. By seeing ourselves as a network and understanding our roles in a shared system, we are forced to evaluate our practices against shared benchmarks for effectiveness, adding the calculus of “social benefit” to the warm fuzzy of “charitable intention.” Alignment is the secret sauce, the compass star whose shared direction is so essential in a system where 80% of foundations are run by volunteers with no paid staff and a system of nonprofit service providers which continues to grow exponentially and fragment into more and more independent and niche players.

“Attention is the purest form of generosity,” wrote Simone Weil. Alignment represents the outcome of a collective attention that gives us shared tools of evidence-based facts, theory, and practice. If the essential argument behind Collective Impact is that problems are growing faster than community solutions can scale, then it is in fostering a stronger sense of shared purpose and emphasizing the value of explicit relationships that Collective Impact may provide its most lasting change. Practicing listening, and doing the hard work of creating common language and movement through dialogue with representatives from all aspects of a system, may create the boots-on-the-ground engagement needed to correct for philanthropy’s historical preference for white-paper theories of change that too often encourage “doing to” rather than “doing with.”

Video

Evaluating Collective Impact and Developing Shared Measurement Systems (Champions for Change 2014)

Fay Hanleybrown (FSG) and Jennifer Splansky Juster (Collective Impact Forum) present on "Evaluating Collective Impact and Developing Shared Measurement Systems" at the 2014 Champions for Change workshop in San Francisco in February 2014.

Article

Featured Story: The Road Map Project

This short story is about The Road Map Project's impact on closing the achievement gap in Seattle.

The numbers never lie – but sometimes they hide the truth. Consider: the rate of educational achievement in the Seattle metro region. In 2010, nearly half of all residents had earned at least a bachelor’s degree – a striking number, made all the more striking by the fact that nationally, only about 30% of Americans are college graduates. But dig a little deeper into the data, and you find that the region’s numbers are skewed by out-of-staters who move to the area. In fact, only about 25% of youth who came through the local public school system hold college degrees, and when we look solely at people of color, that number plummets to 10%. Stark statistics, stark truths – both of which are being confronted via collective impact.

The Road Map Project hopes to foster large-scale change by implementing a four-pronged approach: aligning cross-sector actors, engaging parents and community members in the development of solutions, building stronger and more seamless systems, and leveraging the power of data to fuel improvement. This last element has proven to be especially powerful to date. By harnessing the power of numbers, the Road Map Project has changed the conversation about education and catalyzed collective action. 

Stakeholders recognized early on in 2010 that focusing solely on Seattle and South King County’s high school students wouldn’t be enough to solve the underlying problem; instead, the Road Map Project adopts a “cradle to career” approach intended to double the number of students on track to graduate with a college-level credential by 2020 while simultaneously closing achievement gaps for low-income students and students of color. Having attracted high-profile local support for its mission (from, among others, the City of Seattle and the Bill & Melinda Gates Foundation), the Project’s next step was to create a system of shared measurement. The initiative selected several indicators that are linked to student educational success and whose progress can be tracked from year to year (or as often as possible).

With its indicators in place, the Road Map Project is able to leverage data in a number of ways. Most immediately, the data show whether students are meeting their achievement goals, and the strategies are continually reviewed and revised accordingly. The initiative goes further, however, in an effort to hold itself accountable to the Seattle and South King County community – it releases the indicators, and current progress toward those indicators, on the Road Map Project website and through an annual report. Publicizing the data has helped to spawn friendly competition between school districts: as one administrator has said, “we were seeing how other districts around us were doing…we don’t want to look worse than them.”

The numbers never lie – sometimes they highlight the truth. Even though the Road Map Project is early in its implementation, several gains have been made. Partners in the region collaborated to increase the number of students receiving the state’s College Bound Scholarship – giving students a free ride to college – raising the share of eligible low-income students enrolled in the program to 93% in 2013, up from 53% just three years earlier. In addition, in 2012, Road Map Project partners competed in the U.S. Department of Education’s Race to the Top competition. Only two groups were awarded the maximum grant possible. The Road Map Project partners were one of them. The $40 million they received infuses the initiative with significant new funding – and it provides evidence that the Road Map Project is on the right path.
