Collective Impact Accelerator: Using Data for Advancing Progress in Collective Impact (Apply by July 26)


You are invited to participate in a 12-month action learning cohort focused on using data for advancing progress in collective impact. The Collective Impact Forum, an initiative of FSG and the Aspen Institute Forum for Community Solutions, is developing a “Collective Impact Accelerator” to improve how collective impact funders, backbone teams, and other partners use data to learn and strengthen their work in collaboration with others, ultimately contributing to achieving greater impact in communities.

The goals of the Collective Impact Accelerator are to:

  • Build the capacity of backbone leaders, funders, and other partners to effectively use data as a key strategy in collective impact, contributing to improved results for communities.
  • Create a supportive peer learning community where backbone teams, funders, and/or partners can have candid conversations and learn with one another about using data in collective impact.
  • Identify promising practices that will be shared broadly with the field to support backbone leaders, funders, and other practitioners interested in using data in collective impact.

This Collective Impact Accelerator will be limited to participants from 10 separate collaboratives (including one funder and up to two backbone/data partners from each collaborative). Participants will meet for three in-person working sessions from November 2019 to November 2020, and will also join three peer learning calls during months when there is no working session. Participants will identify an area of their collaborative where they will focus on using data, and they are expected to commit time between Accelerator meetings and calls to make progress on their identified action learning project.

The participation fee is $10,000 for each collaborative (for two representatives) or $12,500 for each collaborative (for three representatives). This fee covers the meeting costs and staff time to plan for and facilitate all calls and meetings. Participants will cover their own travel and accommodation costs.

Mark your calendar for these key dates:

  • Applications opened on May 29 and will close on July 26, 2019.
  • Join an informational call from 4-5pm ET on June 18 or 4-5pm ET on June 26 to learn more before applying. Call-in details are included at the end of the frequently asked questions section below.
  • We will select 10 participating collaboratives (including one funder and up to two backbone/data partners per collaborative) by August 16, 2019.
  • The first full-day in-person meeting will take place on Tuesday, Nov. 12, 2019 in Chicago, IL (with reception and dinner the night before).
  • The second in-person meeting will take place on Tuesday, May 5, 2020, in Minneapolis, MN (with reception and dinner the night before).
  • The third in-person meeting will take place in October/November 2020 in Washington, D.C. (date will be confirmed by late 2019 based on selected accelerator participants’ availability).

See the attached "frequently asked questions" document to learn more about the Collective Impact Accelerator.

Webinar

Using Data and Shared Measurement in Collective Impact

Data gathering and shared measurement systems are key elements that help collective impact initiatives understand and assess their work, but they can also be very challenging to start and sustain. What can we learn from other initiatives about their practices for gathering and sharing data, and what impact have those practices had on their outcomes?

In this virtual coffee, we're talking about gathering and sharing data with Emily Bradley and Michael Nailat, program officers at Home for Good, an initiative that works collaboratively on systems and solutions to end homelessness.

This virtual coffee was held on August 14, 2018 from 3pm – 4pm ET.

Note: For the first 2-3 minutes of the session, the audio goes in and out a bit. After this short period, it evens out and is audible for the rest of the 60-minute session.


Virtual Coffee Resources:

Presentation: Download a copy of the presentation used for this virtual coffee at the link on the right of this page. (Logging in to your Collective Impact Forum account will be necessary to download materials.)

Home for Good was one of 25 sites that participated in the research study When Collective Impact has an Impact. This new study, more than a year in the making, looks at the question of “To what extent and under what conditions does the collective impact approach contribute to systems and population changes?”



Webinar

Using Data for a Collective Impact Refresh

In Project U-Turn’s 10th year of collective impact work, data has played a crucial part in the continuous improvement process. Both quantitative student data and qualitative stakeholder data helped refine the goals, structure, and collaborative processes aimed at stronger, equitable outcomes for a new three-year action plan.

In this online training, Project U-Turn presenters will highlight each point in the renewal process and have attendees reflect on how they would approach stages of change given their current community conditions and desired outcomes. Participants will leave with a series of concrete tools to guide similar processes for their collective impact work.

Training Materials: Download the training presentation, worksheet, and referenced resources at the links on the left of this page. (Logging into your CIF member account will be needed to download resources.)

How to watch: To participate in this training, please register ahead of the training time of 2pm ET on December 6, 2017. A recording will be shared with registrants 24-48 hours following the event. Register now.

For those unable to register ahead, the video of this training will be made broadly available in early 2018 on the Collective Impact Forum.


TRAINING LEADS

  • Roxolana Barnebey, senior associate of External Relations, Philadelphia Youth Network (PYN)
  • Meg Long, president, Equal Measure
  • Bilal Taylor, senior consultant, Equal Measure


ABOUT THE SPEAKERS

Roxolana Barnebey

Roxolana is senior associate of External Relations at Philadelphia Youth Network (PYN).

Roxolana manages Project U-Turn for PYN, which aims to engage and re-engage students who are at risk of disconnecting, or are already disconnected, from high school to support their secondary and post-secondary success. In this role, Roxolana ensures that the collective efforts across systems and organizations all drive toward the overarching goal of increasing Philadelphia's high school graduation rate.

As the backbone staff for Project U-Turn, she works to secure resources for Project U-Turn-related efforts and has launched the first Project U-Turn Fellowships, as well as a PYN Stoneleigh Fellowship, whose fellow will work to expand post-secondary access more broadly across Philadelphia.

Prior to her time at PYN, Roxolana worked at Public Citizens for Children and Youth (PCCY), southeastern Pennsylvania's child advocacy organization. In her nearly nine years there, she led strategy, external relations, and mobilization as part of the statewide Campaign for Fair Education Funding, which achieved its goal of gaining Pennsylvania legislature approval for a fair funding formula for the state's public schools. She also drew attention to the need for improved access to children's behavioral health services and managed PCCY's day of free children's dental care, growing the program from about 10 dental practices serving a few hundred children in Philadelphia to nearly 30 dentists throughout southeastern Pennsylvania serving nearly 1,000 children.

Roxolana received her Master of Science in Social Policy from the University of Pennsylvania (August 2012) and her Bachelor of Arts from the University of Miami (May 2006).


Meg Long

Meg is president of Equal Measure.

Meg has nearly 20 years of evaluation, philanthropic strategy, program management, organizational development, and leadership experience. Over the course of her career, Meg has worked on a wide range of domestic and international issues, including righting educational disparities, building individuals’ economic security, and improving the communities in which they live. Meg leads Equal Measure’s postsecondary success and asset building portfolio, bringing her extensive experience in cradle-to-career and place-based evaluation to initiatives such as the Lumina Foundation’s Community Partnership for Attainment, the Irvine Foundation’s Linked Learning investment, the Aspen Institute’s Opportunity Youth Incentive Fund, and the Kellogg Foundation’s Family Economic Security portfolio. Meg also provides strategic and evaluation support to the Goddard Riverside Community Center’s Options-NYCDOE training program, the Stoneleigh Foundation, the Philadelphia Youth Network, and the Helmsley Charitable Trust to help increase the impact of their programs and investments.

In each engagement, Meg has helped her clients translate ambitious, complex change strategies into successful interventions. She plays numerous key roles, such as designing strategy, managing the relationships of multiple partners, facilitating the inclusion of all stakeholder voices, and leading the communication of evaluation findings to clients and their grantees.

Before joining Equal Measure, Meg was the coordinator for Volunteer Recruitment, Training and Marketing for Experience Corps Philadelphia, a national intergenerational tutoring program. In that role, she worked with 22 inner-city elementary schools and 2 after-school programs to address the literacy needs of children reading below grade level. She also worked with community members and stakeholders to improve educational services to children and their families in Philadelphia. Her experience at the United Nations, the World Bank Institute, and the International Longevity Center included conducting analyses of poverty alleviation policies in Kenya and assessing socioeconomic indicators of older New Yorkers to improve service delivery in intergenerational programs.


Bilal Taylor

Bilal is a senior consultant at Equal Measure.

Bilal has more than 15 years of experience planning, implementing, and evaluating high-quality youth development programs, particularly in schools and community settings serving older youth during out-of-school time. He also has extensive experience in developing theories of change and in qualitative evaluation methodologies, including focus group facilitation.

At Equal Measure, Bilal works on a diverse set of national and local evaluation projects, including evaluations of the W.K. Kellogg Foundation’s Family Economic Security Workforce Development Pilots, the Helmsley Charitable Trust’s investment to enhance teaching practices and establish institutional incentives to increase the number of STEM college graduates, the Irvine Foundation’s Linked Learning Regional Hubs, Living Cities’ City Accelerator initiative, and the Stoneleigh Foundation’s Fellowship programs.

Prior to joining Equal Measure, Bilal was a program officer at the American Friends Service Committee, an international Quaker peace-building organization headquartered in Philadelphia. In this role, Bilal oversaw a portfolio of 50 civic engagement and youth organizing programs aimed at helping youth find their power to challenge systems of oppression in their cities and nations. His direct service experience coordinating grant-funded out-of-school time programs and serving as a dean of students at a high-performing charter school in Philadelphia leaves him uniquely positioned to understand the challenges of funders, grantees, and public sector leaders searching for innovative ways to help youth transition successfully to adulthood.

Webinar

Using Data in Collective Impact

In this virtual coffee chat, JaNay Queen Nazaire and Jeff Raderstrong from Living Cities will share what they have learned about how to use data within collective impact efforts to help change behavior and achieve better outcomes.

Part of this chat also goes over Living Cities' Data and Collective Impact resource series.

This virtual coffee chat was held on June 6, 2017.

Resources referenced in this episode:

Data and Collective Impact

About the Collective Impact Virtual Coffee Chats: The CI Virtual Coffee chats are free online webinars where we talk with collective impact practitioners from around the field to hear about their work and what they're learning. Each Virtual Coffee chat includes Q&A time where attendees can ask questions and get answers.


Presentation

Equity and Collective Impact: Lessons from Disaggregating Data (COP Learning Call - May 2016)

This Community of Practice Learning Group Call was held on May 11, 2016.

Access the presentation and audio from the call with the download links on the left of this page.

Learning Group Call Agenda

1. Introductions and Overview (10 minutes)

2. Equity and Collective Impact: Lessons from Disaggregating Data (75 minutes)

  • Presentation and discussion with Kantahyanee Murray, Annie E. Casey Foundation
  • Presentation and discussion with Junious Williams, Urban Strategies Council
  • Presentation and discussion with Nicole Jolly, EMPLOY, Cowen Institute at Tulane University

3. Next Steps (5 minutes)

To Validate or Elevate? Measuring Community Impact in an Actionable Way

Posted Wednesday, March 29, 2017 at 11:49 pm

Last November, Matt Forti and Kim Siegal penned an article titled Actionable Measurement: Getting from “Prove” to “Improve” in the Stanford Social Innovation Review. The article calls upon the social sector to unite around “common questions” that “nonprofits ought to answer about their impact so that they can maximize learning and action around their program models.”

Forti and Siegal depart from ongoing debates in the social sector’s measurement community over the appropriateness of experimental evaluations (i.e., randomized trials)—the industry’s gold standard—to prove a program’s impact. Such large-scale evaluations may be suitable in some instances, but Forti and Siegal thoughtfully argue, instead, that most practitioners would be better served through a more immediate focus on improvement.

We agree. Experimental evaluations are valuable tools to test whether a program works—when programs are applied consistently across similar settings.

But community-level interventions pose significant limitations for experimental evaluation. Ethics aside, providers are quick to point out their community's uniqueness from all others, confounding an apples-to-apples comparison across sites. Moreover, an average study timeline of three to five years, coupled with a price tag in the hundreds of thousands of dollars or more, poses serious hurdles for those who must not only maximize the value to their clients and funders, but also demonstrate that value in short order.

Instead, Forti and Siegal pose a guiding question that closely mirrors our Institute’s approach to community-level evaluation: “what common insights would allow nonprofit leaders to make decisions that generate more social good for clients they serve?”

There is an old Army saying that goes, "what gets checked gets done." In the same spirit, Forti and Siegal's idea of actionable measurement is to use insights now—in the midst of doing the work itself—to learn, adapt, improve program service delivery, increase social good, and maximize impact over time.

Actionable measurement, or “shared measurement” in collective impact parlance, is a major driver within our AmericaServes initiative, an effort to build local coordinated networks of service organizations that improve how our nation’s military-connected members and their families access a wide range of services and resources in their communities.

Put simply, AmericaServes helps communities create networks of service providers and improve how they operate as a system. Analogous to health care coordination models (e.g., accountable care organizations, patient-centered medical homes), AmericaServes strengthens local nonprofit coordination by providing initial funding for a backbone coordination center and the technology to manage—and measure—a referral-based system of care. Accordingly, for both health care and human service delivery, system-level measurement focused on continuous quality improvement is critical to test and implement changes that address the complex or changing needs of the client.

Standard system outcome and satisfaction measures allow AmericaServes communities to monitor and improve their performance. These insights provide the basis for community planning sessions, on-the-ground relationship building, and quarterly in-progress reviews.

As new insights continually emerge, communicating our advances (and setbacks) takes on increasing importance. Additionally, there are new aspects of our work—some we believe followers may have missed—that we want to expand upon to promote a greater awareness and understanding of IVMF’s community-based efforts.

Forti and Siegal, following a comprehensive review of a decade’s worth of their organization’s field studies and research, established “four categories of questions that drove the greatest learning, action, and impact improvement.” We apply the Forti and Siegal framework to the AmericaServes initiative and find that it provides a helpful basis upon which to consider our current outcomes and future actions in the coming years.


1. Impact Drivers: Are there particular conditions or program components that disproportionately drive results?

While there are multiple performance indicators, two stand out above all others: case referral timeliness and appropriateness. As a coordinated network, AmericaServes' theory of change is centered on connecting clients to the right point of service, resource, or care in the shortest time possible. This is consistent with what the health care field defines as quality of care.

Often, those seeking services present multiple, co-occurring (i.e., comorbid) needs. Consequently, service providers within AmericaServes communities—operating as a comprehensive support network rather than a fragmented collection of services—are best incentivized to address the specific need(s) presented to their organization. Here, their limited resources are put to their first and best use—a hallmark of superior performance and sustainability.
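To make these two indicators concrete, the sketch below (Python/pandas) computes referral timeliness and a simple appropriateness proxy from an invented referral log. The column names, outcome labels, and the eligibility-based proxy are our assumptions for illustration only, not AmericaServes' actual measurement system.

```python
# Illustrative sketch, not AmericaServes' actual system: computing the two
# indicators described above from a hypothetical table of case referrals.
import pandas as pd

# Hypothetical referral log; column names and outcome labels are invented.
referrals = pd.DataFrame({
    "case_id": [1, 2, 3, 4],
    "opened": pd.to_datetime(["2017-01-02", "2017-01-03", "2017-01-05", "2017-01-09"]),
    "resolved": pd.to_datetime(["2017-01-06", "2017-01-04", "2017-01-20", "2017-01-11"]),
    "outcome": ["resolved", "resolved", "rejected_ineligible", "resolved"],
})

# Timeliness: how quickly cases move from intake to resolution.
referrals["days_to_resolution"] = (referrals["resolved"] - referrals["opened"]).dt.days
median_days = referrals["days_to_resolution"].median()

# Appropriateness (proxy): share of referrals that reached an eligible,
# right-fit provider rather than being turned away.
appropriate_rate = (referrals["outcome"] != "rejected_ineligible").mean()

print(f"Median days to resolution: {median_days:.1f}")
print(f"Share of appropriate referrals: {appropriate_rate:.0%}")
```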

As human service providers, we all know the disproportionate amount of time and energy spent on attempts to address needs beyond our organization's boundaries. More often than not, these efforts to connect people and their needs beyond our capacity or expertise result not only in organizational failure, but also in extreme client frustration and unmet expectations. Getting the right client to the right point of service in a timely fashion—streamlined access—while critical, is at times a herculean feat.

It is often said that communities are not capacity-poor, but rather fragmentation-rich. Additionally, the veteran-serving nonprofit sector is rife with patchy eligibility criteria (each uniquely exclusive or inclusive in approach) layered on top of membership rules, which together underpin the very programs put in place to help. To combat these factors, AmericaServes communities work carefully to digitally connect their clients to the most appropriate provider in a timely fashion, mitigating the deep fragmentation across the social sector. Done well, this can open the all-too-often locked doors of any community's capacity to serve human needs and drive greater innovation within human services overall.


2. Impact distribution: Does the program generate better results for a particular sub-group?

Apparently so. The greatest early gains appear to be in networks with strong, active coordination centers—the backbone organizations that manage and monitor case referrals between network providers.

We see a pattern emerging in our AmericaServes networks. Those that report the greatest share of positive case outcomes (e.g., client received housing services) and the highest levels of provider engagement (i.e., making and receiving case referrals) also tend to have coordination centers that (1) focus on equitable referral distribution across many providers and (2) have built strong relationships with the local VA.

For example, the PAServes-Greater Pittsburgh coordination center, based within the Pittsburgh Mercy Health System, has a longstanding relationship with the local VA. To date, the Pittsburgh network reports the highest share of providers making and receiving referrals, and of positive overall case outcomes in the first year of operation. Having witnessed the success in Pittsburgh, other networks are actively building and expanding their relationships with local VA offices, and we will be monitoring the resulting provider engagement and outcomes over the coming months.

Strong coordination centers with knowledgeable intake specialists are able to navigate the complex eligibility criteria and make appropriate client referrals. In other words, they generate "smart" referrals for providers, consisting of pre-screened clients who are eligible for the services those providers offer. Accurate referrals eliminate wasted time and resources and, most importantly, the negative interactions that occur when providers are forced to turn away ineligible clients.


3. Impact persistence: How does a given client’s impact change over time?

While AmericaServes ultimately aims to demonstrate a positive long-term impact on the well-being of each community’s local military-connected population, it is, foremost, a care coordination intervention on a system of human service providers. The initiative’s immediate outcomes—adapted from health care—are centered on the activities and experiences of those coordinating and receiving coordinated services.

Forti and Siegal's work revealed that clients who experience good outcomes tend to engage with the program more over time.

AmericaServes aims to ensure that clients who access coordinated services see similar benefits. If working as intended, long-term impact at the client level should loosely follow a needs hierarchy. That is, over time, clients should use the network less frequently as needs are met. Moreover, longer-tenured or repeat clients’ needs should resemble a pattern that transitions from basic physiological needs (food and water), to security (housing, employment, healthcare), social (education, relationships, love), and esteem (hobbies, volunteering) needs.
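As a rough sketch of how that hypothesized pattern could be tested, the example below (Python/pandas) maps service categories to need tiers and checks whether each repeat client's requests climb the hierarchy over time. The tier assignments, column names, and data are invented for illustration.

```python
# Illustrative sketch only: one way to test whether repeat clients' requests
# climb a needs hierarchy over time. Tier assignments and data are assumptions.
import pandas as pd

NEED_TIER = {  # lower number = more basic need
    "food": 1, "water": 1,
    "housing": 2, "employment": 2, "healthcare": 2,
    "education": 3, "relationships": 3,
    "hobbies": 4, "volunteering": 4,
}

requests = pd.DataFrame({
    "client_id": [101, 101, 101, 102, 102],
    "date": pd.to_datetime(["2017-01-05", "2017-03-10", "2017-08-01",
                            "2017-02-01", "2017-02-20"]),
    "service": ["food", "housing", "volunteering", "housing", "food"],
})
requests["tier"] = requests["service"].map(NEED_TIER)

# For each client, is the sequence of need tiers non-decreasing over time?
def climbs_hierarchy(group: pd.DataFrame) -> bool:
    tiers = group.sort_values("date")["tier"]
    return tiers.is_monotonic_increasing

pattern = requests.groupby("client_id").apply(climbs_hierarchy)
print(pattern)  # client 101: True (food -> housing -> volunteering); 102: False
```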

Early data suggest that a subset of program participants return to the network for additional services. While further analysis is underway, early thinking suggests three possible explanations:

(1) the initial provider’s service intervention failed to take root sufficiently, thus creating an opportunity to improve and reattempt to solve the individual’s problem;

(2) a tertiary need (a related aspect of co-occurrence) was discovered after the initial provider’s service intervention was introduced, creating a secondary network demand; or

(3) the client returned to the network for additional services to satisfy higher-order social or esteem needs, following successful resolution of prior basic physiological or security needs.

Regardless of the root cause, one constant is clear: clients are viewing the network as a resource to help address their needs. And as Forti and Siegal found, client impact may be measured and improved upon through a greater emphasis on client retention.


4. Impact externalities: What are the positive and negative effects on the people and communities not directly accessing the program?

We have yet to explore the unintended consequences—both positive and negative—for the communities and individuals not directly accessing AmericaServes, though we aim to in time. Consider, for example: does AmericaServes, by addressing the social determinants of health and well-being, generate positive returns for the VA health care system (e.g., improved health markers, reductions in hospitalizations and prescription drug use, cost avoidance)? This is a fantastic research question, notwithstanding that AmericaServes is barely two years old, operating in just a handful of communities, and still evolving.

Learning from what gets measured—"checked" in Army-speak—and acting on that learning may be, as Forti and Siegal concluded, the more important boost in social good needed to serve our veterans and military better today. Certainly, understanding these externalities is crucial to proving the efficacy of our approach over the long term, and we continue to explore opportunities for an AmericaServes randomized trial or quasi-experiment.

We will get there eventually. For now, however, we remain strongly focused on improving the AmericaServes model to create more social good in these communities today.


What do you think? How have you worked with public, philanthropic, and nonprofit stakeholders to reconcile the tensions and timing of both proving and improving system-level collective impact initiatives? How are you using insights today to drive greater understanding and dialogue around the impact drivers, distribution, persistence, and unintended benefits and consequences of your work?

Recommended Platforms to Share Data with Partners?


What platform/software do you like to use to share data/benchmarks with your partners?

Some CI efforts have contacted us looking for recommendations, and we wanted to see what resources you have found most useful. What works for you?

Lessons on Using Data for Collective Impact

Posted Friday, June 24, 2016 at 9:07 pm

Using shared measures to track progress toward goals—and to understand where partnerships are advancing and where improvement is needed—is increasingly emphasized as essential for addressing complex issues and improving our communities. However, our understanding of exactly how to harness the power of data is limited. One thing, though, is abundantly clear: Making good on the commitment to use data is hard—and the challenges aren't just about technology.

There's much that collective-impact efforts can learn about using data from The Wallace Foundation's Next Generation Afterschool System Building Initiative. This multi-year initiative supports nine cities where city agencies, schools, and nonprofit organizations are working to better coordinate access to high-quality afterschool opportunities for children, along with independent research to learn from their experiences. These place-based collaborative efforts in Baltimore, Denver, Fort Worth, Grand Rapids, Jacksonville, Louisville, Nashville, Philadelphia, and Saint Paul share features with collective impact initiatives: they work across sectoral silos to leverage resources that affect children's lives, use data to diagnose needs, build and sustain partner engagement, and improve the quality of services.

A new research report on their efforts, the first of two planned volumes from Chapin Hall at the University of Chicago commissioned by Wallace, offers insights into what it takes to put data systems in place and use them to reach common goals. Key conclusion: While we typically focus on the technology needs of data use, two other components—people and processes—are just as crucial. These findings echo and expand upon research conducted by the RAND Corporation and published in Hours of Opportunity Volume II, which found, based on a study of eight afterschool systems, that using management information system (MIS) data can help improve access and services, but that it requires careful planning.

The new Chapin Hall study finds that as these collaborative efforts begin, the focus is often on technology, and leaders are caught short by the challenges posed by the people and the processes required to transform data into useful information. But as the report's people-processes-technology triangle suggests, the components are interrelated and each is equally necessary.

As the researchers write: "…as important as technology is, most of the factors that appear to facilitate or inhibit data use in city afterschool systems—norms and routines, partner relationships, leadership and coordination, and technical knowledge—have to do with the people and process aspects of a data system."

Not surprisingly, that’s also one of the findings from researchers at Teachers College, Columbia University, who are studying collective impact communities participating in the Ford Foundation’s Corridors to College Success initiative. In this recent brief, the researchers described “myriad challenges associated with their organizations’ capacity for data collection, data-sharing agreements, third-party data warehousing or merging, data privacy and storage, and staff capacity for meeting technical data management and analytic needs.”

The Three Legs of the Triangle

The nine cities in Wallace's effort are finding paths that deepen their capacity for data use in their afterschool systems. We believe their experiences, as captured by Chapin Hall, can help people involved in other collective-impact and system-building efforts.

  • Start small to learn what works. A number of cities intentionally started with a limited set of measures for data collection and use, and/or a limited set of providers piloting a new data system, with plans to scale up gradually. For a collective impact effort, that might mean starting the data work with one working group, learning how to make the people, processes, and technology work together successfully before scaling across all strategic areas of focus.
     
  • Leverage existing data expertise. Expertise came from within as well as outside the organization coordinating the initiative. Some cities are working with a research partner who participates in all phases of the development of their data systems, providing ongoing support. Others leveraged the relationship primarily for access to, analysis of, and reporting on data collected by providers. Still others did not engage an external research partner, but identified internal staff who are capable analysts and can provide these supports to the system. Many collective-impact efforts might similarly tap research partners; most communities have research institutions committed to developing knowledge that positively affects lives.
     
  • Provide ongoing training. System stakeholders learned that they needed to provide ongoing introductory training in using both the management information systems and the data itself.

Taken together, these lessons suggest that cities should acknowledge upfront, and plan for, the challenges of data use, and that there are steps they can take toward success—namely, preparing for and providing ongoing support for the multifaceted interplay of people, processes, and technology.

What do you think? Share your comments and questions below:

How Do Cross-Sector Collaborations for Education Present Data to the Public?

Posted Wednesday, May 18, 2016 at 5:59 pm

The collective impact model of cross-sector collaboration emphasizes the use of shared measurement systems for identifying problems and needs, tracking progress, and measuring results. But to what extent are cross-sector collaborations around the country promoting data as an integral part of their work? With support from The Wallace Foundation, our research team at Teachers College, Columbia University set out to understand the characteristics of a national array of cross-sector collaborations for education, taking an aerial view to analyze information presented on their public websites. What we have learned is that despite the emphasis on data, only 40% of the 182 initiatives identified by our nationwide scan devote a separate section of their websites to data, statistics, or outcomes.


What data are collaborations tracking?

The most common indicators on initiatives’ websites are student performance on standardized tests (43%) and high school graduation rates (35%). Many of the collaborations are “cradle to career” initiatives, designed to support students from pre-kindergarten through college and career entry, so it is not surprising to see that roughly one-quarter track indicators of early childhood care and learning. Post-secondary enrollment (20%) and completion rate (18%) data are also somewhat prevalent on public websites. When it comes to data about student experiences and well-being, far fewer initiatives track such measures. For example, only 5% of the initiatives report some kind of indicator for social and emotional development, which has been recognized as crucial for 21st-century learning and attainment. 
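For readers curious about the mechanics, a tabulation like the one behind these percentages can be sketched in a few lines of Python (pandas); the dataset and indicator columns below are invented for illustration, not the study's actual data.

```python
# Hypothetical sketch of the kind of tabulation behind the percentages above:
# given one row per initiative and one boolean column per indicator found on
# its website, compute how prevalent each indicator is. Data are invented.
import pandas as pd

scan = pd.DataFrame({
    "initiative":         ["A", "B", "C", "D", "E"],
    "test_scores":        [True, True, False, False, True],
    "hs_graduation":      [True, False, True, False, False],
    "postsec_enrollment": [False, True, False, False, True],
    "social_emotional":   [False, False, False, True, False],
})

# Share of initiatives reporting each indicator, most common first.
prevalence = scan.drop(columns="initiative").mean().sort_values(ascending=False)
print((prevalence * 100).round(0).astype(int).astype(str) + "%")
```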

It may be the case that initiatives choose certain indicators because they are important markers for academic success and college attainment, but it is also likely that some data are presented because they are fairly easy to obtain from state and/or local data platforms. Common indicators like high school graduation rates can also be aggregated to a city or regional level where separate public, private, and charter school sectors are involved, making it easier to draw points of comparison. Less conventional indicators, such as social-emotional learning, might not be as common due to a lack of agreement on measurement. It seems plausible that convenience, rather than intentionality about program goals or community needs, drives the choice of indicators. While a quarter of the collaborations show data patterns over time, only 17% provide indicators disaggregated by race/ethnicity or social class on their websites, even though such disaggregation can help collaborations monitor how well they are ensuring equity in services and outcomes. The disaggregation of data by racial/ethnic group and/or social class will likely grow as initiatives mature and attend more systematically to equity concerns.


Which collaborations promote data the most?

The StriveTogether network, which inspired and continues to rely on the collective impact model of collaboration, places considerable emphasis on the use of data for agenda setting and continuous improvement. The average number of indicators tracked by initiatives in the StriveTogether network is 4.5, more than twice the average number tracked in non-Strive initiatives.

The 2011 article by Kania and Kramer in the Stanford Social Innovation Review introduced collective impact to a broad audience. In our nationwide scan, we found that collaborations established before that article tend to track slightly more indicators than the newer initiatives. This might suggest that the current emphasis on data is not yet possible, or not a priority, for many newer collaborations. On the other hand, it may be that it takes time to build trust among many partners to share potentially sensitive data, to agree on appropriate indicators, and to locate reliable sources of data for them.


What does this mean for cross-sector collaborations?

Despite the heavy emphasis on data in the collective impact literature and the potential availability of new kinds of data for incorporation into multi-indicator systems, it appears that the data indicators in use by cross-sector collaborations are fairly conventional and limited in scope. Measuring third-grade reading proficiency might not tell us everything we need to know about how children are progressing in their learning. Moreover, outcome measurements like third grade reading often cannot convey an elaborated theory of action for the process steps needed to produce particular outcomes. In addition, most data reports on websites do not illustrate how multiple organizations and agents work together to produce results, so there is often a lack of evidence about how the collaborations themselves are making a difference.

These patterns raise a number of questions that are worth thinking about. How were data indicators selected? Were indicators suggested by national network affiliations or were they decided locally? What are the theories of action by which cross-sector collaborations are expected to meet their goals, and can data be used to monitor interim steps? How do cross-sector collaborations address issues of causality in their data, so it’s clear how they influence and/or take credit for the outcomes that truly matter?

We will be exploring questions like these more deeply in our intensive case studies of three cross-sector collaborations across the country – Say Yes to Education in Buffalo, N.Y., Milwaukee Succeeds in Wisconsin, and All Hands Raised in Portland, Ore. We invite you to contact us with your ideas and perspectives. For those interested in accessing our report, Collective Impact and the New Generation of Cross-Sector Collaborations for Education, you can find it here.


Note: The ongoing study of cross-sector collaborations for education at Teachers College, Columbia University, was commissioned by The Wallace Foundation in 2014. The principal investigators are Jeffrey Henig, Professor of Political Science and Education, and Carolyn Riehl, Associate Professor of Sociology and Education Policy. Iris Daruwala is a graduate research assistant and doctoral candidate in the Sociology and Education Program. The research team also includes Professor Michael Rebell, Jessica Wolff, Melissa Arnold, Constance Clark, and David Houston.


What do you think? Share your comments and questions below.

Collective Impact Principles of Practice: Putting Collective Impact into Action

Posted Sunday, April 17, 2016 at 5:09 pm

We have been inspired watching the field of collective impact progress over the past five years, as thousands of practitioners, funders, and policymakers around the world employ the approach to help solve complex social problems at a large scale. The field’s understanding of what it takes to put the collective impact approach into practice continues to evolve through the contributions of many who are undertaking the deep work of collaborative social change, and their successes build on decades of work around effective cross-sector collaboration. Accomplished practitioners of collective impact continue to affirm the critical importance of achieving population-level change in the five conditions of collective impact that John Kania and Mark Kramer originally identified in the Stanford Social Innovation Review in winter 2011. (For an explanation of the conditions, see the end of this post.) Many practitioners tell us that the framework developed in the original article has helped to provide the field with a shared definition and useful language to describe core elements of a rigorous and disciplined, yet flexible and organic, approach to addressing complex problems at scale.

Successful collective impact practitioners also observe, however, that while the five conditions Kania and Kramer initially identified are necessary, they are not sufficient to achieve impact at the population level. Informed by lessons shared among those who are implementing the approach in the field, this post outlines additional principles of practice that we believe can guide practitioners about how to successfully put collective impact into action. While many of these principles are not unique to collective impact, we have seen that the combination of the five conditions and these practices contributes to meaningful population-level change. We hope that these principles help funders, practitioners, and policymakers consider what it takes to apply the collective impact approach, and that they will bolster existing efforts to overcome challenges and roadblocks in their work. We also hope these principles can help guide those who aspire toward collective impact, but may not yet be implementing the approach fully, to identify possible changes that might increase their odds of success. As we continue to apply the conditions and principles of collective impact, we fully expect that, over time, our shared understanding of what constitutes good practice will evolve further.


1. Design and implement the initiative with a priority placed on equity. For collective impact initiatives to achieve sustainable improvements in communities, it is critical that these initiatives address the systemic structures and practices that create barriers to equitable outcomes for all populations, particularly along the lines of race and class. To that end, collective impact initiatives must be intentional in their design from the very outset to ensure that an equity lens is prominent throughout their governance, planning, implementation, and evaluation. In designing and implementing collective impact with a focus on equity, practitioners must disaggregate data and develop strategies that focus on improving outcomes for affected populations.
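As a concrete illustration of the disaggregation step this principle calls for, here is a minimal sketch in Python (pandas) using invented data and column names; it shows how a healthy aggregate outcome can mask gaps that surface only when the same indicator is broken out by population group.

```python
# A minimal sketch of the disaggregation step described above; the dataset
# and column names are hypothetical, for illustration only.
import pandas as pd

students = pd.DataFrame({
    "group":     ["A", "A", "B", "B", "B", "C", "C"],
    "graduated": [1,   0,   1,   1,   0,   0,   1],
})

# The aggregate rate alone can hide large between-group gaps.
overall = students["graduated"].mean()
by_group = students.groupby("group")["graduated"].mean()

print(f"Overall graduation rate: {overall:.0%}")
print(by_group)
```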


2. Include community members in the collaborative. Members of the community—those whose lives are most directly and deeply affected by the problem addressed by the initiative—must be meaningfully engaged in the initiative's governance, planning, implementation, and evaluation. Community members can bring crucial (and sometimes overlooked) perspectives to governance bodies and decision-making tables, can contribute to refining the collective impact initiative's evolving goals, strategies, and indicators, can help co-create and implement solutions that are rooted in lived experience and have the potential for significant uptake, can participate in building communities' capacity to lead and sustain change, and can participate in data interpretation and continuous learning processes. Sometimes, decision-makers or other stakeholders may inadvertently create power dynamics or other structural barriers that hinder particular partners from participating candidly and fully; true inclusion requires intentional examination of group needs and processes to ensure that all stakeholders have full opportunity to contribute to the process. Engaging community in these ways helps collective impact efforts address the issues most important to those most directly affected, builds capacity and enables community participation in and ownership of solutions, and helps embed the work in the community so that it will be more effective and sustainable.


3. Recruit and co-create with cross-sector partners. Collective impact collaboratives are created by and composed of actors from across sectors and parts of the community, including nonprofits, government, private sector, philanthropy, and residents. While not all initiatives will engage all sectors actively at the same time, collaboratives made up of only one or two types of actors (e.g., all nonprofits, all funders) do not have the diversity of actors required to create the systems-level view that contributes to a robust collective impact initiative. These cross-sector partners, who all have a role to play in the solution, share in co-creating the common agenda, identifying shared measures, and implementing the work required to achieve the effort’s goals.


4. Use data to continuously learn, adapt, and improve. Collective impact is not a solution, but rather a collaborative problem-solving process. This process requires partners to remain aware of changes in context, to collect and learn from data, to openly share information and observations with others, and to adapt their strategies quickly in response to an evolving environment. To accomplish this, initiatives should have clear learning priorities, build strong structures and processes for learning, and create a learning culture that enables the group to use meaningful, credible, and useful qualitative and quantitative data for continuous learning and strategic refinement. Many initiatives find it valuable to use a disciplined and formalized process to guide their use of data.


5. Cultivate leaders with unique system leadership skills. For collective impact initiatives to achieve transformational change, leaders must possess strong facilitation, management, and convening skills. They must be able to create a holding space for people to come together and work out their disparate viewpoints, they must possess the capacity to foster shared meaning and shared aspirations among participants, they must be able to help participants understand the complexity and non-linearity of system-level change, they must be dedicated to the health of the whole and willing to change their own organizations in service of the group’s agenda, and they must be adept at building relationships and trust among collaborators. These system leadership skills are essential for the backbone, and also other leaders in the collaborative such as steering committee members, community leaders, and action team leaders.


6. Focus on program and system strategies. The mutually reinforcing activities that the initiative takes on to achieve its goals should focus on collective program and system change strategies rather than individual programs or organizations. System strategies include strategies that increase communication and coordination across organizations, change the practices and behavior of professionals and beneficiaries, shift social and cultural norms, improve services system wide (by spreading techniques that already work within the community across organizations, or by bringing a new evidence-based practice into the community), and change policies.


7. Build a culture that fosters relationships, trust, and respect across participants. Collective impact partnerships require participants to come to a common understanding of the problem and shared goals, to work together and align work in new ways, and to learn from each other. Authentic interpersonal relationships, trust, respect, and inclusion are key elements of the culture that is required for this difficult work to occur. The backbone and other initiative leaders must be proactive in their efforts to create this culture.


8. Customize for local context. While the five conditions are consistent across collective impact initiatives, and initiatives benefit a great deal by learning from each other, customizing the initiative for the local context is essential. Initiatives can do their best work when they deeply understand the problem they are trying to solve locally—both from the data and input from the community and from understanding the existing work and coalitions that may be working on similar issues. Customizing the work to fit the local community context enables the coalition to honor, build on, and/or align with existing work and pursue system and program strategies that are most relevant to local needs.


These principles of practice were identified based on the work of the field of practitioners by the Collective Impact Forum in partnership with the Aspen Institute Forum for Community Solutions, FSG, the Forum for Youth Investment, Grantmakers for Effective Organizations, Living Cities, PolicyLink, the Tamarack Institute, and United Way Worldwide.


Five Conditions of Collective Impact

While our understanding of how to put collective impact into practice has deepened and expanded, the five conditions outlined in the original article Collective Impact remain the core of the approach.

  • Common Agenda: All participants have a shared vision for change that includes a common understanding of the problem and a joint approach to solving the problem through agreed-upon actions.
     
  • Shared Measurement: Agreement on the ways success will be measured and reported, with a short list of common indicators identified and used across all participating organizations for learning and improvement.
     
  • Mutually Reinforcing Activities: Engagement of a diverse set of stakeholders, typically across sectors, coordinating a set of differentiated activities through a mutually reinforcing plan of action.
     
  • Continuous Communication: Frequent and structured open communication across the many players to build trust, assure mutual objectives, and create common motivation.
     
  • Backbone Support: Ongoing support by independent, funded staff dedicated to the initiative, including guiding the initiative’s vision and strategy, supporting aligned activities, establishing shared measurement practices, building public will, advancing policy, and mobilizing funding. Backbone staff can all sit within a single organization, or they can have different roles housed in multiple organizations.


Share your thoughts

We look forward to hearing what you think about these principles, and what practices have been core to your collective impact work.


Download the Collective Impact Principles of Practice

A copy of this post is also available in the Forum's Resource Library.
