Posted Wednesday, July 30, 2014 at 7:33 pm

Making the Choice to Use Developmental Evaluation

When the Missouri Foundation for Health decided to focus on decreasing infant mortality rates, we knew the approach was going to be different from what we had done before. Supporting effective programs and services hadn’t solved the problem – something bolder was needed.

I’m sure many of you have had that moment, where you realize the problem you’re facing needs a whole new type of solution. For the Foundation, understanding we needed to do things differently also made us think about how we could use evaluation differently than in the past. If our goal was to let new solutions emerge, to be open to the wide variety of ways a collective impact approach might go about solving the problem, we would initially need an evaluation set up not to track predefined metrics, but instead to help us and our initiative partners generate new pathways to success.

For all of these reasons, we decided to engage developmental evaluators. Developmental evaluation is a different approach to evaluation, one that focuses on using evaluative information to guide decisions about what needs to happen and to explore whether and how it’s working.

Finding the right developmental evaluators was a serious undertaking for us – they aren’t on every street corner, at least not in the Midwest! We talked to our peers at other foundations, reached out to leading evaluators nationally who work in complex, messy settings, and developed a request for concept papers to share our vision but also elicit ideas from practitioners for what the role could be. We reviewed the responses and asked for input from our initiative partners and advisors. Then we asked a group of evaluators to respond to the concept paper and interviewed our top candidates.

Applying Developmental Evaluation in a Collective Impact Context

If we were going to use this evaluation approach, we believed we all had to understand it, own it, and use it, including the backbone organizations. This meant skill building became a large part of what our selected developmental evaluators, Spark Policy Institute and the Center for Evaluation Innovation, would provide for us. An important first step in this skill building was to work together to break down the problem of high infant mortality into the aspects of the problem that are simple, complicated, and complex, for example:

  • Complex (ongoing, moving parts for which there will never be a perfect set of practices/protocols): The fragmented systems of care, disparities, the confluence of systemic problems with the individual choices that people make, and the need to change public policy and public will.
     
  • Complicated (something we can solve with the right set of practices/protocols, but we have to figure them out): Access/utilization of existing services, including evidence-based practices and the different experiences women from different backgrounds have in the service delivery system.
     
  • Simple (something that a simple protocol/practice can solve): Identifying and implementing evidence based programs, collecting and using data to understand the problem, and leveraging existing relationships.

Going through this process helped us identify the parts of our work where we could move faster and the parts where we needed to be thoughtful, pull in additional perspectives, and leverage developmental evaluation to help figure things out.

We emerged from that dialogue with evaluation questions that our developmental evaluator helped us to answer, questions like:

  • How do potential partners (including within the backbone organizations) view and prioritize infant mortality, including such things as their values, views about causes of infant mortality, needs and barriers, and experiences working on infant mortality?
     
  • How can outside influences be harnessed to develop the strategy in new ways?
     
  • What is a process and structure for engaging stakeholders – how to stage the engagement and how to motivate participation?

I’ll leave the description of how we answered these questions for a second blog from our evaluation partners, but I do want to close with the recognition that we learned some critical lessons in this process.

Lessons Learned

  • Because developmental evaluation is a fairly new approach, how you introduce it to the participants is important. It’s critical to provide them with something they can concretely engage with and use, versus providing mostly theory up front.
     
  • Participants need to build their understanding of developmental evaluation in order to know how to engage with it. It can't happen in isolation, off to the side; rather, it needs to steadily intersect with the work.
     
  • Make sure your funders are on board with the idea of emerging strategy and an evaluation that supports it, rather than one focused primarily on accountability to a predefined plan.
     
  • Select an evaluator who has experience being adaptive and flexible, open to changing scopes of work and able to be your thought-partner in the effort, rather than your contractor or an outside observer.

Sometimes in philanthropy we develop strategies and implement activities based on our own experiences and knowledge, without allowing ourselves to admit we don't have all the answers. This is true of both funders and grantees; we don't want to look like we don't know. The truth is, if we did have all the answers, the problem of high infant mortality rates in our communities would be solved by now. This initiative of the Missouri Foundation for Health has given foundation staff and community partners an opportunity to allow the right solutions to emerge, and developmental evaluation provides the structure to keep the work on track.

Read the second half of this blog series that focuses on developmental evaluation and collective impact, published on August 5 and authored by Jewlya Lynn (Spark Policy Institute).