Evaluating the System

New York City is like no other place in the world. So why would its systems be any different? Perhaps the “city that never sleeps” is too busy to rest because it is trying to figure out how to protect and serve its 8.4 million residents. Taking on such a challenge requires a group effort, and that need for collaboration means the government relies heavily on for-profit and nonprofit entities to meet the demands of one of the world’s largest cities. With a system as complex as New York City’s, hundreds of thousands of service providers are involved, which makes it hard to gauge the true value or impact these organizations provide. While the for-profit world has long been rooted in research and evaluation, the nonprofit sector has not yet fully transitioned to a data-driven model. That said, there is a well-articulated debate over whether this is even the right direction for the nonprofit world. Although the details of that debate are beyond the scope of this article, New York City is hardly the only jurisdiction in which organizations struggle to measure their impact; smaller cities and even tiny towns wrestle with determining the true effect their services and programs have on the communities they serve.

Program evaluation and the use of data in the nonprofit world is a hot topic at the moment. While big data and analytics gain traction in so many other industries, the nonprofit sector is still deciding whether to buy into the idea of using metrics. This skepticism is only natural, but it can be addressed. In that sense, New York City makes a perfect case study, not only because of the sheer quantity of organizations but also because of the diversity of their programs. As Director of Evaluate for Change, a company that provides the training nonprofit leaders need to conduct program evaluation, I have worked with numerous nonprofits looking to use evaluation to prove their impact. At the same time, within the five boroughs I have also met many individuals who are hesitant about, or outright resistant to, evaluation. Whether an agency has welcomed the idea of evaluation or not, one trend I have noticed is that all of them are passionate about improving the communities around them.

At Evaluate for Change, we are constantly explaining that our overall goal is to help empower nonprofits to become more data-driven, but before we can achieve that we have to shift the current culture and provide these agencies with the tools they need to execute sound evaluations. Those tools are important; some argue they are essential, and in many ways they are right. Before any of that can happen, however, we must address the underlying issue: the fear associated with data. At Evaluate for Change, we call this the Intuition vs. Analysis Argument. As practitioners, we are trained to trust our intuition, and through years of experience that intuition becomes more valuable over time. This skill helps us serve our clients to the best of our ability. So what happens when we start thinking about using data? What happens when you throw that wrench into the mix? We often assume that using data will negate the years of experience and expertise we have developed. This is one of the common myths behind people’s hesitation about data. What needs to be kept in mind is that using intuition and analysis together strengthens programming; the two complement each other and should not be seen as rivals. The following is a real-world example of how intuition and analysis, together, can help increase a program’s impact. As a reader of this magazine, you may find a child welfare program evaluation case study both helpful and interesting. I hope I am right.

Let’s examine a program that teaches adolescents crucial life skills, of the kind formerly known as independent living programs. If you have any experience working on this type of initiative, you already know how loaded and difficult it can be. There are many different skills we must master within our lifetime, and as a practitioner in the field you have learned which specific interventions are effective in helping adolescents attain them. That intuition adds a perspective to the programming that data alone would not. But if we relied on this intuition alone, without measuring the program through data, we could not know its true impact. In some ways, we are programmed to use our senses to evaluate events. You observe a few adolescents graduate from the program and secure wonderful jobs. This translates to, “Wow, this program must work! They must have received the skills needed to secure these jobs, and now they are going to save money, get their own apartments, enroll in school, and (well, you get the point).” Without measuring all of the program’s outcomes, you can’t - and you shouldn’t - use these few examples as an indicator of an effective program. Many times, when we take a closer look, we find outcomes that are not obvious on the surface. For example, after looking at the results we realize that although a number of participants found jobs, fewer than 15% of them were able to retain those jobs. By taking a closer look at participants’ individual outcomes, we can determine the barriers to retaining employment.
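To make that closer look concrete, here is a minimal sketch in Python of what such an analysis might involve. The records and field names (job_placed, retained_6mo, stable_housing) are entirely hypothetical, invented for illustration; a real evaluation would use the program’s own outcome measures.

```python
# Minimal sketch of outcome analysis for a hypothetical life-skills program.
# All participant records and field names are invented for illustration.

participants = [
    {"id": 1, "job_placed": True,  "retained_6mo": False, "stable_housing": False},
    {"id": 2, "job_placed": True,  "retained_6mo": True,  "stable_housing": True},
    {"id": 3, "job_placed": False, "retained_6mo": False, "stable_housing": True},
    {"id": 4, "job_placed": True,  "retained_6mo": False, "stable_housing": False},
    {"id": 5, "job_placed": True,  "retained_6mo": False, "stable_housing": False},
]

# The outcome we notice anecdotally: participants finding jobs.
placed = [p for p in participants if p["job_placed"]]
print(f"Placement rate: {len(placed) / len(participants):.0%}")

# The outcome a closer look reveals: how many kept those jobs.
retained = [p for p in placed if p["retained_6mo"]]
print(f"Retention rate among placed: {len(retained) / len(placed):.0%}")

# A first pass at identifying a barrier: compare retention for
# participants with and without stable housing.
for housing in (True, False):
    group = [p for p in placed if p["stable_housing"] == housing]
    if group:
        rate = sum(p["retained_6mo"] for p in group) / len(group)
        print(f"Stable housing={housing}: retention {rate:.0%} (n={len(group)})")
```

Nothing here is sophisticated, and that is the point: even a simple tally like this surfaces the gap between placement and retention that a handful of success stories would hide.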

On the other hand, if we used only data to design our programs, we would lose the personal expertise and perspective of practitioners, which cannot be captured quantitatively. This is why Evaluate for Change stresses that it is only through the combination of intuition and analysis that we as a field can incorporate the use of data and metrics. The social sector must learn to embrace the data movement and let its passion fuel the advancement of the field.