Measuring Performance

| by Bert Armstrong

Today's blog post serves as an introduction to Armstrong McGuire's newest team member. Staci Barfield joins our firm this month as a Senior Advisor, bringing over 25 years of leadership experience in both the nonprofit and for-profit sectors, with significant success in maximizing organizations' operational capabilities. Her expertise lies in strategic planning, process improvement, performance measurement, organizational design, project management, program delivery, technology implementation, board development, risk mitigation, and change management. Staci also has a strong track record in fundraising, event planning, and marketing and public relations. Read Staci's biography here and join us in welcoming her to our team.

We invite you to enjoy Staci's first blog as a member of our team:

I have an addiction to podcasts. In the car, working in the yard, or cleaning the house, you're likely to find me listening my way through the 400+ episode backlog of podcasts to which I subscribe. I recently came across one show that really made me think about how we measure the work of nonprofits.

In When Helping Hurts (first aired July 12, 2017), Freakonomics Radio host Stephen Dubner explored the Cambridge-Somerville Youth Study, a program initiated in the 1930s by Harvard physician Richard Clark Cabot. Intended as a longitudinal research project to test the value of mentoring (or "directed friendship") for at-risk boys and young men, the program found (spoiler alert) that those who were mentored performed worse than the control group in all seven of the life categories tracked.

One sound bite in particular, voiced by University of Maryland criminology professor Denise Gottfredson, caught my attention: “People just assume that if you do something that sounds good, it will have positive effects.” In the nonprofit world, this is often called the domain of cherished theory, our categorization for beliefs beyond measure, because how on earth can you track the efficacy of a program over a lifetime? But the Cambridge-Somerville Youth Study did, and it found an answer that few expected.

This reminded me that, too often, we evaluate the social value of nonprofits by outputs (the number of kids mentored) rather than impact (how lives were affected). Perhaps we do so because outputs are easier to track, because our funders like to see numbers, or because factors outside our control can influence impact. Regardless, to be a true barometer of social value, the metrics by which we track the performance of programs must address both the quantitative and the qualitative because, sometimes, the numbers just don't add up.
