It’s time to chip away at an often-taken-for-granted assumption in the social sector: that top-quality measurement is, and only is, outcomes measurement. This assumption has ballooned to costly and harmful levels. Move over, Dan Pallotta; it’s time to share your platform so we can roll back the flawed idea equating improving social sector data practices with measuring outcomes. Let’s call it “outcomes-superiority.” Change is already afoot.
Here are the top five reasons to move away from outcomes-superiority in measurement.
1. It over-concentrates resources on certain issues.
The way the story was told, or sold, years ago, our sector needed better information to select the most effective, change-making nonprofit organizations. Better information may have been referred to as outcomes, ROI, or moving the needle. The Social Impact Bond and Pay for Success models particularly entrenched outcomes-superiority around the 2010s, as though outcomes data were equally accessible on all issues. Yet so many Bond/Success examples focus on just one: workforce development. Workforce development programs frequently use outcomes measurement because employment and wage data are commonly understood and reasonably accessible. Contrast that with less well understood and less accessible data on mental health or addiction, reducing violence against women, or supporting refugees in transition. Outcomes-data-accessible issues became a magnet for funding, a draw so strong it came at the expense of organizations focused on other issues: issues whose organizations are less quick to respond when someone in a position of power says “Prove it!” Outcomes-superiority causes funders to pick the issues with accessible outcomes data rather than uplift individually exceptional organizations among peers with similar missions. In short, issue selection rather than organization selection.
2. It consolidates power for funders, takes power from the people.
Because outcomes-superiority is a currently accepted norm, funders feel justified in requesting outcomes data and the nonprofit leader feels they’ve missed the mark if it’s not on hand. This common interaction is the opposite of redistributing power from funders to nonprofit staff, much less the communities they serve. Conversational use of theoretical constructs, such as evaluation design and logic models, contributes to elitism. Elitism damages inclusion. Some funders extend their critique to refer to certain nonprofit organizations as unsophisticated, immature or data illiterate. Power plays often use patronizing language. It's not that theory and expertise shouldn't have a role, it's how and when they're wielded, and by whom.
3. It creates profit for the wrong groups.
Chasing outcomes measurement has fueled firms building their companies on a business model promising outcomes-focused measurement. These measurement firms’ incentives are to grow their companies, with (ironically) less incentive to support the achievement of outcomes. “Measurement for measurement’s sake” (MfMS) has spread infectiously because so many funders blanketly assume outcomes measurement is top-quality measurement. So many nonprofit leaders have stories about MfMS. These efforts take time and money away from using currently available data to its full extent.
4. It incorrectly positions nonprofits as the sole responsible party.
Outcomes measurement is well suited for certain issues (e.g., employment), as noted above. When it is applied to issues and organizations for which it is not well suited (hunger, disaster response, volunteering, mental health, anti-racist action…), it implies and further embeds a false culpability: that nonprofits are the organizations, and theirs the sector, responsible for single-handedly solving society’s injustices. Example: expecting a food bank to prove reduced rates of food insecurity focuses attention (and budget) on an ill-advised M&E effort, rather than on turning the tide on the systemic problems causing food insecurity, like legal but harmfully low wages.
5. It’s disconnected from the real-deal social sector data used daily.
The nonprofit sector is full of data, insights, reports, and the like. Check out the (outdated, but I still find interesting) Common Results Catalog showing, from the nonprofit sector’s perspective, the range of information collected. It’s time to shift from continual negative critique to more acceptance. Social sector data discussions are so frequently framed to begin with what’s wrong. Increasing how often we begin with positive acceptance of currently available nonprofit/social sector data focuses us on how we can more intentionally use the information we have now. Here are some common, current examples: people served, clients represented, job placements, and dollars of aid awarded. If we say we center communities, that centering must include how communities would define top-quality measurement for themselves. Further, “communities” are not a monolith, so it requires flexibility to accept that definitions will vary.
In Conclusion
Outcomes measurement has its uses, but that’s another article. The nonprofit sector is flush with professionals dedicating their careers to abating heavy and complicated societal issues. When someone implies these professionals aren’t focused on outcomes because they don’t have outcomes data, pause. Was the person implying the sector’s negative intent (that they are not focused enough on the mission) a privileged, white-collar man? Before joining the critique, turn your attention to the speaker’s incentive for saying such a ridiculous thing. Of course the vast majority of nonprofit teams strive for their work to achieve the bigger vision… dare I say, the outcome. Demanding proof, and accepting outcomes data as the only acceptable form of proof, sprouts from the same roots as the rampant inequities in our society. It’s time we agree on a new norm that takes outcomes data off its pedestal. Good measurement is measurement that doesn’t cause harm and has a purpose that serves the people. As the saying goes: There is much to be done. And undone.