
Bridging the Research-to-Practice Divide: Why Funders Should Invest in Evaluation Learning Capacity

By Kelly Feltault, Ph.D.


The Real Research-to-Practice Gap

Across the arts and health field, the evidence is clear: creative engagement improves well-being, belonging, social connection, and resilience. From dance programs supporting Parkinson’s patients to community choirs reducing loneliness, the clinical research keeps growing. Yet when we look at how that research translates into program evaluation within community practice, we find a stark gap.


Community-based arts organizations are still being asked to “prove” their impact with the same methods and metrics used in clinical or academic settings. These metrics often miss the heart of what community arts programs actually achieve, and the methods themselves are often infeasible for small organizations. But the real issue isn’t just a mismatch of tools. It’s a lack of capacity.


Most small nonprofits don’t have the staff time, systems, or expertise to collect and use data in meaningful ways because their funders rarely pay for it. Grants are structured (and therefore written) to support programs, not the infrastructure that sustains them. That means no budget for professional development, learning systems, or evaluation staff time, all of which are essential to building evidence-based ecosystems. Even within the shift to trust-based philanthropy (TBP), little infrastructure is provided to build the capacity of nonprofits and empower them to collect, learn from, and use their own outcome data—whether that’s in narrative or numerical form.


This is the real research-to-practice gap: we’ve built a funding system that prizes evidence and evidence-based decision making, but doesn’t invest in the conditions that make learning possible.


When funders expect nonprofits to produce measurable results without resourcing the learning capacity to do so, it’s like asking a business to turn a profit without paying staff or maintaining its equipment. We would never invest in a company that ignored its infrastructure, yet we routinely expect nonprofits to do exactly that.


“The problem isn’t that community organizations resist evaluation; it’s that we haven’t resourced them to measure what matters on their own terms.”


From Data Burden to Learning Infrastructure

At Outcome Studio, we use a framework called Ways of Knowing to help organizations move from intuitive, experiential knowledge to measurable, actionable learning. It’s a process that mirrors how knowledge evolves in human systems: we begin with experience, make meaning through creativity, translate those insights into frameworks and models, and finally apply what we’ve learned to make change.


This cyclical process turns learning into infrastructure. It becomes what we call the infrastructure of equity, a system that values multiple ways of knowing and gives community organizations the structure, language, and tools to make their own evidence visible.


When we build MEL (Monitoring, Evaluation, and Learning) capacity, we’re not just creating better data; we’re creating more equitable ecosystems for knowledge. We’re saying that community-based practitioners deserve the same investment in professional growth, data systems, and strategic learning that any research institution receives.


“MEL isn’t about proving; it’s about learning. When we treat it that way, we expand what counts as evidence.”


A Real-World Example: The Creative Forces® MEL Consultations Project

In 2021, Americans for the Arts, through a National Endowment for the Arts and Department of Defense cooperative agreement and in partnership with the Pabst Steinmetz Foundation, launched a bold experiment with the Creative Forces®: NEA Military Healing Arts Network. The goal was simple but radical: what would happen if we paid arts nonprofits not just to deliver programs, but to learn how to evaluate them?


Twelve community arts organizations participated in the MEL Consultations Project, where they received paid time, training, and coaching to create their own evaluation frameworks. Over twelve weeks, each organization developed a theory of change, a logic model, and a data collection plan with instruments, translating their own experiential and artistic knowledge into measurable outcomes on their own terms.


The results were powerful.

  • Confidence and clarity increased. Staff began seeing evaluation not as a compliance exercise but as a creative, reflective practice.

  • Funders took notice. Organizations reported being “invited to the table” for the first time by health partners and local governments.

  • Evaluation became strategy. Executive directors used the models for planning and board education, freeing program staff to focus on implementation.

  • The ripple effects continue today. Several of these nonprofits are now key partners in county mental health initiatives, collaborating with VA hospitals, and securing expanded funding because they can clearly articulate—and measure—their impact.


This project proved what many of us already know: when funders invest in learning capacity, they don’t just strengthen individual organizations—they build durable systems for collaboration, advocacy, and innovation.



Why Funders Must Invest in Capacity, Not Just Data Systems

Too often, “investing in evaluation” is equated with buying software or outsourcing analysis. Technology plays an important role: it can automate collection, reduce burden, and surface insights faster, especially when paired with the reflective, relational practices that help teams interpret and act on what the data shows. But learning happens when people have the time, confidence, and context to interpret what the data means.


The most effective systems are hybrid systems: tools that make data accessible, paired with people who know how to use that data to make meaning, build shared understanding, and refine programming in ways software alone cannot. Software can surface patterns, but only practitioners can translate lived experience into practice-based evidence that reflects community realities.


Building staff evaluation capacity is what turns data into insight and insight into action. It’s what allows an organization to:

  • Design meaningful data collection aligned with its mission.

  • Reflect on what the data means for programs and people.

  • Use findings to inform strategy, partnerships, and storytelling.


When funders support this kind of capacity building, they move from transactional reporting to transformational learning. And that shift changes everything—from accountability to sustainability.


Imagine if funders saw MEL not as overhead but as the core infrastructure of change. A regenerative ecosystem begins to form: staff have time to learn, reflect, and adapt; programs evolve based on feedback; and organizations become more resilient and strategic. Over time, this makes nonprofits regenerative—able to renew themselves through learning—rather than extractive systems that burn out staff and rely on short-term funding cycles.


When funders resource both—the human learning capacity and the technological systems that sustain it—they create regenerative organizations where insight, not exhaustion, drives improvement.


Reframing Learning as Infrastructure

If we want to close the research-to-practice gap for arts and health, we have to stop treating learning as optional. It is the foundation that makes every other investment work.


Funders can start by asking three simple questions:

  1. Have we funded the staff time and systems needed to learn from this work?

  2. Are we building evaluation into the grant, or simply asking for data after the fact?

  3. Are we defining evidence narrowly—or creating space for community knowledge to count?


When the answer to those questions is yes, we move from accountability to alignment, from compliance to collaboration. We create a culture of shared learning—one where evidence grows from experience and capacity is nurtured, not extracted.


We would never invest in a business that refused to pay its employees or maintain its infrastructure. Nonprofits deserve the same logic. When we invest in learning capacity, we build the infrastructure that keeps impact alive long after a grant cycle ends.


About the Author

Kelly Feltault, Ph.D., is co-founder of Outcome Studio and the DataStory Workshops, where funders and creative nonprofits learn how to measure what matters and tell their stories with confidence. She is also an artist and certified Therapeutic Arts Facilitator. You can follow her on LinkedIn.

 
