Please note, this post was originally published on ArtsFwd.org
At its core, evaluation is about learning. Learning can be defined as knowledge that supports one’s capacity to address emerging challenges and opportunities in a given context. This capacity can be built among individuals and among groups (for example, an organization or a community). Meaningful evaluation asks questions, collects data, and interprets and communicates findings that build our collective capacity to learn.
However, there are broad assumptions about the purpose of learning that inform how evaluation is designed and executed. For example, as expanded upon in Jamie Gamble’s post on ArtsFwd, “Evaluating Innovation: An Introduction to Developmental Evaluation,” the purpose of formative and summative evaluations is to optimize, standardize, and judge the effectiveness of an approach to a challenge or opportunity. The assumption here is that a standardized approach is desirable. But what happens when a context is complex and emergent, and a standardized approach is unhelpful? Enter Developmental Evaluation.
One useful way of framing the key features of Developmental Evaluation is laid out in Michael Quinn Patton’s 2016 article, “What is Essential in Developmental Evaluation? On Integrity, Fidelity, Adultery, Abstinence, Impotence, Long-Term Commitment, Integrity, and Sensitivity in Implementing Evaluation Models.” In this article, Patton lays out eight essential principles of Developmental Evaluation. Below, I use Patton’s principles (named in parentheses in each point) to explore:
Why is a Developmental Evaluation approach helpful in complex, systems-change work?
A Developmental Evaluation approach is designed to adapt to shifting contexts (Developmental Principle): In complex and emergent contexts, there is no formulaic approach to “executing” innovation. Strategies develop as the context shifts and becomes better understood. Developmental Evaluation pays specific attention to context and to the processes used to adapt to these shifts.
Developmental Evaluation provides a mechanism to identify and draw on broader patterns (Evaluation Rigor Principle): Complex processes are messy. Like other evaluation approaches, developmental evaluation relies on systematically collecting and reflecting on broader patterns: gathering data from a variety of stakeholders, using diverse data collection methods (e.g., observation, interviews, surveys), and maintaining a dual focus on elements of the process that have already stood out as important and those that are still emerging. One tension in developmental evaluation lies between enabling quick feedback loops and maintaining this evaluative rigor: to support the loops, data must be continuously aggregated and analyzed for general themes. At the same time, the data underlying any observation must be clearly documented so that previously identified patterns can later be revisited, refined, or even rejected.
Overall, the data collected and the patterns identified act as a touchstone for understanding complexity beyond one’s individual impressions of experiences, which in turn forms the foundation for revisiting, revising, or rejecting our strategic assumptions.
Developmental Evaluation is built to inform strategic design and decision making (Utilization-focused principle; Cocreation Principle; Timely feedback): In contrast to other forms of evaluation, which may focus on assessing a final product, meeting benchmarks, or supporting accountability, the data generated through developmental evaluation is explicitly intended for strategic decision making. In complex contexts, the paths that connect the innovation with the intended long-term outcome(s) will be unclear and indirect. However, investing in evaluation reflects the recognition that it is possible to learn about our strategies and their impact despite this level of complexity. Co-creating an evaluation strategy that generates feedback loops (or as I like to call them, learning loops) supports innovators’ ability to adapt in complex contexts.
Moreover, as evaluative thinking capacity is actively built among team members, the evaluator can encourage the team to take ownership of sharing reflections and critical questions (rather than treating this as a separate activity led by the evaluator). This supports the sustainability of the evaluation strategy and builds evaluative and critical thinking capacity, which in turn supports planning and implementation in complex contexts more broadly.
Developmental Evaluation pays explicit attention to innovation, complexity, and systems dynamics (Innovation niche principle; Complexity perspective principle; Systems thinking principle): Complex systems are dynamic, emergent, and interconnected. Developmental evaluation strives to be aware of the characteristics of complex systems so that attention is paid to what supports innovation and strategic decision making in these contexts. A helpful article that connects characteristics of complex systems to a developmental evaluation approach is Preskill and Gopal’s “Evaluating Complexity: Propositions for Improving Practice.” Some of the key developmental evaluation strategies that explicitly respond to characteristics of complexity include: paying attention to how elements of the system interact with and influence each other; identifying instances of momentum or stagnation in the system; and exploring the non-linear relationships between cause and effect.
Overall, developmental evaluation is an approach to learning that recognizes the specific needs of designing strategic innovations in complex, emergent contexts. While any evaluation design will be grounded in specific circumstances, my next post will explore broad considerations of ‘how’ to embed a developmental evaluation approach in an initiative.