EBL 101
Research Methods: The Most Significant Change Technique

Virginia Wilson
Director, Centre for Evidence Based Library and Information Practice (C-EBLIP)
University Library, University of Saskatchewan
Saskatoon, Saskatchewan, Canada
Email: virginia.wilson@usask.ca
Originally published in: Evidence Based Library and Information Practice, 9(3), 121–123. https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/22918/17169

Received: 11 Aug. 2014    Accepted: 29 Aug. 2014
© 2014 Wilson. This is an Open Access article distributed under the terms of the Creative Commons-Attribution-Noncommercial-Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.
In this EBL 101 column I am exploring a technique that has largely been used to evaluate international development programs. The Most Significant Change technique (MSC) was developed by Rick Davies and Jessica Dart in the early 2000s to evaluate complex interventions. It takes place during the lifespan of the intervention program, so it is a process of continuous evaluation. While I have not encountered this method used in library and information studies, it strikes me that a technique such as this would be useful in a variety of situations: evaluating instruction, appraising public programs, assessing various initiatives in any area of the library (client services, technical services, etc.), and others that I am probably just not seeing right now.
The methodology is participatory, so its use would be a good chance to have direct contact and conversation with various stakeholders: library patrons, library staff, higher administration, the public, and whoever else is involved with whatever is being looked at and changed. While this method could perhaps be used on smaller projects in the library, I see its usefulness as centred more upon large, organization-wide developments and changes, as a way to continuously monitor the situation and make adjustments as the project progresses. Examples of larger projects include the implementation of a different organizational structure; the design, development, and building of a library facility; advancement initiatives; or a large-scale research project that is national in scope.
Dart and Davies (2003) refer to MSC as a “dialogical, story-based evaluation tool” that eschews “conventional monitoring against quantitative indicators” in favour of the “collection and participatory interpretations of ‘stories’ about change” (p. 138). Through these stories, “examples of significant program outcomes are collected and presented to designated groups of stakeholders who deliberate on the value of these outcomes in a systematic and transparent manner” (Dart, 2005, p. 261). Though stories are the focus of the analysis, Dart and Davies (2003) indicate that “the central aspect of the technique is not the stories themselves, but the deliberation and dialogue that surrounds the process of selecting significant changes” (p. 138).
On his news website, Davies (2008) states that MSC is most useful in the following situations:
MSC consists of seven key steps (although the guideline document by the same authors, listed below under resources, outlines 10 steps):
1. Unlike performance indicators, which are specific and focused, the domains of change are broad and loose, allowing program participants to define them for themselves. The domains are identified by stakeholders.
2. “Stories of significant change are collected from those most directly involved” over a time period decided upon at the start of the project. The time period can be extended if more stories are needed.
3. The participants are those who are involved with the program in question, such as beneficiaries, clients, and field staff.
4. Stories are gathered by asking one simple question: “During the last month, in your opinion, what was the most significant change that took place in the program?”
5. The stories are then analyzed and “filtered up through the levels of authority typically found within an organization or program”, with each level selecting the most significant change stories to be sent on up the ladder.
6. Continuous communication amongst stakeholders participating in and reviewing the stories is a key component, so that the feedback can be incorporated into each subsequent round of story collection.
7. Verification can take place by visiting the sites of the events described in the stories for follow-up.

(Adapted from Dart & Davies, 2003, pp. 138-139)
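
To make the flow of stories concrete, here is a minimal sketch in Python of steps 4 and 5, assuming a single domain of change. The names Story and filter_up, and the idea of modelling a panel's deliberation as a selection function, are illustrative inventions, not part of Dart and Davies's published technique; the deliberation itself is human dialogue, and the code only tracks what moves up the ladder and why.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Story:
    teller: str  # who reported the change (step 3)
    text: str    # answer to the "most significant change" question (step 4)

# A panel's deliberation is human dialogue; it is modelled here only as a
# function from candidate stories to the chosen story plus the panel's
# recorded reason for choosing it.
Panel = Callable[[list[Story]], tuple[Story, str]]

def filter_up(grouped: dict[str, list[Story]],
              unit_panel: Panel,
              higher_levels: list[tuple[str, Panel]]) -> Story:
    """Step 5: each unit forwards its most significant story; every higher
    level pools what was forwarded and selects again, recording the reason
    so that the selection process stays transparent."""
    pool = []
    for unit, stories in grouped.items():      # e.g., one list per branch or team
        choice, reason = unit_panel(stories)
        print(f"{unit}: forwarded '{choice.text}' because {reason}")
        pool.append(choice)
    for level, panel in higher_levels:         # each level of authority in turn
        choice, reason = panel(pool)
        print(f"{level}: forwarded '{choice.text}' because {reason}")
        pool = [choice]
    return pool[0]
```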
For example, the above steps might be mapped onto a library or information management process in the following way when a complex intervention is chosen for continuous evaluation, e.g., the move to a programmatic approach to academic library instruction. Domains to be monitored are selected by participants in this change process, and by those whom the change affects, such as librarians, students, library staff, and others from the larger institution (professors from various colleges who utilize library instruction, administrators from colleges who are participating in the construction of a programmatic approach to instruction, etc.). Stories are gathered from the participants regularly over the academic year to provide continuous monitoring of the change intervention. Stories are the result of asking a simple question, such as “What was the most significant change that happened this month as a result of the programmatic approach to instruction recently implemented?” Those monitoring the change analyse and examine the stories at every level, with the different analyses going up the organizational structure to be further analysed. All participants provide and share continuous feedback, providing more information about the change occurring. Follow-up should happen, with various participants talking to other participants about the change taking place, verifying the analyses of the stories.
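
Continuing the illustrative sketch above (and reusing its Story, Panel, and filter_up definitions), the instruction example might be exercised like this; the units, stories, and the stand-in panel function are all invented for the purpose of the example:

```python
stories = {
    "first-year instruction team": [
        Story("student", "I planned my own search instead of guessing keywords"),
        Story("liaison librarian", "sessions across colleges now share learning outcomes"),
    ],
    "upper-year instruction team": [
        Story("professor", "assignments now build on skills taught the term before"),
    ],
}

def toy_panel(candidates: list[Story]) -> tuple[Story, str]:
    # Toy stand-in for a real panel's deliberation: picks the longest account.
    choice = max(candidates, key=lambda s: len(s.text))
    return choice, "the panel judged this change most clearly tied to the program"

most_significant = filter_up(
    stories,
    unit_panel=toy_panel,
    higher_levels=[("teaching and learning committee", toy_panel),
                   ("library administration", toy_panel)],
)
print("Selected for the next feedback round:", most_significant.text)
```

In practice, of course, each “panel” would be a real group deliberation that records its reasons, and those recorded reasons, not any selection rule, are the heart of the technique.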
The MSC technique could be an interesting framework with which to assess and evaluate our professional practice. Dart and Davies (2003) state that the key strength of MSC “lies in its ability to facilitate a dynamic dialogue between designated stakeholders” (p. 152). As librarians and information professionals, we should be looking for new and innovative ways to communicate with our users, clients, patrons, and with each other as we strive to provide the best services possible.
Other resources and examples of MSC in action
Davies, R., & Dart, J. (2005). The ‘Most Significant Change’ (MSC) technique: A guide to its use. Retrieved from http://www.alnap.org/resource/8102

Lunch, C. (2007). The Most Significant Change: Using participatory video for monitoring and evaluation. Participatory Learning and Action, 56, 28-32. http://www.iied.org/participatory-learning-action

Wilder, L., & Walpole, M. (2008). Measuring social impacts in conservation: Experience of using the Most Significant Change method. Oryx, 42(4), 529-538. http://dx.doi.org/10.1017/S0030605307000671

Willetts, J., & Crawford, P. (2007). The most significant lessons about the Most Significant Change technique. Development in Practice, 17(3), 367-379. http://dx.doi.org/10.1080/09614520701336907
References

Dart, J. (2005). Most significant change technique. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 261-263). Thousand Oaks, California: Sage.

Dart, J., & Davies, R. (2003). A dialogical, story-based evaluation tool: The Most Significant Change technique. American Journal of Evaluation, 24(2), 137-155. http://dx.doi.org/10.1177/109821400302400202

Davies, R. (2008, Apr. 13). Most significant change (MSC). Monitoring and Evaluation NEWS. Retrieved from http://mande.co.uk/special-issues/most-significant-change-msc/