Abstract
The monitoring and evaluation function provides for accountability and, to some extent, transparency and, therefore, governance. However, this function can only be effective if it is conceptually linked within development interventions and public policy. In the literature, there is an explicit discussion of the middle-third tier (how to monitor and evaluate) as well as the bottom-third tier (data collection and storage, data processing and analysis, reporting results and findings, and integrating results and findings into planning and implementation as well as overall decision making). Unfortunately, the top-third tier that links monitoring and evaluation within development interventions (the what) and public policy (the how) is implicit, if present at all. The discussions often point out that monitoring and evaluation is a management and decision-making tool but they omit or fail to link it to development interventions and public policy, leadership and governance. In this paper, we interrogate literature from a systems thinking perspective to derive a model that conceptually links the monitoring and evaluation function within development interventions and public policy. In doing so, we point out and link the five components (cultural, political, economic, social and environmental) and two processes (imminent and immanent) of development. Similarly, we point out and link the five components (leadership, governance, political economy, institutional arrangements and organisational arrangements) and three processes (research, decision making and the public policy cycle) of public policy. It is in the latter that we point out, situate and link the monitoring and evaluation function. We envisage that the proposed model may be useful in reconfiguring institutional and organisational arrangements to foster effective monitoring and evaluation of development interventions.
Introduction
At some point, the reason underlying absent or ineffective monitoring and evaluation of development interventions in some African countries was a lack of political will in its broad sense, that is, including influential bureaucrats and technocrats. This was partly because monitoring and evaluation sometimes provides information that is politically undesirable (Baradei, Abdelhamid & Wally 2014). However, as Porter and Goldman (2013) point out, in recent years there has been a growing demand for evidence-based decision making among politicians and bureaucrats – more so from the former as the continent becomes more democratic or as citizens increasingly demand accountability from their ruling elite (Baradei et al. 2014).1 Therefore, politicians in Benin, South Africa and Uganda have thrown their weight behind monitoring and evaluation (Porter & Goldman 2013). To this list, we can add Kenya, Ghana and Rwanda.
Currently, it seems the political weight is a little too much for the civil servant, who has eventually taken to monitoring and evaluating development interventions for compliance rather than institutionalising or streamlining the function – that is, linking monitoring and evaluation within development interventions (the what) and public policy (the how). If institutionalised or streamlined, the monitoring and evaluation function can be effective. Such an arrangement can provide for effective and efficient public administration, operations management and general decision making (Fourie 2006). Eventually, it provides institutional and organisational arrangements that foster integrity, accountability and transparency (Adejemboi 1998; Labelle 2010).
In this paper, we apply systems thinking to commonly presented definitions of monitoring and evaluation to derive a model that conceptually links the monitoring and evaluation function within development interventions and public policy. First, we ‘formulate the mess’ by identifying the key words or terms that need further clarification in the common definitions of monitoring and evaluation. Second, we distinguish between contextual terms and key terms. Third, among the contextual terms, we identify the broadest term that encompasses the other terms. Lastly, we ‘idealise or realise’ a solution by systematically linking the contextual terms, beginning with the broadest, to the other contextual terms and, thereafter, all the key terms. We envisage that the proposed model may be useful in reconfiguring institutional and organisational arrangements to foster effective monitoring and evaluation of development interventions. We do not explicitly discuss the importance and usefulness of monitoring and evaluation because we inherently think, without reservations, that this function is important in any development intervention.
The approach – Gharajedaghi’s systems methodology and Fisher’s devising seminars
Fundamentally, ‘how can one decode an academic field of study?’ There is no obvious answer to this question, but one can attempt it by pointing out and then relating or linking the fundamental and contextual aspects of the academic field of study under interrogation. Wotela (2016) has detailed an approach to decoding academic fields of study. In a nutshell, he applies the systems methodology described in Gharajedaghi (2006:107)2 to see ‘through the chaos and understand the complexities’ of an academic field of study. Gharajedaghi’s (2006) systems thinking methodology is anchored at the centre of four foundations – that is, holistic thinking, operational thinking, self-organisation and interactive design. Holistic thinking provides for a general approach to any academic field of study using a set of verifiable assumptions (structure, function and process) and how these may be interconnected. However, Wotela (2016) has collapsed the concepts of structure and function into one concept, simply called a ‘component’, but retained the concept of ‘process’. In his paper, a ‘component’ describes the independent parts of the whole. For example, the study of economics (the whole) is made up of two components – that is, microeconomics and macroeconomics. A ‘process’ describes activities or operations that help with realising the objectives of the whole and any of its ‘components’. Put differently, processes are vehicles that allow for getting the products of the whole or the products of its independent parts (components). For example, research is what one has to do to yield economic analyses, whether microeconomic or macroeconomic. Finally, to decode an academic field of study, one should interrogate literature guided by the following six questions: (1) ‘what is [insert field of study of interest]?’, (2) ‘what is the purpose of [insert field of study of interest]?’, (3) ‘what are the components (structure and function) of [insert field of study of interest]?’, (4) ‘what are the processes in [insert field of study of interest]?’, (5) ‘what are the established facts in [insert field of study of interest]?’ and (6) ‘what are the key issues and debates in [insert field of study of interest]?’
Using the steps detailed in Wotela (2016), we applied systems thinking, or rather the six questions, to suggest an initial model that conceptually links the monitoring and evaluation function within development interventions and public policy. We then applied summative content analysis to interrogate literature that supports such a relationship and ended up with an initial framework, which we subjected to a modified version of Fisher’s (1983) ‘devising seminars’ over a period of 3 years. The ‘devising seminars’3 comprised about 300 University of the Witwatersrand (WITS) School of Governance postgraduate students4 (divided into groups of about six students). Based on their experience, they interrogated, verified and consequently modified the initial framework and assumptions until we arrived at a satisfactory and fairly robust model, which we share in this paper.
Identifying the mess5 – What is the problem?
Here we describe the problem or identify the mess to figure out how we can conceptually link the monitoring and evaluation function within development interventions and public policy. To do so, we asked the postgraduate students – who by their own right are seasoned development practitioners and public administrators and managers with a notable proportion working in the South African civil service and a few from other African countries – to review the following descriptions of monitoring and evaluation and, thereafter, provide their perspectives (see Box 1).
BOX 1: Common definitions of monitoring and evaluation that provided a basis for the discussion.
In the first iteration, the students reported that they understand monitoring and evaluation and its importance. However, when probed on the functionality of monitoring and evaluation inherent in each of these words, they noted that several of the words in these descriptions of ‘monitoring’ and ‘evaluation’ are not immediately clear and are yet to be fully understood. We then suggested that one should perhaps first understand all these words to truly and fully comprehend monitoring and evaluation.
In the second iteration, we examined these words closely and identified two groupings. The first consists of key terms – that is, those that we cannot do without when describing monitoring and evaluation. The second group consists of contextual terms – that is, words we refer to when clarifying the role of monitoring and evaluation. Figure 1 shows these two groupings of words found in common definitions of monitoring and evaluation that need clarification before one can truly understand monitoring and evaluation. We conclude that the problem is resilient because it is embedded in the descriptions of monitoring and evaluation and, therefore, easily overlooked. Consequently, from a common-sense point of view, the description of monitoring and evaluation makes perfect sense but not so from a conceptual or functional point of view.
FIGURE 1: Showing contextual and key terms found in common definitions of monitoring and evaluation.
Mapping the mess – How do we structure the solution?
We then decided to discuss and link the contextual and key terms inherent in the definition of monitoring and evaluation. A quick examination of the contextual terms points to development as the broadest term encompassing the other contextual and key terms. Therefore, we opted for this term as our starting point. Besides, starting with development makes logical and practical sense because, as Porter and Goldman (2013) have argued, monitoring and evaluation will only influence decision making and the allocation of resources if it links with the overall mandate of government – which is to foster development.
We used the following six7 questions – derived by Wotela (2016) using the systems thinking methodology – to guide our explicit understanding of development before mapping and linking it to the other terms in Figure 1:
What is development?
What is the purpose of development?
What are the components (structure and function) of development?
What are the processes in development?
What are the established facts in development studies?
What are the key issues and debates in development studies?
In as much as the definition of development is academically and politically debatable, it fundamentally entails ‘change’ in the short term, medium term and long term (Sumner & Tribe 2008)8. Effectively, development is a transition from an undesirable state to another perceived to be relatively desirable in a variety of human aspirations (Slim 1995). We can group these aspirations into cultural development (Burkey 1993; Gereffi & Fonda 1992; Jack-Akhigbe 2013), political development (Acemoglu & Robinson 2012; McFerson 1992), economic development (Sachs 2005), social development (Gray 2006; Roseland 2000; Watson 2012) and environmental development (Roseland 2000; Slim 1995). In an attempt to adhere to the predefined order of systems methodology, we compared this grouping with other sources. For example, Figure 2 shows that these components of development are similar to the determinants of human exclusion proposed by the United Nations Economic Commission for Africa (2014:21).
FIGURE 2: Showing determinants of human exclusion proposed by the United Nations Economic Commission for Africa.
Breaking down development into its components is important in monitoring and evaluation because this field is highly logical and focused, owing to its dependence on the theory of change, the logical framework, the results chain and the results framework. One can further break down each component to eventually get to the attributes (qualitative) and variables (quantitative) of each component of development. Attributes and variables consequently form the basis for developing monitoring and evaluation indicators. Collectively, these five sets of indicators – one for each component of development – should comprise the development indicators.
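To illustrate this breakdown, a minimal sketch follows (in Python, with hypothetical indicator names of our own that are not drawn from the literature) of how each development component can carry its own set of indicators which, taken together, make up the development indicators.

```python
# A minimal, illustrative sketch (hypothetical indicator names) of how the five
# development components can be decomposed into attributes/variables that serve
# as monitoring and evaluation indicators.
development_indicators = {
    "cultural":      ["heritage-site participation rate", "local-language schooling share"],
    "political":     ["voter turnout", "perceived accountability index"],
    "economic":      ["household income", "unemployment rate"],
    "social":        ["school enrolment rate", "access to primary health care"],
    "environmental": ["households with safe water", "protected-area coverage"],
}

# Collectively, the five sets form the development indicators for an intervention.
all_indicators = [i for component in development_indicators.values() for i in component]
print(f"{len(all_indicators)} development indicators across "
      f"{len(development_indicators)} components")
```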
Lastly, as Sumner and Tribe (2008) have argued, development processes can take two forms: immanent and imminent. The former refers to unintentional development such as the rise of capitalism and the private sector as well as street vending. The latter implies intentional or willed development, which began after the Second World War. This distinction is important for two reasons. First, it introduces the concept of a development intervention, which obviously implies intentional development. The primary concern of monitoring and evaluation is to measure changes arising from intended development, or changes emanating from development interventions. There is no case for monitoring or evaluating immanent development – and doing so presents inherent challenges – unless we are merely tracking assumptions or risks attached to the development intervention of interest. Second, the distinction is important to summative evaluation, especially impact evaluation, which seeks to account for counterfactual changes. In short, what proportion of the observed change is attributable to the intervention and which part is attributable to immanent development.
Figure 3 illustrates the foregoing discussion by relating development to its five components and its two processes. The figure shows that all the components, and ultimately development, can occur imminently or immanently. Most impact evaluation texts refer to the former as Y1 and the latter as Y0. As we have pointed out, imminent – or intentionally planned – development inherently defines development interventions – that is, a planned and desired response to a perceived cultural, political, economic, social or environmental problem or simply a desire to upgrade or upscale.
FIGURE 3: Illustrating the relationship between development and its components as well as its processes.
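To make the attribution logic concrete, here is a minimal sketch (in Python, with hypothetical values of our own, purely for illustration) of how the observed change splits into a part attributable to the intervention (Y1 minus Y0) and a part that immanent development would have delivered anyway.

```python
# Illustrative only: hypothetical values for the attribution logic described above.
y1 = 68.0        # outcome observed with the intervention (imminent development), e.g. a literacy rate
y0 = 61.0        # counterfactual outcome expected from immanent development alone
baseline = 55.0  # value of the same indicator before the intervention

observed_change = y1 - baseline   # 13.0 points observed over the period
immanent_change = y0 - baseline   # 6.0 points that would have happened anyway
attributable_impact = y1 - y0     # 7.0 points attributable to the intervention

share_attributable = attributable_impact / observed_change
print(f"Attributable impact: {attributable_impact:.1f} points "
      f"({share_attributable:.0%} of the observed change)")
```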
As the two diagrams in Figure 4 illustrate, a development intervention depicts a change over time, hence the timeline at the bottom of the two diagrams. As we move from left to right (over time), we are moving from a current position to a desired position, and this change can be from a problem to a solution, as is the case in developing countries. Further, Figure 4a shows that an intervention can be at policy level, programme level or project level. Policy interventions are usually theoretical, long-term, general and broad. At the other extreme, project interventions are action oriented, more specific, narrow and tend to unfold to fruition in the short term. In between, we have programme interventions, which tend to be strategic and occur on an intermediate time scale. Whatever the case, as Figure 4b shows, these three levels are interlinked and interrelated. A policy intervention encompasses a number of interrelated development programmes; in turn, a programme encompasses a number of interrelated projects which, in turn, consist of interrelated development activities.
FIGURE 4: Illustrating the three levels of development interventions and their distinguishing attributes and relationship.
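The nesting just described can be pictured as a simple hierarchy. Below is a minimal sketch (in Python, with hypothetical policy, programme and project names of our own) of a policy intervention containing programmes, which contain projects, which consist of activities.

```python
# Hypothetical example of the policy -> programme -> project -> activity hierarchy.
policy_intervention = {
    "National Literacy Policy": {                    # policy level: long term, broad
        "Primary School Reading Programme": {        # programme level: strategic, intermediate term
            "Teacher Training Project": [            # project level: short term, specific
                "develop training materials",
                "run district training sessions",
            ],
            "Classroom Library Project": [
                "procure readers",
                "distribute readers to schools",
            ],
        },
    },
}

# Walk the hierarchy to list every activity under the policy intervention.
for policy, programmes in policy_intervention.items():
    for programme, projects in programmes.items():
        for project, activities in projects.items():
            for activity in activities:
                print(f"{policy} > {programme} > {project} > {activity}")
```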
There is no doubt that for us to understand these three levels of development interventions, we need to appreciate public policy. Implicitly, understanding public policy helps one to ground programme and project management of development interventions. Therefore, to understand development interventions, one should also study public policy as well as programme and project management because it is at these levels that policy ideals are operationalised, actualised or realised. Similarly, to understand this field of study, we are guided by the following six9 questions:
What is public policy?
What is the purpose of public policy?
What are the components (structure and function) of public policy?
What are the processes in public policy?
What are the established facts in public policy studies?
What are the key issues and debates in public policy studies?
As in any social science, there are several descriptions of public policy – for example, Fischer, Miller and Sidney (2007) describe public policy as a tool for understanding policymaking processes and a vehicle for supplying decision makers with reliable information on developmental problems. Underlying these descriptions is the idea that public policy is the how of the what – with the ‘what’ being development interventions. Conceptually, it is an applied social science discipline that uses multiple methods of inquiry and arguments to identify, formulate, implement and evaluate development interventions (Jann & Wegrich 2007). Inherent in this function are research, decision making as well as management and monitoring. Notable in the literature is that public policy is a complex multidisciplinary field that shares the same space as political studies. For this reason, Simeon (1976) has cautioned us not to disregard political studies and economics when studying public policy. To this list, we should add leadership and governance. Hill and Hupe (2014) have argued that implementing public policy is but operational governance. This makes leadership, governance and political economy major components of public policy. Further, though not as straightforward, from interrogating public policy literature as well as personal correspondence with public policy specialists10, we can deduce that this field of study has two more components – that is, institutional arrangements (or analysis) and organisational arrangements (or analysis).
This brings the number of public policy components to five. The leadership and governance components provide for understanding the policy actors and arrangements. The political economy component (macroeconomic arrangements) is important because we need to account for the resource endowment and financial arrangements available for development interventions. Further, this component is important because we need to understand the macroeconomic conditions, environment or context under which policies are made and managed. The last two components provide for institutional and organisational arrangements that facilitate formulation, implementation, management, monitoring and evaluation of development interventions. For example, Porter and Goldman (2013) argue that an effective public reform effort focusing on results should reconfigure institutions to allow for using monitoring and evaluation data and information in all planning, budgeting and decision-making. Therefore, we should point out that integrating the monitoring and evaluation function requires thorough institutional and organisational shifts to support the ideals of this function. By implication, monitoring and evaluation cannot simply fit into the current ‘business as usual’ institutional and organisational structures. This probably explains the fragmented monitoring and evaluation function in African government institutions, ministries and departments (Porter & Goldman 2013). There is, therefore, a need to re-examine our institutional and organisational arrangements if we are to mainstream the monitoring and evaluation function (Baradei et al. 2014).
What is more pronounced in the literature are the public policy processes – that is, research, decision making and the public policy cycle. As one reviews public policy literature, there is a strong temptation to treat the public policy cycle framework as a component instead of a process in public policy. Research as well as stakeholder consultations generate data and information that feed into the decision-making process (Geurts 2014). Ideally, this should result in enacting an intervention based on the predefined stages provided for in the public policy process.
Figure 5 presents the link between development interventions (the what) and public policy (the how) and their respective components and processes. The figure shows development and its five components – cultural, political, economic, social and environmental – and two processes, that is, imminent and immanent development. The description of imminent development implicitly suggests that development interventions can be at three levels – namely policy, programme and project. It is this description of imminent development interventions that conceptually links development interventions to public policy. Further, one would assume that an understanding of policy interventions would also facilitate an understanding of programme and project interventions. Similarly, public policy has five components – leadership, governance, political economy, institutional arrangements and organisational arrangements – and three processes, that is, research, decision making and the public policy cycle.
FIGURE 5: Illustrating the relationship between development (and its components and processes) and public policy (and its components and processes).
Because of its importance to the conceptual link of monitoring and evaluation to public policy and, therefore, development interventions, we need to detail the public policy cycle framework. Probably aware of Harold D. Lasswell’s11 work on the public policy cycle framework, Simon (1945) was the first to split decision making and the public policy process into different stages (Zittoun 2009).12 However, it is Lasswell who is regarded as the father of the public policy cycle. Although he had presented this work much earlier in a slightly different and more detailed format, he first presented this initiative in its totality to the American Political Science Association in his presidential address in 1956. In this address, Lasswell (1956) proposed the seven stages of a policy cycle – that is, intelligence, recommendation, prescription, invocation, application, appraisal and termination.
Table 1 shows the activities and key questions that should guide decision makers at each of these seven stages. Activities and answers to these questions at each stage, as well as decisions thereof, should emanate from research data and information. Despite its shortcomings, central to Lasswell’s stages, distinctions and specifications is a model that we can use to understand public policy and policymaking, or the actualisation of development interventions, because it provides greater clarity and, therefore, reduces fumbling when enacting or re-enacting development interventions.
TABLE 1: Key questions that should guide decision makers at each of Lasswell’s seven stages of a policy cycle.
Thereafter, several authors such as Anderson (1975), Jenkins (1978), May and Wildavsky (1978) as well as Brewer and DeLeon (1983) have proposed variations to this stage model. In these newer versions, the proposed technical stages include ‘issue formation’ or ‘diagnosis’, ‘formulation’, ‘implementation’ and ‘evaluation’ while the proposed political elements include ‘policy agenda setting’ and ‘policy adoption’. To align with practice, we reduce these stages to four after incorporating ‘agenda setting’ as part of the diagnostic stage and ‘policy adoption’ as part of the formulation stage. This adaptation matches the approach the South African Presidency has used since the National Planning Commission produced the Diagnostic Report in July 2011, which set the agenda of the African National Congress. Thereafter, the Commission produced the National Development Plan 2030 in November 2011, which the Congress adopted during its 53rd National Conference in Mangaung (Free State Province) in 2012.
Ultimately, Figure 6 – linking development (its components and processes), public policy (its components and processes) and the specified stages of the public policy cycle framework – provides a model we can use to conceptually link the monitoring and evaluation function within development interventions and public policy. First of all, even though brief, this model captures and situates the contextual and key terms found in common definitions of monitoring and evaluation. Second, the model links development interventions (the what) and public policy (the how) – which we should be assessing using the monitoring and evaluation function – to the stages in the public policy cycle, thereby situating monitoring and evaluation in this context and revealing complex interlinkages. It actually idealises what Baradei et al. (2014) term development monitoring and evaluation (DME). They argue that DME encompasses the traditional programme and project monitoring and evaluation as well as public policy monitoring and evaluation to foster evidence-based decision making, transparency and effective resource management or, collectively, governance.
FIGURE 6: Illustrating the relationship between development (and its components and processes), public policy (and its components and processes) and the key stages of the public policy cycle framework.
A detailed description of the four stages – diagnostics, formulation, implementation and evaluation – of the public policy cycle
Figure 7 amplifies the stages of the public policy cycle (the bottom part of Figure 6), capturing the monitoring and evaluation function in a development intervention as a way of drawing in other important aspects of the cycle as well as other evaluation functions. As discussed earlier, imminent development inherently defines development interventions – that is, intended or desired change over time. Therefore, an intervention has two important properties, namely ‘change’ and ‘timescale’. This intended change may be a desire to find a solution to a developmental challenge or simply to improve on the status quo. We divide the timescale into three different thresholds – that is, the planning stage comprising diagnostics and formulation, the implementation stage comprising two highly interlinked elements (management and monitoring) and the stocktaking stage comprising summative evaluation (outcome and impact). The multi-arrows between stages are an illustration that development interventions are not linear and exhibit multi-loop feedback (Gharajedaghi 2006).
FIGURE 7: Illustrating the stages of the public policy cycle framework capturing monitoring and evaluation functions in a development intervention.
The diagnostic stage in the figure is presumed to be the first port of call and should address three questions: ‘what is the problem?’, ‘who are the beneficiaries and what are their needs?’ and ‘who are the stakeholders and what are their interests?’. Therefore, during the diagnostic stage, we should apply both quantitative (Yang 2007) and qualitative (Sadovnik 2007) research to understand the developmental problem – that is, its root causes, its symptoms and its consequences. We do so because we intend to find a solution that is effective or sufficient to eradicate or alleviate the developmental challenge that we are facing. This is one way of guaranteeing that, all else being equal, a detailed problem analysis will deliver an effective development intervention because it exposes the root cause of the problem and is, therefore, likely to provide for an effective remedy. Regardless, the solution should also be compatible with the beneficiaries, that is, it should be contextualised. We can only gauge compatibility if we understand the physical setting or context of the intended beneficiaries and undertake a needs assessment of them (Biggeria & Ferrannini 2014; Da Silva, Clark & Cabaço 2014).13 This allows us to select, from among the possible alternatives, a solution that will result in minimal resistance and, therefore, be relevant and sustainable. Obviously, these two properties have cost implications and may determine the efficiency of an intervention. Therefore, interrogating the last two questions allows for a detailed understanding of the people whose lives the intervention intends to change and makes it more likely that the intervention delivered will be relevant and sustainable. In sum, we recommend without reservations that the diagnostic stage is an important process that should be treated as such and should not be rushed or gamed because its results, all things being equal, have implications for an intervention’s effectiveness, relevance, sustainability and efficiency. Apart from the technical side of this stage, a problem and its proposed solution should find their way onto the political and government agenda (Birkland 2007; Burstein 1991) before anything else.
After understanding the problem and, more importantly, the beneficiaries and their needs, we can then move to the second (formulation) stage where we provide a ‘prognosis’ to our ‘diagnosis’. First, we turn all the root causes of the identified problem into a set of possible solutions to the developmental challenge using a process described as objectives analysis (Norwegian Agency for Development Cooperation [NORAD] 1999). Thereafter, we apply alternatives analysis to select the best alternatives from the possible solutions. Lastly, we use the logical framework or results chain, with its underlying theory of change, to strategically link the five important elements of a development intervention – that is, the perceived ‘impact’ to the required ‘outcomes’ and ‘outputs’ to their corresponding ‘activities’ and ‘inputs’ (Kusek & Rist 2004). To do this, we should spell out the anticipated and desired impact (Team Technologies and Operations Core Services 2005) and then chain the appropriate outcomes that will facilitate the achievement of this impact. For the outcomes to be realised, we need to have corresponding outputs (products or services) in place (Görgens & Kusek 2009). Outputs cannot materialise without concerted activities, which in turn need an injection of inputs or resources. Obviously, this process is not as straightforward as outlined here. Sidney (2007) as well as Voß, Smith and Grin (2009) have discussed some tools as well as issues to consider when formulating development interventions. For example, we need to integrate ‘transition management’ for interventions that will be implemented over periods that exceed political terms.
To facilitate measurement of developmental progress, we need to attach the results framework to the results chain. Figure 8 presents a framework of the important elements of the results chain and the results framework. We have already described the five elements of the results chain – that is, impact, outcomes, outputs, activities and inputs. For the results framework, described in Kusek and Rist (2004), we have to first define the indicators, that is, attributes (qualitative) and variables (quantitative) that we should use to track and assess the changes in the five elements of the results chain as well as identify their respective sources of information and data. Second, we then collect baseline values on all the identified indicators at the beginning of an intervention to provide a benchmark which we should use to track the changes resulting from the development intervention. Third, we use the baseline values to determine the target values depending on the amount of effort and duration provided for an intervention. Kusek and Rist (2004) provide a detailed discussion of these steps with Görgens and Kusek (2009) providing a variety of approaches to setting target values.
FIGURE 8: Illustrating the important elements of the results chain and the results framework.
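By way of illustration, the following is a minimal sketch (in Python, with hypothetical indicators, sources, baselines and targets of our own, not taken from Kusek and Rist) of a results framework attached to the five elements of the results chain.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str        # attribute (qualitative) or variable (quantitative)
    source: str      # where the data/information comes from
    baseline: float  # value at the start of the intervention
    target: float    # desired value given the effort and duration provided

# Hypothetical example: one indicator per results-chain element.
results_framework = {
    "inputs":     [Indicator("budget disbursed (USD m)", "finance system", 0.0, 12.0)],
    "activities": [Indicator("training sessions held", "activity reports", 0, 400)],
    "outputs":    [Indicator("teachers trained", "attendance registers", 0, 2_000)],
    "outcomes":   [Indicator("learner literacy rate (%)", "school survey", 55.0, 70.0)],
    "impact":     [Indicator("youth employment rate (%)", "labour-force survey", 40.0, 48.0)],
}

for element, indicators in results_framework.items():
    for ind in indicators:
        print(f"{element:<10} {ind.name:<30} baseline={ind.baseline} target={ind.target}")
```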
Assumptions and risks make up the factors outside the nerve centre of the intervention during implementation. The former describes ‘situations, events, conditions, and decisions’ that should be present if an intervention is to succeed (NORAD 1999:69), whereas the latter describes situations, events and conditions that should be absent if an intervention is to succeed. In the past, these external factors have been treated as one, but our review of theory and practice suggests that we should probably treat them separately. Currently, the South African Presidency has placed emphasis on risk management of development interventions.
After stitching together the results chain and the results framework, we should conclude this formulation stage by undertaking a final check of the desired plan and asking a futuristic question: ‘will this intervention work?’ To answer this question, we need to apply formative or design evaluation. Fundamentally, this component of evaluation interrogates the results chain, the results framework and the underlying theory of change to find out if the plan can indeed deliver the anticipated change within the proposed timeline, given the inherent real-life complexities – some of which are captured in the assumptions – as well as those that may arise during implementation (Rodgers 2008; Woolcock 2013).
We can then move to the third (implementation) stage after formulating an intervention – presumed by practitioners and some academics to be the most interesting stage. Pülzl and Treib (2007) point out that this stage has generated research interest for two reasons. First, it is perceived to be the deliverer of change and, second, it cuts across several fields such as public administration and management as well as policy studies. Further, Pülzl and Treib (2007) also point out the three generations of implementation research. The first wave of implementation research raised awareness that this field had challenges that deserved a detailed understanding. This was followed by the conceptualisation of theoretical and other explanatory frameworks. Broadly, these frameworks tease out issues of top-down versus bottom-up approaches to implementation. The last wave of implementation research focuses on bridging ‘the gap between top-down and bottom-up approaches by incorporating the insights of both camps’ to form hybrid approaches or models (Pülzl & Treib 2007:89). Another important debate concerns the role of what Hill (2003) calls ‘street-level bureaucrats’. She argues that it is not only state professionals who implement policy but also the beneficiaries and other stakeholders who are not on the public payroll. Most of these non-state implementers hardly know, at least technically, what they need to know about implementing development interventions and yet they are important in the implementation process. They contextualise the intervention to their reality and, if involved in this process, they would spell out what can and cannot work as well as how an intervention should be delivered to be effective. Therefore, Hill (2003) proposes that training and resourcing these non-state implementers can be a plus in delivering the intended results of development interventions.
Further, under this stage, we should also discuss management and, therefore, monitoring of the implementation of development interventions. This is because the two main components of this stage are management and monitoring of ‘inputs’ and ‘activities’ in the production of ‘outputs’ meant to realise the intended ‘outcomes’ and consequently the intended ‘impact’. This is why, much more directly, monitoring is a management tool for overseeing the use of inputs, undertaking of activities and production of outputs. However, these three parameters are not an end in themselves; they need to extend to outcomes and consequently impact. More specifically, management points to operations management or public management of inputs, activities, outputs and to a limited extent the outcomes. Obviously, operations management of activities points to performance management as well. The fundamental discussion on issues of operations and performance management of development interventions is a choice between the traditional public sector approach and the managerialist approach. The former is ‘rule-bound and hierarchical, built around centralised power and authority’ with pre-programmed standardised procedures emphasising compliance rather than results (Dixon, Kouzmin & Korac-Kakabadse 1998; Paine 1999:49). The latter uses private sector management principles and practices to ‘get things done’ and, therefore, purports to be results oriented (Dixon, Kouzmin & Korac-Kakabadse 1998). Though this debate has lost attention in recent literature, it is far from over.
Regardless of the management approach, one has to institute monitoring functions to manage inputs, activities, outputs and outcomes. Obviously, monitoring operates hand-in-hand with management during implementation, merely pointing out what is happening (Porter & Goldman 2013). However, during this stage, we should also check at critical intervals ‘if the intervention is working and why or why not’. It is process or implementation evaluation that pursues such a question, with the intention of checking up on the implementation, management and monitoring arrangements of an intervention as well as comparing the intended ideals with practical reality. Process evaluation allows us to check if we are complying with the implementation plans of the intervention and to adjust them to practical reality. Bosch (1996) and Cooke-Davies (2002) have provided an enlightening discussion on this subject.
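As an illustration of such a check at a critical interval, here is a minimal sketch (in Python, with hypothetical planned and actual mid-term values of our own) that compares actual delivery of inputs, activities and outputs against plan and flags where implementation is off track.

```python
# Hypothetical mid-term monitoring check: planned versus actual values to date.
plan_to_date = {
    "budget disbursed (USD m)": 6.0,    # input
    "training sessions held":   200,    # activity
    "teachers trained":         1_000,  # output
}
actual_to_date = {
    "budget disbursed (USD m)": 5.1,
    "training sessions held":   150,
    "teachers trained":         700,
}

THRESHOLD = 0.80  # flag anything below 80% of plan (an arbitrary, illustrative cut-off)

for indicator, planned in plan_to_date.items():
    actual = actual_to_date[indicator]
    delivery_rate = actual / planned
    status = "on track" if delivery_rate >= THRESHOLD else "off track - investigate why"
    print(f"{indicator}: {delivery_rate:.0%} of plan ({status})")
```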
Lastly, after implementing an intervention for a considerable period, we should take stock of its results – outputs, outcomes and impact – by asking if the produced outputs are leading to the outcomes meant to bring about the desired impact. Furthermore, summative evaluation implies asking questions such as ‘did the intervention work?’, ‘was it effective?’, ‘was it sustainable?’, ‘was it relevant?’ and ‘was it efficient?’. There are several reports and articles on both outcome and impact evaluations. The former applies both quantitative and qualitative research strategies cutting across the five research designs described in Bryman (2012) – namely quasi-experimental, cross-sectional, longitudinal, case studies and comparative. The latter is mostly confined to a quantitative research strategy as well as quasi-experimental and comparative research designs. For example, Bouguen et al. (2013) employ a quantitative research strategy and a comparative research design to undertake an impact evaluation of early childhood development. Summative evaluation is supposedly the last stage of an intervention, but only if we believe that interventions do come to an end. We should still bear in mind that there are other forms of evaluation, namely formative or design evaluation as well as process or implementation evaluation.
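To close the loop, a minimal sketch follows (in Python, again with hypothetical baseline, target and endline values of our own) of the stock-taking arithmetic: reading endline values for outcome and impact indicators against their baselines and targets to answer ‘did the intervention work?’.

```python
# Hypothetical endline values for the stock-taking questions posed above:
# achievement is read against the baseline and target set during formulation.
indicators = {
    # name: (baseline, target, endline)
    "learner literacy rate (%)": (55.0, 70.0, 68.0),  # outcome indicator
    "youth employment rate (%)": (40.0, 48.0, 43.0),  # impact indicator
}

for name, (baseline, target, endline) in indicators.items():
    achieved = (endline - baseline) / (target - baseline)
    print(f"{name}: {achieved:.0%} of the targeted change achieved")
```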
Conclusion
Systems methodology has definitely transformed how we understand monitoring and evaluation in the context of development interventions and public policy. There is no doubt that:
the beauty of interactive design and the magic of the iteration of structure, function, and process … combined with the power of operational thinking, and … understanding of the implications of self-organising behaviour, create[d] a competent and exciting methodology.’ (Gharajedaghi 2006:108)
This methodology has helped us decode the complexities inherent in monitoring and evaluation as a tool for development, public policy, leadership and governance. Specifically, understanding the multi-loop nonlinear feedback system of the monitoring and evaluation function and then mapping its dynamic behaviour proved particularly useful when linking it to the several relevant fields of study in public administration.
Further, other than applying systems methodology and a detailed literature review, the positive aspect of the model we propose here is that it was created with the active participation of about 300 seasoned development practitioners and public administrators. Therefore, our approach and product are not exclusively academically idealised but practically inspired. The model emanates from what civil servants do and, therefore, what they would want to know more about, in a structured or explicit way, to improve their work. Obviously, it has allowed us to link and understand the contextual and key terms found in common definitions of monitoring and evaluation presented in Figure 1. It is obvious that, to monitor and evaluate, one needs to identify indicators – for inputs, activities, outputs, outcomes and impacts – and then establish baseline values and targets for all the identified indicators. Thereafter, one needs to continuously collect and store data on all the indicators, which should be processed and analysed, with the results reported as required. In sum, the last two stages of the public policy cycle framework kick-start the description and discussion of monitoring and evaluation terminology. We envisage that eventually the model can assist us to measure development interventions much more effectively and improve the quality of monitoring and evaluation. In turn, there should be an improvement in the quality of collecting, processing and analysing empirical evidence and, consequently, in the rigour of the monitoring and evaluation function. Consequently, public and private institutions might want to effectively institutionalise this function and, therefore, reap its rewards rather than doing it for compliance or perceiving it as ‘something that exposes them to criticism’ (Porter & Goldman 2013:8). The assumption is that conceptually linking the monitoring and evaluation function will facilitate its institutionalisation and enhance the capacity to assess development interventions (the what) and public policy (the how). Jointly, this will provide the much-needed accountability, transparency and oversight in the use of public resources and, hence, foster good public administration.
Acknowledgements
This research was partly funded by the Carnegie Large Research Grant facilitated by the University of the Witwatersrand (WITS) Transformation Office. I am grateful to the WITS School of Governance (WSG) M&E students who took part in devising seminars where we discussed earlier formats of the model we present here. I thank the WSG members of staff who attended the conversation where this approach was first presented for their helpful comments as well as the encouraging remarks from Hanlie van Dyk-Robertson and Prof. Anne Mc Lennan. Lastly, I thank Moses Masauso Nzima, Judy Kusek, Dr. Indran Naidoo and Dr Laila Smith as well as the reviewers for helping us fine-tune and reconcile our argument and perfect our write-up.
Competing interests
The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.
References
Acemoglu, D. & Robinson, J.A., 2012, The origins of power, prosperity, and poverty: Why nations fail, Crown Publishers, New York.
Adejemboi, S., 1998, ‘Africa and the challenges of democracy and good governance in the 21st century’, paper presented at Development Policy Management Forum (DPMF) Annual Conference on Democracy, Civil Society and Governance in Africa, 7–10 December. Addis Ababa.
Anderson, J., 1975, Public policy-making, Praeger, New York.
Bakewell, O., Adams, J. & Pratt, B., 2003, Sharpening the development process: A practical guide to monitoring and evaluation, International Non-Governmental Organisation Training and Research Centre (INTRAC), Oxford.
Baradei, L.E., Abdelhamid, D. & Wally, N., 2014, ‘Institutionalising and streamlining development monitoring and evaluation in post-revolutionary Egypt: A readiness primer’, African Evaluation Journal 2(1), Art. # 57, 1–16.
Biggeria, M. & Ferrannini, A., 2014, ‘Opportunity gap analysis: Procedures and methods for applying the capability approach in development initiatives’, Journal of Human Development and Capabilities: A Multi-Disciplinary Journal for People-Centered Development and Change 15(1), 60–78.
Birkland, T.A., 2007, ‘Agenda setting in public policy’, in F. Fischer, G.J. Miller & M.S. Sidney (eds.), Handbook of public policy analysis: Theory, politics, and methods, vol. 125, pp. 63–78, CRC Press, Boca Raton, FL.
Bosch, O.J.H., 1996, ‘Monitoring as an integral part of management and policymaking’, paper presented at Symposium of Resource Management; Issues, Visions, Practice, 5–8 July, Lincoln University, New Zealand.
Bouguen, A., Filmer, D., Macours, K. & Naudeau, S., 2013, Impact evaluation of three types of early childhood development interventions in Cambodia, The World Bank Development Research Group, Washington, DC.
Brewer, G. & DeLeon, P., 1983, The foundations of policy analysis, Brooks, Cole, Monterey, CA.
Bryman, A., 2012, Social research methods, Oxford University Press, Oxford.
Burkey, S., 1993, People first: A guide to self-reliant participatory rural development, Zed, London.
Burstein, P., 1991, ‘Policy domains: Organisation, culture, and policy outcomes’, Annual Review of Sociology 17, 327–350. https://doi.org/10.1146/annurev.so.17.080191.001551
Cooke-Davies, T., 2002, ‘The “real” success factors on projects’, International Journal of Project Management 20, 185–190. https://doi.org/10.1016/S0263-7863(01)00067-9
Da Silva, F.C., Clark, T.N. & Cabaço, S., 2014, ‘Culture on the rise: How and why cultural membership promotes democratic politics’, International Journal of Politics, Culture, and Society 27(3), 343–366.
Dixon, J., Kouzmin, A. & Korac-Kakabadse, N., 1998, ‘Managerialism – Something old, something borrowed, little new: Economic prescription versus effective organizational change in public agencies’, International Journal of Public Sector Management 11(2/3), 164–187. https://doi.org/10.1108/09513559810216483
Farr, J., Hacker, J.S. & Kazee, N., 2008, ‘Revisiting Lasswell’, Policy Sciences 41, 21–32. https://doi.org/10.1007/s11077-007-9052-9
Fischer, F., Miller, G.J. & Sidney, M.S., 2007, Handbook of public policy analysis: Theory, politics, and methods, CRC Press, Boca Raton, FL.
Fisher, R., 1983, ‘Negotiating power: Getting and using influence’, American Behavioural Scientist 27(2), 149–166. https://doi.org/10.1177/000276483027002004
Fourie, D., 2006, ‘The application of good governance in public financial management’, Journal of Public Administration 41(2.2), 434–443.
Gereffi, G. & Fonda, S., 1992, ‘Regional paths of development’, Annual Review of Sociology 18, 419–448. https://doi.org/10.1146/annurev.so.18.080192.002223
Geurts, T., 2014, Public policy making: The 21st century perspective, Be Informed, Apeldoorn.
Gharajedaghi, J., 2006, Systems thinking: Managing chaos and complexity, a platform for designing business architecture, Elsevier Inc., Amsterdam.
Görgens, M. & Kusek, J.Z., 2009, Making monitoring and evaluation systems work: A capacity development toolkit, The World Bank, Washington, DC.
Gray, M., 2006, ‘The progress of social development in South Africa’, International Journal of Social Welfare 15(Suppl 1), 53–64. https://doi.org/10.1111/j.1468-2397.2006.00445.x
Hill, H.C., 2003, ‘Understanding implementation: Street-level bureaucrats’ resources for reform’, Journal of Public Administration Research and Theory 13(3), 265–285. https://doi.org/10.1093/jopart/mug024
Hill, M. & Hupe, P., 2014, Implementing public policy: An introduction to the study of operational governance, Sage, London.
Hulet, C., 2013, ‘Devising seminars?: Getting to yesable options in difficult public disputes’, unpublished Master of City Planning thesis, Massachusetts Institute of Technology, Cambridge.
Jack-Akhigbe, 2013, ‘The state and development interventions in the Niger Delta Region of Nigeria’, International Journal of Humanities and Social Science 3(10), 255–263.
Jann, W. & Wegrich, K., 2007, ‘Theories of the policy cycle’, in F. Fischer, G.J. Miller & M.S. Sidney (eds.). Handbook of public policy analysis: Theory, politics, and methods, vol. 125, pp. 43–62, CRC Press, Boca Raton, FL.
Jenkins, W.I., 1978, Policy-analysis: A political and organisational perspective, Martin Robertson, London.
Kusek, J.Z. & Rist, R.C., 2004, Ten steps to a results-based monitoring and evaluation system, The World Bank, Washington, DC.
Labelle, H., 2010, The importance of good governance in the management of public affairs especially state enterprise, Transparency International, Yaounde.
Lasswell, H.D., 1956, The decision process: Seven categories of functional analysis, Bureau of Governmental Research, University of Maryland Press, College Park, MD.
May, J.P. & Wildavsky, A., 1978, The policy cycle, Sage, Beverly Hills, CA.
McFerson, H.M., 1992, ‘Democracy and development in Africa’, Journal of Peace Research 29(3), 241–248. https://doi.org/10.1177/0022343392029003001
Norwegian Agency for Development Cooperation (NORAD), 1999, The logical framework approach (LFA): Handbook for objectives-oriented planning, Norwegian Agency for Development Cooperation (NORAD), Oslo.
Paine, G., 1999, ‘Dark side of a hot idea’, Siyaya Magazine Summer Issue 6, 44–51.
Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1), Art. # 25, 1–29.
Pülzl, H. & Treib, O., 2007, ‘Implementing public policy’, in F. Fischer, G.J. Miller & M.S. Sidney (eds.), Handbook of public policy analysis: Theory, politics, and methods, vol. 125, pp. 89–107, CRC Press, Boca Raton, FL.
Rodgers, P., 2008, ‘Using programme theory to evaluate complicated and complex aspects of interventions’, Evaluation 14(1), 29–48. https://doi.org/10.1177/1356389007084674
Roseland, M., 2000, ‘Sustainable community development: Integrating environmental, economic, and social objectives’, Progress in Planning 54, 73–132. https://doi.org/10.1016/S0305-9006(00)00003-9
Sachs, J.D., 2005, The end of poverty: Economic possibilities for our time, The Penguin Press, New York.
Sadovnik, A.R., 2007, ‘Qualitative research and public policy’, in F. Fischer, G.J. Miller & M.S. Sidney (eds.), Handbook of public policy analysis: Theory, politics, and methods, vol. 125, pp. 417–427, CRC Press, Boca Raton, FL.
Sidney, M.S., 2007, ‘Policy formulation: Design and tools’, in F. Fischer, G.J. Miller & M.S. Sidney (eds.), Handbook of public policy analysis: Theory, politics, and methods, vol. 125, pp. 79–87, CRC Press, Boca Raton, FL.
Simeon, R., 1976, ‘Studying public policy’, Canadian Journal of Political Science 9, 548–580. https://doi.org/10.1017/S000842390004470X
Simon, H.A., 1945, Administrative behavior, Macmillan, New York.
Slim, H., 1995, ‘What is development?’, Development in Practice 5(2), 143–148. https://doi.org/10.1080/0961452951000157114
Sumner, A. & Tribe, M., 2008, International development studies: Theories and methods in research and practice, Sage, Los Angeles, CA.
Susskind, L.E. & Rumore, D., 2015, ‘Using devising seminars to advance collaborative problem solving in complicated public policy disputes’, Negotiation Journal 31(3), 223–235. https://doi.org/10.1111/nejo.12092
Team Technologies and Operations Core Services, 2005, The logframe handbook: A logical framework approach to project cycle management, The World Bank, Washington, DC.
The [South African] Presidency, 2007, Policy framework for the government-wide monitoring and evaluation system, The [South African] Presidency, Pretoria.
The Organisation for Economic Cooperation and Development (OECD), 2002, Glossary of key terms in evaluation and results-based management, The Organisation for Economic Cooperation and Development (OECD)/Development Assistance Committee (DAC), Paris.
Voß, J.-P., Smith, A. & Grin, J., 2009, ‘Designing long-term policy: Rethinking transition management’, Policy Science 42, 275–302. https://doi.org/10.1007/s11077-009-9103-5
Watson, M.D., 2012, ‘The colonial gesture of development: The interpersonal as a promising site for rethinking AID to Africa’, Africa Today 59(3), 3–28. https://doi.org/10.2979/africatoday.59.3.3
Woolcock, M., 2013, ‘Using case studies to explore the external validity of “complex” development interventions’, Evaluation 19(3), 229–248. https://doi.org/10.1177/1356389013495210
Wotela, K., 2016, ‘Towards a systematic approach to reviewing literature for interpreting business and management research results’, The Electronic Journal of Business Research Methods 14(2), 83–97.
Yang, K., 2007, ‘Quantitative methods for policy analysis’, in F. Fischer, G.J. Miller & M.S. Sidney (eds.), Handbook of public policy analysis: Theory, politics, and methods, vol. 125, pp. 349–367, CRC Press, Boca Raton, FL.
Zittoun, P., 2009, ‘The problem of time in policy change: A dia-synchronic perspective of the book-mark’, paper presented at International Political Science Association, 12–16 July, Santiago, Chile.
Footnotes
1. Baradei, Abdelhamid and Wally (2014) have discussed the spread (and focus) of monitoring and evaluation from the 1950s in the USA through Europe and finally to developing countries.
2. We are aware that there are more recent versions of this book, but we deliberately used the second edition as it explains what we wanted to achieve much better.
3. According to Hulet (2013) in Susskind and Rumore (2015:224), ‘originated by Roger Fisher and others, a devising seminar’ is ‘an off-the-record, professionally facilitated, face-to-face problem-solving session … over an extended period’. ‘The purpose of a devising seminar is to invent mutually advantageous proposals in response to an existing or potential conflict’ (Susskind & Rumore 2015:226). In our case, it was a structured approach to interrogating literature on academic fields of study as well as the accompanying theoretical and interpretive frameworks. Therefore, we encouraged students to unofficially share what they understand about academic fields of study and theoretical and interpretive frameworks so that we could integrate their approach into this structured approach. We then asked subsequent cohorts to comment on the visual representation recreated using comments from the preceding cohorts until we arrived at a mutually acceptable model. Susskind and Rumore (2015) present a more structured case in which a devising seminar was recently used.
4. Who, by their own right, are seasoned practitioners working in South African business and the civil service, with a few from other African countries.
5. Note that systems methodologists use the term ‘formulate the mess’.
6. The definition of the South African Presidency (2007) is not different from that provided by Kusek and Rist (2004) and, therefore, the Organisation for Economic Cooperation and Development (2002) because the second author of ‘Ten Steps to a Results-based Monitoring and Evaluation System’, Ray Rist, was instrumental in setting up the Policy Framework for the Government-wide Monitoring and Evaluation System.
7. We do not provide answers to all the questions here because of space limitations.
8. Surprisingly, during our interrogation of development literature, monitoring and evaluation is mentioned implicitly, if at all.
9. We do not provide answers to all the questions here because of space limitations.
10. Professors Gavin Cawthra and Anthoni Van Nieuwkerk.
11. Farr, Hacker and Kazee (2008:21) describe Lasswell as ‘one of the greatest political scientists and public intellectual of the twentieth century’.
12. As we have presented them in Figures 5 and 6.
13. Lately this analysis has been overshadowed by environmental impact assessments.