Abstract
Background: The Progress in International Reading Literacy Study (PIRLS) and similar international assessments have consistently shown South African intermediate phase learners’ performance to be among the lowest worldwide. Of particular concern is the Curriculum and Assessment Policy Statement (CAPS) for Home Language in the Intermediate Phase and, specifically, the document’s treatment of the assessment of reading comprehension.
Objectives: In this study, the CAPS requirements for assessing reading comprehension were examined, with the aim of laying the groundwork for an improved policy framework.
Method: The research design of the study involved evaluating the assessment of reading comprehension in the CAPS document, using a social realist approach to uncover its underlying structures and mechanisms.
Results: The study found that a principled approach to the assessment of reading comprehension was lacking, which had a cumulative effect across the CAPS document, resulting in random, yet highly prescriptive, requirements.
Conclusion: The study concluded that CAPS does not provide adequate guidance for improving reading comprehension and, moreover, that the prescribed programme of assessment is not supported by the research literature on reading comprehension. The study recommends that better, more evidence-informed and consultative policies and guidelines be introduced to support teachers in the assessment – and, ultimately, the improvement – of intermediate phase learners’ reading comprehension.
Keywords: reading comprehension; Progress in International Reading Literacy Study (PIRLS); Curriculum and Assessment Policy Statement (CAPS); home language; intermediate phase; assessment.
Introduction
Basic education in South Africa has undergone significant change since the country’s transition to democracy in 1994. A central aspect of this change has been the adoption of outcomes-based education (OBE), which places emphasis on the assessment of learners against predefined outcomes (Spaull 2015). The South African Department of Basic Education’s Curriculum and Assessment Policy Statement (CAPS) continues a similar focus on assessment in its underpinning logic (Govender & Hugo 2018). While this focus places a considerable burden on teachers, it has not improved South Africa’s basic education system (Spaull 2015). The Progress in International Reading Literacy Study (PIRLS) 2016 found that 49% of South African Grade 5 participants were not able to reach basic benchmarks of reading comprehension (Howie et al. 2017). International studies have shown that the explicit teaching of comprehension strategies is key to improving reading levels among intermediate phase learners (National Reading Panel 2000). In its report on this matter, the National Reading Panel highlighted the importance of reading strategy instruction, explaining that comprehension can only be improved if students are taught to use ‘specific cognitive strategies or to reason strategically when they encounter barriers to comprehension when reading’ (National Reading Panel 2000:39).
The extent to which language policies specifically contribute to the low performance of learners in South Africa is not clear, given that language disadvantages are strongly correlated with additional factors such as historical disadvantage, socio-economic status, geography, the quality of teaching and the quality of school management (Rapetsoa & Singh 2018). At the same time, CAPS has been shown to be a key determinant in how reading comprehension is taught (Govender & Hugo 2018; Magagula 2016; Weideman, Du Plessis & Steyn 2017).
The present study investigates learners’ reading comprehension in English Home Language in Grades 4–6 and, in particular, whether CAPS (2011) supports and develops teachers’ assessment knowledge and practice. In an assessment-driven educational system, there is an urgent need for teachers (and teacher educators) to become ‘assessment literate’ (Xu & Brown 2017), that is, to develop a deep understanding of assessment so that they can implement policies in ways that are supportive of learners’ literacy development. Recognising this need, and theorising the relationship between policy and practice, helps us to understand how CAPS might enable or constrain particular assessment practices. The research question guiding this study is: How effective are the assessment requirements in the CAPS document (and its proposed amendments) in supporting the reading comprehension of learners in the intermediate phase (Grades 4–6)?
Policy and practice in the research literature
There is a considerable amount of literature on language policy, language assessment policy and the important role that policy plays in shaping the educational system (see, e.g., Good et al. 2017). While there are many factors that contribute to the success (or failure) of an educational system, coherent and evidence-informed educational policies are chief among them (Coffield 2012). The recognised link between the performance of a school system and its guiding policies makes it imperative that educational policies be grounded on a solid knowledge base (Aydarova & Berliner 2018).
A logical approach to policymaking – especially in cases where policy principles are linked to implementation guidelines – is one that draws on a broad base of research and evidence in the field. Policies need to demonstrate a logical relationship between the purpose and principles of a specific policy, the guidelines that policy offers, and the demands it makes on the implementers of the policy. These logical connections have been described as ‘the causal theory’ (Fullan 2007), because they tell the story of why a policy is necessary, and they provide guidance and assistance to teachers with regard to implementation. Policymakers need to understand the challenges of policy implementation and should also constantly evaluate the results of implementation. Policies need to account for local contexts ‘across institutions’, including ‘culture, demography, politics and economy’ (Pont & Viennet 2017), as these contexts will affect the ways in which a policy is understood and shaped in different institutions. It is rare that policies are uniformly implemented: achieving this requires authentic participation on the part of stakeholders who share ‘common views and experiences of education’ (Hopfenbeck, Flórez Petour & Tolo 2015). Teachers’ involvement in the policymaking process is therefore essential (Good et al. 2017).
Assessment too often becomes the focus of educational policy, as it is assumed to be ‘relatively low cost’ and something that can be both ‘externally mandated and … controlled’ and implemented with ‘relative speed’ (Hamilton 2007). However, attempting to drive educational change through assessment policy is likely to have unintended consequences, as several studies have shown (see, e.g., Birenbaum et al. 2015). When assessment is used as a vehicle for changing an educational system, but is not grounded in assessment theory or research evidence, it is unlikely to be effective, and it may even have detrimental effects on the school system (Hamilton 2007).
In the South African education system, many studies have raised concerns about the number of inconsistencies and contradictions in CAPS documents, including the ‘misalignment of purpose and assessment’ (Weideman et al. 2017). Govender and Hugo found that topics in the CAPS documents were ‘not presented in a systematic and sequential manner’ (2018:25). An earlier study concluded that teachers were often confused by the various CAPS documents in their subject areas and that ‘as a result … they decided to continue with the way they had been working throughout their years of teaching’ (Khoza 2017:189). These research findings on CAPS are grounds for concern. They signal that the policy requirements associated with CAPS are unlikely to enhance the reading comprehension of learners in a meaningful way.
A conceptual framework for the study of assessment policies
Policies that have the intention of improving reading comprehension tend to highlight the importance of the two main purposes of assessment, namely assessment of learning (summative assessment) and assessment for learning (formative assessment) (Gilmore et al. 2009; Hattie 2012). Each of these different assessment purposes is underpinned by its own assessment principles.
The principles underpinning the summative assessment of reading comprehension are validity, reliability and fairness. Summative assessments are ‘high-stakes’ tasks that impact a learner’s school (and potentially post-school) trajectory; thus, considerable care needs to be taken to ensure that high-stakes assessments are valid, reliable and fair (Stillman 2011). It has been suggested that the validity of reading comprehension testing should be measured against the standards of other learning areas, because language is the means by which learners access knowledge in all other learning areas (Boals et al. 2015; Keary & Kirkby 2018; Wolf, Farnsworth & Herman 2008). The content of summative comprehension tests therefore impacts their validity and fairness.
Formative assessment has been defined as ‘a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students’ achievement of intended instructional outcomes’ (Popham 2008). Formative assessment in reading comprehension is underpinned by learning-oriented principles, such as focusing on the ‘sub-skills and building blocks in the learning progression’ (Popham 2008), targeting ‘areas of difficulty’ (Hattie 2015), and providing ‘timely feedback’ on comprehension tasks (Harvey & Kosman 2014), as well as creating an alignment between formative and summative assessments (Clark 2015). In order to formatively assess learners’ reading comprehension, the sub-skills of reading – including lexical proficiency and the ability to identify micro and macro text structures – should be assessed (Clarke et al. 2013).
Researchers have pointed out that the sub-skills and knowledge areas on which feedback is provided during formative assessments should align with the content and format of summative assessments (Popham 2008), given that part of the function of formative assessment is to prepare learners for summative assessment. In other words, formative assessment should not be random: it should be congruent with learning outcomes. Educational researchers point out that formative assessment should target difficult skills or knowledge areas, rather than those areas in which learners are generally proficient (Clark 2015). Experienced teachers can often predict the stages in reading progression at which learners typically experience difficulty or confusion, while novice teachers tend to need guidance to ensure that they target the skills most relevant for formative feedback (Hattie 2012). Timeous feedback is clearly important if learners are to profit from formative assessment. Over-assessment should be avoided in both formative and summative assessment, as there is no pedagogical basis for it; in fact, over-assessment has been shown to be detrimental to the quality of learning (Wiliam 2006).
The implications of the research on the formative and summative assessment of reading comprehension are summarised in Table 1, which offers a conceptual framework for the analysis of CAPS.
TABLE 1: A conceptual framework for analysing the effectiveness of assessment requirements within Curriculum and Assessment Policy Statement (reading comprehension).
Theoretical framework
In order to probe more deeply into how the CAPS document understands the formative and summative assessment of reading comprehension, this study draws on Legitimation Code Theory (LCT). LCT identifies a range of dimensions that underpin knowledge-based practices: Specialization, Semantics, Temporality, Resources and Autonomy. In this study, we draw on the dimension of Semantics, which speaks to the heart of ‘meaning-making’ and is therefore particularly appropriate for an enquiry into the assessment of reading comprehension. The Semantics dimension consists of two continua, semantic gravity and semantic density, which can be mapped on a Cartesian plane to reveal the different ways in which meanings might be more or less theoretical or practical.
In this study, we focus on semantic gravity, which refers to:
The degree to which meaning relates to its context. Semantic gravity may be relatively stronger (+) or weaker (−) along a continuum of strengths. The stronger the semantic gravity (SG+), the more dependent meaning is on its context; the weaker the gravity (SG−), the less dependent meaning is on its context. (Maton 2014:129)
Weakening the semantic gravity in a policy document on assessment would involve stating the general principles of assessment, which are abstract and independent of the particulars of a specific context or case. Strengthening semantic gravity, on the other hand, would involve applying these principles towards competent assessment practice. Studying the strengthening and weakening of semantic gravity over a text thus provides a way of mapping variations across a policy document. The distinction between contextualisation and abstraction is particularly useful in policy analysis, as it can reveal both the strengths and the weaknesses of a policy document, as well as its gaps and blind spots.
Research methods and design for policy analysis
The research design of this study is an evaluation of the South African English Home Language CAPS (2011) document through a study of its semantic gravity profile, with a specific focus on the assessment of reading comprehension. The conceptual framework outlined above, drawn from the relevant literature, shows that an assessment policy should be expected to reveal the relationship between the assessment principles that underpin the policy and the guidelines for practice specified in the document – or, to use the terms of the theoretical framework, between weaker and stronger forms of semantic gravity. For this reason, the analytical methods of the study focused on an in-depth discourse analysis.
Using a high-level theory such as semantic gravity requires a ‘translation device’ to link the abstract concept of semantic gravity to specific concepts related to assessment practice. The first step was thus to develop and test a translation device that could determine the relative strength and weakness of semantic gravity across the CAPS document. Four levels of semantic gravity were identified, extracted from the conceptual framework. To avoid confusion, these levels of semantic gravity were numbered, from SG1 (weakest level of semantic gravity) to SG4 (strongest level of semantic gravity) (see Table 2). SG1 represents the context-independent aspects of the policy, such as its broad purposes, principles and definitions. SG2 corresponds to the implications for practice that are inferred from the general principles, as well as the general requirements for practice. SG3 represents a strengthening of semantic gravity in the provision of guidelines for practice. SG4 is the most context-dependent level and contains exemplars of good practice and the detailed logistics of practice, such as the particular forms to be completed and the times of assessment. These four levels provide a useful mechanism for identifying shifts between stronger and weaker forms of semantic gravity and the relationship between them. Due to the level of confusion we found in the CAPS document, such as random or illogical statements and non-sequiturs, we added an additional level: SG5, which represents no logical relation to context.
TABLE 2: A translation device for identifying semantic gravity in policy documents.
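To illustrate how such a translation device might be operationalised, the sketch below represents the coding scheme as a simple lookup structure in Python. This is not an instrument from the original study: the level descriptors are paraphrased from Table 2, and the code_sentence helper and example sentence are hypothetical.

```python
# A minimal, illustrative sketch of the translation device (Table 2).
# The descriptors are paraphrased; this is not the study's own instrument.

TRANSLATION_DEVICE = {
    "SG1": "Weakest gravity: context-independent purposes, principles and definitions",
    "SG2": "Implications for practice and general requirements",
    "SG3": "Guidelines and exemplars in support of practice",
    "SG4": "Strongest gravity: logistics of practice (timetables, templates, forms)",
    "SG5": "No logical relation to context (random or illogical statements)",
}

def code_sentence(sentence: str, level: str) -> dict:
    """Attach a semantic gravity level to a coded sentence (hypothetical helper)."""
    if level not in TRANSLATION_DEVICE:
        raise ValueError(f"Unknown semantic gravity level: {level}")
    return {"sentence": sentence, "level": level,
            "descriptor": TRANSLATION_DEVICE[level]}

# Hypothetical example: a logistical requirement in the teaching plan sits at SG4.
example = code_sentence("Term 2, Weeks 1 & 2: Information text - weather", "SG4")
print(example["level"], "-", example["descriptor"])
```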
In ‘translating’ semantic gravity into the key features of policy provision that guide the practice of assessing reading comprehension, we theorised that an ideal policy document would initially show weaker levels of gravity; that is, it might introduce the policy with a statement of its general purpose, the principles on which it is based, and definitions of key terms. Semantic gravity would be strengthened over the course of the policy document: for example, the implications of the principles and recommendations for practice would be included, along with more practical guidelines and logistical requirements for teaching and assessment practice. Such an ideal assessment policy document would be founded on research evidence and would contain clear statements of the purposes and the principles on which the policy is based. The implications of these evidence-based principles for practice would be logically connected to the requirements stipulated by the policy, which might also include sanctions for disregarding the policy. The policy would provide clear guidelines for teachers to follow, which might include practical and contextualised examples. Finally, the policy might include specific requirements for practice, such as templates that need to be followed (or adapted) or particular materials that need to be used. Figure 1 represents an ideal semantic gravity profile for an assessment policy.
FIGURE 1: An ideal semantic gravity profile for assessment policy.
Additional steps in our research process included examining all mentions of ‘comprehension’ in CAPS to develop an overview of how the document conceptualised comprehension. The sub-sections on assessment (Curriculum and Assessment Policy Statement [CAPS] 2011:88–104) were also analysed, with each sentence of those sections that specifically addressed the assessment of reading comprehension subjected to detailed analysis. To avoid a forced fit between the CAPS text and the translation device, analysis took place at two levels. The first level of coding was done without the translation device, which provided us with an overview of the content of the document and allowed us to highlight assessment-relevant concepts, such as ‘assessment is a continuous planned process’ (CAPS 2011:88). For the second level of analysis, the translation device was used to code the different sections of the document as SG1, SG2, SG3, SG4 or SG5.
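By way of illustration only, the second level of coding can be thought of as producing an ordered sequence of SG codes that is then tallied (as in Table 4) and traced as a profile (as in Figure 2). The short Python sketch below assumes a hypothetical, invented sequence of codes; it is not the study’s data or analysis software.

```python
from collections import Counter

# Hypothetical sequence of SG codes assigned to successive sentences
# in an assessment-related section (invented purely for illustration).
coded_sequence = ["SG2", "SG2", "SG5", "SG2", "SG3", "SG3", "SG4", "SG4"]

# Tally the codes, analogous to counting the types of mentions (Table 4).
tally = Counter(coded_sequence)
print("Code counts:", dict(tally))

# Trace the profile as an ordered series; plotted, this would place
# SG1 (weakest gravity) at the top of the graph and SG5 at the bottom.
ORDER = {"SG1": 5, "SG2": 4, "SG3": 3, "SG4": 2, "SG5": 1}
profile = [ORDER[code] for code in coded_sequence]
print("Profile trace:", profile)
```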
Ethical considerations
Ethical clearance for the research study was granted by the Cape Peninsula University of Technology and the Western Cape Education Department, with ethical clearance number: EFEC5-8/2018.
Results of the study: How ‘structured, clear and practical’ is the Curriculum and Assessment Policy Statement document?
In this section, we present the results of the study. We start with an overview of reading comprehension in CAPS, and then focus more specifically on the assessment of reading comprehension and on the requirements for the assessment of comprehension (CAPS 2011).
An overview of reading comprehension in Curriculum and Assessment Policy Statement
Minimal attention is given to the teaching of comprehension, or its assessment, in the CAPS document itself. This is despite the importance of reading comprehension in the teaching plans, in which teachers are instructed to set comprehension exercises ‘every second week’ (CAPS 2011:14), and in the programme of assessment, which requires comprehension tests for ‘formal’ assessment, that is, for both examinations and continuous assessment towards the learners’ final marks (CAPS 2011:93–101).
The CAPS document proposes only two reading strategies, neither of which is mentioned in the text itself. These strategies only appear in the glossary:
- Rereading – rereading is a reading strategy that gives the reader another chance to make sense out of a challenging text.
- Restating – restating is a reading strategy where the reader will retell, shorten, or summarise the meaning of a passage or chapter, either orally or in written form. (CAPS 2011:110).
Rereading is not helpful for enhancing comprehension (Clarke et al. 2013). Restating is a basic strategy (Gill 2008), but there are many others (e.g. cause-and-effect and problem-and-solution) that are more appropriate for meaning-making at the intermediate level (Meyer & Ray 2011). The advice offered to teachers is not helpful; it includes making sure that learners ‘pause occasionally to check [their] comprehension and to let the ideas sink in’ (CAPS 2011:10), advice that learners are likely to find mystifying during the reading process. Teachers are told to instruct learners who do not understand what they are reading as follows: ‘Reread a section if you do not understand at all. Read confusing sections aloud, at a slower pace, or both’ (CAPS 2011:10). Several researchers have found that the advice offered in CAPS with regard to reading is inappropriate and inadequate (e.g. Rule & Land 2017).
When the 48 mentions of ‘comprehension’ are disaggregated across the document, and the semantic gravity translation device is applied to the analysis, the trends described above become more distinct, as Tables 3 and 4 show.
TABLE 3: Number of mentions of ‘comprehension’ in the Curriculum and Assessment Policy Statement document.
TABLE 4: Types of mentions of ‘comprehension’ in the Curriculum and Assessment Policy Statement document.
Table 4 reveals that the CAPS document does not explicitly state the purposes, principles or definitions that underpin its approach to reading comprehension. The lack of a principled approach to reading comprehension poses a significant barrier to literacy instruction (Gill 2008). CAPS provides minimal guidance or exemplars, and yet considerable emphasis is placed on prescriptive teaching plans and on the plan of assessment.
The semantic gravity profile (Figure 2) represents the mentions of reading comprehension across the CAPS document, with the numbers along the bottom of the graph referring to the pages of the CAPS document. Level SG1 is the level of purposes, principles and definitions (which is absent from the text, but implied in some of the definitions in the glossary at the end of the document). Level SG2 represents the implications for practice, such as ‘Well-developed Reading and Viewing skills are central to successful learning across the curriculum’ (CAPS 2011:10). Level SG3 provides guidelines and exemplars in support of practice, such as: ‘Use guided group reading and independent/pair reading methods and gradually get learners to do more and more independent reading’ (CAPS 2011:10). Level SG4 represents the logistics of practice, including timetables, templates and instructions, such as ‘Term 2, Weeks 1 & 2 Information text – weather’ (CAPS 2011:25).
FIGURE 2: The semantic gravity profile of comprehension across the Curriculum and Assessment Policy Statement document.
Section 2 of the CAPS document, ‘Introducing Home Language in the Intermediate Phase’, has only two mentions of teaching comprehension: one related to requirements (‘You will also set a variety of comprehension activities to ensure that learners understand what they read’, CAPS 2011:10), and the other a guideline (‘Pause occasionally to check your comprehension and to let the ideas sink in’, CAPS 2011:10). This is followed by an SG4 ‘flatline’ of comprehension activity requirements in Section 3 (the teaching plan), which stretches from page 16 to page 88.
There are three mentions of comprehension testing in Section 4 (Assessment in Home Language), including a claim that ‘a memorandum is better suited to a spelling test or a reading comprehension activity’ (CAPS 2011:90), and that ‘reading comprehension’ should include ‘vocabulary work’ (CAPS 2011:93). The semantic gravity is weakened slightly in the general guidelines provided in the ‘Cognitive Level’ tables (CAPS 2011:91–92), but strengthens in the requirements for testing in the programme of assessment (CAPS 2011:93–102).
The section entitled ‘Moderation of Assessment’ instructs moderators to ensure that teachers assess ‘learners’ ability to analyse and synthesize information given in a text’ (CAPS 2011:103), and not to ask questions about general knowledge related to the text. The extract below provides an example of an instruction to moderators:
The moderator will give good comment, among other things, on the levels of questioning in comprehension testing; the frequency of extended writing; the quality of assessment instruments and the developmental opportunities afforded and the teacher’s engagement with learner workbooks and evidence of learner performance. (CAPS 2011:103)
The instructions to moderators are clear and logical, yet nowhere does the text instruct teachers on these requirements.
CAPS concludes with a glossary of definitions, which is the only section that rises to SG1. The semantic gravity profile of Figure 2, with its flatline at SG4 (the teaching plan), confirms prior research that finds the CAPS document to be ‘prescriptive’ (Govender & Hugo 2018; Weideman et al. 2017).
The missing assessment principles
It has been pointed out that the lack of a logical progression from principles to practice is the main cause of policy failure (Fullan 2007). As shown in Figure 2, the CAPS document does not explicitly state the principles that underpin its guidelines and requirements for teaching or assessing reading comprehension. Underpinning principles are implied in many of the instructions given to teachers, for example: ‘regular feedback should be provided to learners to enhance the learning experience’ (CAPS 2011:88), ‘language learning is a process’ (CAPS 2011:88), and ‘the work on which assessment is conducted must have been covered during the term’ (CAPS 2011:89). However, none of these implied principles is explained, referenced to research, or cross-referenced with the many requirements for practice. Thus teachers are unlikely to understand why particular requirements are placed on them.
Searching for the missing principles
Principles are often expressed in the clear and precise use of terms. What we found in the CAPS document were imprecise (even colloquial) and non-standard terms, many of which were not defined in the text or glossary. The document’s use of the term ‘informal assessment’ is a case in point, partly because it is not defined, and partly because it is not a recommended assessment practice in the research literature. Terms such as ‘formative assessment’ or ‘classroom-based assessment’ are more commonly used in the literature, while use of the term ‘informal assessment’ is generally regarded as undesirable, owing to its potential to create misjudgements (Waggett, Johnston & Jones 2017). The formative assessment of reading comprehension requires specific techniques and methods (Xu & Brown 2017), with very particular sub-skills and reading strategies that teachers should monitor (Shanahan & Lonigan 2010). The CAPS recommendation that teachers use ‘many of [their] learning activities to assess learners’ performance informally’ is not derived from evidence or principles.
The term ‘daily assessment’ is used interchangeably with ‘informal assessment’, and the term is similarly undefined, with no evidence to support its use. Over-assessment has been shown to negatively impact teaching quality (Wigfield, Gladstone & Turci 2016). The CAPS document is confusing with regard to the requirements for ‘informal daily assessment’, stating that ‘the results of the informal daily assessment tasks are not formally recorded unless the teacher wishes to do so’ (CAPS 2011:89). Similar confusing directives have been pointed out by other scholars in their critiques of CAPS documents across a range of subjects (e.g., Maddock & Maroun 2018).
The impact of the missing assessment principles on advice to teachers
The lack of explicit principles to guide the teaching and assessment of reading comprehension has a knock-on effect across the CAPS document, resulting in increasingly confusing advice to teachers on assessment practice, such as the following:
When giving a formal assessment task, there will be a focus on a particular skill, for example, Listening and Speaking or Reading or Writing. However, because language learning is an integrated process, more than one skill will be used. (CAPS 2011:88)
The literature on the assessment of comprehension proposes the opposite of the advice offered above. An initial focus on particular skills and sub-skills, as well as on learners’ reading strategies, is recommended, with formative tasks becoming more integrative in preparation for summative assessment (e.g. García & Cain 2014). A summative assessment task would generally require not only the integration of reading and writing skills, but also the application of reading strategies to understand the text at micro- and macro-levels.
Because explicit assessment principles do not guide practice, claims about assessment can become contradictory. In the passage below, for example, the supportive relationship between teaching, learning and assessment is misunderstood and a confusing claim is made about assessment not being ‘a separate entity’. This claim is then contradicted when integrated assessment practice is commended, followed by examples of good practice:
Assessment in Languages is ongoing and supports the growth and development of learners. It is an integral part of teaching and learning as it provides feedback for teaching and learning. It should be incorporated in teaching and learning instead of being dealt with as a separate entity. Furthermore, integrated assessment of various language aspects should be practiced. For example, we could start off with a reading piece and do a comprehension test. Language knowledge questions could also be addressed based on the same text. Post-reading the text learners could be asked to respond to the text by, for example, writing a letter about the issues raised in the text or to write some creative response to the content of the text. To wrap up this activity, discussions could be held about the topic and in this way we address all of the language skills in one fluent, integrated activity. (CAPS 2011:88)
The semantic gravity profile of the section quoted above (Figure 3) reveals the confusion. Implications for practice are stated, but these become illogical (SG5), before the passage turns to general remarks about integrated assessment practice and then to specific guidelines for practice.
FIGURE 3: The semantic gravity profile of advice to teachers regarding assessment.
In Figure 3 the numbers along the bottom of the graph refer to the sentences in the passage quoted above. There is no reference to purposes or principles or definitions (SG1) in this passage. The passage opens with generic advice (SG2) that becomes confusing (SG5). More advice is then offered, as shown by the rise back to level SG2 (but on a new topic of integrative assessment), and the passage concludes with examples.
Guidelines for practice: Missed opportunities in the cognitive levels table
Guidelines, such as those that appear in the above-quoted passage, are distributed across the document, but the ‘Cognitive Levels’ table (CAPS 2011:91–92) has a special focus on exemplary reading comprehension questions. This table is a missed opportunity in the CAPS document. It offers examples of different types of questions, in order of increasing difficulty and complexity: ‘literal’ questions, ‘reorganizational’ questions, questions that require ‘inference’, evaluative questions and appreciative questions (CAPS 2011:91–92). While distinctions between question types are useful, the research literature recommends that teachers need to explain and model reading comprehension strategies until learners begin to use these strategies independently (Gill 2008). The most widely cited recommendation for improving reading comprehension is increasing explicit instruction in comprehension strategies (National Reading Panel 2000), rather than asking increasingly complex questions. CAPS does not guide teachers on how to help learners acquire the reading strategies that would enable them to address the recommended questions, or on how to assess these strategies. The suggested questions in the ‘Cognitive Levels’ table are generic (i.e. questions that readers at a range of levels would use); no attempt is made to adapt them for learners at the intermediate phase. A question such as: ‘What does a character’s actions/attitude(s)/motives … show about him/her in the context of universal values?’ (CAPS 2011:92) is clearly not appropriate for the intermediate level.
The ‘knock-on’ effect of the missing assessment principles
The plan of assessment is the main mechanism for guiding summative assessment practice (referred to as ‘formal assessment’) in CAPS. Its intention is ‘to spread formal assessment tasks in all subjects in a school throughout a term’ (CAPS 2011:93). Summative assessment in CAPS comprises both continuous assessment (referred to as ‘school-based assessment’) and two examinations, one in Term 2 and one in Term 4. Four reading comprehension tests are required as part of the continuous assessment, that is, one per term. Reading comprehension appears in two forms in the examinations: firstly, as comprehension tests covering a ‘range of texts … including visual or graphic texts’ (CAPS 2011:101) and, secondly, in the 2-hour ‘Integrated Paper’ (CAPS 2011:98), consisting of ‘reading comprehension, language in context, writing – essays and transactional texts’ (CAPS 2011:101).
As a guide for teachers, the following examples are offered as possible ‘transactional texts’:
Formal & informal letters to the press / Formal letters of application, request, complaint, sympathy, invitation, thanks, congratulations, & business letters / Friendly letters / Magazine articles & columns / Memoranda / Minutes & agendas, Newspaper articles & columns / Obituaries/ Reports (formal & informal) / Reviews / Written formal & informal speeches / Curriculum Vitae / Editorials / Brochures / Written interviews / Dialogues. (CAPS 2011:101)
The above list is inappropriate for intermediate phase learners and would not meet the principles of validity or fairness in a summative assessment. It is in the programme of assessment that the cumulative effect of the lack of a principled approach to the teaching and assessment of reading comprehension is most keenly felt.
Conclusion: Why the Curriculum and Assessment Policy Statement document is unlikely to improve reading comprehension
In this article we studied the CAPS requirements for assessing reading comprehension, with the aim of laying the groundwork for an improved policy framework for the assessment of reading comprehension. While several studies have critiqued the CAPS documents and pointed to the confusing nature of their guidelines for teachers (Govender & Hugo 2018; Weideman et al. 2017), this study contributes to existing knowledge by empirically demonstrating how the breakdown in logic occurs across the CAPS document. We have shown how the lack of a principled approach is cumulative, and that without clear principles derived from research and theory to guide the teaching and assessment of reading comprehension, advice to teachers is likely to be random. The advice and recommendations for practice that CAPS offers become increasingly inconsistent over the course of the document, resulting in a plan of assessment that is largely inappropriate for the intermediate level. The semantic gravity tool we employed allowed us to map the underpinning structure of the CAPS document and thus to propose a way forward for the improvement of the CAPS document, or of other policy guides related to reading comprehension.
This article has highlighted the lack of adequate guidance and support provided to teachers in terms of executing CAPS effectively and the study raises several implications for further research. In particular, it reveals the need to assess teachers’ (and teacher educators’) understanding of reading comprehension and, in turn, the level of training and support required to help them understand and apply principles of reading comprehension. It also reveals the necessity of incorporating key reading strategies for intermediate level readers – for instance, description, sequence, comparison, cause-and-effect and problem-and-solution (Meyer & Ray 2011) – into South African education policy in consultative and context-sensitive ways.
The cover slogan of CAPS is ‘Structured, clear, practical: Helping teachers to unpack the power of the NCS’. The results of this study, however, suggest that considerable improvement is needed for the CAPS Home Language Intermediate Phase document to attain a logical structure, clear and principled definitions, and useful, practical guidelines for the teaching and assessment of reading comprehension.
Acknowledgements
Competing interests
The authors have declared that no competing interests exist.
Authors’ contributions
M.M.d.L. provided data and wrote the article, C.W. oversaw the content, data and edits, and H.D. moderated the article.
Funding information
NRF grant number SARCI150209113904.
Data availability statement
Data sharing is not applicable to this article as no new data were created.
Disclaimer
The views expressed in the submitted article are our own and not an official position of the institution or funder.
References
Aydarova, E. & Berliner, D.C., 2018, ‘Responding to policy challenges with research evidence: Introduction to special issue’, Education Policy Analysis Archives 26(32), 1–5. https://doi.org/10.14507/epaa.26.3753
Birenbaum, M., DeLuca, C., Earl, L., Heritage, M., Klenowski, V., Looney, A. et al., 2015, ‘International trends in the implementation of assessment for learning: Implications for policy and practice’, Policy Futures in Education 13(1), 117–140. https://doi.org/10.1177/1478210314566733
Boals, T., Kenyon, D.M., Blair, A., Cranley, M.E., Wilmes, C. & Wright, L.J., 2015, ‘Transformation in K-12 English language proficiency assessment: Changing contexts, changing constructs’, Review of Research in Education 39(1), 122–164. https://doi.org/10.3102/0091732X14556072
Clark, I., 2015, ‘Formative assessment: Translating high-level curriculum principles into classroom practice’, Curriculum Journal 26(1), 91–114. https://doi.org/10.1080/09585176.2014.990911
Clarke, P.J., Truelove, E., Hulme, C. & Snowling, M.J., 2013, Developing reading comprehension, John Wiley & Sons, Ltd., Hoboken, NJ.
Coffield, F., 2012, ‘Why the McKinsey reports will not improve school systems’, Journal of Education Policy 27(1), 131–149. https://doi.org/10.1080/02680939.2011.623243
Fullan, M., 2007, The new meaning of educational change, 4th edn., Teachers’ College Press, New York, NY.
García, J.R. & Cain, K., 2014, ‘Decoding and reading comprehension: A meta-analysis to identify which reader and assessment characteristics influence the strength of the relationship in English’, Review of Educational Research 84(1), 74–111. https://doi.org/10.3102/0034654313499616
Gill, S.R., 2008, ‘The comprehension matrix: A tool for designing comprehension instruction’, The Reading Teacher 62(2), 106–113. https://doi.org/10.1598/RT.62.2.2
Gilmore, A., Crooks, T., Darr, C., Hall, C., Hattie, J., Smith, J. et al., 2009, ‘Towards defining, assessing and reporting against national standards for literacy and numeracy in New Zealand’, Assessment Matters: Education, Health and Human Development Reports 59(1), 135–150.
Good, A.G., Barocas, S.F., Chávez-Moreno L.C., Feldman, R. & Canela, C., 2017, ‘A seat at the table: How the work of teaching impacts teachers as policy agents’, Peabody Journal of Education 92(4), 505–520. https://doi.org/10.1080/0161956X.2017.1349490
Govender, R. & Hugo, A., 2018, ‘Educators’ perceptions of the Foundation Phase English Home Language Curriculum and Assessment Policy Statement’, Per Linguam 34(1), 7–32. https://doi.org/10.5785/34-1-767
Hamilton, L., 2007, ‘Chapter 2: Assessment as a policy tool’, Review of Research in Education 36(1), 88–98.
Harvey, M. & Kosman, B., 2014, ‘A model for higher education policy review: The case study of an assessment policy’, Journal of Higher Education Policy and Management 36(1), 88–98. https://doi.org/10.1080/1360080X.2013.861051
Hattie, J., 2012, Visible learning for teachers: Maximizing impact on learning, Routledge, London.
Hattie, J., 2015, ‘The applicability of visible learning to higher education’, Scholarship of Teaching and Learning in Psychology 1(1), 79–91. https://doi.org/10.1037/stl0000021
Hopfenbeck, T.N., Flórez Petour, M.T. & Tolo, A., 2015, ‘Balancing tensions in educational policy reforms: Large-scale implementation of assessment for learning in Norway’, Assessment in Education: Principles, Policy and Practice 22(1), 44–60. https://doi.org/10.1080/0969594X.2014.996524
Howie, S., Combrinck, C., Roux, K., Tshele, M., Mokoena, G. & Palane, N.M., 2017, PIRLS literacy 2016: South African highlights report, University of Pretoria, Pretoria.
Keary, A. & Kirkby, J., 2018, ‘“Language shades everything”: Considerations and implications for assessing young children from culturally and linguistically diverse backgrounds’, TESOL in Context 26(1), 27–44. https://doi.org/10.21153/tesol2017vol26no1art705705
Khoza, S.B., 2017, ‘Student teachers’ reflections on their practices of Curriculum and Assessment Policy Statement’, South African Journal of Higher Education 29(4), 179–197. https://doi.org/10.20853/29-4-512
Maddock, L. & Maroun, W., 2018, ‘Exploring the present state of South African education: Challenges and recommendations’, South African Journal of Higher Education 32(2), 192–214. https://doi.org/10.20853/32-2-1641
Magagula, S.W., 2016, ‘The township schools’ foundation phase teachers’ experiences in the implementation of CAPS’, Master of Management Studies thesis, University of Witwatersrand, Johannesburg.
Maton, K., 2014, Knowledge and knowers: Towards a realist sociology of education, Routledge, London.
Meyer, B.J.F. & Ray, M.N., 2011, ‘Structure strategy interventions: Increasing reading comprehension of expository text’, International Electronic Journal of Elementary Education 4(1), 127–152.
National Reading Panel, 2000, Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction, National Institute of Child Health and Human Development, Rockville, MD.
Pont, B. & Viennet, R., 2017, Education policy implementation: A literature review and proposed framework, OECD education working paper, 162, OECD Publishing, Paris.
Popham, W.J., 2008, Transformative assessment, Association for Supervision and Curriculum Development, Alexandria, VA.
Rapetsoa, J.M. & Singh, R.J., 2018, ‘Does the Curriculum and Assessment Policy Statement address teaching and learning of reading skills in English First Additional Language?’, Mousaion 35(2), 56–78. https://doi.org/10.25159/0027-2639/1270
Rule, P. & Land, S., 2017, ‘Finding the plot in South African reading education’, Reading & Writing 8(1), a121. https://doi.org/10.4102/rw.v8i1.121
Shanahan, T. & Lonigan, C.J., 2010, ‘The National Early Literacy Panel: A summary of the process and the report’, Educational Researcher 39(4), 279–285. https://doi.org/10.3102/0013189X10369172
South African Department of Basic Education, 2011, National curriculum statement, Curriculum and Assessment Policy Statement (CAPS) Grades 4–6 English Home Language, Government Printer, Pretoria.
Spaull, N., 2015, Schooling in South Africa: How low quality education becomes a poverty trap, South African Child Gauge, Stellenbosch University, Stellenbosch.
Stillman, J., 2011, ‘Teacher learning in an era of high-stakes accountability: Productive tension and critical professional practice’, Teachers College Record 113(1), 133–180.
Waggett, R.J., Johnston, P. & Jones, L.B., 2017, ‘Beyond simple participation: Providing a reliable informal assessment tool of student engagement for teachers’, Education 137(4), 393–397.
Weideman, A., Du Plessis, C. & Steyn, S., 2017, ‘Diversity, variation and fairness: Equivalence in national level language assessments’, Literator 38(1), 1–9. https://doi.org/10.4102/lit.v38i1.1319
Wigfield, A., Gladstone, J.R. & Turci, L., 2016, ‘Beyond cognition: Reading motivation and reading comprehension’, Child Development Perspectives 10(3), 190–195. https://doi.org/10.1111/cdep.12184
Wiliam, D., 2006, ‘Formative assessment: Getting the focus right’, Educational Assessment 11(3–4), 283–289.
Wolf, M.K., Farnsworth, T. & Herman, J., 2008, ‘Validity issues in assessing English language learners’ language proficiency’, Educational Assessment 13(2–3), 80–107. https://doi.org/10.1080/10627190802394222
Xu, Y. & Brown, G.T.L., 2017, ‘University English teacher assessment literacy: A survey-test report from China’, Papers in Language Testing and Assessment 6(1), 133–158.