THE SOUTH AFRICAN MONITORING SYSTEM FOR PRIMARY SCHOOLS

Problem Statement and Rationale

… [South Africa] is a country with natural wealth and many cultures. It is also notorious for the Apartheid (sic) policies that have left a lasting impression on the education system in the country. Evidence of this [lasting impression] lies in the appalling conditions in many schools across the country, and these conditions exist primarily in previously so-called African, coloured and Indian schools. South Africa, since the first democratic elections in 1994, has embarked on a substantial reform effort in many areas including education. (Howie, 2002, p. 9)
Education is a major concern for the South African government, which had invested 5.8% of gross domestic product (GDP) in this sector between 1995 and 2003 (National Treasury, Republic of South Africa, 2005). In 2006 this investment dropped slightly to 5.4% of GDP, representing 17.6% of total government expenditure (World Bank, 2008). By comparison, other upper-middle-income countries and Sub-Saharan African countries spent on average 4.1% and 4.2% of their GDP respectively on education in 2006 (World Bank, 2008). The National Treasury (2005) noted enormous growth in enrolment figures for primary and secondary schools, from 8.1 million in 1985 to 12.0 million in 2004. Population growth and immigration no doubt contributed to these figures, as the net enrolment rate in primary education actually dropped from 90% in 1991 to 88% in 2006 (World Bank, 2008). In 2006, South Africa still had nearly 470,000 primary-school-aged children who did not attend school (World Bank, 2008).
Despite the significant funding and increased enrolment, the quality of education in South Africa remained a concern (Taylor, Muller, & Vinjevold, 2003). Nowhere were the shortcomings of education provision more apparent than in low learner performance, especially in subjects such as Reading, Mathematics and Science. This low performance was clearly illustrated by South Africa's results in international studies such as the Trends in International Mathematics and Science Study (TIMSS) 2003 (Martin, Mullis, Gonzalez, & Chrostowski, 2004) and the Progress in International Reading Literacy Study (PIRLS) 2006 (Howie et al., 2008). The concerns about South African education are further highlighted in national studies such as the Grade 3 and Grade 6 National Systemic Evaluations (Department of Education, 2002b).
This poor performance may be a legacy of the apartheid education system; however, there is also evidence of an international trend in which increased investment in education is not necessarily associated with improvement in education (Cassassus, 2001; Hayward & Hedge, 2005). Hattie (2005, p. 12) notes that in the United States of America (USA) "…there is not a lot of evidence that the massive increases in state/federal monies have made a difference to the quality of teaching and learning." Hattie (2005) goes on to argue that, although USA spending on education had increased over the previous 40 years, the achievement curve remained constant over the same period. RSA is challenged with redressing the neglect of large portions of the education system during the apartheid era, but it is clear that large investment in formal education alone is not improving the quality of learning in schools. This reasserts the need to combine educational investment with appropriate monitoring at primary school level in order to improve the quality of teaching and learning.
In RSA, educational data are collected through international comparative studies such as PIRLS 2006 (Howie et al., 2008) and TIMSS 1995 (Howie, 1997), 1999 (Howie, 2001) and 2003 (Martin et al., 2004). System-level data are generated through systemic evaluations, which mirror the poor international performance: low achievement is noted in both the Grade 3 and Grade 6 National Systemic Evaluation reports (Department of Education, 2002b, 2006a, 2006b). School-level monitoring is also mandated as part of the Integrated Quality Management System (IQMS) for schools (Education Labour Relations Council, 2003). The mere availability of these data cannot by itself improve learner performance; the data must also be returned to schools in an appropriate form and used by them for planning, decision-making and action.

Table of Contents

  • LIST OF FIGURES
  • LIST OF TABLES
  • LIST OF ABBREVIATIONS

CHAPTER ONE:  INTRODUCTION AND OVERVIEW
1.1 Definition of Terms
1.2 The SAMP Project
1.3 Problem Statement and Rationale
1.4 Research Questions
1.5 Research Methodology
1.6 Presentation Style
1.7 Structure of this Thesis
CHAPTER TWO:  THE SOUTH AFRICAN MONITORING SYSTEM FOR PRIMARY SCHOOLS
2.1 The PIPS Instrument
2.2 The South African Birth of SAMP
2.2.1 The Importance of Value-Added Measures
2.2.2 Contextualisation and Adaptation of SAMP
2.2.2.1 The PIPS Instrument in South Africa Prior to 2006
2.2.2.2 Adaptation of PIPS into SAMP during 2006
2.2.2.3 SAMP 2008
2.3 Conclusion
CHAPTER THREE: CONTEXTUALISATION, LITERATURE REVIEW AND CONCEPTUAL FRAMEWORK
3.1 The Educational Landscape in South Africa
3.1.1 Resource Availability
3.1.2 Challenges to School Attendance
3.1.3 The Impact of Social Problems
3.1.4 Issues of Diversity
3.1.5 Educator Related Issues
3.1.6 General Education and Training in South Africa: The Foundation Phase
3.2 Monitoring and Feedback in South African Education
3.3 Literature Review
3.3.1 Possible Purposes of Monitoring and Feedback Systems
3.3.2 Types of Monitoring and Feedback Systems in Education
3.3.3 School Performance Feedback Systems
3.3.3.1 United Kingdom – CEM Suite
3.3.3.2 New Zealand – asTTle
3.3.3.3 Netherlands – Zebo
3.3.3.4 United States of America, Louisiana – School Analysis Model (SAM)
3.4 Use of feedback in schools
3.5 Conceptual Framework
3.5.1 External Environment and Context of the Use of the Feedback System
3.5.2 Internal Environment and Context of the Use of the Feedback System
3.5.3 The Complexity of Change
3.6 Conclusion
CHAPTER FOUR:  OVERVIEW OF THE RESEARCH DESIGN
4.1 Research Paradigm
4.1.1 Ontology
4.1.2 Epistemology
4.1.3 Axiology
4.1.4 Methodology
4.2 Research Design
4.2.1 Design Research
4.2.2 Evaluative Criteria in Design Research
4.2.3 Application of Design Research for this Inquiry
4.2.4 Population for the Design Research
4.2.5 Research Procedures
4.2.6 Shifts in Emphasis in the Design Research Process
4.3 Methodological Quality
4.3.1 Role of the Researcher
4.3.2 Realm of Application
4.4 Conclusion
CHAPTER FIVE:  PRELIMINARY PHASE: PROBLEM IDENTIFICATION, NEEDS AND CONTEXT ANALYSIS
5.2 Prior Development, Needs and Context Analysis
5.2.1 Pre-Existing Feedback System (Prior to 2006)
5.2.2 Reports – Pre-Existing Feedback System
5.2.3 Feedback Sessions – Pre-Existing Feedback System
5.2.4 Informal Evaluation – Pre-Existing Feedback System
5.2.4.1 Reports
5.2.4.2 Feedback Sessions
5.2.5 Design Principles from Literature Review
5.2.6 Exemplary Case Study – asTTle, New Zealand
5.2.6.1 Literature Review
5.2.6.2 Sampling
5.2.6.3 Data Collection
5.2.6.4 Data Capturing
5.2.6.5 Data Analysis
5.2.6.6 Discussion
5.2.7 Design Principles from the Exemplary Case Study
5.3 Conclusion
CHAPTER SIX:  PROTOTYPING PHASE: ESTABLISHING CONDITIONS FOR USE (CYCLE 1-2)
6.1 Cycle 1 (Prototype I – Baseline 2008)
6.1.1 Prototype I – Baseline 2008
6.1.1.1 Reports
6.1.1.2 Feedback Sessions
6.1.2 Formative Evaluation of Prototype I
6.1.2.1 Selection of Participants
6.1.2.2 Data Collection
6.1.2.3 Data Capturing and Analysis
6.1.2.4 Results and Design Guidelines – Expert Evaluators
6.1.2.5 Results and Design Guidelines – Delphi Technique
6.1.3 Cycle 2 (Prototype II – Follow-up 2008)
6.1.4 Prototype II – Follow-up 2008
6.1.4.1 Reports
6.1.4.2 Feedback Sessions
6.1.5 Formative Evaluation of Prototype II
6.1.5.1 Sampling
6.1.5.2 Data Collection
6.1.5.3 Data Capturing
6.1.5.4 Data Analysis
6.1.5.5 Results and Design Guidelines
6.2 Conclusion
CHAPTER SEVEN: PROTOTYPING PHASE: TRANSFORMING CONDITIONS FOR USE INTO USE (CYCLE 3)
7.1 Cycle 3 (Prototype III – Baseline 2009)
7.1.1 Prototype III – Baseline 2009
7.1.1.1 Reports
7.1.1.2 Instrument Manuals
7.1.1.3 Feedback Session
7.1.1.4 Electronic Resource
7.1.2 Formative Evaluation of Prototype III
7.1.2.1 Sampling
7.1.2.2 Data Collection
7.1.2.3 Data Capturing
7.1.2.4 Data Analysis
7.1.2.5 Results and Findings – Report Evaluation Questionnaire
7.1.2.6 Exemplary Cases
7.1.2.7 Discussion and Design Guidelines
7.2 Conclusion
CHAPTER EIGHT:  ASSESSMENT PHASE: CYCLE 4
8.1 Research Cycles
8.2 Cycle 4 (Prototype IV – Follow-up 2009)
8.2.1 Prototype IV – Follow-up 2009
8.2.1.1 Reports
8.2.1.2 Instrument Manuals
8.2.1.3 Feedback Sessions
8.2.1.4 Website
8.2.2 Semi-summative Evaluation of Prototype IV
8.2.2.1 Sampling
8.2.2.2 Data Collection
8.2.2.3 Data Capturing
8.2.2.4 Data Analysis
8.2.2.5 Results and Findings – Expert Evaluators’ Reports
8.2.2.6 Results and Findings – Teachers and Management Questionnaires
8.2.2.7 Design Guidelines from the Evaluator Reports
8.2.2.8 Design Guidelines from the Final Evaluation Questionnaire
8.3 Conclusion
