Across East Africa, significant resources are dedicated to large-scale assessments designed to measure learning outcomes. Taking Uganda as an example, there are national exams at the end of primary and secondary school, an annual sample assessment of students from three year groups, a citizen-led assessment of 6–16 year olds, and regular donor-led assessments tied to programmes; the country also participates in a regional East and Southern African assessment. These assessments provide a wealth of information about the performance of the education system.
But is this information being used effectively? As well as understanding the performance of the education system, the results from learning assessments can be used to drive improvements. For example, the government could refine policies based on the data, or teachers could use the information to adapt their lessons. This is not yet happening consistently across East Africa. This blog suggests how the data could be used more effectively, often drawing on examples of success from across the region.
National exams could be used more systematically for school accountability
High-stakes examinations are a controversial topic across the region. Detractors argue that they have led to excessive teaching to the test, and have narrowed the curriculum. In Kenya, publishing information about schools' performance based on exam results has been banned. In Uganda, arguments to discontinue the primary leaving exams are increasingly high profile.
These arguments are gaining traction partly because the benefits of national examinations for school accountability have not been realised. School accountability systems have the potential to drive system-wide improvement by creating incentives for school leaders and teachers to improve their work. But this only works when there is a coherent accountability framework, and in most East African countries this framework is not in place. For example, in Uganda the main indicator of secondary school performance is the percentage of students who achieve a Division 1 grade (the highest mark out of 4). Every year, newspapers publish ‘league tables’ of schools on this basis. However, this way of measuring performance encourages schools to focus their teaching disproportionately on high-performing students, who have a chance of securing one of the top grades. Nor is the information used systematically by the government to reward high-performing schools, or to challenge weak ones.
The Big Results Now programme in Tanzania may point a way forward. Examination results for each school are available through an easily searchable website, and schools receive high profile rewards for good performance (this blog from Ian Attfield, DFID Adviser in Tanzania, explains further). If this new initiative can be shown to drive improved outcomes, then other countries in the region may be persuaded to follow.
The purpose of any assessment needs to be clear and realistic, and also drive the frequency and content of assessments
Many assessments around the world try to do too much. The temptation when developing an assessment is to generate data for a range of purposes, in an attempt to maximise value for money. But this can be at the expense of providing really accurate information about any one aspect of education.
This issue is apparent in East Africa. For example, in Uganda the National Assessment of Progress in Education (NAPE) lists six purposes, ranging from tracking national standards over time to providing guidelines for teachers. This is a challenging task for any assessment, and unsurprisingly NAPE has varying degrees of success in meeting each of its aims. For example, the guidelines to teachers are very high-level, such as ‘use a practical approach to teach biology’, and there are not sufficient resources to provide teachers with ongoing support to translate this advice into improved classroom practice.
Instead, NAPE could focus on its core purpose of tracking national standards over time. The assessment could be streamlined accordingly, rather than trying to report information about performance in every curriculum area. Anchor items could be introduced to make year-on-year comparisons of performance more robust. With fewer aims to fulfil, NAPE could then devote more of its resources to promoting a culture of using learning outcomes data within policy-making circles.
How can assessment providers help to improve outcomes?
All the organisations which manage these East African assessments aim to contribute to improving learning outcomes. For example, Uwezo, a citizen-led assessment programme conducted in Kenya, Tanzania and Uganda, aims to increase the number of children with basic literacy and numeracy skills by 10%. Uwezo’s theory of change suggests that community members will be prompted by low results to demand action. However, there is little evidence that this happens in practice, particularly without clear structures in place for parents to pursue their concerns. Uwezo are considering how the network of citizen-assessors can use their position to link teachers and communities. This type of project is vitally important.
Data from other assessments is also given to teachers and parents in the hope that it will stimulate change. But changing long-standing practice in schools and communities is hard, and usually requires ongoing effort. Some promising work is already underway. In Kenya, the Tusome programme is providing government officials with real-time information about reading performance in different districts and the number of visits conducted by teacher trainers, which they can use to make sure the programme meets its targets.
Could a strategic view across all assessments create a more efficient and effective system?
At the moment there is significant duplication in the assessments being used in the region. For example, in Uganda, NAPE, Uwezo, and the regional assessment (the Southern and Eastern Africa Consortium for Monitoring Educational Quality – SACMEQ) all aim to track national standards over time. Of these, NAPE and Uwezo are administered every year. In addition, donor-led assessments such as the Early Grade Reading Assessment (EGRA) are used to evaluate individual programmes. By looking across all the assessments in a country, perhaps a more efficient system could be introduced, in which each assessment contributes in one clearly defined way.
A more efficient system would then free up resources to be spent on using the data from assessments to improve outcomes. Using the data well is as important as generating it. Publishing data in reports is unlikely to be enough to tackle the entrenched problem of low levels of learning in schools. More systematic solutions are needed, along the lines of the Big Results Now and Tusome programmes. Creating initiatives like these, with the potential to change the behaviour of schools, requires investment, but it is also crucial.
By Phil Elks from Ark.
Phil was the author of a recent Think Piece produced for DFID titled ‘The impact of assessment results on education policy and practice in East Africa’.