Central Square Foundation

What NEP 2020 says about school assessments

By Aditi Nangia | November 2020

Improved learning is key for people to achieve their full potential, and for the country to cement its position as a global leader. Accurately measuring student learning in the early years of schooling is therefore critical to understanding learning levels and identifying gaps and solutions in the education system.

How can we measure learning outcomes?

There are multiple kinds of assessments that can be used to measure learning levels among students. Here are the two major kinds of assessments:

i) Individual student assessments: as the name suggests, these evaluate each student individually. They can be formative, where every student is assessed on an ongoing basis, or summative, where each student is evaluated at the end of a module, chapter, or academic year. The former is an assessment for learning, as it gives the teacher specific insight into how student learning can be improved, while the latter is an assessment of learning.

ii) System-level assessments: these evaluate schools, regions, or entire education systems. Evaluating education systems at a large scale provides a system-level understanding of learning outcomes to inform education policy and practice.[1] These assessments are held at a national or international level. They help identify factors that can improve student learning, such as the student-teacher ratio; provide recommendations for curriculum review and reform; monitor the quality of education provision; and support education policy development.

How are large-scale assessments of an education system conducted?

There are two ways of conducting large-scale assessments:

i) Sample Assessments
Sample assessments provide representative information about learning outcomes at an aggregated level — district, block, state, etc. A representative subset of students, drawn from across schools and socio-economic backgrounds, is evaluated as a sample. This helps policymakers and state education departments periodically measure the health of the system.

Sample national assessments in India include the National Achievement Survey (NAS), conducted by NCERT, and the Annual Status of Education Report (ASER) survey, conducted by Pratham, a non-profit organisation. International examples of sample assessments include PISA, TIMSS and PIRLS.

ii) Census assessments
In addition to sample-based assessments, many education systems have adopted a census-based approach to learning assessments as a key pillar in their education reforms.[2] These assessments are conducted for all students in all schools in specific grades. They measure and report the performance of schools and students and provide a comparable marker for school quality.

Census assessments are much larger in scale compared to sample assessments and can provide useful granular information to identify poorly performing schools and accordingly target interventions to improve quality of education provision. When aligned to competencies and robust assessment frameworks, these assessments can provide diagnostic information to help teachers adapt instruction.[3]

NEP 2020 outlines the need for census assessments in key grades in India

The recently approved National Education Policy (NEP) 2020 has proposed a low-stakes annual school examination in grades 3, 5 and 8 across all schools (government and private) to evaluate children’s progress on core concepts, higher-order skills and their application. This assessment will steer away from rote memorisation and encourage more meaningful learning. In grade 3, the assessment will focus on Foundational Literacy and Numeracy — the basic reading and math skills that every child should acquire by the end of grade 3.

Unlike Board exams, conducted in grades 10 and 12 at the end of the schooling cycle, these assessments will measure learning throughout the school years, and their results can be used for policy and programme planning to improve outcomes while children are still in school.

Census assessments can also provide insight into how schools are performing on learning outcomes based on a common and comparable metric, thereby making it easier to identify schools that lag behind.[4] Additionally, publicly disclosing this information will help parents choose schools based on learning performance and compel schools to improve quality.[5]

How are census assessments different from already existing assessments in India?

Many state governments in India have recognised the importance of collecting school-level data on learning outcomes and have conducted annual, biannual or quarterly census learning assessments in grades 1-8.

To meet the NEP 2020 goal of key-stage assessments in grades 3, 5 and 8, the design and implementation of state-led census assessments need to be reconsidered to ensure that the assessments are competency-based and rigorous. Moreover, these assessments need to be conducted with fidelity, and the data gathered should be used judiciously to improve education quality.

Several studies in India have found that student assessment data is often artificially inflated and hence unreliable.[6] It will be critical to ensure data reliability during the implementation of school examinations in grades 3, 5 and 8, so that the assessments produce accurate and relevant reporting and the system trusts the data enough to use it for corrective action.

Global examples of successful census assessments

Chile, Peru, Mexico, Bangladesh, Australia and the United Kingdom (UK) have adopted a census-based approach to learning assessments as a key pillar of their education reforms.[7] Rigorous assessment tools and implementation standards ensure data reliability, and the assessments test competencies in select subjects.

To illustrate, key stage 1 and 2 assessments in the UK (grades 2 and 6) and the National Assessment Program – Literacy and Numeracy (NAPLAN) in Australia test language and numeracy. Both the UK’s key stage assessments and NAPLAN have been developed using rigorous, detailed assessment frameworks based on competencies such as ‘reading comprehension’ and ‘number systems’, and proficiency levels such as ‘beginner’, ‘meets expectations’ and ‘exceeds expectations’. NAPLAN, in particular, engages test developers to develop items that meet test specifications; all items undergo field trials and psychometric analysis using Item Response Theory before being selected for the final test instruments; and assessments are equated every year to enable comparisons across cycles.[8]

Chile’s census assessment, SIMCE (Quality of Education Evaluation System), has been widely studied and is considered a reliable metric for measuring school-level outcomes by all stakeholders, namely schools, parents and the Government of Chile. The assessment evaluates language and numeracy in grades 4, 8 and 10, along with history and civics in grades 8 and 10. School quality data collected through SIMCE is made publicly available to build parent awareness. Moreover, if a school is ranked at the bottom of the scale for over four years, it risks losing recognition.

Chile’s SIMCE demonstrates that successful dissemination of census assessment data to parents and schools can improve quality and accountability. This is also validated by Chile’s improved ranking on international large-scale assessments such as PISA between 2000 and 2015.[9]

Way forward

The grade 3, 5 and 8 assessments could borrow from international best practices in competency-based assessment. The assessment methodologies, and the analysis and interpretation of data, could follow scientific principles to ensure that this new assessment differs from existing state-led census assessments, year-end school exams and board exams. This would include rigorous scientific sampling procedures, well-constructed contextual items, and valid and reliable assessment tools.[10] The results from these assessments should not be used to pass or fail students, or to put additional pressure on them; they should be used strictly for improvement purposes.

Reliable data in itself can be an important goal for the system, alongside improvement in learning outcomes over assessment cycles. To make it harder for the system to misreport data, additional security practices can be implemented, such as external involvement in paper correction, the use of multiple test booklets and the presence of external invigilators. Retests could also be conducted after a test cycle to gauge the validity of the data, and year-on-year retests could give the system a reliability score whose improvement could be incentivised.


References

[1] UNESCO. 2019. The promise of large-scale learning assessments: acknowledging limits to unlock opportunities

[2] Verger et al. 2018. Global education policy and international development: New agendas, issues and policies

[3] OECD Reviews of Evaluation and Assessment in Education: North Macedonia

[4] Ritika Shah, Centre for Civil Society, Rethinking K-12 Assessment Framework

[5] Experimental studies conducted in South Asia suggest that distributing school reports with information on comparative school quality, to parents and schools improves learning in private and government schools, implying that census learning assessments could be used to improve learning through top-down and bottom-up monitoring if the data is reliable and relevant to various stakeholders. Afridi et al. 2017. Improving Learning Outcomes through Information Provision: Evidence from Indian Villages, Andrabi et al. 2014. Report Cards: The Impact of Providing School and Child Test Scores on Educational Markets

[6] Abhijeet Singh, 2020: Myths of Official Measurement: Auditing and Improving Administrative Data in Developing Countries.
Note: Evidence from randomised control trials conducted on large-scale assessments in Madhya Pradesh and Andhra Pradesh suggests that paper-based census assessment data is distorted and significantly higher when compared with audit retests conducted by the researchers and with similar tablet-based tests. In the Pratibha Parv assessment, conducted annually for grades 1-8 in Madhya Pradesh, a large proportion of students answered the same question correctly in the official Pratibha Parv test but not in the researchers’ retest. In another experiment in Andhra Pradesh, students in grade 4 in government and private schools were administered tablet- and paper-based census assessments. When compared with the audit retest scores, the paper-based assessment data was found to be inflated by 20%, despite external invigilation, while the tablet-based data was reliable.

[7] Verger et al. 2018. Global education policy and international development: New agendas, issues and policies

[8] National assessment program – Development and Review process, ACARA and NAPLAN

[9] How Chile combines competition and public funding, The Economist

[10] Principles of Good Practice in Learning Assessment, ACER, UIS and UNESCO


Aditi Nangia

Aditi Nangia is a part of the Private Governance team at CSF. The team’s vision is to create a thriving private education sector with a constant focus on increasing quality and innovation in learning outcomes, along with working with the government to create an enabling environment for the sector.