Year 5 Overview of Research

Year 5 Science assessment snapshot: Evaluating a science program

Assessment Principle 5: Assessment should lead to informative reporting

Reflecting on the Assessment Snapshot

Reliability is a central issue in reporting student performance. One aspect of reliability is whether an assessment provides enough information to establish a student's exact proficiency.

Professor Margaret Wu's paper 'Interpreting NAPLAN Results for the Layperson' provides advice on whether a single test can be used to reliably infer a student's proficiency.

Teachers and parents should be aware that a student's NAPLAN score on a test could fluctuate by about ±12%. Consequently, any use of an individual student's NAPLAN result should take this uncertainty into account. Remember that NAPLAN results are based on just one single test of limited test length. A sample of 40 questions is not sufficient to establish, with confidence, the exact numeracy proficiency of a student. The same caution applies to all subject areas tested (Wu, n.d., p. 2).
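To illustrate why a short test leaves this much uncertainty, the sketch below uses a deliberately simplified binomial model of a 40-item test. This is an illustration under assumed values, not the calculation behind Wu's ±12% figure: the assumed probability of a correct answer (0.6) and the use of an approximate 95% band are assumptions made only for this example.

```python
# Illustrative sketch only: a simplified binomial model of test-score
# uncertainty for a 40-item test. Not the model used in Wu's paper.
import math

n_items = 40    # test length, as in the NAPLAN example above
p_true = 0.6    # assumed true probability of answering an item correctly

# Standard error of the observed proportion correct under a binomial model
se = math.sqrt(p_true * (1 - p_true) / n_items)

# An approximate 95% band around the assumed true proportion
low, high = p_true - 1.96 * se, p_true + 1.96 * se

print(f"Standard error: {se:.3f}")                   # about 0.077
print(f"Approx. 95% band: {low:.2f} to {high:.2f}")  # about 0.45 to 0.75
```

Even under this simple model, a single 40-item test only locates a student's proportion correct within a band of roughly ±15 percentage points, the same order of magnitude as the fluctuation Wu asks teachers and parents to keep in mind.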

Margaret Wu's cautions about the limitations of NAPLAN data are relevant to this assessment snapshot. Teachers should use the information gathered from in-class observations and from their own assessment tasks, in conjunction with the results of this assessment, to inform their reporting of student performance and their evaluation of their science teaching.

Reliability can also be thought of in terms of the comparability of teacher judgements. In a review of research into the issues surrounding the use of teachers' judgements for summative purposes, Harlen (2005) recommended that:

  • Teachers should not judge the accuracy of their assessments by how far they correspond with test results but by how far they reflect the learning goals.
  • There should be wider recognition that clarity about learning goals is a [sic] needed for dependable assessment by teachers.
  • Teachers should be made aware of the sources of bias in their assessments, including the 'halo' effect, and school assessment procedures should include steps that guard against such unfairness.
  • Schools should take action to ensure that the benefits of improving the dependability of the assessment by teachers is sustained e.g., by protecting time for planning assessment, in-school moderation, etc.
  • School [sic] should develop an 'assessment culture' in which assessment is discussed constructively and positively and not seen as a necessary chore. The value for the practice of formative assessment of improving practice in summative assessment needs to be recognized (p. 267).

Reflection questions
  • What range of information do you draw on when you report student performance and evaluate your teaching?
  • Do your assessments reflect your learning goals?
  • Do you take steps to guard against sources of bias in your assessments?
  • Do you work with colleagues to analyse student performances so that your judgements are comparable?

Assessment Principle 6: Assessment should lead to school-wide evaluation processes

Reflecting on the Assessment Snapshot

In an evaluation of research on highly effective school leaders, Professor Geoff Masters found that effective school leaders gave high priority

… to the school-wide analysis and discussion of systematically collected data on student outcomes, including academic, attendance and behavioural outcomes. Data analyses consider overall school performance as well as the performances of students from identified priority groups; evidence of improvement/regression over time; performances in comparison with similar schools; and, in the case of data from tests such as NAPLAN, measures of growth across the years of school (Masters, 2010, p. 3).

It is important, however, that teachers as well as school leaders are closely involved in the evaluation processes. Masters' research shows that in effective schools, all teaching staff had access to a broad range of student achievement data and used it to analyse, study and display individual and cohort progress. They also set aside time for in-depth staff discussions of achievement data and of strategies for continuous improvement of student outcomes (Masters, 2010).

For a school to be a model learning organization, all faculty members should be professional learners: They should engage in deep, broad study of the learning they are charged to cause. What works? What doesn't? Where is student learning most successful, and why? How can we learn from that success? Where are students struggling to learn, and why? What can we do about it? Effectively tackling these questions is what the "professional" in "professional practice" means (Wiggins and McTighe, 2006, p. 26).

Reflection questions
  • What data do you collect?
  • How do you evaluate your teaching practice?
  • How do you use your evaluation to refine your teaching?
  • How do you work with colleagues to identify and evaluate both the intended and unintended consequences of any initiative or program?

References

Harlen, W. (2005). Trusting teachers' judgement: Research evidence of the reliability and validity of teachers' assessment used for summative purposes. Research Papers in Education, 20(3), 245-270.

Masters, G. (2010). Teaching and Learning School Improvement Framework. State of Queensland (Department of Education and Training) and The Australian Council for Educational Research. Retrieved from http://www.acer.edu.au/documents/C2E-Teach-and-learn-no-crop.pdf

Wiggins, G. & McTighe, J. (2006). Examining the Teaching Life. Educational Leadership, 63(6), 26-29.

Wu, M. (n.d.). Interpreting NAPLAN Results for the Layperson. Retrieved from http://www.appa.asn.au/index.php/appa-business/news-items/733-interpreting-naplan-results-for-the-lay-person-