This post is all about using data to help identify underperforming groups of learners, or individual learners. We, like other schools, need to be more focussed on taking actions that will rescue our pupils from disappointing grades or failure than on the collecting and analysing of data itself.
Why does data matter?
Most schools around the world now use data in some form or other, recording information on learner attainment broken down by gender, ethnicity and so on, and, of course, measured against some pre-indicator of attainment such as CAT (cognitive ability test) scores.
The value of this is that it allows teachers and heads of department to identify underperforming individuals or groups of pupils and take action.
However, because it is now so easy to export data from computer systems, it has been arriving in ever larger swathes on the desks of teachers and heads of department, often in a 'raw' format which is difficult to handle and needs specific spreadsheet training to investigate. Given the varying backgrounds of staff and the inconsistency of teacher training, it is not fair to expect staff to be highly proficient in spreadsheet manipulation.
Despite this, most schools still supply teachers and heads of department with spreadsheets such as this and ask them to 'analyse' the data. Often the data may include some form of individual VA (value-added) score (a numeric value indicating how far below or above their predicted grade a learner is), which identifies learners who are under- or over-performing, but the rest is left in the hands of individuals to interpret and find patterns. A minimal sketch of how such a score works follows the image below.
Everybody loves a value-added spreadsheet
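To make the idea concrete, here is a minimal sketch of how a VA score might be derived from a raw export and used to flag underperformance. This is an illustration under assumptions, not our actual system: the column names, the grade scale and the −0.5 threshold are all hypothetical.

```python
import pandas as pd

# Hypothetical mark book: column names are illustrative, not our real export.
marks = pd.DataFrame({
    "learner":   ["Alice", "Ben", "Cara", "Dev"],
    "predicted": [6.0, 5.0, 7.0, 4.0],   # pre-indicator grade, e.g. CAT-based
    "actual":    [5.0, 5.5, 6.0, 4.5],   # latest teacher-entered grade
})

# VA score: positive means above prediction, negative means below.
marks["va"] = marks["actual"] - marks["predicted"]

# Flag anyone more than half a grade below prediction for discussion.
underperforming = marks[marks["va"] < -0.5]
print(underperforming[["learner", "va"]])
```

The point is that this single derived column already answers the question most staff are currently being asked to dig out of raw spreadsheets by hand.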
Lastly, this data is often used as a kind of post-mortem analysis of what went wrong, rather than as a trigger for intervention. Whilst this is perfectly acceptable for investigating how to improve curriculum delivery, it is not useful for helping learners right now.
How can we improve?
The UK has recognised this and recently published an interesting report, Eliminating unnecessary workload associated with data management. The report highlights a range of issues for schools (especially with the UK recently ditching levels), but the most interesting points for our school are the principles of effective data management, those being:
- Am I clear on the purpose? Why is this data being collected, and how will it help improve the quality of provision?
Two of our data collection cycles during the year are intended to track pupils, in groups and as individuals, and to highlight underperformance. Collecting the data allows us to take action before it is too late, and we have worked as a team to identify the best times to do this within the restrictions of our school calendar year.
As a side note, data such as this should not be used for the performance management of teachers; I come back to why under the validity principle below.
- Is this the most efficient process? Have the workload implications been properly considered, and is there a less burdensome way to collect, enter, analyse, interpret, and present the information?
The data collection systems we use (entering teacher marks onto engage) are the most efficient we have currently identified, but the analysis is currently not fit for purpose. We need a better way to allow equitable and consistent data discussions across different departments. The head of a larger department needs to consider class-by-class differences (relying on teachers to identify actions for individual learners), whereas a smaller department might be more focussed on individual learners.
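As a rough illustration of these two views, the sketch below aggregates a hypothetical export by class for a larger department, and filters straight to individual learners for a smaller one. The column names, class codes and threshold are again invented for illustration only.

```python
import pandas as pd

# Hypothetical export: one row per learner, with a VA score already computed.
df = pd.DataFrame({
    "learner": ["A", "B", "C", "D", "E", "F"],
    "class":   ["10X", "10X", "10Y", "10Y", "10Z", "10Z"],
    "va":      [0.5, -1.0, -0.8, -0.6, 0.2, 0.1],
})

# Larger department view: average VA per class to spot whole-class issues.
by_class = df.groupby("class")["va"].agg(["mean", "count"]).sort_values("mean")
print(by_class)

# Smaller department view: go straight to individual learners below a threshold.
print(df[df["va"] < -0.5].sort_values("va"))
```

Notice that class 10Y stands out as a whole-class concern, whereas learner B is an individual concern inside an otherwise mixed class; the same data supports both conversations.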
- Is the data valid?

There will probably always be some questions over the validity of CAT or other pre-indicator data in schools, but CAT data has been used for years and is linked to IQ tests, which have been in use since the start of the 20th Century. There are a number of educational psychologists and website articles supposedly 'debunking' IQ tests, but overwhelmingly, research points to IQ being heavily correlated with educational success, life expectancy and income, and to education being paramount in improving it.
Despite this, we should always err on the side of caution with data (and therefore not use it for performance management of teachers), as IQ tests are only 'generally' accurate: there will always be learners who, for example, meet the criteria of a high performer yet never perform to the expected standard.
In terms of the validity of the teacher-entered data, this depends very much on how robust and accurate our teacher-designed assessments are.
So, why does this affect me?
Our school has developed an effective cycle of data collection three times a year, but we are still a little too heavily focussed on the 'weighing of the pig': we hand out 'data analysis' sheets and expect staff to interrogate them for patterns, rather than focussing on the actions we need to take.
A new 'data conversation' sheet has therefore been devised. After each data collection cycle, this sheet will be distributed and should be used to inform discussions, both within the department and with SLT, on the planned actions to be taken, which may be in the form of:
- A Head of Department or nominated individual supporting a staff member with a class
- Improving differentiation for ELL / EAL learners
- Individual actions such as contacting parents, support classes etc.
- Longer term solutions such as changing curriculum delivery methods, assessments etc.
You can view the sheet here.