EduResearch Matters

EduResearch Matters is a blog for educational researchers in Australia to get their work and opinions out to the general public. Please join us here. We would love to get your comments and feedback about our work.

What you should know now about the NSW government and Dolores Umbridge’s evil ways

The NSW Government has announced the creation of an ‘expert teacher’ role, to be paid almost $150,000 pa. While this could replace the ineffective Highly Accomplished and Lead Teacher (HALT) system, the work expected of these expert teachers is already part of many teachers’ standard practice, as social media was quick to highlight.

These reforms, while offering some positives, do not address teachers’ concerns about pay and conditions for all teachers, not just an elite few. The recent announcement of a behaviour expert to be appointed as part of the NSW government’s plan to resolve the teacher workload crisis was also met with derision online, with many invoking JK Rowling’s brutal disciplinarian, Dolores Umbridge, in their responses. Can Hogwarts solve the education crisis, or is that magical thinking?

Hogwarts School of Witchcraft and Wizardry offers a vision of an idealised (if at times fairly dangerous) school. As Katherine Firth explains, Hogwarts can be seen as an example of Michel Foucault’s construction of schools as sites of social control and discipline. In most readings, the teachers are part of the disciplining apparatus; however, I would argue that in the current circumstances, the teachers themselves are being subjected to discipline through the exertion of government power over their workplace.

At Hogwarts’ NSW campus, the Ministers for Magic(al Thinking) (State Premier Dominic Perrottet and Education Minister Sarah Mitchell) have waved their wands to produce resource packs, disregarding teachers crying out for more planning time to allow them to collaboratively develop materials suited to their students’ needs. Just as Umbridge’s secondment at Hogwarts was as much about ensuring staff compliance as student behaviour, so too does the appointment of a behaviour expert suggest that NSW teachers aren’t doing their job well, further perpetuating the media narrative that it is teachers failing their students, not the system failing the teachers. 

French philosopher and theorist Michel Foucault explores how schools, hospitals, the military and other large-scale public institutions work as sites of “discipline”, training individuals to comply with social expectations. These sites of discipline “establish in the body the constructive link between increased aptitude and increased domination” (166): the more you comply, the better. Teachers (products of the schooling system themselves) reinforce structures and behaviours for students that they replicate in their own work. Yet what happens when teachers reject these attempts at discipline?

Schools work in lesson-units, where both students and teachers are expected to be present, and behave in particular ways, according to a set schedule. All of these organisational structures are designed to promote compliance and therefore increase productivity. While analysis of this often focuses on the regimenting of the student’s day, it is also a disciplining act to which teachers are subject. The point of this scheduling is to allow for what Foucault labels “exhaustive use” – “extracting from time, ever more available moments and, from each moment, ever more useful forces”. As teachers shout into the online void about unsustainable workloads, the (disciplining) government seeks only “maximum speed and maximum efficiency”, refusing to concede that the workload is the problem. It is here we come to the (hor)crux of the teacher shortage crisis: the Ministry have lost their grip on teachers and they are in open revolt, just as McGonagall and the staff of Hogwarts united and fought back against He Who Must Not Be Named.

Just as McGonagall led an internal resistance to the unreasonable demands of the Ministry, so too have NSW teachers united to protest the expectations being forced upon them in recent strike action. They see the institution to which they have subscribed, for their own schooling and their careers, as fundamentally flawed, and they have begun to resist the powers that have sought to direct their conduct. This abandonment of and loss of faith in schools as an institution by the very workers who are meant to maintain them presents an existential crisis to our governments. If they wish to maintain schools in something resembling their current forms, they will need to change how they exert their power. How will higher pay for a select few entice people to the profession? How will the provision of generic resources, when teachers already have robust collegial networks for resource sharing, reduce workload and burnout? How will the maintenance of national systems of testing (a form of observation and control) reduce teacher stress? How will the appointment of one behaviour advisor make every classroom safer? These rewards for compliance, for docility, have lost their power for many teachers, and so they seek an escape, just as one would when wrongfully held in another institution Foucault describes – the prison.

If the governments which control Australia’s education sector hope to restore trust in schools, they must regain the good will of teachers, as their compliance is what makes the system work. Increased reward and reduced workload are the common elements of teachers’ calls for reform, not yet another consultant, off-the-shelf package or reward for a select few. Having pushed teachers beyond the limits of their productive capacity, and thus united them in protest and solidarity in a way that supersedes the partitioning of schools, sectors and states, governments must reform the institution itself if they hope to restore the magic of learning in our schools in the future.

Dr Alison Bedford is a lecturer (curriculum and pedagogy) in the School of Education at the University of Southern Queensland and a secondary school history teacher.

The good, the bad and the pretty good actually

Every year headlines proclaim the imminent demise of the nation due to terrible, horrible, very bad NAPLAN results. But if we look at variability and results over time, it’s a bit of a different story.

I must admit, I’m thoroughly sick of NAPLAN reports. What I am most tired of, however, are moral panics about the disastrous state of Australian students’ school achievement that are often unsupported by the data.

A cursory glance at the headlines since NAPLAN 2022 results were released on Monday shows several classics in the genre of “picking out something slightly negative to focus on so that the bigger picture is obscured”.

A few examples (just for fun) include:

Reading standards for year 9 boys at record low, NAPLAN results show 

Written off: NAPLAN results expose where Queensland students are behind 

NAPLAN results show no overall decline in learning, but 2 per cent drop in participation levels an ‘issue of concern’ 

And my favourite (and a classic of the “yes, but” genre of tabloid reporting)

‘Mixed bag’ as Victorian students slip in numeracy, grammar and spelling in NAPLAN 

The latter contains the alarming news that “In Victoria, year 9 spelling slipped compared with last year from an average NAPLAN score of 579.7 to 576.7, but showed little change compared with 2008 (576.9). Year 5 grammar had a ‘substantial decrease’ from average scores of 502.6 to 498.8.”

If you’re paying attention to the numbers, not just the hyperbole, you’ll notice that these ‘slips’ are in the order of 3 scale scores (Year 9 spelling) and 3.8 scale scores (Year 5 grammar). Perhaps the journalists are unaware that the NAPLAN scale ranges from 1-1000? It might be argued that a change in the mean of 3 scale scores is essentially what you get with normal fluctuations due to sampling variation – not, interestingly, a “substantial decrease”. 
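For a sense of scale, here is a minimal back-of-envelope sketch of those ‘slips’ expressed as effect sizes. The student-level standard deviation of about 70 scale scores is my illustrative assumption, not a figure from the report:

```python
# Back-of-envelope: how large are the reported "slips", practically?
# ASSUMPTION: a student-level SD of roughly 70 NAPLAN scale scores
# (illustrative only; not taken from the 2022 report).
sd = 70

for label, before, after in [("Year 9 spelling", 579.7, 576.7),
                             ("Year 5 grammar", 502.6, 498.8)]:
    drop = before - after
    d = drop / sd  # Cohen's d: the drop expressed in SD units
    print(f"{label}: {drop:.1f} scale scores, d = {d:.3f}")

# Both drops come out near d = 0.04-0.05: tiny by common benchmarks
# for effect sizes in education.
```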

The same might be said of the ‘record low’ reading scores for Year 9 boys. The alarm is caused by a 0.2 score difference between 2021 and 2022. When compared with the 2008 average for Year 9 boys the difference is 6 scale score points, but this difference is not noted in the 2022 NAPLAN Report as being ‘statistically significant’ – nor are many of the changes up or down in means or in percentages of students at or above the national minimum standard.

Even if differences are reported as statistically significant, it is important to note two things: 

1. Because we are ostensibly collecting data on the entire population, it’s arguable whether we should be using statistical significance at all.

2. As sample sizes increase, even very small differences can be “statistically significant” without being practically meaningful (as the sketch below illustrates).
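To make the second point concrete, here is a minimal sketch using assumed illustrative numbers – a student-level SD of 70 scale scores and two cohorts of 60,000 students each; neither figure comes from the NAPLAN reports:

```python
# Sketch: with cohort-sized samples, a trivial difference is "significant".
# ASSUMED numbers (not from any NAPLAN report): SD = 70, n = 60,000 per
# cohort, and a one-point difference between cohort means.
import math

sd, n, diff = 70, 60_000, 1.0

se = sd * math.sqrt(2 / n)   # standard error of the difference in means
z = diff / se
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided p-value

print(f"z = {z:.2f}, p = {p:.4f}")         # roughly z = 2.5, p = 0.013
print(f"effect size d = {diff / sd:.3f}")  # roughly 0.014: negligible
```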

Figure 1. NAPLAN Numeracy test mean scale scores for nine cohorts of students at Year 3, 5, 7 and 9.

The practical implications of reported differences in NAPLAN results from year to year (essentially the effect sizes) are not often canvassed in media reporting. This is an unfortunate omission and tends to enable narratives of large-scale decline, particularly because the downward changes are trumpeted loudly while the positives are roundly ignored.

The NAPLAN reports themselves do identify differences in terms of effect sizes – although the reasoning behind what magnitude delineates a ‘substantial difference’ in NAPLAN scale scores is not clearly explained. Nonetheless, moving the focus to a consideration of practical significance helps us ask: If an average score changes from year to year, or between groups, are the sizes of the differences something we should collectively be worried about? 

Interestingly, Australian students’ literacy and numeracy results have remained remarkably stable over the last 14 years. Figures 1 and 2 show the national mean scores for numeracy and reading for the nine cohorts of students who have completed the four NAPLAN years, starting in 2008 (notwithstanding the gap in 2020). There have been no precipitous declines, no stunning advances. Average scores tend to move around a little bit from year to year, but again, this may be due to sampling variability – we are, after all, comparing different groups of students. 

This is an important point for school leaders to remember too: even if schools track and interpret mean NAPLAN results each year, we would expect those mean scores to go up and down a little bit over each test occasion. The trick is to identify when an increase or decrease is more than what should be expected, given that we’re almost always comparing different groups of students (relatedly, see Kraft, 2019, for an excellent discussion of interpreting effect sizes in education).

Figure 2. NAPLAN Reading test mean scale scores for nine cohorts of students at Year 3, 5, 7 and 9.

Plotting the data in this way, it seems evident to me that, since 2008, teachers have been doing their work of teaching, and students by and large have been progressing in their skills as they grow up, go to school and sit their tests in Years 3, 5, 7 and 9. It’s actually a pretty good news story – notably not an ongoing and major disaster.

Another way of looking at the data, and one that I think is much more interesting – and instructive – is to consider the variability in achievement between observed groups. This can help us see that just because one group has a lower average score than another group, this does not mean that all the students in the lower average group are doomed to failure.

Figure 3 shows just one example: the NAPLAN reading test scores of a random sample of 5000 Year 9 students who sat the test in NSW in 2018 (this subsample was randomly selected from data for the full cohort of students in that year, N=88,958). The red dots represent the mean score for boys (left) and girls (right). You can see that girls did better than boys on average. However, the distribution of scores is wide and almost completely overlaps (the grey dots for boys and the blue dots for girls). There are more boys at the very bottom of the distribution and a few more girls right at the top of the distribution, but these data don’t suggest to me that we should go into full panic mode that there’s a ‘huge literacy gap’ for Year 9 boys. We don’t currently have access to the raw data for 2022, but it’s unlikely that the distributions would look much different for the 2022 results.  

Figure 3. Individual scale scores and means for Reading for Year 9 boys and girls (NSW, 2018 data).
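To illustrate what ‘almost completely overlapping’ distributions mean in practice, here is a minimal sketch. The 20-point gap and SD of 65 are assumed, illustrative values, not the actual 2018 NSW statistics:

```python
# Sketch: overlap between two score distributions whose means differ.
# ASSUMED illustrative numbers: a 20-point gender gap, SD = 65 for both
# groups, and normally distributed scores (the real data will differ).
import math

gap, sd = 20, 65

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Probability that a randomly chosen boy outscores a randomly chosen girl
p_boy_higher = phi(-gap / (sd * math.sqrt(2)))
print(f"P(random boy > random girl) = {p_boy_higher:.2f}")  # about 0.41

# Cohen's d and the overlapping coefficient for two equal-SD normals
d = gap / sd
overlap = 2 * phi(-d / 2)
print(f"d = {d:.2f}, distribution overlap = {overlap:.0%}")  # about 88%
```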

So what’s my point? Well, since NAPLAN testing is here to stay, I think we can do a lot better on at least two things: 1) reporting the data honestly (even when it’s not bad news), and 2) critiquing misleading or inaccurate reporting by pointing out errors of interpretation or overreach. These two aims require a level of analysis that goes beyond mean score comparisons to look more carefully at longitudinal trends (a key strength of the national assessment program) and variability across the distributions of achievement.

If you look at the data over time, NAPLAN isn’t a story of a long, slow decline. In fact, it’s a story of stability and improvement. For example, I’m not sure that anyone has reported that the percentage of Indigenous students at or above the minimum standard for reading in Year 3 has stayed pretty stable since 2019 – at around 83%, up from 68% in 2008. In Year 5 it’s the highest it’s ever been, with 78.5% of Indigenous students at or above the minimum standard – up from 63% in 2008.

Overall the 2022 NAPLAN report shows some slight declines, but also some improvements, and a lot that has remained pretty stable. 

As any teacher or school leader will tell you, improving students’ basic skills achievement is difficult, intensive and long-term work. Like any task worth undertaking, there will be victories and setbacks along the way. Any successes should not be overshadowed by the disaster narratives continually fostered by the 24/7 news cycle. At the same time, overinterpreting small average fluctuations doesn’t help either. Fostering a more nuanced and longer-term view when interpreting NAPLAN data, and recalling that it gives us a fairly one-dimensional view of student achievement and academic development would be a good place to start.

Sally Larsen is a Lecturer in Learning, Teaching and Inclusive Education at the University of New England. Her research is in the area of reading and maths development across the primary and early secondary school years in Australia, including investigating patterns of growth in NAPLAN assessment data. She is interested in educational measurement and quantitative methods in social and educational research. You can find her on Twitter @SallyLars_27

Why the federal government must ditch Job-Ready Graduates now

New figures challenge the assumptions behind the Job-Ready Graduates package, introduced by the former Coalition government and unchanged by Labor. That package has underestimated the value and employability of arts, social science and humanities graduates.

The employment outcomes of students enrolled in arts, social sciences and humanities degrees have risen to 89.6 per cent – an increase of 25 percentage points according to the Quality Indicators for Learning and Teaching (QILT) 2022 Longitudinal Graduate Outcomes Survey released this month.

The package, introduced under former Education Minister Dan Tehan in 2020 and implemented this year, has seen the cost for students of many arts, social science and humanities degrees more than double.

QILT’s longitudinal study shows that graduates in a wide range of disciplines, including arts, social sciences and humanities, are highly employable and that attempts to drive students into some fields at the expense of others are misplaced.

The report measures the medium-term outcomes of higher education graduates based on a cohort analysis of graduates who responded to the 2019 Graduate Outcomes Survey. 

It noted the figures around generalist degrees “continue to demonstrate an important point – that while undergraduates from some fields of education, in particular those with generalist degrees, have weaker employment outcomes soon after completing their course, the gap in employment outcomes across fields of education tends to narrow over time.”


It also states that while vocational degrees tend to have higher employment outcomes than generalist degrees in the short term “the gap in employment rates between those with vocational and generalist degrees diminishes over time”.

80 per cent of students following their passions

This research follows earlier findings from the Universities Admissions Centre Student Lifestyle Report. It found 81 per cent of the nearly 14,000 Year 12 students interviewed said passion would guide their choices for further study.

Four in five of last year’s high school graduates have said passion is their leading influence when choosing a degree, showing that the previous government’s attempts to drive enrolment numbers using fee increases were always likely to fail.

These statistics further disprove claims that fee increases would guide student preferences under the JRG.

DASSH is calling on the Federal Government to undertake university fee reform under the upcoming Accord, given the lack of evidence linking fee levels to job outcomes and career success more broadly.

Productivity Commission observations

In addition to results from the above reports, the Productivity Commission has recently made several key points about student fees being used as incentives in its 5-year Productivity Inquiry: From learning to growth. In this report the Commission finds that students are best placed to judge for themselves what education suits their interests and their aspirations.

The report rightly points out: “Government subsidies for tertiary education could be allocated more efficiently and equitably, without necessarily increasing the total amount of public funding.”

“Currently, governments set differential subsidies based on targeting public benefits and skill needs, but these have little impact on student choice because income-contingent loans eliminate upfront fees and make price differences less salient.”

Our members believe attempting to manipulate student preferences through price signalling is counterproductive to the aims of having an efficient and high-quality tertiary system.

DASSH strongly supports the evidence in the report that shows human capital will be more in demand in the future than ever before.

“As our reliance on the services sector expands, people’s capabilities (‘human capital’) will play a more important role than physical capital in improving productivity,” the report states.

“General and foundational skills will continue to underpin the workforce’s contribution to productivity, and as routine tasks are automated, newly created jobs will increasingly rely on areas such as interpersonal skills, critical thinking, working with more complex equipment, and accomplished literacy and numeracy.”

The skills described in the report are derived through the education of students in the arts, social sciences and humanities. It is impossible to know in advance what the value of these disciplines, or of specific courses offered within our degrees, will be, in part because of the rapidly changing nature of the labour market and the innovative ways in which knowledge is put to use in society.

The current price settings for arts, humanities and social sciences degrees were set without any evidence that they would work, or any consideration of the impact on current or future students.

Those degrees are valued by employers and provide a strong intellectual foundation for long-term career success. The JRG punishes students who want to pursue studies that are beneficial to them and society more broadly; a new and more equitable pricing level should be developed.

The Federal Government must commit to abandoning the policy which is putting our students at a significant financial disadvantage.


Nick Bisley is President of the Australasian Council of Deans of Arts, Social Sciences and Humanities. He is Dean of Humanities and Social Sciences and Professor of International Relations at La Trobe University. His research focuses primarily on Asia’s international relations, great power politics and Australian foreign and defence policy. Nick is a member of the advisory board of China Matters and a member of the Council for Security and Cooperation in the Asia-Pacific. Nick is the author of many works on international relations, including Issues in 21st Century World Politics, 3rd Edition (Palgrave, 2017), Great Powers in the Changing International Order (Lynne Rienner, 2012), and Building Asia’s Security (IISS/Routledge, 2009, Adelphi No. 408). He regularly contributes to and is quoted in national and international media including The Guardian, The Wall Street Journal, CNN and Time Magazine.

Distorted reports keep coming. This one will make you livid

What should we be talking about when we talk about teachers? Teachers’ pay, working conditions and the looming teacher shortage. 

What are media talking about instead? A commonly suggested ‘solution’ to address concerns about standards in teaching: pre-prepared lessons, or, as the Grattan Institute describes them in a recent report, ‘high quality teaching materials’. 

The Grattan Institute, a thinktank, notes in its summary of the report that “of 2,243 teachers and school leaders across Australia, … only 15 per cent of teachers have access to a common bank of high-quality curriculum materials for all their classes.”

In a departure from any claims to objectivity, the report paints a picture of teachers “being left to fend for themselves, creating lessons from scratch and scouring the internet and social media for teaching materials”.

This is yet another example of a large-scale survey conducted by those with only a tangential relationship to the profession. It ignores the views of many teachers and offers a ready-made solution – one likely to become another costly and wasted expense for taxpayers. It also fails to note that such approaches have been tried in some jurisdictions in Australia with limited success: the Curriculum-to-Classroom program in Queensland, for example, was found to be deficient. The surest outcome of such an approach would be a new revenue stream for specifically chosen edu-businesses as they rush to be selected as the provider of choice.

There is already a range of paid options for teachers to access similar resources through sites like Twinkl, Teachers Pay Teachers, TES and others – though we argue it is unreasonable for teachers to pay for any curriculum resources out of pocket. Even a ‘free’ version, however, seems misguided, because it does not pay attention to the work – and the expertise – that is central to teachers’ practice. And this practice includes the careful design and development of learning materials. This is not something that can be outsourced.

As we have argued, the positioning of highly trained, university-qualified teachers, many of whom have trained for four or more years, as vulnerable and ‘fending for themselves’ is odd.

Planning lessons and finding, curating and developing resources are central to the work of teachers. Many teachers take great delight in carefully crafting lessons that leverage students’ interests; education is not, and never has been, a one-size-fits-all model, and any claim otherwise undermines teachers, leaders and education support staff around Australia.

Teachers delivering content via a pre-prepared script or lesson might seem easier and simpler, but it remains difficult to see who benefits from a lifeless and unthinking teacher delivering someone else’s content. The key to teaching – and learning – lies in the human relationships between teachers and students. Those human relationships allow for careful contextualisation and design. It is that which drives teachers to search for just the right YouTube clip – the one that will appeal to that particular Year 9 Science class – not a sense of ‘fending for themselves’. Whereas teachers are accountable to their students, families and communities, creators of ‘teacher-proof’ lesson banks are accountable to their corporate employers. As Lucinda McKnight reflects, ‘who would we rather have designing learning experiences for our own children?’.

Our new book, Empowering Teachers and Democratising Schooling: Perspectives from Australia, takes empowering teachers as its focus and outlines alternative, human-centric ways that teachers can be trusted and empowered to make decisions about their work, with the shared goal of democratising approaches to education. By combining theory, academic thinking and teachers’ best-practice examples, the book provides a range of suggestions on many of the key challenges facing Australian education. For example, George Lilley outlines the way that teachers have been sidelined in favour of a rigorous adherence to educational research. Alex Wharton’s chapter imagines what an education system might look like, and how it might function, if teachers were respected across all facets of their domain.

Polly Dunning’s chapter articulates the range of pressures placed upon teachers – and the effects this has on children. Not surprisingly, the nature of lesson planning is not mentioned, but rather the rise of administrivia and additional expectations placed upon teachers without additional time or funding provided. 

As with many things in education, the best outcomes require humans to be empowered to find their own solutions. Education is filled with complex, ‘wicked’ problems, where solutions can take time and require contextual nuance. ‘Solutions’ such as those suggested by the Grattan Institute ultimately misconstrue the work of the teacher as technocratic and therefore something that can be standardised. Until we appreciate the complexity of what it means for teachers to teach, we will continue to be presented with claims of ‘teacher-proof’ policies and materials that ignore the diversity of students while disenfranchising the teaching profession.

If we are aiming to recruit and retain teachers, poorly thought-out solutions such as providing teachers with pre-prepared teaching materials, as suggested by the Grattan Institute, are not the answer. This will do little to reduce workload, and it will further damage the reputation of the teaching profession by limiting the expertise of teachers. These outcomes will do little to encourage people to become or remain teachers.

Instead, we must look towards long term solutions that recognise the expertise of the profession. Trust, empowerment and listening to the voices of the profession is key. 

Keith Heggart is an early career researcher with a focus on learning and instructional design, educational technology and civics and citizenship education. He is a former high school teacher, having worked as a school leader in Australia and overseas, in government and non-government sectors.

Steven Kolber is a teacher at a Victorian public school, the founder of #edureading, secretary of Teachers Across Borders Australia and a proud member of @AEUvictoria. #aussieED Global Teacher Prize top 50 Finalist.

Tom Mahoney is a teacher and educator of secondary VCE Mathematics and Psychology students, currently completing a PhD in Educational Philosophy part time through Deakin University. His research explores the influence of dominant educational ideologies on teacher subjectivity. You can keep up to date with Tom’s work via his fortnightly newsletter, The Interruption, via Substack. Tom is on Twitter @tommahoneyedu  

AERO responds to James Ladwig’s critique

AERO’s response is below, with additional comments from Associate Professor Ladwig. For more information about the statistical issues discussed, a more detailed Technical Note is available at AERO.

On Monday, EduResearch Matters published a post by Associate Professor James Ladwig which critiqued the Australian Education Research Organisation’s Writing development: what does a decade of NAPLAN data reveal?

AERO: This article makes three key criticisms about the analysis presented in the AERO report, which are inaccurate.

Ladwig claims that the report lacks consideration of sampling error and measurement error in its analysis of the trends of the writing scores. In fact, those errors were accounted for in the complex statistical method applied. AERO’s analysis used both simple and complex statistical methods to examine the trends. While the simple method did not consider error, the more complex statistical method (referred to as the ‘Differential Item Analysis’) explicitly considered a range of errors (including measurement error, and cohort and prompt effects).

Associate Professor Ladwig: AERO did not include any of that in its report nor in any of the technical papers. There is no over-time DIF analysis of the full score – and I wouldn’t expect one. All of the DIF analyses rely on data that itself carries error (more below). There is no way for the educated reader to verify these claims without expanded and detailed reporting of the technical work underpinning this report. This is lacking in transparency, falls short of the standards we should expect from AERO and makes it impossible for AERO to be held accountable for its specific interpretation of its own results.

AERO: Criticism of the perceived lack of consideration of ‘ceiling effects’ in AERO’s analysis of the trends of high-performing students’ results omits the fact that AERO’s analysis focused on the criteria scores (not the scaled measurement scores). AERO used the proportion of students achieving the top 2 scores (not the top score), for each criterion, as the metric to examine the trends. Given only a small proportion of students achieved a top score for any criterion (as shown in the report statistics), there is no ‘ceiling effect’ that could have biased the interpretation of the trends.

Associate Professor Ladwig made his ‘ceiling effect’ comments while explaining how the NAPLAN writing scores are designed, not in relation to the AERO analysis.

AERO: The third major inaccuracy relates to the comments made about the ‘measurement error’ around the NAPLAN bands and the use of adaptive testing to reduce error. These are irrelevant to AERO’s analysis because the main analysis did not use scaled scores, it did not use bands, and adaptive testing is not applicable to the writing assessment.

Associate Professor Ladwig’s comment was about the scaling in relation to explaining the score development, not about the AERO analysis.

In relation to AERO’s use of NAPLAN criterion score data in the writing analysis, however, please note that those scores are created either through scorer moderation processes or (increasingly, where possible) text-interpretative algorithms. Here again the reliability of these raw scores was not addressed, apart from one declared limitation, noted in AERO’s own terms:

Another key assumption underlying most of the interpretation of results in this report is that marker effects (that is, marking inconsistency across years) are small and therefore they do not impact on the comparability of raw scores over time. (p. 66)

This is where AERO has taken another shortcut, with an assumption that should not be made. ACARA has reported the reliability estimates needed to include that error in the scores analysis. It is readily possible to report those estimates and use them for trend analyses.

AERO: A final point: the mixed-methods design of the research was not recognised in the article. AERO’s analysis examined the skills students were able to achieve at the criterion level against curriculum documents. Given the assessment is underpinned by a theory of language, we were able to complement quantitative with a qualitative analysis that specifically highlighted the features of language students were able to achieve. This was validated by analysis of student writing scripts.

Associate Professor Ladwig says this is irrelevant to his analysis. The logic of this is also a concern. Using multiple methods and methodologies does not correct for any that are technically lacking. In relation to the overall point of concern, we have a clear example of an agency reporting statistical results in a manner that elides external scrutiny, accompanied by extreme media positioning. Any of the qualitative insights into the minutiae these numbers represent will probably be very useful for teachers of writing – but whether or not they are generalisable, big, or shifting depends on those statistical analyses themselves.

AERO’s writing report is causing panic. It’s wrong. Here’s why.

If ever there was a time to question public investment in developing reports using ‘data’ generated by the National Assessment Program, it is now, with the release of the Australian Education Research Organisation’s report ‘Writing development: What does a decade of NAPLAN data reveal?’

I am sure the report was meant to provide reliable diagnostic analysis for improving the function of schools. 

It doesn’t. Here’s why.

There are deeply concerning technical questions about both the testing regime which generated the data used in the current report, and the functioning of the newly created (and arguably redundant) office which produced this report.

There are two lines of technical concern which need to be noted. These concerns reveal reasons why this report should be disregarded – and why the media response is a beat-up.

The first technical concern for all reports of NAPLAN data (and any large scale survey or testing data) is how to represent the inherent fuzziness of estimates generated by this testing apparatus.  

Politicians and almost anyone outside of the very narrow fields reliant on educational measurement would like to talk about these numbers as if they are definitive and certain.

They are not. They are just estimates – and all of the summary statistics in these reports are just estimates.

The fact these are estimates is not apparent in the current report.  There is NO presentation of any of the estimates of error in the data used in this report. 

Sampling error is important, and, as ACARA itself has noted, (see, eg, the 2018 NAPLAN technical report) must be taken into account when comparing the different samples used for analyses of NAPLAN.  This form of error is the estimate used to generate confidence intervals and calculations of ‘statistical difference’.  

Readers who recall seeing survey results or polling estimates being represented with a ‘plus or minus’ range will recognise sampling error. 

Sampling error is a measure of the probability of getting a similar result if the same analyses were done again, with a new sample of the same size, with the same instruments, etc. (I probably should point out that the very common way of expressing statistical confidence often gets this wrong – when we say we have X level of statistical confidence, that isn’t a percentage of how confident you can be in that number, but rather the likelihood of getting a similar result if you did it again.)
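As a minimal sketch of how that ‘plus or minus’ range is produced, consider a 95% confidence interval for a mean. The SD of 70 and n of 60,000 below are assumed, illustrative inputs, not NAPLAN figures:

```python
# Sketch: the 95% "plus or minus" range around an estimated mean.
# ASSUMED illustrative inputs: SD = 70 scale scores, n = 60,000 students.
import math

sd, n = 70, 60_000
se = sd / math.sqrt(n)   # standard error of the mean
margin = 1.96 * se       # 95% margin of error

print(f"SE = {se:.3f}, 95% CI = mean ± {margin:.2f} scale scores")
# With cohort-sized n the interval is narrow, but it is never zero. And
# non-random absences (roughly 10% don't sit the writing test) can bias
# the estimate in ways this simple formula does not capture.
```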

In this case, we know about 10% of the population do not sit the NAPLAN writing exam, so we already know there is sampling error.  

This is also the case when trying to infer something about an entire school from the results of a couple of year levels. The problem here is that we know the sampling error introduced by test absences is not random, and accounting for it can very much change trend analyses, especially for sub-populations. So, what does this persuasive writing report say about sampling error?

Nothing. Nada. Zilch. Zero. 

Anyone who knows basic statistics knows that when you have very large samples, the amount of error is far less than with smaller samples.  In fact, with samples as large as we get in NAPLAN reports, it would take only a very small difference to create enough ripples in the data to show up as being statistically significant.  That doesn’t mean, however, the error introduced is zero – and THAT error must be reported when representing mean differences between different groups (or different measures of the same group).

Given the size of the sampling here, you might think it OK to let that slide. However, that isn’t the only shortcut taken in the report. The second most obvious measure ignored in this report is measurement error. Measurement error exists any time we create some instrument to estimate a ‘latent’ variable – i.e. something you can’t see directly. We can’t SEE achievement directly – it is an inference based on measuring several things we can theoretically argue are valid indicators of that thing we want to measure.

Measurement error is by no means a simple issue but directly impacts the validity of any one individual student’s NAPLAN score and any aggregate based on those individual results. In ‘classical test theory’ a measured score is made up of what is called a ‘true score’ and error (+/-). In more modern measurement theories error can become much more complicated to estimate, but the general conception remains the same. Any parent who has looked at NAPLAN results for their child and queried whether or not the test is accurate is implicitly questioning measurement error.
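Here is a minimal simulation of that classical test theory idea (observed = true + error); all the numbers are illustrative assumptions, not NAPLAN parameters:

```python
# Sketch of classical test theory: observed score = true score + error.
# ASSUMED illustrative numbers: true-score SD = 70, error SD = 30.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
true = rng.normal(500, 70, n)    # latent "true" achievement (unobservable)
error = rng.normal(0, 30, n)     # measurement error
observed = true + error          # what the test actually reports

# Reliability: the share of observed-score variance that is true-score
# variance (about 0.84 with these inputs)
print(f"reliability = {true.var() / observed.var():.2f}")

# An individual's reported score sits within about ±1.96 * 30 (roughly
# ±60 scale points) of their true score 95% of the time: the uncertainty
# a parent is implicitly querying.
```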

Educational testing advocates have developed many very mathematically complicated ways of dealing with measurement error – and have developed new testing techniques for improving their tests. The current push for adaptive testing is precisely one of those developments, in the local case rationalised on the grounds that adaptive testing (where the specific test items asked of the person being tested change depending on prior answers) does a better job of differentiating those at the top and bottom ends of the scoring range (see the 2019 NAPLAN technical report for this analysis).

This bottom/top of the range problem is referred to as a floor or ceiling effect. When a large proportion of students either don’t score anything or get everything correct, there is no way to differentiate those students from each other – adaptive testing is a way of dealing with floor and ceiling effects better than a predetermined set of test items. This adaptive testing has been included in the newer deliveries of the online form of the NAPLAN test.

Two important things to note. 

One, the current report claims the scores of high-performing students have shifted down – despite new adaptive testing regimes obtaining very different patterns of ceiling effect. Two, the test is not identical for all students (it never has been).

The process used for selecting test items is based on ‘credit models’ generated by testers. Test items are determined to have particular levels of ‘difficulty’ based on the probability of correct answers being given by different populations and samples, after assuming population-level equivalence in prior ‘ability’ AND creating difficulty scores for items while assuming individual student ‘ability’ measures are stable from one time period to the next. That’s how they can create these 800-point scales that are designed for comparing different year levels.
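For readers unfamiliar with this style of model, here is a minimal sketch of the core idea behind Rasch-type item response models, where the probability of a correct answer depends on the gap between a student’s ‘ability’ and an item’s ‘difficulty’. This is a simplification offered for intuition only; NAPLAN’s actual calibration is more elaborate:

```python
# Sketch of a Rasch-type item response model: P(correct) depends only on
# the gap between ability and difficulty, both on the same logit scale.
# Simplified for intuition; NAPLAN's actual calibration is more elaborate.
import math

def p_correct(ability: float, difficulty: float) -> float:
    return 1 / (1 + math.exp(-(ability - difficulty)))

for ability in (-1.0, 0.0, 1.0):
    easy, hard = p_correct(ability, -1.0), p_correct(ability, 1.0)
    print(f"ability {ability:+.1f}: P(easy item) = {easy:.2f}, "
          f"P(hard item) = {hard:.2f}")

# Fitting difficulties to response data (while assuming stable abilities
# across groups and over time) is what underpins a single reporting scale
# spanning different year levels.
```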

So what does this report say about any measurement error that may impact the comparisons they are making?  Nothing.

One of the ways ACARA and politicians have settled their worries about such technical concerns as accurately interpreting statistical reports is by introducing the reporting of test results in ‘Bands’. Now these bands are crucial for qualitatively describing rough ranges of what the number might mean in curriculum terms – but they come with a big consequence. Using ‘Band’ scores is known as ‘coarsening’ data – when you take a more detailed scale and summarise it in a smaller set of ordered categories – and that process is known to increase any estimates of error. This latter problem has received much attention in the statistical literature, with new procedures being recommended for how to adjust estimates to account for that error when conducting group comparisons using that data.
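A minimal sketch of why coarsening inflates error, using assumed illustrative numbers (a normal score distribution and a hypothetical band width of 52 scale points):

```python
# Sketch: collapsing a fine scale into bands adds error of its own.
# ASSUMED illustrative numbers: scores ~ Normal(500, 70), band width 52.
import numpy as np

rng = np.random.default_rng(2)
scores = rng.normal(500, 70, 50_000)

band_width = 52                           # hypothetical band width
bands = np.floor(scores / band_width)     # which band each student lands in
band_scores = (bands + 0.5) * band_width  # band midpoint stands in for score

extra_error = band_scores - scores
print(f"SD of error added by banding = {extra_error.std():.1f} scale scores")
# Roughly band_width / sqrt(12), about 15 points of added noise, on top of
# the sampling and measurement error already present: the inflation the
# statistical literature recommends adjusting for.
```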

As before, the amount of reporting of that error issue? Nada.

 This measurement problem is not something you can ignore – and yet the current report is worse than careless on this question.

It takes advantage of readers not knowing about it. 

When the report attempts to diagnose which components of the persuasive writing tasks were of most concern, it does not bother reporting that each of the separate measures of those ten dimensions of writing carries far more error than the total writing score, simply because the number of marks for each is a fraction of the total. The smaller the number of indicators, the more error (and less reliability).

Now all of these technical concerns simply raise the question of whether or not the overall findings of the report will hold up to robust tests and rigorous analysis – there is no way to assess that from this report. But there is an even bigger reason to question why it was given as much attention as it was. That is, for any statistician, there is always a challenge to translate the numeric conclusions into some form of ‘real life’ scenario.

To explain why AERO has significantly dropped the ball on this last point, consider its headline claim that Year 9 students have had declining persuasive writing scores, and its representation of that as a major new concern.

First note that the ONLY reporting of this using the actual scale values is a vaguely labelled line graph showing scores from 2011 until 2018 – skipping 2016, since the writing task that year wasn’t for persuasive writing (p. 26 of the report has this graph). Of those year-to-year shifts, the only two that may be statistically significant, and are readily visible, are from 2011 to 2012, and then again from 2017 to 2018. Why speak so vaguely? From the report, we can’t tell you the numeric value of that drop, because there is no reporting of the actual number represented in that line graph.

Here is where the final reality check comes in.  

If this data matches the data reported in the national reports from 2011 and 2018, the reported mean values on the writing scale were 565.9 and 542.9 respectively. So that is a drop between those two time points of 23 points. That may sound like a concern, but recall those scores are based on 48 marks given for writing. In other words, that 23-point difference is no more than one mark difference (it could be far less, since each different mark carries a different weighting in the formulation of that 800-point scale).

Consequently, even if all the technical concerns get sufficient address and the pattern still holds, the realistic version of the Year 9 claim would be: ‘Year 9 students in the 2018 NAPLAN writing test scored one mark less than the Year 9 students of 2011.’

Now, assuming that 23-point difference has anything to do with the students at all, start thinking about all the plausible reasons why students in that last year of NAPLAN may not have been as attentive to details as they were when NAPLAN was first getting started. I can think of several, not least being the way my own kids did everything possible to ignore the Year 9 test – since the Year 9 test had zero consequences for them.

Personally, I find these reports troubling for many reasons, including the use of statistics to assert certainty without good justification, but also because saying student writing has declined belies the obvious fact that it hasn’t been all that great for decades. This is where I am totally sympathetic to the issues raised by the report – we do need better writing among the general population. But using national data to produce a report of this calibre, by an agency beholden to government, really does little more than provide click-bait and knee-jerk diagnosis from all sides of a debate we don’t really need to have.

James Ladwig is Associate Professor in the School of Education at the University of Newcastle.  He is internationally recognised for his expertise in educational research and school reform.  Find James’ latest work in Limits to Evidence-Based Learning of Educational Science, in Hall, Quinn and Gollnick (Eds) The Wiley Handbook of Teaching and Learning published by Wiley-Blackwell, New York. James is on Twitter @jgladwig

AERO’s response to this post

ADDITIONAL COMMENTS FROM AERO were provided on November 9. AERO’s full response, with additional comments from Associate Professor Ladwig, is reproduced above under ‘AERO responds to James Ladwig’s critique’. For more information about the statistical issues discussed, a more detailed Technical Note is available at AERO.

It’s so dramatic: what new play Chalkface gets very right about being a teacher

Chalkface is a new play about teachers, currently being staged by the Sydney Theatre Company and playing at the Sydney Opera House.

Written by Angela Betzien, Chalkface is advertised as a black comedy in which an ‘old school’ teacher clashes with a bright-eyed newbie. As a researcher of teachers’ work and a Head Teacher Science, we hoped the play would be a fresh take on the profession we live every day; that it would make us laugh, and maybe even have something insightful to say.

We went to see Chalkface last Saturday evening and overall, we think that the play gets a lot right.

From the peeling paint to the old mismatched chairs, out-of-service hot water tap and enormous tin of Nescafe Blend 43, the set is without question the quintessential public school staffroom in Australia. There is even a resident rat* eating through the precious limited stock of coloured paper in the supply cupboard. Meanwhile, one teacher tells the newbie teacher to note the school’s general scent, which he describes as “old fart”.

Between us, we’ve spent a lot of time in a lot of schools. While some are newer or better-resourced than others, we can tell you that this is generally a pretty accurate representation of public education in NSW. (The “old fart” smell in particular seems, curiously, universal.)

What the accuracy of the play’s set and its jokes about a lack of resources reflect, however, is the systemic underfunding and poor maintenance of government educational facilities, which will not be news to any local audience. It is well known that Australia has a problem with educational equity, and the play takes frequent jabs at wealthy private sector schools. While the teachers in the play guzzle Blend 43 and rotate cleaning shifts, for example, the private school up the road has apparently just hired a full-time barista for its staff. The contrast here is stark, and while not all private schools are hugely wealthy, some of them certainly are; despite years of debate about developing a ‘needs-based’ funding system, we aren’t there yet.

Chalkface doesn’t end its commentary on education policy with resourcing, however. The school principal wants teachers to focus on NAPLAN preparation at the expense of richer learning activities, as he angsts about the possibility of losing students to other schools. This experience has a sound basis in research; the marketisation of education through the ongoing encouragement of parental ‘choice’ and the displaying of NAPLAN results on the My School website has had well-documented flow-on effects of ‘teaching to the test’.

Nicknamed ‘Thatcher’, the principal is renowned within the school for his austerity, even stealing kids’ lunchboxes from the lost and found. His anxiety about the school’s budget reflects not only an overall lack of funding, but also the current positioning of principals as school ‘business managers’ who carry a larger share of financial responsibility for the running of the school. Our bright-eyed newbie teacher, for instance, is on a temporary contract, which she is told is because she is cheaper; the rise in fixed-term contract work in teaching is also a current issue for the profession.

The relationship between ‘Thatcher’ and the rest of the teachers in the school is, indeed, fractious. The principal establishes a ‘suggestions’ box which is derided, by everyone other than him, as a “black hole”. Again we see resonance with current themes in policy and research: under autonomous schooling conditions, principals in NSW have been described as chronically over-worked, their attention diverted from engaging with staff perspectives and working conditions.

As for the teachers themselves, the divide between the ‘old’/experienced, and young/‘new’ teacher may seem stereotypical, yet also raises important questions around teacher burnout. One of the discomforts we felt while watching Chalkface was the way in which the teachers, especially those more experienced, talked about their students. Usually these were jokes but they were always disparaging, and not always funny. ‘Deficit’ talk – where students and/or their families are described as lacking, either in intelligence or desirable social norms – is indeed rife in teaching and probably does rear its ugly head most frequently in school staffrooms. It can serve to support cycles of poor academic outcomes for populations of students experiencing forms of educational disadvantage. It can also be linked to burnout, as indeed the younger teacher in the play identifies: one of the three dimensions of burnout is ‘depersonalisation’, and we see much of this in the staffroom talk of Chalkface.

Also of concern is the raft of rather alarming health conditions the teachers experience, caused by their jobs. One has a damaged coccyx from having a student pull a chair out from underneath him; another has spent the summer holidays in a psychiatric ward after being locked in a cupboard overnight by a student. These are extreme examples. They are funny, but they are also not funny, reflecting genuine, current concern with teacher wellbeing.   

There are some positive outcomes in Chalkface. The two women teachers who are the main characters learn and grow from each other, and it’s genuinely enjoyable to see them do so. But that is about it. Ultimately, nothing is done about the inequity the school faces and the difficulty of these teachers’ jobs. In fact, most of the teacher characters leave this under-resourced school by the end of the play.

Chalkface lands on a description of pedagogy as the “art and science of hope”. Generally, the play feels authentic. It made us laugh, and sometimes grimace in frustrated recognition. But ultimately, its ending portrays a bleak situation for public school education. We hope this part isn’t accurate, although we worry that it is.

*Spoiler alert: it turns out, it’s not a rat.

Meghan Stacey is a senior lecturer in the UNSW School of Education, researching in the fields of the sociology of education and education policy. Taking a particular interest in teachers, her research considers how teachers’ work is framed by policy, as well as the effects of such policy for those who work with, within and against it.


Nigel Kuan is Head Teacher Science at Inner Sydney High School. He holds a Bachelor of Science with Honours in Physics Education and a Master of Teaching. Nigel has presented at Teach Meets and other practitioner forums, and takes a particular interest in student engagement and scientific literacy.

Header images from the Sydney Theatre Company. Photo: Prudence Upton. From left to right: Susan Prior, Stephanie Somerville, Catherine McClements and Nathan O’Keefe in Sydney Theatre Company’s Chalkface, 2022.

Open access: why we must break down the paywalls now

Publication via open access increases visibility, reach and impact of educational research, so why are we still hitting our heads against paywalls?

Why do we publish? For attention, for funding and for impact. The quality research undertaken by Australian education researchers has the best chance of impact if it is readily accessible to educators and policymakers. So why is the important voice of research left out of policy and practice?

There is no question that one major obstacle is the lack of timely access to published research. Unless you are a researcher with an academic library login, paywalls, restrictive terms of access, and the time it takes to get hold of legally published research shut out the very people who need this information.

Traditional publishing model 

Why are our organisations paying excessive amounts to both publish and access (often) publicly funded research? In case anyone needs a refresher on the traditional academic journal publishing model:

  1. Academics receive funding via grants (taxpayer-funded) to conduct research.
  2. Research findings, in the form of academic papers and book chapters, are reviewed (mainly for free) by their academic peers.
  3. Authors rely on their institutions to pay publication costs.
  4. Once published, institutions pay again to get access to these works via library subscriptions.

So, who benefits from this model? Certainly not those outside the academic ecosystem, nor those inside it. The costs are borne primarily by researchers and their institutions (which pay twice), while publishers make a significant profit (for example, a major academic publisher now has a profit margin heading towards 40%, higher than companies like Microsoft and Google).

OA benefits all

Next week is Open Access Week (24-30 October), which provides an opportunity for the academic and research community to stop and rethink why we do what we do. So, what is open access (OA)?

Open Access is the free, immediate, online availability of research articles combined with the rights to use these articles fully in the digital environment (SPARC).

The momentum towards open access publishing is not without challenges and misrepresentation, and fair and equitable access to Australian education research remains elusive.

The quality question 

Persistent misconceptions remain around OA, which is often conflated with predatory publishing. Quality discussions in 2022 are more nuanced. In order to be indexed by the Australian Education Index, ERIC, Scopus, et al., all journals (no matter their publishing model) are evaluated for adherence to quality standards, including the application of peer review. The most comprehensive index of OA journals, DOAJ, follows rigorous principles for transparency and best practice in scholarly publishing that must be met for inclusion. Quality and credibility should be vested in the research itself, not simply in the place of publication. While business models are very much in flux, scepticism is healthy when applied to both not-for-profit and big-name commercial players.

Re-route around the paywall

As members of educational organisations and academia we are not the only consumers of research. OA ensures that those outside of the institutional bubble, including those responsible for policy and information provision, have access to timely research. It also supports collaborative research without restrictions on a global scale. 
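As a practical aside that goes beyond this post: indexes of legal open-access copies already make this kind of re-routing possible. The minimal sketch below (in Python, using the third-party requests library) queries the Unpaywall API – a free index of OA locations – for the best open-access copy of an article; the DOI and email address shown are placeholders, not real records.

    # Minimal sketch: find a legal open-access copy of an article by DOI,
    # via the Unpaywall API (https://unpaywall.org/products/api).
    # The DOI and email address below are placeholders.
    import requests

    def find_oa_copy(doi: str, email: str) -> str | None:
        """Return a URL for an indexed open-access copy of the article, if any."""
        resp = requests.get(
            f"https://api.unpaywall.org/v2/{doi}",
            params={"email": email},  # Unpaywall asks callers to identify themselves
            timeout=10,
        )
        resp.raise_for_status()
        best = resp.json().get("best_oa_location")
        return best["url"] if best else None

    url = find_oa_copy("10.xxxx/example-doi", "reader@example.org")
    print(url or "No open-access copy indexed for this DOI.")

Browser tools such as the Unpaywall extension and the Open Access Button do much the same lookup for readers who would rather not script it.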

Internationally, there is a lot in play around research access and discoverability. The emphasis on OA is significant in light of the White House (OSTP) memo on equitable access to research, and the release of the UNESCO Recommendation on Open Science.

Nationally, the OA policy revision from Australia’s leading health and medical research body removes the current 12-month embargo on the release of NHMRC-funded research from 2023, and all such research will be openly licensed (ensuring and clarifying re-usability). If open access is good for science and medicine, it’s just as crucial for education.

If governments and research organisations are mandating equitable and timely access, the writing on the ‘paywall’ seems inevitable: it is feasible to expect more favourable changes in terms of access to all research.

That’s not to say that for-profit publishers are not embracing OA – many are. But while they might be removing paywalls to access the research, they have generally adopted an author-pays model via article processing charges (APCs), again contributing to inequitable publishing opportunities for authors. Some have noted that major publishers are now leveraging the OA movement for their own benefit, and time will tell as to the success of transformative agreements between institutions and publishers.

Taking back control of the education research landscape

Improving access to education research requires an informed approach to navigating the OA landscape. Put simply, it comes down to the purpose of education research: to affect and improve educational practice and policy. Journals are not the only vehicle for sharing research. Institutional and subject-specific repositories are valuable in terms of discoverability and expanding the use of research to a wider and diverse community. Repositories are central locations for institutional content and “have a critical role in archiving, preserving and sharing the diverse content produced by universities so it can be used by others and have the greatest impact on our society”.

Disciplinary repositories offer a bespoke OA alternative, and contribute to the visibility of discipline-specific research. We have recently been involved with a project that highlights actions that improve access to tertiary education research. The Universities Australia Learning and Teaching Repository (LTR) collates higher education learning and teaching research. Initially LTR archived the learning and teaching project reports and outputs funded by the Australian Government from 1994-2018. Repository content is openly licensed and in 2021 LTR undertook a pilot activity to index selected articles from the education-focused, OA Student Success journal. This pilot activity provides two obvious benefits in terms of educational research: 

  1. LTR, as a resource, is sustained by contemporary scholarly research, filling the void left by the removal of consistent education research funding in Australia.
  2. As an OA tool, LTR curates and amplifies educational research for an educational community that extends beyond our institutions.

What can researchers do?

As education researchers, there is a lot we can do to make the OA model work for us.

Publish and review in OA journals

When thinking about where to donate your time as a reviewer or author, consider open access options. Read your publishing contract: if you transfer your copyright to a publisher, can your own institution re-use your published content in its teaching? Most OA journals apply Creative Commons licenses, which articulate rights to reuse and mean immediate and free access to your work.

Deposit in relevant institutional and education repositories

Give your institution’s repository plenty of love. Depositing your research in an institutional repository improves discoverability and impact. Most recently, the Curtin Open Knowledge Initiative examined citation activity in the OA landscape and noted that “Open access through disciplinary or institutional repositories showed a stronger effect than open access via publisher platforms”.

Advocate for Open Access

There is value in an active (and collective) academic voice when it comes to advocating for OA to educational research. First and foremost, academic libraries and the information professionals working in this space are well-placed to help researchers navigate the changing publishing landscape. Secondly, keep abreast of international and national OA organisations and their activities. Open Access Australasia is a ‘go-to’ central source for current information and resources. Get your institution on board if they are not already members. Finally, if you are an editorial member or a reviewer for a paywalled educational research publication, start a conversation around the value of OA in your own community – what have you got to lose?

Tracy Creagh is Journal Manager, Academic Journals in the Office for Scholarly Communication at Queensland University of Technology (QUT). She manages three of QUT’s five open access, peer-reviewed scholarly journals and leads the institution’s Open Access (OA) Journals Community of Practice, dedicated to sharing and contributing to best practice in open scholarship. She is also Managing Editor of the Student Success journal. Tracy is on Twitter @creaght

Pru Mitchell is Manager, Information Services at the Australian Council for Educational Research and adjunct lecturer, School of Information and Communication Studies at Charles Sturt University. She is a long-term advocate for open education resources and her research interests include metadata standards for digital collections that enhance the discoverability of open access content. Pru is on Twitter @pru

‘My attitude COMPLETELY changed’: why universities should move new teachers from resentment to respect for Indigenous Australia as we vote on the Voice


Editor’s note: One of the biggest challenges in Australian education is how we embed an understanding of Indigenous cultures and knowledges. As Australia approaches a vote on The Voice, universities have a responsibility to change Initial Teacher Education (ITE) to incorporate cultures and knowledges appropriately. Students enrolled in ITE already have views on what they will learn in their compulsory courses – and those views are confronting. How can educators move students from feeling uncomfortable and scared to being bold and prepared? This is a longer blog post than usual – but in it, Quandamooka scholar Dr Mitchell Rom explores how we might produce a teaching workforce that places sufficient value on Indigenous knowledges and perspectives.

In June this year, the Australian Institute for Teaching and School Leadership (AITSL) released its final report Building a culturally responsive Australian teaching workforce as part of its Indigenous cultural competency project. This national report is a progressive step in the right direction towards raising awareness and understanding of how to better support Indigenous students in schools. The term “cultural competency” is defined in the report as “When organisations and individuals…expand their cultural knowledge and resources in order to better meet the needs of minority populations” (Cross et al., 1989, as cited in AITSL, 2022, p. 35). The term “cultural responsiveness” is also used in the report, which states that “Being ‘culturally responsive’, in the context of Australian schools, is the ability to respond to the diverse knowledges, skills and cultural identities of Aboriginal and Torres Strait Islander students” (AITSL, 2022, p. 9).

Prepared over three years, the report suggested that ITE should play a key role in developing the cultural competency and responsiveness levels of pre-service teachers or future teachers in Indigenous education. It stated “It is critical that ITE programs prepare teachers for the wide range of students they may teach, including Aboriginal and Torres Strait Islander students” (AITSL, 2022, p. 17). The report further recommended that “Aboriginal and Torres Strait Islander content should be included as a mandatory unit of study or indeed, mandatory cross-curricula focus, within ITE programs” (AITSL, 2022, p. 6). Having recently completed my PhD in ITE and compulsory Indigenous education, I agree with these statements and the importance of ITE programs. However, the recent report unfortunately does not acknowledge that the Indigenous education space at university is filled with colonising and complex challenges for academic teaching teams and pre-service teachers. Before looking to ITE programs as one of the answers to improving the cultural competency of teachers, it is important to consider the varied challenges linked to Indigenous education courses in these programs.

Study findings

My PhD study was grounded in the context of the Australian Professional Standards for Teachers (APST), Graduate Standards 1.4 and 2.4, which were introduced by AITSL in 2011. These two national standards emphasise teaching Indigenous students (1.4) and teacher skills and knowledge around reconciliation in schools (2.4) (AITSL, 2011). Universities have now turned their attention to preparing our future teachers to be able to meet these national standards and develop cultural competency in this area through offering Indigenous education courses.

The study specifically focused on the key learning, teaching and education policy challenges situated in contemporary Indigenous education courses at university. The study involved 174 non-Indigenous pre-service teachers from an elite Queensland university who were studying a compulsory Indigenous education course. It also involved five academic teaching staff from the same course, as well as myself as a Quandamooka teacher who has taught in three Indigenous education courses across two Queensland universities. Using a storying methodological approach (Phillips & Bunda, 2018), the research identified a total of 11 key challenges across three interrelated areas: the university classroom (lecture or tutorial context), the broader university institution, and education policy, namely AITSL’s APST 1.4 and 2.4.

Pre-service teacher journeys in Indigenous education

Pre-service teachers can have varied experiences of studying Indigenous education at university. The study found that some pre-service teachers were willing to engage with Indigenous education from the commencement of the course. For example, one pre-service teacher shared “At the beginning of the course, I felt excited and ready to expand my views and knowledge”. Another pre-service teacher noted “I felt increasingly comfortable with the way everything was taught and became more understanding, appreciative and open minded about Indigenous ways of knowing, being & doing”. Some pre-service teachers began the course displaying resistance towards Indigenous education and then were able to change their attitude and position as the course progressed. In addition, some pre-service teachers remained resistant learners throughout the entirety of the course, despite the efforts of teaching teams and national policy agendas such as APST 1.4 and 2.4. Overall, the research found that many Queensland pre-service teachers experienced challenges navigating a compulsory Indigenous education course within their ITE program.

Stepping into the course, 126 pre-service teachers shared that they had mixed initial views towards learning Indigenous education. One pre-service teacher stated “I was wary [the course] would be wrapped in anti-Western rhetoric and would focus on demonizing Western culture”. Another pre-service teacher shared “I didn’t have a high opinion or high expectation from the course title alone and felt apprehensive going into this course mainly due to what people had said about previous semesters (most people told me this course sucked)”. Other words used to describe student feelings towards beginning Indigenous education included “uncomfortable”, “borderline apathetic”, “unimpressed”, “confused”, “confronted”, “dreading it”, “pointless”, “guilty”, “very hesitant” and “unprepared”. Moreover, 66 pre-service teachers shared that they had limited engagement, knowledge and understanding with regard to Indigenous education prior to university. In the study, pre-service teachers shared that “I’d never had much to do with Indigenous studies in my schooling so I wasn’t sure what to expect in this course” and “My experiences [with Indigenous studies] at school were mostly tokenistic”.

Within the university classroom, nearly 40 pre-service teachers showed a level of resistance towards studying the course. One pre-service teacher commented on the compulsory nature of the course and stated “I was not looking forward to beginning the course and was extremely unhappy it was compulsory since I knew I’d be thought of as a middle-class white male that oppressed everyone”. In relation to being introduced to the concept of white privilege in class, another pre-service teacher shared “I didn’t like how the tutorials made me feel. It felt like the teaching staff would make activities that addressed how white privileged I was and almost make me feel shit about being white”.

As the course progressed during the semester, a number of pre-service teachers were able to shift their attitudes regarding Indigenous peoples and education. For example, one pre-service teacher wrote “My perspectives, understandings and attitude around Indigenous education have COMPLETELY changed but I also believe that this was attributed to the study habits and attitudes I brought into tutorials and lectures”. Another pre-service teacher shared “In class, I was continually faced with situations where I would think ‘But that’s not my fault’, but was able to stop and transform my understanding so that I could use my white privilege to ensure the deserved respect is given to the first peoples of Australia”. These student experiences demonstrate learning, understanding and growth in this contested learning and teaching space. I am confident these non-Indigenous pre-service teachers will continue to work to strengthen Indigenous education as allies.

Unfortunately, the study also highlighted various inconsistencies in relation to pre-service teacher development. In the final week of studying the course, one pre-service teacher mentioned “I would say that the nature of this course has made me look at Indigenous education more negatively”. Another pre-service teacher shared “Right now, I feel confused and this course has left me more scared of teaching Aboriginal and Torres Strait Islander students than I was before”. On finishing the course, one hesitant pre-service teacher wrote “While I write this in the final week of the course, I still do not feel like I’m prepared to meet [APST] 1.4 & 2.4”.

Moving forward

Some pre-service teacher experiences shared above highlight an education system that is gradually shifting towards a greater respect for Indigenous education matters. While this is positive, the findings also reflect a current system (particularly in relation to some schools in Queensland) that does not place sufficient value on Indigenous knowledges and perspectives. This is reflected in many pre-service teacher comments around their previous learning and their own ill-preparedness to commence the Indigenous ITE course.

In light of this, and the broader study findings, education stakeholders including AITSL need to be aware that improving cultural competency requires an understanding of the complex challenges situated in compulsory Indigenous education. It also requires a recognition that there are key challenges that sit outside the control of academic teaching teams, in particular pre-service teachers arriving at university ill-prepared from schools and remaining resistant to studying Indigenous education.

Broadly speaking, for these challenges to improve, educational institutions at all levels (from primary schools to universities), and those who lead and work in these social institutions, need to continue to shift and change. This is needed so that by the time our pre-service teachers commence Indigenous education studies at university, they are better equipped to navigate these spaces and become culturally competent and responsive practitioners. One way of doing so is by placing greater value on Indigenous knowledges and perspectives in schools, including ways of thinking that seek to challenge the colonial status quo. This will support our future teachers to be more effectively prepared when working with Indigenous students in our schools, and will continue to strengthen Indigenous education.

Mitchell is a postdoctoral researcher interested in Decolonial studies, Education and Health. His PhD focused on the key learning, teaching and education policy (Australian Professional Standards for Teachers 1.4 and 2.4) challenges situated in the contemporary Indigenous Australian education space at university. He initially trained as a secondary school teacher in the disciplines of English and History in Queensland and has studied Education for over a decade. He has also taught at university, published, and worked across various levels of education. As a Quandamooka researcher, Mitchell is interested in discussing social matters with like-minded scholars for positive community change. Mitchell is an advocate of strength-based thinking and decoloniality. Contact him on LinkedIn.

Image in header is Ren Perkins, a PhD student undertaking research with and about Aboriginal and Torres Strait Islander teachers, in action.

Beyond koalas, UN organisations work to put climate change education on the agenda

Climate literacy is the biggest educational challenge of our time. Even as the Australian government’s recent moves signal productive steps toward climate action, there is still much work to be done in addressing climate change, and climate literacy can help with this.

What is climate literacy? It’s an understanding of how climate change works, but also includes feeling motivated and able to participate in meaningful change for the planet. Where climate literacy is established, political will for action will follow.

We can achieve this through education – as we rapidly approach catastrophic ecological thresholds and tipping points, education is increasingly being considered a key component in our response to climate change. Young people are acutely aware of—and are taking action against—the unfolding climate crisis, with research capturing their experience and its emotional toll.

Intergovernmental bodies, such as the United Nations (UN), play an important role in progressing social causes through education, and action on climate change education is an ongoing focus for UN organisations. This includes the development of UN policy programs with a focus on climate change education, which impact national education policies globally. 

The work of these UN organisations is impacted by a lack of resourcing and isolated organisational structures. Observing the ways that UN organisations work together to advance their common agendas is of crucial importance to understanding how they meet their climate action goals. 

The UN & climate action

A growing number of UN international organisations (IOs) are developing programs in relation to climate change education (CCE), most notably UNESCO’s Education for Sustainable Development (ESD) program and the UNFCCC’s Action for Climate Empowerment (ACE) agenda.

These UN policy programs aim to guide policy decision-making by UN member states until 2030. 

These initiatives are promising, given the urgent need for escalated action on climate change globally, and considering the impact previous UN policy programs have had on national and regional education systems, as well as other development goals. However, there is concern about the influences on policy programs and their effects on program quality. 

A lack of transparent decision-making weakens the work of the UN and impacts the quality and effects of work intended to address climate change. The UN’s policy programs are also up against other political forces that challenge their ability to fulfil their goals.

In light of these factors, it is important to consider the work that policy actors – those who work for and within policy programs – do across networks and through relationship-building in order to deliver the UN’s agenda.

How UN networks advance the climate agenda

Environmental concerns have not always been seen as a core policy agenda in the education sector, just as education has not previously been significant on the environmental policy agenda.

The UN IOs introduced above bridge this gap with their dual focus on environment and education. However, both UNESCO’s ESD and the UNFCCC’s ACE are small subunits, doing important and far-reaching work with minimal resourcing.

The lack of focus on climate change education creates an incentive for UN organisations to seek support across programs and networks. These relationships help to raise the profile of the climate work being done, and strengthen each policy program.

UNESCO’s ESD and UNFCCC’s ACE coordinate activities that bring together government and non-government representatives to do climate policy work. Our study finds several points of connection between ESD and ACE, including writing joint reports and meeting formally and informally.

The joint writing of policy documents, such as the Action for Climate Empowerment: Guidelines for accelerating solutions through education, training and public awareness and Integrating ACE into Nationally Determined Contributions: A Short Guide for Countries provided opportunities to work together on similar aims.

Meetings are also a key aspect of the shared work of UN organisations. Both ACE and ESD host regular high-level events, attended by national governments and other intergovernmental staff, as well as staff who work across these UN IOs. Meetings often provide opportunities for cross-organisational work to unite in the interest of shared climate agendas.

These co-network policy arrangements help UN organisations to overcome their limitations (such as limited resources) and link up with another organisation that does have the mandate and resources for an outcome they both would like to achieve (e.g. producing/publishing climate change education reports).

This tells us that the work involved in achieving the aims of climate change-oriented UN programs relies on networks of shared interest, and relationships bridging across organisations.

What does this tell us about the UN’s work on climate change education?

Climate change is a matter of critical importance to present and future generations around the globe. Our research indicates that advancing climate change education is not limited to siloed organisations or policy writers; it is a responsibility that is shared across multiple organisations and groups.

It also tells us that two of the UN’s international organisations tasked with addressing climate change, ACE and ESD, do not necessarily have the mandates or resources required to advance climate change education.

By better understanding the interactions across these policy programs, this research helps our understanding of how organisations might work together, outside and across their scope and jurisdictions, in order to gather adequate resources to promote and fulfil climate agendas. 

Marcia McKenzie is Professor in Global Studies and International Education in the Melbourne Graduate School of Education, University of Melbourne. Her research includes both theoretical and applied components at the intersections of comparative and international education, global education policy research, and climate and sustainability education, including in relation to policy mobility, place and land, affect, and other areas of social and geographic study. This research will be the focus of Marcia’s forthcoming papers at the 2022 AARE conference in Adelaide. 

Stephanie Wescott is a Postdoctoral Research Fellow in the Melbourne Graduate School of Education, University of Melbourne. Her postdoctoral research studies the network relationships between UN policy programs and climate change education.