AARE blog

Why Segal’s plan to combat antisemitism in education is dangerous and should be rejected

In her plan to battle antisemitism, Jillian Segal, the federal government’s special envoy to combat antisemitism, has delivered a recipe for racial discord in schools and universities. It will stifle free speech and undermine superior attempts to combat racism. Segal will have the right to define the offences of antisemitism, the offenders and the punishments, and will also shape the re-education of the various parties.

Responses

Prominent progressive Jews and Jewish groups, academics, cultural and political commentators, and human rights, Muslim, Palestinian and Indigenous groups have highlighted the plan’s considerable deficiencies.

These commentators have identified the plan’s partisan definitions and arguments, untrustworthy evidence base, and unreasonable and unwarranted policy recommendations. Segal’s claims are seen as excessive, and her proposals as repressive — potentially threatening democracy. They believe that, if implemented, her plan will undermine free speech, academic freedom and the right to protest.

The Minister for Education, Jason Clare, will not be bullied into ‘immediate action’. He is waiting for the report from the Islamophobia envoy (August) and the Australian Human Rights Commission’s review into racism in universities (November).

Bad education, bad youth and a redemptive definition

Teachers, schools, universities and young people are in Segal’s sights. Her focus on education, Segal says, stems from the generational differences between the over and under 35s as to their ‘media consumption’, ‘perceptions of the Middle East and the Jewish community’ and the ‘Holocaust and its impacts on society’.   

To Segal, the under 35s are uninformed and misguided. They must be re-educated. Their universities and schools cannot be trusted to do this job because within them, antisemitism is ‘ingrained and normalised’.  So, despite her lack of educational expertise, Segal must step into the breach.

She will start by insisting that all educational institutions and systems adopt a particular definition of antisemitism. In its examples, this definition conflates anti-Israel and anti-Zionist sentiments with antisemitism. It has been persuasively discredited because of this dangerous conflation and because of its weaponisation.

But Segal wants to stick with it anyway. After all, it allows her to see examples of antisemitism whenever, wherever and however critical views about Israel and Zionism are expressed. It thus stretches her influence, multiplies her opportunities for denouncement and helps deflect wider attention away from Israel’s ever-more appalling treatment of the Palestinian people.

Rescuing universities?

This overzealous envoy expects universities to bow to a suspect definition. She wants to:

·      develop and launch a university report card, assessing each university’s implementation of effective practices and standards to combat antisemitism, including complaints systems and best practice policies, as well as consideration of whether the campus/online environment is conducive to Jewish students and staff participating actively and equally in university life.

·      work with government to enable government funding to be withheld, where possible, from universities, programs or individuals within universities that facilitate, enable or fail to act against antisemitism. …

We argue there are shades of authoritarianism here. Segal wants surveillance over university courses, teachers and researchers. She wants to repress speech and protest. She wants to intensify the current persecution of pro-Palestinian staff and students — those already silenced by many universities. In short, she seeks to purge the university of voices and activities that she regards as illegitimate.

We think Segal also exhibits moral blindness. She fails to acknowledge that Israel’s treatment of the Palestinian people, currently and over time, provides a fertile context for anti-Israel sentiments.

In our view, her compassion appears to be reserved exclusively for those experiencing antisemitism — as defined above. We have seen no evidence that she shows pity for the fear and pain of others — certainly not for the anguish of the people in Gaza experiencing genocide, ethnic cleansing and starvation. Is she blind to this increasingly recognised ‘moral emergency of our time’?

Saving schools?

The envoy’s proposed Key Actions for schools include working with appropriate authorities to

  • embed Holocaust and antisemitism education, with appropriate lesson plans, in national and state school curricula
  • provide guidance to government on antisemitism education for educators and public officials 
  • provide recommendations to government on enhancing education about Jewish history, identity, culture and antisemitism in high school curricula ..

Segal has no expertise in curriculum and pedagogy or in the philosophies and practices of anti-racist education. Those with such expertise are unlikely to welcome her ‘lesson plans’, ‘guidance’ and curriculum ‘enhancement’. They are more likely to see her approach as counter to the best available programmes and practices and as unaware of the composition of current classrooms.  

Many classrooms include students from all sorts of cultures, religions and circumstances — some very difficult.  Under Segal’s racially hierarchical regime these students would be entitled to ask, ‘What about my family’s and community’s struggles with racism? What about our ‘history, identity, culture’?  What about other experiences of genocide?’  

Alternative and superior approaches are available and necessary—including critical racial literacy alongside anti-racist, decolonial methods. These recognise that racism may be experienced differently by different groups. But they do not prioritise one racially subjugated group over another or pit racially subjugated groups against each other. Rather they adopt ‘systemic, intersectional, strengths-based, and coordinated action’ as the National Anti-Racism Framework explains.

Jane Kenway is an elected Fellow of the Academy of Social Sciences, Australia, Emeritus Professor at Monash University and Professorial Fellow at the University of Melbourne. Her research expertise is in educational sociology.  

Main image: Student encampment at Adelaide University – Kaurna Yerta 5 May 2024. Photo: Jack Desbiolles. There is no evidence to say that any of these patterns of censorship occurred during this encampment.

Part two: NAPLAN Results Haven’t Collapsed – But Media Interpretations Have

This is the second instalment of our two-part series addressing claims made about NAPLAN data. The first part is here

We begin this section by addressing a comment made in ABC media reporting on 2025 NAPLAN results.

“We’ve seen declines in student achievement despite significant investment in funding of the school system”

This comment echoes a broader theme that re-surfaces regularly in public discussions of student achievement on standardised tests. There are two aspects of this comment to unpack and we address each in turn. 

No evidence

First, the claim that student achievement is declining is demonstrably untrue if we evaluate NAPLAN data alone. There is no evidence that student achievement in NAPLAN declined between 2008 and 2022 – and indeed there were some notable gains for Year 3 and Year 5 students in several domains. Results from 2023 to 2025 have remained stable across all Years and all domains.

By contrast, there have been well-documented declines in average achievement in the Reading, Mathematics and Scientific Literacy tests implemented by the Programme for International Student Assessment (PISA). PISA tests are undertaken by Australian 15-year-old students every three years. The most recent data, from the 2022 assessment round, showed that these declines had flattened out in all three test domains since the 2015 round: in other words, the average decline has not continued after 2015.

There’s plenty of speculation as to why there have been declines in PISA test scores specifically, and there are enough plausible explanations to suggest that no single change in schools, curriculum, pedagogy or funding will reverse this trend. Nonetheless it is important to highlight the contrast between PISA and NAPLAN and not conflate the two in public discussion about student performance on standardised tests.

Before Gonski, schools were relatively underfunded

The second aspect of the claim above is that increases in school funding should have resulted in improvements in NAPLAN achievement (notwithstanding the fact that average results are not trending downwards). School funding has increased since the publication of the first Gonski report in 2011, through subsequent government efforts to adequately fund schools as per the agreed model. This is one reason why the total amount of money spent on schooling has increased in the last 10-15 years: prior to Gonski, government schools were relatively underfunded across the board (and many remain so).

A second reason relates to government policies resulting in more children staying in school for longer (arguably a good thing). The 2009 National Report on Schooling in Australia (ACARA, 2009) produced a handy table identifying new state and territory policies aimed at increasing the proportions of students engaged with education, training or employment after the age of 15 (p. 36). For example, in NSW (the largest jurisdiction by student numbers), the new policy from 2010 was as follows:

“(a) From 2010 all NSW students must complete Year 10. After Year 10, students must be in school, in approved education or training, in full-time employment or in a combination of training and employment until they turn 17.”

Students stay at school longer

This and similar policies across all states and territories had the effect of retaining more students in school for longer, therefore costing more money.  

The other reason total school funding has increased is simple: growth in total student numbers. If there are more students in the school system, then schools will cost more to operate.

According to enrolment statistics published on the ACARA website, from 2006 to 2024 the number of children aged 6–15 enrolled in schools increased from 2,720,866 to 3,260,497. This represents a total increase of 539,631 students, or a 20% increase on 2006 numbers. These gains in total student numbers were gradual but consistent year on year. It is a pretty simple calculation to work out, as the sketch below shows: more students = higher cost to schools.
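For readers who want to check the arithmetic, here is a minimal sketch in Python using the ACARA figures quoted above (nothing is assumed beyond those two enrolment counts):

```python
# Enrolment counts for ages 6-15, as quoted above from the ACARA website.
enrolled_2006 = 2_720_866
enrolled_2024 = 3_260_497

increase = enrolled_2024 - enrolled_2006       # 539,631 additional students
pct_increase = increase / enrolled_2006 * 100  # ~19.8%, i.e. roughly 20%

print(f"Increase: {increase:,} students ({pct_increase:.1f}% on 2006 numbers)")
```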

Students who ‘start behind, stay behind’

The design of the NAPLAN tests allows an excellent opportunity to test claims that children who start with poor achievement never ‘catch up’. Interestingly, the Australian Education Research Organisation (AERO) published a report in 2023 that calls this idea into question. The AERO report demonstrated that of all the children at or below the National Minimum Standard (NMS) in Year 3 (187,814 students in their national sample), only 33-37% remained at or below NMS to Year 9.

We can explain this another way using the terminology from the new NAPLAN proficiency standards. Of the ~10% of students highlighted as needing additional support, it is likely that one third of these students will need that additional support throughout their schooling – or around 3.5% of the total population. The remainder of the students needing additional support in Year 3 in fact did make additional gains and moved up the achievement bands as they progressed from Year 3 to Year 9.
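As a back-of-envelope check on those proportions (a sketch only: we take the ~10% figure at face value and use the midpoint of AERO’s 33-37% range):

```python
# Back-of-envelope version of the proportions described above.
needing_support_yr3 = 0.10  # ~10% of students flagged for additional support in Year 3
remain_behind_rate  = 0.35  # midpoint of AERO's 33-37% who stay at/below NMS to Year 9

persistently_behind = needing_support_yr3 * remain_behind_rate
moved_up            = needing_support_yr3 - persistently_behind

print(f"Need support throughout schooling: {persistently_behind:.1%} of all students")  # 3.5%
print(f"Made gains and moved up:           {moved_up:.1%} of all students")             # 6.5%
```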

AERO’s analyses supported other research that had used different methods to analyse longitudinally matched NAPLAN data. This research also showed no evidence that students starting at the bottom of  NAPLAN distributions in Year 3 fell further behind. In fact, on average, students starting with the poorest achievement made the most progress to Year 9.

Sweeping inaccurate claims

Consistently supporting students who need additional help throughout their school years is something that teachers do and will continue to do as part of their core business. Making sweeping claims that are not supported by the available data is problematic and doesn’t ultimately support schools and teachers to do their jobs well. 

In recent weeks, there have been some excellent and thoughtful pieces calling for a more careful interpretation of NAPLAN data, for example here and here. It is disappointing to see the same claims recycled in the media year after year, when published, peer-reviewed research and sophisticated data analyses don’t support the conclusions. 

Sally Larsen is a senior lecturer in Education at the University of New England. She researches reading and maths development across the primary and early secondary school years in Australia, interrogating NAPLAN. Thom Marchbank is deputy principal academic at International Grammar School, Sydney and a PhD candidate at UNE supervised by Sally Larsen and William Coventry. His research focuses on academic achievement and growth using quantitative methods for understanding patterns of student progress.

NAPLAN Results Haven’t Collapsed – But Media Interpretations Have

Each year, the release of NAPLAN results is accompanied by headlines that sound the alarm – about policy failures, teacher training and classroom shortcomings, and further and further slides in student achievement. 

In this two-part series, we address four claims that have made the rounds of media outlets over the last couple of weeks. We show how each is, at best, a simplification of NAPLAN achievement data, and that different interpretations (not different numbers) can easily lead to different conclusions. 

Are a third of students really failing to meet benchmarks?

Claims that “one-third of students are failing to meet benchmarks” have dominated recent NAPLAN commentary in The Guardian, the ABC, and The Sydney Morning Herald. While such headlines generate clicks, fuel public concern and make for political soundbites, they rest on a shallow and statistically naive reading of how achievement is reported.

The root of the problem is a change in how a continuous distribution is cut up. 

In 2023, ACARA shifted NAPLAN reporting from a 10-band framework to a new four-level “proficiency standard” model, in conjunction with the test moving online to an adaptive format in place of paper-based tests.

Under the older system, students meeting the National Minimum Standard (NMS) were in Band 2 or above in Year 3, Band 4 or above in Year 5, and Band 6 or above in Years 7 and 9. Those students were not “failing”; rather, they were on the lower end of a normative distribution. Now, with fewer reporting categories (just four instead of ten), the same distribution of achievement is compressed. Statistically, when you collapse a scale with many levels into one with fewer, more students will cluster below the top thresholds; but that doesn’t mean their achievement has declined, as the toy example below illustrates.
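The band shares and the mapping in this sketch are entirely made up for illustration; they are not ACARA’s actual figures or its actual band-to-category mapping:

```python
# Ten fine-grained bands, each holding an illustrative share of students (percent).
old_bands = dict(zip(range(1, 11), [2, 4, 8, 12, 16, 18, 16, 12, 8, 4]))

# Collapse the ten bands into four coarser groups (an invented mapping).
grouping = {
    "Needs additional support": [1, 2, 3],
    "Developing":               [4, 5],
    "Strong":                   [6, 7, 8],
    "Exceeding":                [9, 10],
}

for label, bands in grouping.items():
    print(f"{label:>24}: {sum(old_bands[b] for b in bands)}%")

# The bottom two coarse groups now hold 14% + 28% = 42% of students,
# even though no individual student's score has changed.
```

The point is not the particular numbers, but that the share sitting “below standard” depends entirely on where the lines are drawn.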

A particular target

Take, for example, the 2022 Year 9 Writing results. This was a particular target for media commentary this year and last.

That year, about one in seven students (14.3%) were in Band 5 or below – that is, below the National Minimum Standard. In 2025, by contrast, 40.2% of students were in the categories “Needs additional support” and “Developing”, the new categories for perceived shortfall.

This looks like a nearly threefold jump. But it is one only if ‘below NMS’ and the ‘bottom two proficiency groupings’ are treated as qualitatively equivalent categories. That’s a naive interpretation.

But let’s look at how those two groups actually scored.

In 2022, the NMS for Writing in Year 9 was Band 6, which began at a NAPLAN scale score of roughly 485 (with Band 7 starting at ~534.9). In 2025, by contrast, the “Developing”/“Strong” boundary sits at 553 – above the 2022 Band 6 cut-off and roughly equivalent to midway through 2022’s Band 7.

This means that what was previously considered solid performance (Band 6 or low Band 7) is now seen as “Developing”, not “Strong.” The “Strong” range (553–646), by contrast, corresponds roughly to upper Band 7 and most of Band 8 from the 2022 scale, and the “Exceeding” range (647+) overlaps mostly with Band 9+. Students now have to reach what was previously considered top‑quartile performance to be classified as “Strong” or higher. A student scoring 500 in Year 9 Writing was in Band 6 in 2022 – above the NMS – but now falls just short of “Strong” (cut = 553). That same student would now be labelled as “Developing,” even if their skills haven’t changed.
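To make the relabelling concrete, here is a small sketch using the approximate cut-points quoted in this piece (the ~485 NMS cut, the ~534.9 Band 7 cut, and the 553/647 proficiency boundaries). These are rounded figures, and the bottom two proficiency categories are lumped together because the cut between them is not quoted here:

```python
# Approximate Year 9 Writing cut-points as quoted in this piece.
def band_2022(score):
    """Old scheme: NMS was the Band 6 cut (~485); Band 7 began at ~534.9."""
    if score < 485:
        return "Band 5 or below (below NMS)"
    if score < 534.9:
        return "Band 6 (at NMS)"
    return "Band 7 or above"

def category_2025(score):
    """New proficiency scheme: 'Strong' starts at 553, 'Exceeding' at 647."""
    if score < 553:
        return "Needs additional support / Developing"
    if score < 647:
        return "Strong"
    return "Exceeding"

score = 500  # the worked example from the text
print(f"{score}: {band_2022(score)}  ->  {category_2025(score)}")
# 500: Band 6 (at NMS)  ->  Needs additional support / Developing
```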

The boundaries have changed

The results are the same. It’s the boundaries which have changed.

What’s also missing in the new scheme is the ability to compare between year levels. The historical bands allowed a direct vertical comparison across year levels; you could say a Year 3 student in Band 6 was at the same proficiency as a Year 5 student in Band 6. Proficiency categories, in comparison, are year-specific labels. “Strong” in Year 3 is not the same raw proficiency as “Strong” in Year 9; it’s the same relative standing within year expectations. 

Vertical comparison is still possible with the raw scale scores, but not with the categories. This shift makes the categories more communicative for parents (“Your child is Strong for Year 5”), but less useful for direct cross-year growth statements without going back to the underlying scale.

Surprisingly, there has been commentary suggesting that we should expect 90% of students to score in the top two bands – “Strong” and “Exceeding”.

How would that work?

Population distributions always contain variability around their mean, and the achievement distributions year to year for NAPLAN are generally consistent and similar. Expecting 90% of students to be in the top two categories, therefore, is statistically unrealistic, especially when those categories represent higher-order competencies. 

As we saw earlier, the “Strong” range (553–646) corresponds roughly to an upper Band 7 and most of Band 8 from the 2022 scale, and the “Exceeding” range (647+) overlaps mostly with 2022’s Band 9+. Students now have to reach what was previously considered top‑quartile performance to be classified as “Strong” or higher. This is a very exacting target. 

The bell curve

Most assessment distributions are approximately normal (shaped like a “bell curve”), so high achievement on the NAPLAN scale naturally includes fewer students, just as low achievement bands do. Without an intensive increase in needs-based resourcing that might change things such as class sizes and teacher to student ratios, the availability of school-based materials, resources, and training, or one to one support for struggling learners, the shape of the population distribution is likely to remain disappointingly stable.
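Just how unrealistic the 90% expectation is can be sketched with a quick bell-curve calculation. The mean and standard deviation below are illustrative only, chosen so that roughly 40% of simulated students fall below the 553 ‘Strong’ cut, in line with the 2025 Year 9 Writing proportions quoted earlier:

```python
from statistics import NormalDist

# Illustrative distribution: ~40% of students below the 'Strong' cut of 553.
cohort = NormalDist(mu=574, sigma=82)

share_strong_or_above = 1 - cohort.cdf(553)
print(f"'Strong' or above now:  {share_strong_or_above:.0%}")  # ~60%

# For 90% of students to clear the same cut, the 10th percentile of the
# distribution would need to sit at 553, i.e. the mean must rise to
# 553 + 1.28 sigma: a shift of roughly one full standard deviation.
required_mean = 553 + 1.2816 * cohort.stdev
print(f"Mean needed for a 90% target: ~{required_mean:.0f} "
      f"(up from {cohort.mean:.0f})")
```

Under these assumed parameters, the whole cohort would need to improve by around 85 scale points, about a standard deviation, for 90% to clear the “Strong” cut.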

The overall message is that students haven’t suddenly lost ground in their learning; we’ve just changed the categories that we use to understand their achievement. To interpret NAPLAN data accurately, we must consider how the framework has shifted, and avoid drawing simplistic conclusions. Even more importantly, this change in reporting is not any kind of evidence of educational failure – it’s just a shift in how we describe student progress, and not what that progress is. Misrepresenting it fuels anxiety without helping schools or students.

Most Year 9 Students Do Not Write at a Year 4 Level

Another headline resurfacing with troubling regularity is the claim that “a majority of Year 9 students write at a Year 4 level”. It’s a “major crisis” in students’ writing skills, reflecting a “thirty year policy failure”. 

This is based on distortions of analysis from the Australian Education Research Organisation (AERO) that selectively focused on Persuasive Writing as a text type. Unlike media coverage, AERO’s recent report, “Writing development: What does a decade of NAPLAN data reveal?”, did not conclude that Year 9 students’ writing is at “an all-time low”. Instead, the report found a slight historical decline in writing achievement, and only when examining persuasive writing as a text type.

AERO’s report is somewhat misleading, though, because it focuses only on persuasive writing, even though NAPLAN can and does assess narrative writing as well, and the two text types are considered to have equivalence from the point of view of test design. 

In fact, in publicly available NAPLAN data from the Australian Curriculum, Assessment and Reporting Authority (ACARA), and in academic analyses of this data undertaken last year, Australian students’ Writing achievement – taken as both persuasive and narrative writing – has been quite stable over time. 

Consistent for more than a decade

For example, without separating narrative writing from persuasive writing, mean Year 9 NAPLAN Writing achievement is quite stable, with 2022 representing the strongest year of achievement since 2011:

[Figure: mean Year 9 NAPLAN Writing achievement since 2011, trending upward.]

Year 9 students’ average achievement may have been consistent for more than a decade, but what about the claim that the majority of Australian Year 9s write at a Year 4 level?

For Year 5, mean writing achievement has ranged between 464 and 484 nationally between 2008 and 2022 on the NAPLAN scale. For Year 3, the mean score range for the same period was 407 to 425. With developmental growth, mean Year 4 achievement might be expected to be somewhere in the 440 to 450 range. 

However, the cut-point for the NMS for Year 9 tended to be around 484, historically. Why is this important? Because national NAPLAN reporting always supplied the proportion of students falling below the National Minimum Standard (Band 5 and below in Year 9). This tells us how many students are demonstrating lower levels of writing achievement.
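The comparison behind that reasoning is simple enough to lay out explicitly (the Year 4 figure is an interpolation, as noted above, not a measured mean):

```python
# Midpoints of the national mean-score ranges quoted above (2008-2022).
year3_mean = (407 + 425) / 2  # ~416
year5_mean = (464 + 484) / 2  # ~474

# A notional Year 4 mean, interpolated halfway between Years 3 and 5.
year4_estimate = (year3_mean + year5_mean) / 2
print(f"Notional Year 4 mean: ~{year4_estimate:.0f}")  # ~445, inside 440-450

# The historical Year 9 NMS cut (~484) sits well above that estimate, so a
# Year 9 student at or above NMS is already writing beyond a typical Year 4 level.
print(f"Gap from notional Year 4 mean to Year 9 NMS: ~{484 - year4_estimate:.0f} points")
```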

Where are most year nine students?

In 2022, only 14.3% of Year 9 students were in Band 5 or below (or below a NAPLAN scale score of about 484), which was the second-best year on record since 2011, as illustrated in the figure below. Contrast that with the 65.9% of students who scored in Band 7 or above in 2022 (with a cut point of 534.9), clearly indicating most Year 9 students have writing proficiency far beyond primary levels.

National reporting recorded the proportion of Year 9 students falling into Band 5 and below: between 13.7% and 18.6% of students fell in this range from 2008–2022. This is quite different to “the majority”.

In Part 2 (published tomorrow) of this series we go on to address the perennial claim that students’ results are declining, the argument that additional school funding should result in ‘better’ NAPLAN results, and the idea that children who start behind will never ‘catch up’.

Sally Larsen is a senior lecturer in Education at the University of New England. She researches reading and maths development across the primary and early secondary school years in Australia, interrogating NAPLAN. Thom Marchbank is deputy principal academic at International Grammar School, Sydney and a PhD candidate at UNE supervised by Sally Larsen and William Coventry. His research focuses on academic achievement and growth using quantitative methods for understanding patterns of student progress.

Stop Policing Punctuation Now: Why AI Detection Needs a Rethink 

Like many educators, our social media feeds have been filled with commentary on the impact of AI on teaching, learning, and assessment. One problem appears intractable; namely, how can we tell when students have used AI-generated text in their work? We’re not writing to offer an answer to that question; indeed, at this point, it’s clear that there isn’t a reliable method of separating ‘synthetic’ text from ‘organic’. Instead, we want to bring attention to a troubling possibility, one of perhaps greater significance to our role as educators who teach just about every subject without necessarily teaching the finer points of literacy and writing. It’s this: in our efforts to police the use of AI in learning and assessment, are we likely to diminish the quality and character of our own writing, and that of our students, in the process?

While many of us have tried to keep up with new tools, evolving policies, and changes in detection software, one strategy has become increasingly popular: look for stylistic ‘tells’. It appears to us that the search for shortcuts in AI detection has led to what psychologists Shah and Oppenheimer call the “path of least resistance”. That is, we gravitate to cues that are easily perceived, easily evaluated, and easily judged. In other words, heuristics. Em dashes? Colons in titles? Specific words or expressions? All have been called out as signs of AI authorship. But here is the problem: these shortcuts don’t work. Worse, they normalise suspicion of well-crafted, edited and even creative writing. When we start to see polished punctuation and consistent tone as evidence of cheating, we inadvertently signal to students and our peers that good writing is suspect.
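To see just how crude these cues are, consider what a ‘tell’-based detector would actually look like. The sketch below is hypothetical and deliberately naive; that is the point:

```python
# A hypothetical detector built from the social-media 'tells' described above.
SUSPECT_MARKS = ("\u2014", ":")  # the em dash and the colon, the alleged giveaways

def lazy_ai_score(text: str) -> float:
    """Share of characters that are 'suspect' punctuation. As evidence of
    authorship this is meaningless: it simply flags polished, edited prose."""
    hits = sum(text.count(mark) for mark in SUSPECT_MARKS)
    return hits / max(len(text), 1)

polished_human = ("Writing well\u2014carefully, even\u2014takes practice: "
                  "drafting, editing, and proofreading.")
print(f"Suspicion score: {lazy_ai_score(polished_human):.3f}")
# A perfectly human sentence scores as 'suspicious' under this heuristic.
```

Any heuristic of this shape will misclassify exactly the writing we want students to produce.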

Why?

Let’s start with an example. On social media, we have seen commentators breezily claim that the use of the em dash—the long dash that can be used in place of parentheses or a semi-colon—is the “smoking gun” betraying that a text was authored by AI. A self-professed fan of the em dash, Andrew was prompted to go searching. A cursory Google search using the phrase “the em dash as an indicator of AI content” revealed that this is a popular topic, with plenty of commentary being traded for and against the notion. Some suggest that using em dashes makes for well-styled and cadenced writing, while others claim that em dashes appear so infrequently in so-called ‘normal’ writing that seeing an em dash ‘in the wild’ is always suspect. But the conjecture and speculation don’t end with the em dash.

The colon in titling: Another AI tell? 

So, we dug deeper. Another purported “give-away” is supposedly the use of the colon to separate titles and sub-titles in a body of text. This seemed a bit of a reach, as academic writing in particular often employs the colon to sub-divide or elaborate on concepts in titles and subtitles. At this point, we realised we needed to consult the proverbial source of these claims, so off to prominent large language models (LLMs) we went.

We each tried out different LLMs. In ChatGPT, Andrew started with the prompt “What are some common tells that a text is in fact AI-authored?” It came back with a list of 10. The list ranged from “Overly Formal or Polished Language” and “Overuse of Transitional Phrases” to “Too Balanced or Fence Sitting,” all of which could be claimed to be common in academic writing. By contrast, when Janine asked the same question, Gemini (2.4 Flash) replied: “repetitive phrasing, formulaic sentence structures, overly polished or generic language lacking deep insight or personal nuance, the frequent use of certain transitional phrases and words (e.g., “delve,” “tapestry”), and occasionally, factual errors or fabricated citations”.

Great questions

Admittedly, many of these were stylistic claims rather than observations about punctuation, so we decided to dig deeper. When ChatGPT was asked “What about punctuation?” it replied, “Great question – punctuation is another area where AI-generated text can give itself away.” It seemed that ChatGPT was confirming the blogosphere’s punctuation and style concerns in relation to authenticity, noting that the overuse of things like commas, em dashes, colons, and parentheses are “punctuation-related tells.” Janine asked the same question, and Gemini replied: “AI-authored text can exhibit “tells” through the overuse of specific punctuation marks like the em dash or colon, the consistent application of flawlessly textbook-correct punctuation lacking human variation or “casual inconsistencies,” and the absence of the nuanced stylistic choices typical of human writers.”

Both sets of responses about “AI tells” included overly polished work, overuse of some phrases, and the consistent, almost perfect use of specific punctuation like em dashes and colons. The similarities were obvious: consistently correct or textbook punctuation and limited to no typos or casual inconsistencies. Was grammatically correct and proof-read work now concerning? Or worse, the sole domain of LLMs? Should we, as authors and educators, be aiming (as it were) to be more “casually inconsistent” in our writing so as to not appear like we have used an LLM? And should we teach our students this in turn?

On the spread of lazy AI detection heuristics  

In a fascinating paper, “The Path of Least Resistance: Using Easy-to-Access Information,” Princeton psychologists Shah and Oppenheimer have proposed a framework for understanding how people might use highly accessible cues in everyday decision making. Their framework, in which they explain how people tend to use easily perceived, produced, and evaluated cues to make decisions, has particular relevance for a scenario in which a teacher is attempting to detect AI-generated text. As visible linguistic markers, punctuation could be regarded as an example of a highly accessible cue. After all, types and patterns of punctuation are easily perceived and evaluated by readers, much more so than more nebulous concepts such as “tone” and “complexity” of vocabulary. One could imagine why punctuation as a cue for detecting AI-generated work might make for a seductive proposition, and why it has become the subject of so much social media speculation.

Whether the punctuation “rules of thumb” for AI detection being promoted on social media are credible or not is one matter. One thing is nevertheless certain: the idea of punctuation as a tool for AI detection is pernicious—the em dash and other proposed AI detection heuristics are now in the public consciousness and being talked about as if they are useful, despite noteworthy appeals to reason here, here, and here. Our concern as educators is this: Collectively, we may be in real danger of assimilating these “easy” cues and applying them (whether consciously or otherwise) to our own writing and when assessing the work of our students.  

Where might this end? 

Educators are not immune to bias. In the absence of certainty, it’s natural for us to lean into intuition. But intuition shaped by social media tropes is not a sound basis for academic judgement. Perhaps the deeper danger here is that lazy heuristics for AI detection reduce our ability to actually teach and also lower our expectations, as they cast suspicion on students and peers who have worked hard to improve their writing.

What if we ‘normalise’ our expectations for authentic writing to be automatically suspicious of polish, punctuation, and proof-reading in our students’ work? Rules of thumb and looking for “AI tells” are not the answer. They may make for seductive heuristics in decision-making in the present AI-policing climate; but let’s be clear—they’re lazy and specious within the domains of scholarship and academic writing. No one has an answer to AI detection; there is no silver bullet, algorithmic or otherwise, to help us. In some meaningful ways, the Turing test appears to have been passed. But what is for sure: we need a new baseline.

Make sure you know the human

What that looks like is currently being debated across the globe. Some have returned to pen and paper, handwritten notebooks and oral defences, but in a context where good writing is suspect, this suggests one thing: if you can’t distinguish AI-generated text from human, make sure you know the human (in this case, your students). That, at least, is what we should be aiming for in Higher Ed. Small classes help; getting to know your students early in a course—even better. Whether you re-engage students with pen and paper, or utilise verbal presentations as components of teaching, learning, and assessment, one thing is clear: in the arms race of academic integrity in the age of AI, knowing your student, rather than relying on rules of thumb or expensive detection algorithms, is the path forward. And to Andrew’s earlier point—no, he will not stop using the em dash.

Andrew Welsman is an educator in initial teacher education at Victoria University. A former secondary school teacher, his teaching and research interests centre on issues affecting STEM education and the pedagogical impacts of digital technologies.  Janine Arantes is a senior lecturer and Research Fellow at Victoria University with expertise in AI governance, education policy, and digital equity.

Think teacher education is to blame for shortages?

“If we could just fix initial teacher education,” the argument goes, “we would fix the teacher shortage.” But what if that diagnosis is not only inaccurate, it’s part of the problem?

The real issue driving early career teachers away from rural and regional schools isn’t inadequate preparation. It’s systemic neglect.

A Misdiagnosis

For over a decade, teacher education has been framed as the weak link in addressing workforce shortages. Policymakers suggest that better-prepared graduates would remain in the profession, especially in hard-to-staff schools. It’s a convenient narrative, but it relies on a flawed premise. The premise? That preparation is the primary issue, rather than the conditions graduates face once they enter the profession.

This thinking disregards the lived realities of rural and regional educators: realities shaped by housing scarcity, limited mentoring, professional isolation, and chronic underfunding.

In interviews with preservice and early career teachers, these barriers were consistent and systemic: housing stress, disconnection, and a lack of support. These are not outliers. They reflect a broader policy failure.

They are not signs of inadequate teacher training. They are evidence of entrenched structural issues that go far beyond what any university curriculum can resolve. Yet the policy focus remains fixated on initial teacher education (ITE), casting it as the scapegoat for problems rooted elsewhere.

This serves a political purpose. When we blame universities, we divert attention away from underfunded schools, burnout, and the deep inequities facing rural and regional education. Real reform is sidestepped.

Teachers Are Ready. The System Isn’t

Australian teacher education is rigorously accredited. It is built on nationally consistent standards. Graduates emerge with strong curriculum expertise, classroom management skills, and substantial in-school experience.

Research consistently shows that effective mentoring, adequate staffing, and supportive induction are key to teacher success. Yet in many rural and regional schools, these supports are lacking due to chronic shortages and underinvestment.

The gap isn’t in teacher readiness. The gap is in the systems they enter. New teachers face fragmented support, housing insecurity, and overwhelming workloads. Even the most capable graduates cannot thrive without structural backing. Responsibility for retention doesn’t end at graduation; it requires a well-resourced and coordinated system, especially in the communities that need teachers most.

Burned by the System, Not Burnt Out

ITE equips teachers with the skills and knowledge to succeed, but their professional momentum often stalls upon entering environments shaped by instability and neglect. Early career stories that begin with enthusiasm often give way to exhaustion and disillusionment. This is not due to personal shortcomings, but due to structural failings.

Rural teaching offers meaningful and rewarding work. It is too often undermined by the realities of insecure housing, limited mentorship, and isolation. National policies intended to support these placements frequently fall short, unable to meet the hyper-local needs of diverse communities.

The burden of structural inequity continues to fall on individuals. Without meaningful investment in the systems new teachers step into, even the best-prepared graduates will leave. They will leave not from lack of readiness, but from lack of support.

From Survival to Sustainability

Graduates aren’t underprepared. They are entering unstable environments marked by workforce volatility and fluctuating policy. As the Independent Review into Regional, Rural and Remote Education makes clear, workforce stability depends not just on recruitment, but on sustained investment in housing, career development, and targeted support.

Until we address these foundational issues, rural and regional teaching will remain difficult to sustain. The solution lies not in another overhaul of teacher education, but in building stable, supported, and secure futures for the educators who choose to teach where they are needed most.

Sarah M James is a Senior Lecturer and Academic Lead for Professional Experience at Queensland University of Technology. Her research focuses on literacy, mentoring, education policy, and early career teacher support. She has secured multiple grants and currently investigates housing-related challenges faced by preservice teachers, early career teachers, and principals in rural, regional, and remote contexts.

When Numbers Deceive: Rethinking Equity for Culturally Diverse Doctoral Candidates

In Australian higher education, equity is often measured through population parity—the idea that when enrolment numbers for a group reflect their proportion in the general population, equity has been achieved. But what happens when parity is achieved and equity status revoked? What if those numbers plateau or even decline, and the group quietly disappears from policy focus? This has happened to culturally and linguistically diverse doctoral candidates in the 2016 Australian Council of Learned Academies’ (ACOLA) Report on Australian doctoral education. 

This is the central question we explored in our recent paper, Forgetting culturally diverse equity groups in Australian doctoral policy: what happens when population parity is reached? We use Nancy Fraser’s concept of “participatory parity” and Foucauldian discourse analysis to expose how culturally and linguistically diverse (CALD) domestic doctoral candidates have been effectively written out of Australia’s equity agenda.

A cue to disengage

The turning point, we argue, was the 2016 ACOLA Report, which noted that domestic candidates from culturally and linguistically diverse backgrounds had reached participation ratios above population parity. The ACOLA Report suggests that ‘participation by candidates from a non-English speaking background is good with ratios well above 1 for most of the reporting period but with a notable decline in the last two years’. After this comment, there is no further reporting or policy commentary on this group of doctoral candidates and very little concern that their numbers were actually declining. 

But rather than celebrating this as a step forward and continuing the work of support and inclusion, policymakers took this as a cue to disengage. Subsequent reports and policy documents, including the 2024 Australian Universities Accord, no longer list CALD domestic doctoral candidates as a priority equity group.

This matters for several reasons.

First, it assumes cultural diversity is homogeneous and stable—that all CALD groups experience equal access, support, and outcomes. This is demonstrably false. The experiences of migrants, refugees, and ethnically diverse Australians differ widely. Participation data based on ‘language spoken at home’ is an inadequate proxy for cultural diversity. Yet this flawed metric continues to shape reporting and resourcing.

Second, declaring population parity ignores ongoing structural inequities, including racism and cultural misrecognition. It also ignores the dominance of Northern/Western knowledge systems in academia. There is nothing in the ACOLA Report, for example, that considers the cultural and linguistic knowledge and networks brought to Australian doctoral education by these candidates.

As we note, achieving numeric parity does not dismantle these barriers. The real danger is that when equity is reduced to counting heads, the deeper project of epistemic justice – the recognition and valuing of diverse cultural knowledges – is sidelined.

So, what can the sector and universities do?

We suggest reframing and a more nuanced understanding of parity. Specifically, we recommend adopting Fraser’s idea of participatory parity, which includes three dimensions: redistribution (economic fairness), recognition (cultural legitimacy), and representation (political voice). Participatory parity seeks to address concerns about cultural hierarchies and offers equal recognition for all cultures. 

For CALD doctoral candidates, this means not only opening doors to enrolment but ensuring their knowledge systems, methodologies, and lived experiences are recognised and valued in research spaces.

In practical terms, this could involve:

  • Restoring CALD domestic candidates as an equity group in national and institutional reporting
  • Funding culturally responsive supervision and support programs
  • Expanding doctoral scholarships and mentorships specifically designed for diverse cultural communities
  • Embedding epistemic diversity into doctoral training and research assessment criteria.

Especially urgent

The insights in our paper are especially urgent at a time when Australian universities are under pressure to reimagine research training – often with an overwhelming focus on industry partnerships. As this shift accelerates, it is vital that we do not lose sight of equity and inclusion as foundational to the mission of higher education.

For outreach professionals and equity leaders, this is a call to action. Metrics matter, but only when they serve justice, not when they become a convenient endpoint. If our policy frameworks stop asking why disparities exist—and start assuming they’ve been solved—then we risk institutionalising silence where advocacy is needed most.

We must go beyond the numbers. Participatory parity offers a way to re-anchor equity in justice, culture, and voice. It’s time we brought CALD doctoral candidates back into view—not just as participants, but as powerful knowledge-makers in their own right.

Catherine Manathunga is professor of education research and co-director of the Indigenous and Transcultural Research Centre (ITRC) at the University of the Sunshine Coast (USC). Jing Qi is a senior lecturer at RMIT University in the School of Global, Urban and Social Studies and the Social Equity Research Centre. Maria Raciti is a professor of marketing and co-director of the Indigenous and Transcultural Research Centre (ITRC) at the University of the Sunshine Coast.

Childcare: When profit is the motivator, we should be worried

The Australian early childcare sector is experiencing a relentless surge in media attention. It has exposed significant concerns about children’s safety and the quality of early childhood education (ECE) across Australia. Coverage includes multiple and widespread abuse incidents, inappropriate discipline, unsafe sleep practices, serious mistreatment, and seemingly ineffectual regulation.

Evidence from the Early Learning Work Matters project points to systemic issues sitting beneath the diverse array of significant concerns across the Australian ECE sector. In ECE, where educator-child interactions are known to be the most significant contributor to individual child outcomes and service quality, educators’ experiences of work and children’s experiences of ECE are inextricably interwoven.

Current concerns about the diminishing quality of educator training programs and the increasing casualisation of the workforce, along with high turnover rates, attrition, and burnout, are all related to the current concerns around child safety. And yet quality education and care is about so much more than ‘just’ child safety – child safety should be a given.

Early Learning Work Matters

Our latest publication from the Early Learning Work Matters project focuses on educator workload. We surveyed 570 Australian ECE educators. They reported widespread concerns including heavy workloads, particularly non-contact work, regular unpaid hours, and limited and inconsistent access to entitled breaks. Critically, over 70% of educators reported feeling concerned that children are not receiving enough of their time. They also reported that educator workloads in their service are so heavy that they are reducing quality for children. Overall, educators report the nature of their workload and working conditions are reducing their capacity to engage in quality interactions, and to provide a quality program overall.

In the current climate, conditions for both educators and children at large are suboptimal. Genuine and meaningful reform requires thoughtful consideration of the system dynamics that have evolved, allowing the current concerning conditions and associated risks to develop.

Systems Theory is an interdisciplinary framework that examines how different parts of a system interact and align to produce outcomes—often in complex, dynamic, and sometimes unintended ways. Rooted in the work of thinkers like Ludwig von Bertalanffy and refined through fields such as ecology and organisational studies, Systems Theory encourages us to look beyond individual components to understand the interdependencies, feedback loops, and structural conditions that shape behaviours and outcomes.

We need a whole-of-system approach

The diverse concerns evident across ECE in Australia do not need a diverse array of isolated inquiries and solutions. What we need is a whole-of-system approach: one where all concerns, such as risks to child safety, heavy workloads for educators and the diminishing quality of educator training, are treated not as separate problems but as facets of the same complex situation.

A key insight from Systems Theory is that elements within a system—people, organisations, regulations, funding flows—respond to the incentives and motivators embedded in the system design. These incentives can be explicit (such as financial rewards or compliance requirements) or implicit (such as reputational pressures). Critically, systems tend to ‘produce what they are designed to produce’. That doesn’t mean they necessarily produce their stated goals, or societal good more broadly. Rather, systems produce what the system design incentivises and promotes.

What’s the big picture?

To understand the ‘big picture’, we first need to ask: what does the current system incentivise and promote? And then critically, how can we shift this, such that quality education and care for young children remains front and centre?

In Australia, where 70% of long day care services are operated by for-profit providers, and 32% are run by large providers, Systems Theory offers a powerful lens for analysis. When profits and market competition are primary motivators, incentives may prioritise cost efficiency, occupancy rates, and shareholder returns over pedagogical quality or child wellbeing. This can shape service delivery in subtle but systemic ways, for example: limiting educator-child ratios, reducing opportunities for professional development, or skewing investments away from relational, high-quality interactions, toward standardised, scalable models.

As part of an earlier phase of the Early Learning Work Matters project, degree-qualified early childhood teachers were interviewed about their experiences and perspectives of work. Several participants commented on their experience of competing demands, with one sharing: “You’re constantly trying to keep all these different parties happy, and that’s before you even get to the kids, which really should be the number one, but aren’t”.

Not just the experiences of a few

At times like this, it is important to note that these are not just the experiences of a few; these are not isolated concerns. They are representative of system issues underpinning the sector at large.

A systems-informed analysis does not simply criticise individual providers but interrogates how regulatory settings, funding mechanisms, workforce conditions, and market structures interact to produce current patterns of care. It asks: what and who does the system currently reward, and what or who is neglected? Further, what are the system goals and how are other system elements aligned with them? Crucially for ECE, a systems-informed analysis should ask: what kind of system design do we need to ensure that the best interests of young children—rather than commercial returns—are the driving force of early education provision?

Understanding and applying Systems Theory in this way helps shift the focus. It shifts the focus from symptoms to structures, and from individual cases with isolated interventions to meaningful systemic change.

No more reactive policy

How can we do this? We need more reporting transparency. And we need more large-scale data and more quality research to better understand the scope and complexity of the issues we are now facing. We must give voice to our educators — they are the ones on the ground working in this system day by day. We need to understand the complexity of the issues our educators are experiencing. And finally, we need to take a big-picture perspective to understand our ECE system in its entirety.

No more reactive policy. No more band aid solutions, and knee-jerk reactions. A coordinated and cohesive approach to system design is needed. We need that to safeguard children’s wellbeing at a minimum — and beyond that, to support every young child to flourish and thrive.

Erin is a Lecturer in Early Childhood Education at the University of Sydney, and an early childhood teacher, with a Bachelor of Education and over 15 years working with children from birth to five years of age.

Rachel is Professor of Social Impact at the University of Technology Sydney  and an internationally recognised expert in education. She has a long track record of diverse social science research looking at education, work, health, management, leadership, and broader human development.

Can we trust AERO’s independence now?

What do you get when governments pour millions of taxpayer dollars into a charity with the power to shape what happens in Australian classrooms? You get the Australian Education Research Organisation (AERO) – and with it, the risk that private and commercial interests can steer future directions in education policy and research. Defenders of AERO are quick to claim that it is trustworthy because it is a “publicly funded, independent body”. But in a complex education system where these words carry popular sway, it’s worth asking: independent from what, exactly?

The lowdown on AERO’s status and structure

AERO’s structure is different from other national education bodies that also receive public funding. For example, the Australian Curriculum, Assessment and Reporting Authority (ACARA) is an independent statutory authority established by legislation. It takes direction from Australia’s education ministers. That’s quite different from AERO, which is a not-for-profit company limited by guarantee.

AERO’s members include all the ministers of education. Back in 2020, they agreed in principle to invest $50 million in AERO over four years, with the Australian Government to fund half, and the combined state and territory governments to fund the other half. Australia’s Education Ministers take advice from AERO. 

This is how AERO has achieved power and influence over education reform.

If AERO is publicly funded, why is it also registered as a charity?

There are several strategic, financial, and legal advantages to registering as a charity with the Australian Charities and Not-for-profits Commission (ACNC). Being officially registered can signal legitimacy and build public trust. Also, many government and philanthropic grants require ACNC registration and in some cases, Deductible Gift Recipients (DGR) status. 

The ACNC requires registered charities to report regularly to maintain their registration and eligibility for tax concessions. All registered charities must submit an Annual Information Statement within 6 months after the end of their reporting period. This statement must include information about responsible persons, the organisation’s activities during the year, beneficiaries served, financial information, and information that satisfies governance standards compliance. 

A large charity like AERO, with annual revenue over $3 million, must provide financial statements that comply with Australian Accounting Standards Simplified Disclosures, and an auditor’s report. These are published on the ACNC website.

Make no mistake, AERO has influential and cashed up backers

To truly understand AERO, it is important to understand its origins. In 2014, the UK Education Endowment Foundation (EEF) began funding ‘Evidence For Learning’ (owned by Social Ventures Australia or SVA, a venture philanthropic organisation). It was asserted that SVA’s ‘Evidence For Learning’ was a ‘pilot’ for AERO. Social Ventures Australia advocated and lobbied for AERO over a period of ten years. 

In 2016, then Treasurer Scott Morrison commissioned a formal inquiry into the ‘National Education Evidence Base’ via the Productivity Commission. The draft Productivity Commission report explicitly recommended modelling a new education evidence body on the UK Education Endowment Foundation, which would ‘leverage’ the work of Social Ventures Australia. The final Productivity Commission report was more tentative, although the Education Endowment Foundation features prominently and is upheld as a model institution.

In 2018, EEF launched a five year project titled ‘Building a Global Evidence Ecosystem in Teaching’. This was part of its ‘what works’ approach and the stated goal was to establish:

EEF-style organisations in partner countries to act as evidence brokers and encourage the adoption of evidence-based policy at a national level.

AERO was established shortly afterwards in 2020. The ‘expert board’ and directors that were appointed reflected these origins closely. For instance, the former CEO of EEF (Sir Kevan Collins) sat on AERO’s expert board for a long time, as did former SVA directors and donors. However, these connections with the Education Endowment Foundation (EEF) were never made explicit for the public.

Perhaps these connections are tenuous, but AERO’s approach to education ‘evidence’ inevitably mirrors the Education Endowment Foundation’s ideologies about ‘what works’ in education. As stated in AERO’s commissioned report, AERO is part of the “what works” movement.

The ‘what works’ movement promotes similar ideas, solutions and reform agendas. It is behind the push for the implementation of ‘cognitive science’ in the classroom and sees randomised controlled trials (RCTs) as best research practice. 

The ‘what works’ network does a good job of presenting itself as independent, while promoting contested but marketable ideas with flow-on benefits for private and commercial interests.

What does AERO’s status as a registered charity mean?

AERO is registered with the Australian Charities and Not-for-profits Commission (ACNC), with DGR status under Australian tax law. This means it is eligible to receive tax-deductible donations, including from corporate philanthropy.

Registered charities are also allowed to engage in issue-based advocacy and influence public policy. AERO does this by leveraging its network of think-tanks, bloggers and podcasters, who amplify key messages. 

Some of the biggest brands in education research are registered charities

There are several research organisations that are registered as charities in Australia, but only some receive government funding. Some – like the Grattan Institute – acknowledge their supporters and indicate the amounts received on their websites. This sort of transparent disclosure builds public trust and accountability. It also makes it easy to evaluate whether certain funders or ideological leanings may be driving the sort of research they do and reforms they support. However, such disclosures are currently voluntary and some organisations – like The Centre for Independent Studies – choose not to identify donors.

We compiled the table below from publicly available financial statements lodged with the ACNC. This enables readers to compare how key organisations earned revenue in the financial year ended 30 June 2024 (the most recently published reporting period).

Revenue by organisation, FY 2023-2024

| Charity | Total revenue | Government | Goods and services | Donations and bequests | Investments | Other |
| --- | ---: | ---: | ---: | ---: | ---: | ---: |
| AERO | $22,691,616.00 | 94% | 1% | 0% | 5% | 0% |
| ACER | $110,325,209.00 | 0% | 99.6% | 0% | 0.2% | 0.2% |
| The Centre for Independent Studies | $5,144,419.00 | 0% | 14% | 84% | 2% | 0% |
| Grattan Institute | $5,809,856.00 | 1% | 13% | 43% | 43% | 0% |
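
For readers who want to reproduce this comparison from the ACNC statements themselves, the arithmetic is simple: divide each revenue category by total revenue. Below is a minimal Python sketch of that calculation; the category figures in it are hypothetical placeholders shaped loosely like AERO’s reported breakdown, not actual statement line items.

```python
# Minimal sketch: compute each revenue category's share of total revenue,
# as reported in a charity's ACNC financial statement.
# The figures below are hypothetical placeholders, not actual line items.

def revenue_shares(categories: dict[str, float]) -> dict[str, str]:
    """Return each revenue category as a rounded percentage of total revenue."""
    total = sum(categories.values())
    return {name: f"{amount / total:.0%}" for name, amount in categories.items()}

# Hypothetical example, loosely shaped like AERO's FY 2023-24 breakdown:
example = {
    "Government": 21_300_000,
    "Goods and services": 230_000,
    "Donations and bequests": 0,
    "Investments": 1_100_000,
    "Other": 60_000,
}
print(revenue_shares(example))
# {'Government': '94%', 'Goods and services': '1%',
#  'Donations and bequests': '0%', 'Investments': '5%', 'Other': '0%'}
```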

AERO vs ACER: Compare the pair

The Australian Council for Educational Research (ACER) seems to offer a close alternative to AERO.

On its website, ACER describes its mission as being “to create and promote research-based knowledge, products and services that can be used to improve learning across the lifespan”. ACER has a reputation for producing robust and nuanced analyses of Australian students’ performance on international assessments like the OECD’s PISA and the Trends in International Mathematics and Science Study. Many teachers will also be familiar with ACER’s Teacher magazine and podcast series, which share research insights and practical guidance.

A key difference between AERO and ACER is the level of government investment and oversight. ACER reported $0 in revenue from government – almost all (99.6%) of its revenue was generated by providing goods or services to the education sector.

Scrutiny is a public duty

While AERO operates under its own Constitution and is governed by a Board of Directors, its origin story and affiliations prompt questions about whether it is fully independent.

In this context, scrutiny of the value for public money being delivered is essential.


Carly Sawatzki, senior lecturer, and Emma Rowe, associate professor, are education researchers in the School of Education at Deakin University and the Centre for Research for Educational Impact (REDI). Carly researches the teaching of critical economic and financial literacies at school. Emma held a DECRA (2021–2024). Her research focuses on policy and politics in education.

What happens when student feedback is racist

Each semester, university staff receive anonymous student evaluations. These surveys are framed as neutral tools to support teaching staff’s ongoing reflection and improvement. On paper, they provide structured, constructive feedback to teaching staff in a space intended to be safe for students (free from academic recourse).

But what about the safety of the educators receiving them?  

As two Aboriginal women teaching in Critical Indigenous Studies, we know this question too well. The system of anonymous feedback, as it currently stands, can be a site of unfiltered racialised, gendered and deeply personal violence. Yet, because it arrives wrapped in bureaucratic language like “reflection” and “improvement”, it’s too often dismissed as just “part of the job”. But at what cost does this violent ‘part of the job’ come?  

Abuse in anonymous feedback surveys is well documented in the research, with attacks on appearance particularly amplified for women. These comments often arrive through an impersonal, centralised email and are often read in isolation.

As Aboriginal women, we know too well the impact of these student surveys. Semester after semester we must brace ourselves for what we know will come. Yet it doesn’t matter how much we brace for it; it is always worse than we can imagine.

But the truth is, nothing prepares you for seeing anonymous comments regarding Indigenous peoples, our knowledges, and histories, dragged through a system that was never built for or by us. And the comments post-Referendum? They’re louder, more entitled, more emboldened than ever.  

Real comments from anonymous student feedback

These are real comments pulled from anonymous student feedback. We’re told this is about improving teaching practice. As Distinguished Professor Bronwyn Carlson asks, “where is the duty of care for us?” – as educators and as people.

Collation of just some extracts of anonymous student feedback from Session 1 2025  

Student Feedback or a Strategy of the System? 

The intersection of racism, misogyny, and anti-Indigenous sentiment is heightened in these cohorts, particularly when students feel resentment for having to learn Critical Indigenous Studies.

This is further amplified when the rise of anti-intellectualism is coupled with right-wing media that encourages this disdain for, and devaluation of, Indigenous peoples, knowledges and perspectives.


This is particularly true when students encounter the idea of unpacking their own proximity to settler colonial systems. For some, it is more comfortable to deflect. They do this by attacking the Indigenous tutors teaching the course, and the discipline itself,  instead of sitting with the discomfort of learning.  

“I don’t like that it was taught by Indigenous people”, “Not properly qualified” and “Her distasteful demeanor” are violent, racist, gendered and anti-intellectual critiques of the teacher, not the teaching. This is settler fragility manifesting through institutional structures, and it reminds us how deeply whiteness is protected, even in the name of education.

Still, We Teach 

But there is a flipside to this unfiltered violence. There are pockets of so much joy in receiving feedback about the transformative learning experiences students have had. Perhaps hearing you were their favourite teacher. Or that your class challenged them in a good way. Or the constructive and clever feedback, or things you may not expect (for example, students do want in-person lectures again!).

But woven through the vitriol are also the words that remind us why we continue to show up.  

Words and comments like these: 

“I found myself constantly thinking of what I had learnt and how I wanted to change myself and the others around me for the better.” 

“At times a confronting unit but I feel that it has changed my world view.”

“This unit should be mandatory for every student, I loved it.” 

We remember those students. The ones who sat in their discomfort, who leaned in rather than recoiled. The ones who stayed back after a class to yarn. The ones who later tell us they switched majors, or took our reading list to their family, had hard conversations, or considered the value and complexity of different worldviews.  

Those comments are few, but they carry a different kind of weight – one that is relational, and moves across time and space. They remind us that teaching isn’t transactional – it is transformative. And transformative teaching and learning does not just impact the world today, but the futures of tomorrow for all people.  

The System Isn’t Broken — It’s Working Perfectly As Designed 

As Distinguished Professor Bronwyn Carlson tells us: “It is time for change, real change”.  

The system isn’t broken. It’s working exactly as designed: to reinforce whiteness, to privilege student comfort over educator safety, and to silence those of us pushing back against settler colonial norms.

But we’re still here. Still teaching. Still resisting. 

We’ve seen the strength of Indigenous students who, for the first time, see themselves reflected in the curriculum — not as deficits, not as histories, but as sovereign peoples. 

We’ve watched non-Indigenous students confront their own relations to settler colonial systems. We’ve felt transformational shifts. But this requires tertiary systems that support rather than harm the people doing this work.  

Speaking truth to power: where is our safety? 

We want duty of care extended to all of us – not just to students, but also to Indigenous staff, particularly women and gender-diverse peoples, who have defied layers of oppression to be standing at the front of the room.

The student feedback system may never love us. But our communities do. Our students, the ones who are genuinely open to learning, do. And we remain committed to showing up, to voicing our truths, and to teaching with our whole selves. 

Because our presence in these institutions isn’t just resistance. It’s sovereignty.

Tamika Worrell is from Gamilaroi Country, and has been nurtured by Dharug Ngurra (Country), in Western Sydney. She is a senior lecturer in Critical Indigenous Studies at Macquarie University, researching Indigenous representation in education and Indigenous digital lives, including AI.  

Ash Moorehead is a Biripi Worimi woman, now living with/on Dunghutti Country. She is an associate lecturer in the Department of Critical Indigenous Studies at Macquarie University and a PhD candidate. Her research explores Indigenous sovereignty at the intersection of Indigenous education and research.

On AERO: Read this now. The critiques are well-founded.

KPMG is conducting a review of the Australian Educational Research Organisation (AERO), a ministerial-owned company funded by Commonwealth, state and territory governments.

I learned of the review not through a media release or ministerial announcement, but through a flurry of posts on social media, some critical of AERO and others leaping to its defence. The review includes in-person interviews, but also an easily gamed survey with significant design flaws: anyone can fill it out, any number of times, posing as any kind of stakeholder.

So it is important that we look carefully at what is being said publicly. The recent posts critical of AERO make a range of valid points but there are some interesting patterns in the posts coming to its defence.

One pattern is where the concerns are re-articulated and then dismissed as “misunderstandings about” or, elsewhere, “resistance to” evidence-based practice. It’s a short walk from there to the other pattern, whereby education research and the academics who produce it are caricatured as ill-informed or worse.

Zombie products

The caricatures draw on a discourse that has long been in operation in England and was imported here courtesy of the Abbott-Turnbull-Morrison government, championed by conservative think-tanks like the Centre for Independent Studies and the Institute of Public Affairs.

This discourse conflates the re-circulation of zombie products, programs, and ideas (think Brain Gym and “VAK” learning styles) with current education research, researchers, and university Initial Teacher Education. Writ large.

Seldom do those wielding this discourse acknowledge the potential commercial benefit from a new “open market”, cleansed of ‘hapless’ university academics and teacher educators.

Nor is it acknowledged that there is huge diversity in education researchers and in teacher education programs.

Never is solid evidence provided to support claims of charlatanry. We’re just inundated with the same claims time and again.

US far right figure and former Donald Trump advisor Steve Bannon calls it “flooding the zone”. And legitimate critique is in danger of being drowned out in the process.

Black and white

Let’s look at what Dwyer, Fuller and Humberstone said in a recent AARE EduResearch Matters blog, as it achieved national media attention and most likely prompted the defensive responses.

I won’t republish the whole blog here but present this excerpt as an example of legitimate critique:

The evidence and resources presented by AERO appear to position teachers as incapable of understanding and interpreting research, then making professional judgments based on their students and the school content. AERO presents the research as if it was black and white – “proven”, incontestable facts. The evidence is presented as an instruction manual, with no space for professional judgments or critique.

I don’t read this passage as a misunderstanding of or argument against evidence-based practice. Rather, the authors are taking issue with the surety with which AERO is making claims about the evidence it has selected, and the level of prescription in the materials they produce.

I share these concerns. Researchers are trained (especially those in the cognitive sciences) not to speak beyond the data or to make causal claims.

This is why you will find words like “suggest” and “indicate” in peer-reviewed research publications, even when something has been shown to “work”.

AERO’s materials have not been subjected to the same rigour and do not reflect the same caution. Dwyer, Fuller and Humberstone are quite right to call that out.

Right on another level

But these authors are right on another level too. While their appeal to professional judgement has elsewhere been dismissed with the old “choose your own adventure” chestnut, professional judgement is critical in classroom teaching because nothing works for all students, all of the time.  

In my own field, inclusive education, flexibility is key. The more prescriptive we are with teachers, the less they will be able to mix things up when they need to.

Before anyone paints me as an apologist for whole language, discovery learning, Brain Gym, VAK learning styles (take your pick of the education evils), I’m not.

In fact, for the last five years I’ve been leading a major research project funded by the Australian Research Council that melds insights from the cognitive and communication sciences with those from inclusive education.

I also think cognitive load theory has much to offer instructional design, but it is not everything. And the evangelism with which it and other favoured practices are being disseminated by AERO risks swinging us to the other extreme.

A key tension

In writing up the findings from our ARC Linkage project in a new book for educators, I’ve struggled with a key tension that AERO has either resolved to its satisfaction or never contemplated in the first place: what and how much to put out there, against what to hold back and why.

Our research suggests* that enhancing the accessibility of summative assessment task sheets significantly increases achievement outcomes for students with and without disabilities impacting language and information processing.

Great, right? Yes, it is. And we want every student in every classroom across Australia to benefit from this evidence.

But how to achieve this? We *could* develop a stack of accessible assessment task sheets and even create a commercial enterprise to pump it all out, pronto. Teachers won’t have to do a thing, we contribute to solving the workload problem, and we earn precious research income for all our effort.

Win, win, right?

Wrong. If we did that, we would rob teachers of the knowledge and skills they need to design accessible assessment. Knowledge and skills that they can later draw on to create modules in their school’s online learning management system or when developing learning resources.

We would also rob them of the creative and intellectual pleasure that can be found in the creation of said learning materials.

We would also risk de-professionalising teachers further than canned curriculum resources already have – resources that promise to save teachers’ time, but which are inflexible, inappropriate for students with disability, and difficult and time-consuming to adjust.

So, we’re not doing that. We have made our resources freely available on our website in the hope they will do good in the world but are leaving teachers to make decisions about the curriculum content to go in those resources, and supporting as many as we can to do this with ongoing professional learning.

An urgent course correction needed

This decision goes to the heart of the concerns raised about AERO and the appeal by education researchers to not just preserve but to nurture teachers’ professional expertise and judgement.

This can’t be achieved by flooding the zone with practice guides, infographics, and narrow prescriptions to teach for how some students learn (with “tiered interventions” for the rest).

We’ll find that out in time.

My only hope is that the KPMG review leads to an urgent course correction because the criticisms of AERO are well-founded. It is time for the Commonwealth, state, and territory governments to listen – and act.

Professor Linda Graham is Director of The QUT Centre for Inclusive Education (C4IE) at Queensland University of Technology. She leads several externally funded research projects, including the Accessible Assessment ARC Linkage. Linda has published more than 100 books, chapters, and articles, including the best-selling “Inclusive Education for the 21st Century”.


*See what I did there?