
Stop Policing Punctuation Now: Why AI Detection Needs a Rethink 

Like those of many educators, our social media feeds have been filled with commentary on the impact of AI on teaching, learning, and assessment. One problem appears intractable: how can we tell when students have used AI-generated text in their work? We’re not writing to offer an answer to that question; indeed, at this point, it’s clear that there is no reliable method of separating ‘synthetic’ text from ‘organic’. Instead, we want to draw attention to a troubling possibility, one of perhaps greater significance to those of us who teach just about every subject without necessarily teaching the finer points of literacy and writing. It’s this: in our efforts to police the use of AI in learning and assessment, are we likely to diminish the quality and character of our own writing, and that of our students, in the process?

While many of us have tried to keep up with new tools, evolving policies, and changes to detection software, one strategy has become increasingly popular: look for stylistic ‘tells’. It appears to us that the search for shortcuts in AI detection has led to what psychologists Shah and Oppenheimer call the “path of least resistance”. That is, we gravitate to cues that are easily perceived, easily evaluated, and easily judged. In other words, heuristics. Em dashes? Colons in titles? Specific words or expressions? All have been called out as signs of AI authorship. But here is the problem: these shortcuts don’t work. Worse, they normalise suspicion of well-crafted, edited and even creative writing. When we start to see polished punctuation and consistent tone as evidence of cheating, we inadvertently signal to students and our peers that good writing is suspect.

Why?

Let’s start with an example. On social media, we have seen commentators breezily claim that the use of the em dash—the long dash that can be used in place of parentheses or a semi-colon—is the “smoking gun” betraying that a text was authored by AI. A self-professed fan of the em dash, Andrew went searching. A cursory googling of the phrase “the em dash as an indicator of AI content” revealed that this is a popular topic, with plenty of commentary traded for and against the notion. Some suggest that em dashes make for well-styled and cadenced writing, while others claim that em dashes appear so infrequently in so-called ‘normal’ writing that seeing one ‘in the wild’ is always suspect. But the conjecture and speculation don’t end with the em dash.

The colon in titling: Another AI tell? 

So, we dug deeper. Another purported “give-away” is the use of the colon to separate titles and sub-titles in a body of text. This seemed a bit of a reach, as academic writing in particular often employs the colon to sub-divide or elaborate on concepts in titles and subtitles. At this point, we realised we needed to consult the proverbial source of these claims, so off to prominent large language models (LLMs) we went.

We each tried out different LLMs. In ChatGPT, Andrew started with the prompt “What are some common tells that a text is in fact AI-authored?” It came back with a list of 10, ranging from “Overly Formal or Polished Language” and “Overuse of Transitional Phrases” to “Too Balanced or Fence Sitting,” all of which could be claimed to be common in academic writing. By contrast, when Janine asked the same question, Gemini (2.4 Flash) replied: “repetitive phrasing, formulaic sentence structures, overly polished or generic language lacking deep insight or personal nuance, the frequent use of certain transitional phrases and words (e.g., “delve,” “tapestry”), and occasionally, factual errors or fabricated citations”.

Great questions

Admittedly, many of these were stylistic claims rather than observations about punctuation, so we pressed further. When ChatGPT was asked “What about punctuation?” it replied, “Great question – punctuation is another area where AI-generated text can give itself away.” It seemed that ChatGPT was confirming the blogosphere’s punctuation and style concerns about authenticity, noting that the overuse of things like commas, em dashes, colons, and parentheses are “punctuation-related tells.” Janine asked the same question, and Gemini replied: “AI-authored text can exhibit “tells” through the overuse of specific punctuation marks like the em dash or colon, the consistent application of flawlessly textbook-correct punctuation lacking human variation or “casual inconsistencies,” and the absence of the nuanced stylistic choices typical of human writers.”

Both lists of “AI tells” included overly polished work, overuse of certain phrases, and the consistent, almost perfect use of specific punctuation like em dashes and colons. The similarities were obvious: consistently correct or textbook punctuation and limited to no typos or casual inconsistencies. Was grammatically correct, proof-read work now concerning? Or worse, the sole domain of LLMs? Should we, as authors and educators, be aiming (as it were) to be more “casually inconsistent” in our writing so as not to appear to have used an LLM? And to teach our students this in turn?

On the spread of lazy AI detection heuristics  

In a fascinating paper, “The Path of Least Resistance: Using Easy-to-Access Information,” Princeton psychologists Shah and Oppenheimer propose a framework for understanding how people use highly accessible cues in everyday decision making. Their framework, in which they explain how people tend to use easily perceived, produced, and evaluated cues to make decisions, has particular relevance for a scenario in which a teacher is attempting to detect AI-generated text. As a visible linguistic marker, punctuation is an example of a highly accessible cue. After all, types and patterns of punctuation are easily perceived and evaluated by readers, much more so than nebulous concepts such as “tone” and “complexity” of vocabulary. One can see why punctuation as a cue for detecting AI-generated work makes for a seductive proposition, and why it has become the subject of so much social media speculation.
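To make the seduction concrete, here is a toy sketch of our own (purely illustrative; nobody, to our knowledge, ships this as a detector): a few lines of Python are all it takes to score a text on its em dashes and colons. That such a “cue” is this cheap to compute is precisely what makes it attractive, and precisely why it tells us nothing about who wrote the text.

```python
# Toy illustration only: punctuation cues are trivially easy to measure,
# which is what makes them seductive as heuristics, not what makes them valid.
def punctuation_cues(text: str) -> dict:
    words = max(len(text.split()), 1)  # guard against empty input
    return {
        "em_dashes_per_100_words": 100 * text.count("\u2014") / words,
        "colons_per_100_words": 100 * text.count(":") / words,
    }

sample = "Stop policing punctuation\u2014please: good writing is not a crime."
print(punctuation_cues(sample))
```

A careful human editor and an LLM both push these counts in the same direction, so the cue cannot separate them; it can only punish polish.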

Whether the punctuation “rules of thumb” for AI detection being promoted on social media are credible or not is one matter. One thing is nevertheless certain: the idea of punctuation as a tool for AI detection is pernicious—the em dash and other proposed AI detection heuristics are now in the public consciousness and being talked about as if they are useful, despite noteworthy appeals to reason here, here, and here. Our concern as educators is this: Collectively, we may be in real danger of assimilating these “easy” cues and applying them (whether consciously or otherwise) to our own writing and when assessing the work of our students.  

Where might this end? 

Educators are not immune to bias. In the absence of certainty, it’s natural for us to lean into intuition. But intuition shaped by social media tropes is not a sound basis for academic judgement. Perhaps the deeper danger here is that lazy heuristics for AI detection reduce our ability to actually teach and lower our expectations, as they cast suspicion on students and peers who have worked hard to improve their writing.

What if we ‘normalise’ our expectations for authentic writing to be automatically suspicious of polish, punctuation, and proof-reading in our students’ work? Rules of thumb and looking for “AI tells” are not the answer. They may make for seductive heuristics in the present AI-policing climate; but let’s be clear—they’re lazy and specious within the domains of scholarship and academic writing. No one has an answer to AI detection; there is no silver bullet, algorithmic or otherwise, to help us. In some meaningful ways, the Turing test appears to have been passed. But one thing is for sure: we need a new baseline.

Make sure you know the human

What that looks like is currently being debated across the globe. Some have returned to pen and paper, handwritten notebooks and oral defences, but in a context where good writing is suspect, this suggests one thing: if you can’t distinguish AI-generated text from human, make sure you know the human (in this case, your students). That, at least, is what we should be aiming for in higher education. Small classes help; getting to know your students early in a course is even better. Whether you re-engage students with pen and paper, or utilise verbal presentations as components of teaching, learning, and assessment, one thing is clear: in the arms race of academic integrity in the age of AI, knowing your student, rather than relying on rules of thumb or expensive detection algorithms, is the path forward. And to Andrew’s earlier point—no, he will not stop using the em dash.

Andrew Welsman is an educator in initial teacher education at Victoria University. A former secondary school teacher, his teaching and research interests centre on issues affecting STEM education and the pedagogical impacts of digital technologies. Janine Arantes is a senior lecturer and Research Fellow at Victoria University with expertise in AI governance, education policy, and digital equity.

Are machines now appealing?

A colleague recently shared a polite email from a student appealing their assessment grades. Every rubric criterion was addressed and defended in tremendous detail.

It felt optimised, and in an age of generative AI, maybe that’s exactly what it was.

We’re entering a new phase where students use AI not just to prepare assessments but to craft appeals, generating arguments perfectly shaped to align with criteria and maximise persuasive force.

To understand this development, we must first examine the role of rubrics in contemporary education. Assessment rubrics function as what Michel Foucault might recognise as disciplinary technologies, tools that standardise judgment and render subjective evaluation processes transparent and measurable. They represent institutional attempts to rationalise assessment, making explicit the criteria by which student work is evaluated, theoretically democratising access to success criteria. 

Constructing appeals with unprecedented precision

As Pierre Bourdieu reminds us, institutions often reward not just knowledge but the ability to navigate codes and expectations. When rubrics and standardised criteria are coupled with AI-augmented optimisation, however, we risk shifting learning’s centre from transformative engagement to compliance engineering, undermining the outcomes we are attempting to measure.

Students with access to sophisticated AI tools can now systematically analyse rubric language, identify optimisation opportunities, and construct appeals with unprecedented precision. This development represents what Jürgen Habermas would likely classify as the colonisation of educational lifeworlds by instrumental rationality, the reduction of learning processes to technical problems requiring algorithmic solutions.

When academic feedback like “this section lacks depth” gets treated as a technical problem to solve, however, rather than expert judgment to engage with, we transform educational dialogue. The more “optimised” the process, the less space for generosity, nuance, or authentic learning’s messy back-and-forth.

Jacques Rancière’s work on pedagogy suggests that educational relationships depend on the assumption of human mutuality, a recognition that both student and teacher are capable of thought and interpretation. AI-mediated appeals disrupt this dynamic. When students rely on AI to process feedback, the algorithm does not engage with feedback as a thinking subject but processes it as information to be optimised against. 

Recognition and validation

I expect that students using AI for appeals genuinely care. They want recognition and validation, in addition to graduating with their degree. Max Weber’s analysis of rationalisation processes, however, helps us understand how well-intentioned actions can contribute to broader structural changes that undermine their original purposes. Weber observed how the rationalisation of social life (the systematic organisation of action according to calculated rules) tends to displace value-rational action (action oriented toward ultimate values) with instrumental rationality (action oriented toward efficiency). When students optimise appeals against rubric criteria, they engage in precisely this type of instrumental calculation, even when their underlying motivations remain value-oriented.

Academic assessment involves what Aristotle called phronesis: practical wisdom that cannot be reduced to rule-following. When educators evaluate student work, they exercise judgment that draws on disciplinary expertise, pedagogical experience, and contextual understanding. This judgment necessarily involves interpretation and cannot be fully systematised. AI-optimised appeals attempt to bypass this judgmental dimension by reducing assessment to rule application. This reduction represents what Herbert Marcuse might recognise as one-dimensional thinking, the flattening of complex educational relationships into technical procedures.

The proliferation of AI-mediated appeals has broader implications for educational institutions. If Anthony Giddens is correct that modern institutions depend on trust relationships between expert systems and lay participants, then the mechanisation of student appeals may erode the trust relationships that sustain educational institutions.

Educators as algorithmic systems

When students systematically optimise against assessment criteria rather than engaging with feedback as developmental guidance, they effectively treat educators as algorithmic systems rather than professional practitioners. This shift may prompt educators to become more defensive in their assessment practices, potentially reducing the pedagogical risk-taking that often produces meaningful learning experiences.

If appeals processes become dominated by AI optimisation, institutions may respond by developing counter-measures: AI systems to evaluate AI-generated appeals. In Jean Baudrillard’s terms, a simulation replaces real interaction with its mechanised imitation.

This broader context helps explain why AI-optimised appeals feel unsettling even when students’ motivations appear legitimate. The optimisation process treats educational relationships as data to be manipulated rather than human connections involving care, judgment, and mutual recognition.

The messy middle

We live in the messy middle where human and machine shape one another. It is a zone of entanglement where our judgements, our values and our decisions are increasingly mediated, supported or even challenged by machine outputs. Machines, however, do not care. Education’s meaning is formed in relational and ethical spaces. We must protect them.

Jonathan Boymal is an associate professor of economics in the School of Economics, Finance and Marketing, RMIT University’s College of Business and Law. He has 25 years of higher education leadership experience at the undergraduate and postgraduate levels across Melbourne, Hong Kong, Singapore and Vietnam, in roles including Associate Deputy Vice-Chancellor, Learning Teaching and Quality and Academic Director, Quality and Learning and Teaching Futures. Jonathan holds a PhD in Economics.

Why we should worry about smart glasses in schools

Smart glasses are the latest shiny object in the edtech world. Sleek, AI-powered, and promoted as the next evolution in learning, they promise to transform the classroom. Real-time feedback. Immersive experiences. Personalised accessibility support. But here’s the thing: they also record. They see. They store data. And they’re being quietly rolled out in schools, because anyone can go to the Ray-Ban website, pay $500 and get them delivered to their door. Even if they live across the road from a school. Without a serious national conversation about what’s at stake, there are some critical questions we believe need your attention.

Some enthusiastically praise these devices, painting tech-enhanced chemistry labs and accessible support for neurodiversity as exciting. Useful, even. But check whether they mention ethics. (Scrolling through the Ray-Ban website….)

Does it mention rights? 

Does it mention harm? 

What they do and don’t mention matters

Once a device can record, it can surveil. It can be used to monitor behaviour, capture images without consent, and stream content live to platforms beyond the classroom. In the hands of the wrong user, smart glasses aren’t just learning tools – they’re tools of manipulation, misuse, and control. And remember – anyone can buy smart glasses. This is a very different context from CCTV footage in schools. To explain, let’s not be naive about who’s influencing young minds right now.

The same students, staff, parents and onlookers who might wear smart glasses may also be influenced by Andrew Tate on TikTok, YouTube, and Instagram. Imagine this: you’re teaching Maths. A student parrots Andrew Tate’s misogynistic views and live streams your response through the smart glasses they were wearing. You had no idea. It’s not like they held up their phone. They were just looking at you, through their glasses. Within minutes, it’s online, weaponised, and fed to the Tate army. They didn’t mean to destroy you or your capacity to feel safe teaching. But they did.

You leave teaching. 

Not because you wanted to. 

But because the damage was done. 

Are you being recorded?

Now, even walking the streets feels uneasy – you are left wondering if the glasses people are wearing are quietly recording you while you buy groceries or cross the road. Other teachers start to wonder. What if parent-teacher interviews had smart glasses? Or the swimming carnival? What if someone is just sitting outside the school with a pair of glasses on while kids are playing? They don’t have their phone out, so their activity doesn’t trigger concern. This isn’t speculation. We’ve already seen how images can quickly be used to make AI-generated deepfake nudes of girls in schools. And teachers aren’t exempt. What smart glasses do is lower the barrier between thought and action.

They offer immediacy. Stealth. Power.  

We’ve already seen smart glasses banned from ATAR exams in WA. But what about banning them from parent-teacher interviews? PE lessons? Swimming carnivals? Where are the boundaries? And while we are asking questions – who is collecting the data? And where is it all going? How does it align with current and emerging legislation? All of this is being marketed under the guise of innovation. But innovation without ethical frameworks can be weaponised. Smart glasses do not exist in a vacuum.

They exist in a world shaped by misogyny, online abuse, discrimination and algorithmic amplification of harm. If we ignore that, if we look only at the marketing promises and not at the sociocultural context, we are putting not only students, but teachers, parents, and society at risk of harm. We need to stop treating “real-time feedback” as neutral. We need to stop pretending “immersive” means safe. And we need to seriously question who benefits from “innovation” when surveillance is embedded in the hardware and marketed by people with millions of followers, like Chris Hemsworth.

Let’s be clear: this is not just about a gadget. 

Outside of schools, smart glasses are marketed as sleek, cutting-edge tools designed to enhance everyday life, work, and productivity. In the consumer market, they’re promoted as lifestyle wearables that offer hands-free access to navigation, messaging, music, and AI assistance – all wrapped in fashionable, discreet frames. In education, smart glasses are being marketed as inclusive and dynamic. But in practice, they are building out a surveillance infrastructure inside schools. Anyone with smart glasses (students, parents, teachers, the person sitting outside the school) might soon have access to real-time facial recognition, eye tracking, emotion analysis, and real-time data sharing. That’s not innovation. That’s infrastructure. Infrastructure of the kind we legislate around to ensure our rights are upheld.

Which Brings Us to Chris Hemsworth 

You’ve probably seen the ads. Chris Hemsworth, superhero, Aussie icon, father of school-aged children, promoting AI-integrated smart glasses with enthusiasm and charm. He’s partnered with Ray-Ban to showcase how wearable AI is the future. But here’s the thing: when a celebrity of his influence endorses surveillance tech, especially without reference to consent or harm, it’s not just a missed opportunity. It’s reckless. Now, to be clear – this isn’t about criticising Chris Hemsworth – it’s a call to anyone with the power to shape public perception. If you have the platform, the reach, or the resources, you also have the responsibility to bring the potential harms of emergent technologies in education into the conversation. Because ignoring those risks, especially when kids, parents, and teachers are watching, can’t be written off as naïve. You have a social responsibility to consider whether it is reckless.

And it is reckless

Reckless means acting without thinking about the potential consequences – especially when those actions could cause harm. That’s why we need an awareness campaign. Kids look up to Chris Hemsworth. So do parents. So do teachers. If smart glasses are going to be marketed to schools and families, there must be transparency about what they do and what they risk. That’s why we need to have a conversation with Chris. Not about banning the tech. But about being responsible with his platform.

Technology will continue to be marketed aggressively, but those with the power to influence and implement it must take far greater responsibility for its impact. Imagine if Chris Hemsworth read this and considered the perspective of a teacher. In the middle of a teaching crisis, a teacher is trying to deliver a science lesson on a sweltering Friday afternoon in a 35-degree classroom packed with 30 students, only to come home and discover that a slip of the tongue, saying “orgasm” instead of “organism,” has been turned into viral content in the manosphere. One more teacher doesn’t return to the classroom.

Where Do We Go from Here? 

We are not saying “ban it all.” 

We are saying: Pause. Reflect. Regulate. 

We are also not blaming anyone. Because this isn’t about blame. It’s about responsibility. It’s our collective responsibility to adopt new technologies in ways that respect commonly held expectations, especially in semi-private spaces. We already know not to film at swimming carnivals, in toilets, or in change rooms, and we wouldn’t wear our phones on our faces during a parent-teacher interview – but smart glasses would do exactly that. Imagine if you just ‘forgot to take the glasses off’… Further, teachers and schools shouldn’t be expected to manage the risks of tech like smart glasses alone. Meta, Ray-Ban, and others must embed safeguards and transparency, and commit to safety by design, privacy by design and so on. And those with huge platforms, like Chris Hemsworth, could use them not just to promote, but to help spark conversations about where this tech belongs.

What if Chris Hemsworth posted to his 52M followers that “Some devices are made for skydiving – not for schools” – would the conversation shift? We would love to hear your thoughts.

Bios

Janine Arantes is a researcher, educator and advocate at Victoria University exploring the social, ethical, and psychological impacts of emerging technologies in education. Andrew Welsman is a researcher and educator at Victoria University with expertise in STEM education, digital technologies, and initial teacher education.

#AARE 2024 now! Hello and welcome to the first day of our AARE conference blog

Day One, December 1, 2024.

We will update here during the day so please bookmark this page. Want to contribute? Contact jenna@aare.edu.au


Please write, comment and participate on social media about our AARE2024 blog, using the hashtag #AARE2024.

Matt Bower shares some thoughts on AI

The recent generation of increasingly powerful artificial intelligence is having a disruptive impact on education. Students can use any number of tools, such as ChatGPT, to help them complete text-based assignment tasks. But there is also a wide range of multimedia tools that can help them create images, videos, music, presentations and more. We need to fundamentally rethink our priorities when it comes to teaching – what should education be about?

Teachers at educational institutions understand they need to change their work, and they have understood that since the beginning of generative artificial intelligence, marked by ChatGPT. Most agree that they need to make major changes to what they teach, the way they teach and how they assess. But most teachers do not feel well supported to make the requisite changes to their teaching, assessment and supportive practices.

Educational institutions are understandably striving to uphold academic integrity to ensure that students are using generative AI in ways that help them learn, rather than having AI supplant that learning. But there is an increasing acceptance that any student who wants to hide the fact that they’ve used generative AI can normally do so. 

One of the key messages is that we really need to work with students on dispositional aspects of learning, to help them understand that they will have greater benefits from their education if they use AI as a learning machine rather than an answer machine – that learning still needs to take place in the mind and you can’t have anyone else do your laps for you. AI has the potential to be a wonderful mindtool and amplifier of creativity, but we must ensure that students are motivated and know how to use AI well, rather than as a way to bypass their learning.

There’s an urgent need for research along a number of dimensions. How do students interact with these technologies inside and outside of classrooms? How can we effectively help students develop their AI literacies so they can engage with AI in ethical, critical, safe and productive ways? How should we rethink assessment to ensure that we are assessing humans and not artificial intelligence? How can teachers best be supported to navigate this major educational transition? And how do we support educational leaders and the system as a whole to rethink policy and professional learning?

There are a number of ways that we can also use AI to help us conduct research. How to do this ethically is an evolving area, but we need to consider how we can use AI to expedite some of the more tedious and menial aspects of the research process – for instance, cleaning and coding data – to help accelerate research progress in the education field. It’s an exciting time in educational research, and as always with technology, the benefits we derive will depend on how we use it.
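As a purely illustrative sketch of what that might look like, consider asking an LLM to suggest a first-pass code for an interview snippet. Everything here is an assumption for illustration: the OpenAI Python client, the model name, and the made-up code set. Any real use would need ethics approval, and a human researcher would still verify every label.

```python
# Illustrative sketch only: asking an LLM to suggest a first-pass qualitative
# code for an interview snippet. Model name and code set are assumptions;
# every suggested label would still need human verification.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CODES = ["workload", "assessment integrity", "student wellbeing"]  # hypothetical code set

def suggest_code(snippet: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever you use
        messages=[{
            "role": "user",
            "content": (f"Classify this interview snippet into exactly one of "
                        f"{CODES}. Reply with the code only.\n\n{snippet}"),
        }],
    )
    return response.choices[0].message.content.strip()

print(suggest_code("Marking load has doubled since we moved assessment online."))
```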

Matt Bower is a professor of educational technology in the School of Education at Macquarie University. His work focuses on how contemporary and emerging technologies can be used to enhance learning.

Thanks to Steph Wescott and Ben Zunica for the images.

Gamilaroi woman Michelle Bishop speaks passionately about Reclaiming Research

By Ren Perkins

Michelle started off by providing an intimate and emotional Acknowledgement of Dharug Country. In acknowledging Country and Ancestors, Michelle mentioned it was because of them she was here.

Images below thanks to Ren Perkins, Naomi Barnes and Ben Zunica

In reclaiming the research space, Michelle spoke to Indigenous sovereignty in research. As Michelle stated, “education has been occurring here on so-called Australia for tens of thousands of years”. This was emphasised by the words of Torres Strait Islander scholar Prof Martin Nakata: curriculum did not arrive by boat, and pedagogy did not arrive by boat. Also in reference to Nakata, Michelle stated that the education system in Australia was designed by the colonisers, for the colonisers. As Michelle said, the schooling system is not broken; it is working as intended. That is, to promote the hierarchy of race, individualism and meritocracy.

Michelle shared that research demonstrates that schools can be sites of harm for many Aboriginal and Torres Strait Islander students. In fact, schools can re-traumatise, re-marginalise and create experiences of racism for Aboriginal and Torres Strait Islander students. As Michelle said, “There is evidence of how our kids are suffering”. Michelle shared a traumatic experience in which she witnessed first hand how an Aboriginal student was treated by a senior school staff member. Michelle recalled the student being told, “Well, what are we going to do with you, now we can’t use corporal punishment?”

Turning to Indigenous research, Michelle asked the audience what they knew about it. This was to try and shift the focus away from Indigenous peoples being the subject and object of research. As Michelle stated categorically, “Nothing about us, without us!” To assist researchers, Michelle outlined the AIATSIS research code of ethics, which is underlined by integrity and acting in the right spirit.

The theme of AARE2024 is education in a changing world. Michelle posed the question to all of us: What is our collective responsibility? For Michelle, her responsibility is towards Ancestors, young people and future generations.

Michelle underlined this with three questions:

How to make schools safe(r)?
How to step outside colonial-controlled schooling?
How to assert our knowledge systems as rigorous and valid?

Michelle presented the Kin & Country Framework (Bishop & Tynan, 2024). To finish, Michelle left us with the thought-provoking question, “How can we become good Ancestors?”

Lightning Talks – thanks again to Steph Wescott who wrote about this session

Lightning Talks A

Following a brilliant talk from Dr Michelle Bishop, we reconvened for the pre-conference lightning talks – three minutes to tell us about your research and two minutes for questions. Rapid-fire, no slides. This post provides a brief overview of the talks presented in one of two lightning talk sessions.

Alice Elwell (Deakin University) 

Knowing differently means feeling differently: affect in the critical English classroom

Alice tells us she’s writing about ‘vibes’ (or, the affective intensities that occur in the classroom when teachers are using critical literacies). In the English classroom, Alice explains that when big topics are engaged with, ‘big’ things happen. These vibes are pedagogies, shaping what happens and what can be known. When you do this, what do people feel in the classroom? Alice introduces us to a set of metaphors she has designed to work through her data, leaving us ready to think and feel powerfully in our own work and classrooms. Alice is also wearing very cool earrings, so make sure you say hi to her today. 

Stef Rozitis (University of South Australia) 

“People need to know that we are doing important work here”: Early childhood educators in their own words

Stef’s research explores how gendered, maternalistic discourses shape the identities of early childhood educators. Arguing that maternalism persists in policy work and in people’s perceptions of early childhood work, and using post-qualitative inquiry to find multiple meanings and resonances, Stef found that participants used multiple discourses to speak about their roles. Stef’s participants distanced themselves from maternalism but also slid into it at times, evoking discourses of care and care ethics, market discourses, complex discourses around the value of the work, and discourses of being skilled and experienced workers.

Stephanie Milford (Edith Cowan University) 

Parental Mediation in the Digital Age: Insights from My Research

Stephanie’s research explores parental mediation of device use among children. She says that parents’ roles are made difficult by the conflicting messages they receive about children’s screen time: that there are both benefits and harms. But what should they do about it? Parents must navigate these complexities, and Stephanie is interested in what informs their choices. Her research found that both micro and macro factors influenced parents’ decisions, and that parent self-efficacy played an important role. Findings highlighted the need for clear, consistent and non-judgemental support for parent decision-making.

Giorgia Scuderi (Aarhus University) 

Crafting Creative Ways of Conducting Qualitative Research on Young People’s Analogue-Digital Relations

Giorgia shares that her PhD focusses on how gender is negotiated by young people and their parents, using ethnographic research in both Italy and Denmark. Giorgia also used workshop-based focus groups but encountered ethical problems around attempting to use relational approaches in her research. Giorgia is keen to chat through the ethical barriers others encounter in their research while she’s here at AARE! Giorgia also invokes ‘vibes’, which is beginning to emerge as a key theme of this session. She is also jetlagged as she travelled here from Italy; perhaps someone should buy her a drink this week! 

Tracey Sanderson (University of the Sunshine Coast) 

Supporting parents to promote a passion for reading

Tracey begins by telling us to get comfy while she tells us a story. This story is about a literature-loving teacher whose work aims to inspire a love of reading in her students and to develop a culture of reading in her classroom. At this point the audience begins to suspect that this story is about Tracey, but this remains unconfirmed. Tracey reminds us that if we want to know what kinds of support parents need to support reading in their homes, we need to ask them. Her research found that the stories of reading exist within families, not in textbooks. The story ends unexpectedly with our heroine working to develop an app to store resources and provide support to families looking to develop a love of reading in their children. 

Ben Archer (James Cook University) 

The Impact of Opportunity – Educational Access and Career Outcomes in Regional, Remote and Rural Australia

Ben wants to know why young people make the career choices they do. He tells us about his son, who was born vision impaired, and how that led him to consider a regional lifestyle for his family. However, the closest specialist was in Sydney, which led Ben to consider the skill shortage in regional places. This led him to his PhD journey, which traces students from year 7 to the time young people make pivotal career decisions. He is looking at the ‘missing piece’, which he says is career advice. ‘What’s happening?’ he asks. He found that in year 7, students look at anything beyond rugby league player or TikTok influencer as ‘hard to get’; in particular, careers that require university entrance. Unfortunately, Ben is ‘stuck in ethics hell’, and is hoping to make progress and begin to conduct his work in schools. 

Amy Kaukau (Te Wananga Aronui O Tamaki Makau Rau – Auckland University of Technology) 

Exploring Mātauranga Māori in Bicultural Physical Education: A Tool Based Approach for Teacher Development

Amy is exploring bicultural experiences in physical education. Her ‘why’, she explains, is found within her family and her work as a teacher; she began to see the world from her children’s perspective and wanted to understand education from a Mātauranga Māori perspective. She says that there is a need to understand the ‘how’ and ‘what’ in relation to what we incorporate into our curriculum and teaching programs. Amy’s research design is participatory action research, and she believes in the transformational work that can take place in this space. Māori data sovereignty is important to her work, and participatory action research allows her to ensure that this is protected. In Amy’s research, she worked with knowledge leaders in Mātauranga Māori to design a tool that helps incorporate Mātauranga Māori knowledge into PE experiences, which has been shared with four teachers in their work. Amy hopes to develop something tangible at the end of the research that can be used for bicultural education. 

And that concludes this session of lightning talks. Be sure to catch these researchers’ other papers throughout the week! 

So what? What matters when it comes to research

Ben Zunica was at the panel discussion which offered perspectives on getting published.

Panellists were: Helen Watt, Stewart Riddle, Susanne Gannon and Stephanie Wescott. 

Here’s a brief summary.

This was a session designed to help early career researchers and postgraduates with getting published. It included tips on how to get published and what to do to make your articles more attractive.

Should it be quantity or quality? Our panellists agreed that quality mattered. Stewart Riddle spoke from his perspective as editor of the Australian Educational Researcher. He said that abstracts were crucial – more important than you think.

“Everything comes down to your abstract – it’s like an advertisement for your paper. If you stuff up the abstract, the editor will just desk reject. The abstract sells the paper to the team.”

He recommended signing up to be a reviewer for a journal as a good strategy for becoming a successful academic writer.

“You read other people’s work, read and provide feedback. Sign up to be a reviewer.”

Susanne Gannon talked about what made a good article – and how that provides inspiration for your own writing. Stephanie Wescott talked about how she began her career and had been published often. She had also engaged with the media. She said it was important to publish thorough and reliable work. 

It’s not about getting clicks, it’s about publishing good work. 

Helen Watt talked about the dos and don’ts of academic publishing and how to get onto the trajectory of being published in the educational space. Her top tips: you need to have something important to say – that’s the “so what?” mechanism. Publishing is not all about writing. Good writing will not save bad work. Networks and communities matter – not just to disseminate, but to interact and join in the conversation. 

Bad work will follow you. Don’t do it. 

We stand on the shoulders of giants. Be clear about your point of departure about what is known and join the conversation.

There was also further discussion about the implications of AI for publishing, following on from Matt Bower’s comments at today’s keynote.

Teachers truly know students and how they learn. Does AI?

Time-strapped teachers are turning to advanced AI models like ChatGPT and Perplexity to streamline lesson planning. Simply by entering prompts like “Generate a comprehensive three-lesson sequence on geographical landforms,” they can quickly receive a detailed teaching program tailored to the lesson content, complete with learning outcomes, suggested resources, classroom management tips and more.

What’s not to like? This approach represents a pragmatic solution to educators’ overwhelming workloads. It also explains the rapid adoption of AI-driven planning tools by both schoolteachers and the universities that train them.  

And what do we say to the naysayers? Don’t waste your time raging against the machine. AI is here! AI is the future! 

Can AI know students and how they learn?

But what does wide-scale AI adoption mean for the fundamental skills and knowledge that lie at the heart of teaching – those that inform the Australian Professional Standards for Teachers? Take Standard 1.3, for example, under “Know students and how they learn”. This standard requires teachers to show that they understand teaching strategies that respond to the learning strengths and needs of students from diverse linguistic, cultural, religious, and socioeconomic backgrounds. Can AI handle this type of differentiation effectively? 

Of course! Teachers simply need to add the following phrase to the original prompt: “The lesson sequence should include strategies that differentiate for students from culturally and linguistically diverse backgrounds”. Hey presto! The revised lesson sequence now incorporates strategies such as getting students to write a list of definitions for key terms, using scaffolding techniques, implementing explicit teaching, and allowing students to use their home languages from time to time.

Even better, AI can create a worksheet that includes thoughtful questions such as, “What are some important landforms in your home country?”, “What do you call this type of landform in your home language?” and so on. With these modifications, we have effectively achieved differentiation for a culturally and linguistically diverse classroom. Problem solved! 
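Mechanically, this “fix” really is that cheap, which is rather the point. A minimal sketch (assuming the OpenAI Python client; the model name is an assumption) shows how little effort the bolt-on differentiation costs:

```python
# Illustrative sketch of the prompt bolt-on described above. The client
# library and model name are assumptions; the point is how mechanical
# this kind of "differentiation" is.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

base_prompt = "Generate a comprehensive three-lesson sequence on geographical landforms."
differentiation = ("The lesson sequence should include strategies that differentiate "
                   "for students from culturally and linguistically diverse backgrounds.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{"role": "user", "content": f"{base_prompt} {differentiation}"}],
)
print(response.choices[0].message.content)
```

One appended sentence and the model obliges. Whether the output amounts to knowing students is another matter.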

Can AI deal with the mix?

Or have we? Can AI truly comprehend the complexities of diversity within a single classroom? Consider this scenario: you are a teacher in western Sydney, where 95 per cent of your class comes from a Language Background other than English (LBOTE). This is not uncommon in NSW, where one in three students belongs to this category. 

Your class comprises a mix of high-achieving, gifted and talented individuals—some of whom are expert English users, while others are new arrivals who have been assessed as “Emerging” on the EALD Learning Progression. These students need targeted language support to comprehend the curriculum. 

Your students come from various backgrounds. Some are Aboriginal Australian students, while others come from Sudan, China, Afghanistan, Nepal, and Bangladesh. Some have spent over three years in refugee camps before arriving in Australia, with no access to formal education. Others live in Sydney without their families. Some are highly literate, while others have yet to master basic academic literacy skills in English.

Going beyond

In this context, simply handing out a worksheet and expecting students to write about landforms in their “home country” can be an overwhelming and confusing task. For some students, being asked to write or speak in their “home language” while the rest of the class communicates in English may trigger discomfort or even traumatic memories related to the conflicts they have escaped. Recognising these nuances is essential for effective differentiation and raises important questions about whether AI can sufficiently navigate the complexities of such diverse classrooms. 

Teachers must go beyond merely knowing their students’ countries of origin; they need to delve into their background stories. This includes appreciating and encouraging the language and cultural resources that students bring to the classroom—often referred to as their virtual schoolbag. Additionally, educators must recognise that access to material resources, such as technology and reading materials, can vary significantly among students. Understanding how students’ religious backgrounds may influence their perspectives and engagement with the content is equally important. Only by taking these factors into account can teachers create a truly inclusive and responsive learning environment.

Then there’s the content itself. Teachers need to critically evaluate the content they plan to teach by asking themselves several important questions. These include: What are my own biases and blind spots related to this subject matter? What insights might my students have that I am unaware of? What sensitivities could arise in discussions about this content concerning values, knowledge, and language? Most importantly, how can I teach this material in a culturally and linguistically responsive manner that promotes my students’ well-being and achievement?

One overarching concern

All of these questions point to one overarching concern: Can AI truly address all of these considerations, or are they essential to the inherently human and relational nature of teaching?

Australian linguist and emeritus professor of language and literacy education at the Melbourne Graduate School of Education Joseph Lo Bianco says the benefits of AI have been significantly overstated when it comes to addressing language and culture effectively in classroom teaching. 

Although AI excels at transmitting and synthesising information, it cannot replace the essential interpersonal connections and subjectivity necessary for authentic intercultural understanding. The emotions, creativity, and personalised approaches essential for meaningful teaching and learning are inherently human qualities. 

AI, an aid not a replacement

While AI tools like ChatGPT and Perplexity offer impressive efficiencies for lesson planning, they cannot replace the nuanced understanding and relational dynamics that define effective teaching in culturally and linguistically diverse classrooms. Teachers need to recognise that AI can aid in differentiation but lacks the capacity to fully comprehend students’ individual experiences, histories, and emotional landscapes. The complexities of student backgrounds, the significance of personal narratives, and the critical need for empathetic engagement cannot be reduced to algorithms. 

As we embrace AI in education, we must remain vigilant in advocating for a pedagogical approach that prioritises human connection and cultural responsiveness. Ultimately, teacher AI literacy should encompass not just the technical skills to integrate AI into classrooms but also the profound understanding of students as whole individuals, fostering an inclusive environment that values each learner’s unique contributions. In this way, we can harness the power of technology while ensuring it complements the irreplaceable art of teaching.



Sue Ollerhead is a senior lecturer in Languages and Literacy Education and the Director of the Secondary Education Program at Macquarie University. Her expertise lies in English language and literacy learning and teaching in multicultural and multilingual education contexts. Her research interests include translanguaging, multilingual pedagogies, literacy across the curriculum and oracy development in schools. 

A new sheriff is coming to the wild ChatGPT west

You know something big is happening when the CEO of OpenAI, the creators of ChatGPT, starts advocating for “regulatory guardrails”. Sam Altman testified to the US Senate Judiciary Committee this week that the potential risks of misuse are significant, echoing other recent calls by former Google pioneer, the so-called “godfather of AI”, Geoffrey Hinton.

In contrast, teachers continue to be bombarded with a dazzling array of possibilities, seemingly without limit – the great plains and prairies of the AI “wild west”! One estimate recently made the claim “that around 2000 new AI tools were launched in March” alone!

Given teachers across the globe are heading into end of semester, or end of academic year, assessment and reporting, the sheer scale of new AI tools is a stark reminder that learning, teaching, assessment, and reporting are up for serious discussion in the AI hyper-charged world of 2023. Not even a pensive CEO’s reflection or an engineer’s growing concern has tempered expansion.

Until there is some regulation, the proliferation of AI tools – and voices spruiking their merits – will continue unabated. Selecting and integrating AI tools will remain contextual and evaluative work, regardless of regulation. Where does this leave schoolteachers and tertiary academics, and how do we do this with 2000 new tools in one month (is it even possible)?!

Some have jumped for joy and packed their bags for new horizons; some have recoiled in terror and impotence, bunkering down in their settled pedagogical “back east”. 

As if this was not enough to deal with, Columbia University undergraduate, Owen Terry, last week staked the claim that students are not using ChatGPT for “writing our essays for us”. Rather, they are breaking down the task into components, asking ChatGPT to analyse and predict suggestions for each component. They then use ideas suggested by ChatGPT to “modify the structure a bit where I deemed the computer’s reasoning flawed or lackluster”. He argues this makes detection of using ChatGPT “simply impossible”. 

It seems students are far savvier about how they use AI in education than we might give them credit for, suggests Terry. They are not necessarily looking for the easy route but are engaging with the technology to enhance their understanding and express their ideas. They’re not looking to cheat, just to collate ideas and information more efficiently.

Terry challenges us as educators and researchers to consider that we might be underestimating students’ ethical desire to be broadly educated, rather than automatons serving up predictive banality. His searing critique of how we are dealing with our “tools” is blunt – “very few people in power even understand that something is wrong…we’re not being forced to think anymore”. Perhaps contrary to how some might view the challenge, Terry suggests we might even:

need to move away from the take-home essay…and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.

The urgency of “what do I do with the 2000 new AI apps” seems even greater. These are only the ones released during March. Who knows how many will spring up this month, or next, or by the end of 2023? Who knows how long it will take partisan legislators to act, or what they will come up with in response? Until then, we have to make our own map.

Some have offered a range of educational maps based on alliterative Cs – 4Cs, 6Cs – so here’s a new 4Cs about how we might use AI effectively while we await legislators’ deliberations:

Curation – pick and choose apps which serve the purpose of student learning. Avoid popularity or novelty for its own sake. In considering what this looks like in practice, it is useful to consider the etymology of “curation”, from the Latin cura, meaning ‘care’. Indeed, if our primary charge is to educate from a holistic perspective, then that care must extend to our choice of AI tools or apps that will serve students’ learning needs and engagement.

The fostering of innate curiosity means being unafraid to trial things for ourselves and with and for our students. But this should not be to the detriment of the intended learning outcomes, rather to ensure they align more closely. When curating AI, be discerning in whether it adds to the richness of student learning.

Clarity – identify for students (and teachers) why any chosen app has educative value. It’s the elevator pitch of 2023 – if you can’t explain to students its relevance in 30 seconds, it’s a big stretch to ask them to be interested. With 2000 new offerings in March alone, the spectres of cognitive load theory and job demands-resources theory loom large.

Competence – don’t ask students to use it if you haven’t explored it sufficiently. Maslow’s wisdom on “having a hammer and seeing every problem as a nail” resonates here. Having a hammer might mean I only see problems as nails, but at least it helps if I know how to use the hammer properly! After all, how many educators really optimise the power, breadth, and depth of Word or Excel…and they’ve been around for a few years now. The rapid proliferation makes developing competence in anything more than just a few key tools quite unrealistic. Further, it is already clear that skills in prompt engineering need to develop more fully in order to maximise AI usefulness. 

Character – Discussions around AI ethical concerns—including bias in datasets, discriminatory output, environmental costs, and academic integrity—can shape a student’s character and their approach to using AI technologies. Understanding the biases inherent in AI datasets helps students develop traits of fairness and justice, promoting actions that minimise harm. Comprehending the environmental impact of AI models fosters responsibility and stewardship, and may lead to both conscientious use and improvements in future models. Importantly for education, tackling academic integrity heightens students’ sense of honesty, accountability, and respect for others’ work. Students have already risen to the occasion, with local and international research capturing student concerns and their beliefs about the importance of learning to use these technologies ethically and responsibly. Holding challenging conversations about AI ethics prepares students for ethically complex situations, fostering the character necessary in the face of these technologies.

These 4Cs are offered in the spirit of the agile manifesto that has undergirded software development over the last twenty years: early and continuous delivery, and delivering working software frequently. The rapid advance from GPT-3 to GPT-3.5 and GPT-4 shows the manifesto remains a potent rallying call. New iterations of these 4Cs for AI should similarly invite critique, refinement, and improvement.

Dr Paul Kidson is Senior Lecturer in Educational Leadership at the Australian Catholic University, Dr Sarah Jefferson is Senior Lecturer in Education at Edith Cowan University, and Leon Furze is a PhD student at Deakin University researching the intersection of AI and education.