Andrew Welsman and Janine Arantes

Stop Policing Punctuation Now: Why AI Detection Needs a Rethink 

Like those of many educators, our social media feeds have been filled with commentary on the impact of AI on teaching, learning, and assessment. One problem appears intractable: how can we tell when students have used AI-generated text in their work? We’re not writing to offer an answer to that question; indeed, at this point, it’s clear that there is no reliable method of separating ‘synthetic’ text from ‘organic’ text. Instead, we want to draw attention to a troubling possibility, one of perhaps greater significance to those of us who teach just about every subject without necessarily teaching the finer points of literacy and writing. It’s this: in our efforts to police the use of AI in learning and assessment, are we likely to diminish the quality and character of our own writing, and that of our students, in the process?

While many of us have tried to keep up with new tools, evolving policies, and changes to detection software, one strategy has become increasingly popular: look for stylistic ‘tells’. It appears to us that the search for shortcuts in AI detection has led to what psychologists Shah and Oppenheimer call the “path of least resistance”. That is, we gravitate to cues that are easily perceived, easily produced, and easily evaluated. In other words, heuristics. Em dashes? Colons in titles? Specific words or expressions? All have been called out as signs of AI authorship. But here is the problem: these shortcuts don’t work. Worse, they normalise suspicion of well-crafted, carefully edited, and even creative writing. When we start to see polished punctuation and consistent tone as evidence of cheating, we inadvertently signal to students and peers that good writing is suspect.

Why?

Let’s start with an example. On social media, we have seen commentators breezily claim that the use of the em dash—the long dash that can be used in place of parentheses or a semi-colon—is the “smoking gun” betraying that a text was authored by AI. A self-professed fan of the em dash, Andrew went searching. A cursory Google search for the phrase “the em dash as an indicator of AI content” revealed that this is a popular topic, with plenty of commentary traded for and against the notion. Some suggest that em dashes make for well-styled, cadenced writing, while others claim that em dashes appear so infrequently in so-called ‘normal’ writing that seeing one ‘in the wild’ is always suspect. But the conjecture and speculation don’t end with the em dash.

The colon in titling: Another AI tell? 

So, we dug deeper. Another purported “give-away” is the use of the colon to separate titles and subtitles in a body of text. This seemed a bit of a reach, as academic writing in particular often employs the colon to subdivide or elaborate on concepts in titles and subtitles. At this point, we realised we needed to consult the proverbial source of these claims, so off to prominent large language models (LLMs) we went.

We each tried out a different LLM. In ChatGPT, Andrew started with the prompt “What are some common tells that a text is in fact AI-authored?” It came back with a list of ten, ranging from “Overly Formal or Polished Language” and “Overuse of Transitional Phrases” to “Too Balanced or Fence Sitting,” all of which could be claimed to be common in academic writing. When Janine asked the same question, Gemini (2.4 Flash) replied: “repetitive phrasing, formulaic sentence structures, overly polished or generic language lacking deep insight or personal nuance, the frequent use of certain transitional phrases and words (e.g., ‘delve,’ ‘tapestry’), and occasionally, factual errors or fabricated citations”.

Great questions

Admittedly, many of these were stylistic claims rather than observations about punctuation, so we pressed further. When ChatGPT was asked “What about punctuation?” it replied, “Great question – punctuation is another area where AI-generated text can give itself away.” It seemed that ChatGPT was confirming the blogosphere’s punctuation and style concerns about authenticity, noting that the overuse of commas, em dashes, colons, and parentheses is among the “punctuation-related tells.” Janine asked the same question, and Gemini replied: “AI-authored text can exhibit ‘tells’ through the overuse of specific punctuation marks like the em dash or colon, the consistent application of flawlessly textbook-correct punctuation lacking human variation or ‘casual inconsistencies,’ and the absence of the nuanced stylistic choices typical of human writers.”

Both lists of “AI tells” included overly polished work, the overuse of certain phrases, and the consistent, almost perfect use of specific punctuation such as em dashes and colons. The similarities were obvious: consistently correct, textbook punctuation, and few or no typos or casual inconsistencies. Was grammatically correct, proof-read work now a concern? Or worse, the sole domain of LLMs? Should we, as authors and educators, be aiming (as it were) to be more “casually inconsistent” in our writing so as not to appear to have used an LLM? And should we teach our students to do the same?

On the spread of lazy AI detection heuristics  

In a fascinating paper, “The Path of Least Resistance: Using Easy-to-Access Information,” Princeton psychologists Shah and Oppenheimer propose a framework for understanding how people use highly accessible cues in everyday decision making. Their framework, in which they explain how people tend to rely on cues that are easily perceived, produced, and evaluated when making decisions, has particular relevance for a scenario in which a teacher is attempting to detect AI-generated text. As a visible linguistic marker, punctuation is a prime example of a highly accessible cue. After all, types and patterns of punctuation are easily perceived and evaluated by readers, far more so than nebulous qualities such as “tone” or the “complexity” of vocabulary. One can imagine why punctuation as a cue for detecting AI-generated work might make for a seductive proposition, and why it has become the subject of so much social media speculation.
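To make concrete just how accessible such a cue is, consider what a punctuation-based ‘detector’ would actually amount to. The sketch below is purely illustrative: the cut-offs are invented for the example and have no empirical basis. It shows both why the heuristic is seductive (it takes only a few lines of code to compute) and why it is specious (any carefully edited human text will trip it).

```python
# A deliberately naive punctuation-based "AI detector".
# Illustrative only: the thresholds below are invented and have no
# empirical basis, which is precisely the problem with such heuristics.

def punctuation_cues(text: str) -> dict:
    """Count the 'highly accessible' cues, as rates per 100 words."""
    words = max(len(text.split()), 1)
    return {
        "em_dashes_per_100_words": 100 * text.count("\u2014") / words,
        "colons_per_100_words": 100 * text.count(":") / words,
    }

def looks_ai_generated(text: str) -> bool:
    cues = punctuation_cues(text)
    # Invented cut-offs: any well-edited essay could exceed them.
    return (cues["em_dashes_per_100_words"] > 0.5
            or cues["colons_per_100_words"] > 1.0)

sample = ("Education is changing\u2014rapidly. One question remains: "
          "how should we respond?")
print(punctuation_cues(sample))
print(looks_ai_generated(sample))  # True: ten words, one em dash, one colon
```

Run on a single well-punctuated human sentence, the ‘detector’ cries AI. That, of course, is exactly the problem.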

Whether the punctuation “rules of thumb” for AI detection being promoted on social media are credible is one matter. One thing is nevertheless certain: the idea of punctuation as a tool for AI detection is pernicious—the em dash and other proposed AI detection heuristics are now in the public consciousness and are being talked about as if they were useful, despite noteworthy appeals to reason here, here, and here. Our concern as educators is this: collectively, we may be in real danger of assimilating these “easy” cues and applying them (whether consciously or otherwise) to our own writing and to the work of our students.

Where might this end? 

Educators are not immune to bias. In the absence of certainty, it’s natural for us to lean on intuition. But intuition shaped by social media tropes is not a sound basis for academic judgement. Perhaps the deeper danger is that lazy heuristics for AI detection diminish our ability to teach and lower our expectations, casting suspicion on students and peers who have worked hard to improve their writing.

What if we ‘normalise’ our expectations for authentic writing by becoming automatically suspicious of polish, punctuation, and proof-reading in our students’ work? Rules of thumb and hunting for “AI tells” are not the answer. They may make for seductive heuristics in the present AI-policing climate; but let’s be clear—they’re lazy and specious within the domains of scholarship and academic writing. No one has an answer to AI detection; there is no silver bullet, algorithmic or otherwise, to help us. In some meaningful ways, the Turing test appears to have been passed. But one thing is for sure: we need a new baseline.

Make sure you know the human

What that looks like is currently being debated across the globe. Some have returned to pen and paper, handwritten notebooks, and oral defences, but in a context where good writing is suspect, this suggests one thing: if you can’t distinguish AI-generated text from human text, make sure you know the human (in this case, your students). That, at least, is what we should be aiming for in Higher Ed. Small classes help; getting to know your students early in a course—even better. Whether you re-engage students with pen and paper or use oral presentations as components of teaching, learning, and assessment, one thing is clear: in the arms race of academic integrity in the age of AI, knowing your student, rather than relying on rules of thumb or expensive detection algorithms, is the path forward. And to Andrew’s earlier point—no, he will not stop using the em dash.

Andrew Welsman is an educator in initial teacher education at Victoria University. A former secondary school teacher, his teaching and research interests centre on issues affecting STEM education and the pedagogical impacts of digital technologies. Janine Arantes is a senior lecturer and research fellow at Victoria University with expertise in AI governance, education policy, and digital equity.

Why we should worry about smart glasses in schools

Smart glasses are the latest shiny object in the edtech world. Sleek, AI-powered, and promoted as the next evolution in learning, they promise to transform the classroom. Real-time feedback. Immersive experiences. Personalised accessibility support. But here’s the thing: they also record. They see. They store data. And they’re quietly finding their way into schools, because anyone can go to the Ray-Ban website, pay $500, and have them delivered to their door. Even if they live across the road from a school. Without a serious national conversation about what’s at stake, there are some critical questions we believe need your attention.

Some enthusiastically praise these devices, painting a picture of tech-enhanced chemistry labs and accessible support for neurodiversity as exciting. Useful, even. But check whether they mention ethics. (Scrolling through the Ray-Ban website…)

Does it mention rights? 

Does it mention harm? 

What they do and don’t mention matters

Once a device can record, it can surveil. It can be used to monitor behaviour, capture images without consent, and stream content live to platforms beyond the classroom. In the hands of the wrong user, smart glasses aren’t just learning tools – they’re tools of manipulation, misuse, and control. And remember – anyone can buy smart glasses. This is a very different context from CCTV footage in schools. To explain, let’s not be naive about who’s influencing young minds right now.

The same students, staff, parents, and onlookers who might be wearing smart glasses may also be influenced by Andrew Tate on TikTok, YouTube, and Instagram. Imagine this: you’re teaching Maths. A student parrots Andrew Tate’s misogynistic views and livestreams your response through the smart glasses they’re wearing. You had no idea. It’s not like they held up their phone. They were just looking at you, through their glasses. Within minutes, it’s online, weaponised, and fed to the Tate army. They didn’t mean to destroy you or your capacity to feel safe teaching. But they did.

You leave teaching. 

Not because you wanted to. 

But because the damage was done. 

Are you being recorded?

Now, even walking the streets feels uneasy – you’re left wondering whether the glasses people are wearing are quietly recording you while you buy groceries or cross the road. Other teachers start to wonder. What if smart glasses turned up at parent-teacher interviews? Or the swimming carnival? What if someone is just sitting outside the school wearing a pair while kids are playing? They don’t have their phone out, so nothing triggers concern. This isn’t speculation. We’ve already seen how quickly images can be used to make AI-generated deepfake nudes of girls in schools. And teachers aren’t exempt. What smart glasses do is lower the barrier between thought and action.

They offer immediacy. Stealth. Power.  

We’ve already seen smart glasses banned from ATAR exams in WA. But what about banning them from parent-teacher interviews? PE lessons? Swimming carnivals? Where are the boundaries? And while we are asking questions – who is collecting the data? Where is it all going? How does it align with current and emerging legislation? All of this is being marketed under the guise of innovation. But innovation without ethical frameworks can be weaponised. Smart glasses do not exist in a vacuum.

They exist in a world shaped by misogyny, online abuse, discrimination, and the algorithmic amplification of harm. If we ignore that, if we look only at the marketing promises and not at the sociocultural context, we are putting not only students, but teachers, parents, and society at risk of harm. We need to stop treating “real-time feedback” as neutral. We need to stop pretending “immersive” means safe. And we need to seriously question who benefits from “innovation” when surveillance is embedded in the hardware and marketed by people with millions of followers, like Chris Hemsworth.

Let’s be clear: this is not just about a gadget. 

Outside schools, smart glasses are marketed as sleek, cutting-edge tools designed to enhance everyday life, work, and productivity. In the consumer market, they’re promoted as lifestyle wearables offering hands-free access to navigation, messaging, music, and AI assistance – all wrapped in fashionable, discreet frames. In education, smart glasses are being marketed as inclusive and dynamic. But in practice, they are building out a surveillance infrastructure inside schools. Anyone with smart glasses (students, parents, teachers, the person sitting outside the school) might soon have access to facial recognition, eye tracking, emotion analysis, and real-time data sharing. That’s not innovation. That’s infrastructure. And it’s the kind of infrastructure we normally legislate around to ensure our rights are upheld.

Which Brings Us to Chris Hemsworth 

You’ve probably seen the ads. Chris Hemsworth, superhero, Aussie icon, father of school-aged children, promoting AI-integrated smart glasses with enthusiasm and charm. He’s partnered with Ray-Ban to showcase how wearable AI is the future. But here’s the thing: when a celebrity of his influence endorses surveillance tech, especially without reference to consent or harm, it’s not just a missed opportunity. It’s reckless. Now, to be clear – this isn’t about criticising Chris Hemsworth – it’s a call to anyone with the power to shape public perception. If you have the platform, the reach, or the resources, you also have a responsibility to bring the potential harms of emergent technologies in education into the conversation. Ignoring those risks, especially when kids, parents, and teachers are watching, can’t be dismissed as naïve. You have a social responsibility to consider whether it is reckless.

And it is reckless

Reckless means acting without thinking about the potential consequences – especially when those actions could cause harm. That’s why we need an awareness campaign. Kids look up to Chris Hemsworth. So do parents. So do teachers. If smart glasses are going to be marketed to schools and families, there must be transparency about what they do and what they risk. That’s why we need to have a conversation with Chris. Not about banning the tech. But about being responsible with his platform.

Technology will continue to be marketed aggressively, but those with the power to influence and implement it must take far greater responsibility for its impact. Imagine if Chris Hemsworth read this and considered the perspective of a teacher: trying to deliver a science lesson on a sweltering Friday afternoon in a 35-degree classroom packed with 30 students, only to come home and discover that a slip of the tongue, saying “orgasm” instead of “organism”, has been turned into viral content in the manosphere. In the middle of a teaching crisis, one more teacher doesn’t return to the classroom.

Where Do We Go from Here? 

We are not saying “ban it all.” 

We are saying: Pause. Reflect. Regulate. 

We are also not blaming anyone. Because this isn’t about blame. It’s about responsibility. It’s our collective responsibility to adopt new technologies in ways that respect commonly held expectations, especially in semi-private spaces. We already know not to film at swimming carnivals, in toilets, or in change rooms, and we wouldn’t wear our phones on our faces during a parent-teacher interview – but smart glasses would do exactly that. Imagine if you just ‘forgot to take the glasses off’… Further, teachers and schools shouldn’t be expected to manage the risks of tech like smart glasses alone. Meta, Ray-Ban, and others must embed safeguards and transparency: safety by design, privacy by design, and so on. And those with huge platforms, like Chris Hemsworth, could use them not just to promote, but to help spark conversations about where this tech belongs.

What if Chris Hemsworth posted to his 52M followers: “Some devices are made for skydiving – not for schools”? Would the conversation shift? We would love to hear your thoughts.

Bios

Janine Arantes is a researcher, educator, and advocate at Victoria University, exploring the social, ethical, and psychological impacts of emerging technologies in education. Andrew Welsman is a researcher and educator at Victoria University with expertise in STEM education, digital technologies, and initial teacher education.