Generative AI

GenAI: Will It Deepen the Digital Divide in Australian Classrooms?

Edtech advocates promote generative artificial intelligence (GenAI) tools as transformative, offering personalised, scalable, and interactive learning experiences. That only works for some schools. While some experiment with AI-driven platforms and policies, others lack basic digital infrastructure. The risk is clear: GenAI may not democratise education. Instead, it might deepen existing divides unless we have targeted policies and equitable implementation.

From “AI for All” to Unequal Access

GenAI has potential. It can adapt explanations to student needs, offer 24/7 academic support, and automate repetitive, time-consuming tasks for teachers. The major assumption behind its use, however, is that all students and teachers are adequately equipped: that they have access to the technology, the skills to use it, and the literacy to question it.

UNESCO's Global Education Monitoring Report 2023: Technology in education: A tool on whose terms? reveals that access to AI-enhanced learning tools remains highly unequal across socioeconomic lines. Many students in rural and remote communities, as well as those from culturally and linguistically diverse backgrounds, are less likely to benefit. The reasons? Limited access to devices, patchy internet, and language bias in AI systems. Low Earth Orbit (LEO) satellite internet, once installed, offers a promising way to bridge the digital divide in rural areas.

The OECD Education Policy Outlook 2024 reports on building equitable societies, including how AI can help manage teacher workload. However, it also suggests that students without digital literacy may rely on AI outputs without critical understanding, leading to superficial or inaccurate learning. Australia's digital divide is already pronounced between urban and regional schools. Some private and well-funded schools are integrating AI literacy into their classroom practices, while public schools in lower SES areas are still navigating foundational digital inclusion.

What’s Happening in Australian Classrooms?

A range of state-level and system-wide GenAI pilot programs are already underway. These examples illustrate both innovation and inequality. 

In Western Australia, a $4.7 million AI pilot program funded by the federal and state governments is exploring how AI can reduce teacher workload through lesson planning and administrative support. Eight public schools are involved in this initial phase. 

In South Australia, the Department of Education partnered with Microsoft to launch EdChat, a generative chatbot trialled in eight high schools, with the initial trial completed in August 2023. Designed to support inquiry-based learning, the chatbot raised questions around student data privacy, accuracy of feedback and classroom integration.

In NSW, the government developed NSWEduChat, an AI assistant being trialled in 16 public schools. Unlike ChatGPT, it prompts students to reflect and reason, rather than delivering direct answers. The tool aims to align with pedagogical goals, but requires teachers to mediate use and guide students’ understanding. 

In Queensland, Brisbane Catholic Education developed Catholic CoPilot, grounded in Catholic teaching, tradition and theology. Teachers use it for lesson planning, report writing, and generating resources, showing that customised AI is feasible within institutional values.

In Victoria, an independent school in Melbourne, Haileybury Keysborough, embraced ChatGPT and created school-wide protocols to teach students ethical and effective AI use. These include critical thinking tasks and assessments designed to discourage AI overreliance.

These examples show growing momentum, but also the risk of a two-speed system. Well-resourced schools move quickly while underfunded ones lag behind, potentially widening the gap in both learning outcomes and digital literacy.

AI Literacy: The New Divide

One of the most overlooked challenges in this debate is the literacy gap around AI. Knowing how to access GenAI is not enough. Students and teachers need to understand how these tools work, what their limitations are, and how to verify the output they generate.

A 2024 Australian Education Union (Victorian Branch) survey revealed 1,560 underfunded public schools in Victoria. Public schools, particularly in regional and low-income areas, have limited opportunities to build this literacy. According to the AEU's 2024 article on future skills for educators, teachers are “left to navigate the ethical and pedagogical risks of AI on their own”, often without clear national guidance, curriculum-aligned training, or the digital infrastructure to experiment safely.

This leads to a two-tier system. In one tier, students and teachers are supported to use AI thoughtfully as a scaffold for learning, collaboration, and innovation. In the other, they are either excluded from AI use altogether, or exposed to it in ways that lack context, clarity, critical literacy, or alignment with pedagogy. This new tier of inequality will divide students into those who can interrogate technology critically and those who treat it as an unquestioned authority.

Even more concerning, the AEU notes that “students already at a disadvantage are most at risk of falling further behind” if AI adoption is left to market forces or uneven state-by-state initiatives.

Design, Governance, and Inclusion

GenAI tools are not culturally neutral. They reflect the data they are trained on, mostly English-language, Western-centric internet sources. Without careful consideration, they can reinforce linguistic, cultural, and cognitive biases.

Bender and colleagues, in On the Dangers of Stochastic Parrots, warn that the large language models (LLMs) on which GenAI is based often reproduce harmful stereotypes and misinformation unless explicitly mitigated. This risk is amplified in educational settings where students may lack the critical skills to identify inaccuracies.

Equity in AI use means more than access; it demands representation, transparency, and contextual sensitivity. We need AI tools aligned to local curricula, respectful of cultural knowledge systems, and available in accessible formats.

What Can Be Done?

Invest in infrastructure for underserved schools to ensure that all public schools, particularly those in regional, remote and low-SES areas, have reliable internet, updated devices, and tech support.

Move beyond one-off briefings. Teachers need ongoing training that is curriculum-aligned, classroom-tested, and critically reflective. Professional learning communities and microcredentials in AI pedagogy could help bridge the gap.

Engage learners, families, and communities in conversations about AI use in education, and invest in developing open-source AI that reflects Australia's educational and cultural diversity.

As educators, researchers, and policymakers, we have a choice. We can let technology set the pace, or we can slow down, ask critical questions, and build systems that centre human dignity and learning equity. Let us ensure that GenAI supports the public good, not just private innovation.

Let us ensure no learner is left behind in the age of artificial intelligence.

Meena Jha is an accomplished researcher, educator, and leader in the field of computer science and information technology, currently serving as Head of the Technology and Pedagogy Cluster at Central Queensland University (CQU), Sydney, Australia.

Does the new AI Framework serve schools or edtech?

On 30 November 2023, the Australian federal government released its Australian Framework for Generative AI in Schools. This is an important step forward. It provides much-needed advice for schools following the November 2022 release of ChatGPT, a technological product capable of creating human-like text and other content. The Framework has undergone several rounds of consultation across the education sector. It does important work in acknowledging opportunities while also foregrounding the importance of human wellbeing, privacy, security and safety.

Out of date already?

However, in this fast-moving space, the policy may already be out of date. Following early enthusiasm (despite a ban in many schools), the hype around generative AI in education is shifting. As experts in generative AI in education, having researched it for some years now, we have moved to a much more cautious stance. A recent UNESCO article stated that “AI must be kept in check in schools”. The challenges of using generative AI safely and ethically, for human flourishing, are becoming increasingly apparent.

Some questions and suggestions

In this article, we suggest some of the ways that the policy already needs to be updated and improved to better reflect emerging understandings of generative AI’s threats and limitations. With a 12-month review cycle, teachers may find the Framework provides less policy support than hoped. We also wonder to what extent the educational technology industry’s influence has affected the tone of this policy work.

What is the Framework?

The Framework addresses six “core principles” of generative AI in education: Teaching and Learning; Human and Social Wellbeing; Transparency; Fairness; Accountability; and Privacy, Security and Safety. It provides guiding statements under each concept. However, some of these concepts are much less straightforward than the Framework suggests.

Problems with generative AI

Over time, users have become increasingly aware that generative AI does not provide reliable information. It is inherently biased, through the biased material it has “read” in its training. It is prone to data leaks and malfunctions. Its workings cannot be readily perceived or understood by its own makers and vendors; it is therefore not transparent. It is the subject of global claims of copyright infringement in its development and use. It is vulnerable to power and broadband outages, suggesting the dangers of developing reliance on it for composing content.

Impossible expectations

The Framework may therefore have expectations of schools and teachers that are impossible to fulfil. It suggests schools and teachers can use tools that are inherently flawed, biased, mysterious and insecure, in ways that are sound, un-biased, transparent and ethical. If teachers feel their heads are spinning on reading the Framework, it is not surprising! Creators of the Framework need to interrogate their own assumptions, for example that “safe” and “high quality” generative AI exists, and who these assumptions serve.

As a policy document, the Framework also puts an extraordinary onus on schools and teachers to do high-stakes work for which they may not be qualified (such as conducting risk assessments of algorithms), or that they do not have time or funding to complete. The latter include designing appropriate learning experiences, revising assessments, consulting with communities, learning about and applying intellectual property rights and copyright law and becoming expert in the use of generative AI. It is not clear how this can possibly be achieved within existing workloads, and when the nature and ethics of generative AI are complex and contested.

What needs to change in the next iteration?

  1. A better definition: At the outset, the definition of generative AI needs to acknowledge that it is, in most cases, a proprietary tool that may involve the extraction of school and student data. 
  2. A more honest stance on generative AI: As a tool, generative AI is deeply flawed. As computer scientist Deborah Raji says, experts need to stop talking about it “as if it works”. The Framework misunderstands that generative AI is always biased, in that it is trained on limited datasets and with motivated “guardrails” created largely by white, male and United States-based developers. For example, a current version of ChatGPT does not speak in or use Australian First Nations words, for valid reasons related to the integrity of cultural knowledges. However, this indicates the whiteness of its “voice” and the problems inherent in requiring students to use or rely on this “voice”. The “potential” bias mentioned in the Framework would be better framed as “inevitable”. Policy also needs to acknowledge that generative AI is already creating profound harms, for example to children, to students, and to climate through its unsustainable environmental impacts.
  3. A more honest stance on edtech and the digital divide: A recent UNESCO report has confirmed there is little evidence of any improvement to learning from the use of digital technology in classrooms over decades. The use of technology does not automatically improve teaching and learning. This honest stance also needs to acknowledge that there is an existing digital divide related to basic technological access (to hardware, software and connectivity) that means that students will not have equitable experiences of generative AI from the outset.
  4. Evidence: Education is meant to be evidence-informed. Given there is little research that demonstrates the benefits of generative AI use in education, but research does show the harms of algorithms, policymakers and educators should proceed with caution. Schools need support to develop processes and procedures to monitor and evaluate the use of generative AI by both staff and students. This should not be a form of surveillance, but rather take the form of teacher-led action research, to provide future high-quality and deeply contextual evidence. 
  5. Locating policy in existing research: This policy has missed an opportunity to connect to extensive policy, theory, research and practice around digital literacies since the 1990s, especially in English and literacy education, so that all disciplines could benefit from this. The policy has similarly missed an opportunity to foreground how digital AI-literacies need to be embedded across the curriculum, supported by relevant existing Frameworks, such as the Literacy in 3D model (developed for cross curricular work), with its focus on operational, cultural and critical dimensions of any technological literacy. Another key concept from digital literacies is the need to learn “with” and “about” generative AI. Education policy needs to reference educational concepts, principles and issues, also including automated essay scoring, learning styles, personalised learning, machine instruction and so on, with a glossary of terms.
  6. Acknowledging the known dangers of bots: It would also be useful for policy to be framed by long-standing research that demonstrates the dangers of chatbots, and their compelling capacity to shut down human creativity and criticality and suggest ways to mitigate these effects from the outset. This is particularly important given the threats to democracy posed by misinformation and disinformation generated at scale by humans using generative AI. 
  7. Teacher transparency: All use of generative AI in schools needs to be disclosed. The use of generative AI by staff in the preparation of teaching materials and the planning of lessons needs to be disclosed to management, peers, students and families. The Framework seems to focus on students and their activities, whereas “academic integrity” needs to be modelled first by teachers and school leaders. Trust and investment in meaningful communication depend on readers knowing the sources of content, or cynicism may result. This disclosure is also necessary to monitor and manage the threat to teacher professionalism through the replacement of teacher intellectual labour by generative AI.
  8. Stronger acknowledgement of teacher expertise: Teachers are experts in more than just subject matter. They are expert in the pedagogical content knowledge of their disciplines, or how to teach those disciplines. They are also expert in their contexts, and in their students’ needs. Policy needs to support education in countering the rhetoric of edtech that teachers need to be removed or replaced by generative AI and remain only in support roles. The complex profession of teaching, based in relationality and community, needs to be elevated, not relegated to “knowing stuff about content”. 
  9. Leadership around ethical assessment: OpenAI made a clear statement in 2023 that generative AI should not be used for summative assessment, and that this should be done by humans. It is unfortunate the Australian government did not reinforce this advice at a national policy level, to uphold the rights of students and protect the intellectual labour of teachers.
  10. More detail: While acknowledging this is a high-level policy document and Framework, we call for more detail to assist the implementation of policy in schools. Given the aim of “defining what safe, ethical and responsible use of generative AI should look like”, the document would benefit from more detail; a related education document from the US runs to 67 pages.

A radical policy imagination

At the 2023 Australian Association for Research in Education (AARE) conference, Jane Kenway encouraged participants to develop radical research imaginations. The extraordinary impacts of generative AI require a radical policy imagination, rather than timid or bland statements balancing opportunities and threats. It is increasingly clear that the threats cannot readily be dealt with by schools. The recent thoughts of UNESCO’s Assistant Director-General for Education on generative AI are sobering.

A significant part of this policy imagination needs to find the financial and other resources to support slow and safe implementation. It also needs to acknowledge, at the highest possible level, that if you identify as female, if you are a First Nations Australian, indeed, if you are anything other than white, male, affluent, able-bodied, heterosexual and compliant with multiple other norms of “mainstream” society, it is highly likely that generative AI does not speak for you. Policy must define a role for schools in developing students who can shape a more just future generative AI, not just use existing tools effectively.

Who is in charge . . . and who benefits?

Policy needs to enable and elevate the work of teachers and education researchers around generative AI, and the work of the education discipline overall, to contribute to raising the status of teachers. We look forward to some of the above suggestions being taken up in future iterations of the Framework. We also hope that all future work in this area will be led by teachers, not merely involve consultation with them. This includes the forthcoming work by Education Services Australia on evaluating generative AI tools. We trust that no staff or consultants on that project will have any links whatsoever to the edtech, or broader technology industries. This is the kind of detail that may help the general public decide exactly who educational policy serves.

Generative AI was not used at any stage in the writing of this article.

The header image was definitely produced using Generative AI.

An edited and shorter version of this piece appeared in The Conversation.

Lucinda McKnight is an Australian Research Council Senior Research Fellow in the Research for Educational Impact Centre at Deakin University, undertaking a national study into the teaching of writing with generative AI. Leon Furze is a PhD Candidate at Deakin University studying the implications of Generative Artificial Intelligence in education, particularly for teachers of writing. Leon blogs about Generative AI, reading and writing.