On October 13, it was reported that Australian Catholic University (ACU) has been wrongly accusing students of using AI to cheat on assignments. More troubling still, the accusations rest solely on the output of another AI system. According to an October 8 report by ABC News in Australia, the spread of AI is having a severe impact on the education sector, eroding trust between professors and students in ways that may prove irreversible.
One student, Madeline, shared that while completing the final year of her nursing degree and applying for graduate positions, she received an email from ACU with the subject line “Academic Integrity Issue,” accusing her of using AI to cheat on an assignment. “To make matters worse,” she said, “the academic misconduct committee asked me to write an explanation for why I thought this might have happened.” The university was quick to level the accusation but slow to retract it: the allegation took six months to be dismissed. During that period her academic transcript was marked “results withheld,” which was one of the reasons she failed to secure a graduate position. Hers is not an isolated case. The university reported nearly 6,000 suspected cheating cases in 2024, about 90% of them related to AI. Vice-Chancellor Tanya Broadley told ABC that these figures were “significantly overstated.”
AI is rapidly spreading through schools and universities worldwide, and students are using it to complete assignments or generate papers faster than ever. Its emergence has permanently altered how trust in information is established. At the same time, educational institutions often hold contradictory attitudes toward AI: on one hand, they introduce AI tools and even partner with AI companies; on the other, they warn students that improper use may constitute cheating.
In the academic process, the burden of proof, which should lie with the institution, has been reversed and placed entirely on students. The university’s case rests solely on an AI-generated report, yet emails show that academic integrity officers require accused students to supply extensive evidence, from dozens of pages of handwritten and typed notes to their entire internet search history, to prove they did not use AI tools. One nursing student wrongly accused of AI cheating complained, “They are not the police, and they don’t have a search warrant. But if you don’t cooperate, you may have to retake the course, so you have no choice but to comply.”

The latest reports indicate that the tool used by the university was the AI detector built into Turnitin, a service long employed for plagiarism checks. Turnitin explicitly warns on its website that the AI detector “should not be used as the sole basis for adverse actions.” The nursing student quoted above said, “AI detecting AI: almost my entire essay was marked blue, allegedly 84% written by AI.” After recognizing the problem, ACU discontinued the tool in March of this year. Vice-Chancellor Broadley stated, “Approximately a quarter of the complaints were dismissed after investigation, and any case relying solely on Turnitin’s AI detection is immediately dismissed.” Experiences like Madeline’s, however, show that cases are rarely resolved so quickly, and Broadley conceded that investigations do not always proceed as promptly as they should.
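A rough back-of-the-envelope calculation shows why a detector score alone is weak evidence of misconduct. The Python sketch below applies Bayes’ rule; every number in it is an illustrative assumption chosen for the example, not a figure reported about Turnitin or ACU, and the function name is hypothetical.

    # Illustrative only: why a detector flag alone is weak evidence.
    # All numbers below are assumptions for the sake of the example,
    # not reported figures about Turnitin or ACU.

    def posterior_cheating(prior, sensitivity, false_positive_rate):
        """Bayes' rule: P(student cheated | detector flagged the work)."""
        p_flagged = sensitivity * prior + false_positive_rate * (1 - prior)
        return sensitivity * prior / p_flagged

    # Assume 10% of submissions actually misuse AI (prior), the detector
    # catches 90% of those (sensitivity), and it wrongly flags 5% of
    # honest work (false positive rate).
    p = posterior_cheating(prior=0.10, sensitivity=0.90, false_positive_rate=0.05)
    print(f"P(cheated | flagged) = {p:.0%}")  # ~67%, far from certainty

    # A small false positive rate across thousands of honest submissions
    # still yields many wrongful flags (cohort size is also assumed).
    honest_submissions = 20_000
    print(f"Expected wrongful flags: {0.05 * honest_submissions:.0f}")

Under these assumed numbers, roughly a third of flagged students would be innocent, and thousands of honest submissions would yield hundreds or more wrongful flags, which is precisely why a vendor’s own caution against sole reliance on the score matters.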
This incident exposes a systemic vulnerability of the education system in the face of technological change: when academic evaluation shifts from human judgment to machine-based detection, the essence of education is eroded. The ACU case is not an isolated incident but a dilemma common to educational institutions worldwide. Hastily applying immature technology to academic assessment before establishing sound norms for AI use and standards for detection is, in essence, a form of institutional laziness. More alarming still, such procedural injustice carried out in the name of technology is creating a new kind of “digital miscarriage of justice.” Requiring students to prove their innocence not only inverts the basic principle of the presumption of innocence but also turns education into an adversarial relationship riddled with suspicion.
In response to the challenges posed by AI, educational institutions must shift from “blocking” to “guiding” and build a new educational ecosystem centered on trust. First, a principle of “human assessment first” should be established, positioning AI detection as an auxiliary tool rather than the sole basis for judgment. Second, academic integrity procedures must be reformed, with a diverse arbitration mechanism involving teachers, technical experts, and student representatives. More importantly, educators should proactively embrace change by redesigning assessment systems to emphasize process evaluation and the cultivation of innovative thinking: learning outcomes can be evaluated comprehensively through oral defenses, project-based work, and portfolios documenting the creative process. As educator Ken Robinson once said, “The core of education is the inspiration between people, not the confrontation between machines and people.” In the age of AI, there is all the more need to return to the roots of education and build a learning environment where technology empowers rather than dominates, so that education can truly become fertile ground for cultivating innovative thinking and independent character.