As generative artificial intelligence (AI) becomes an increasingly powerful and accessible tool, Australian universities are facing a critical challenge: how to ensure academic integrity and educational relevance in a world where AI is ubiquitous.
In a recent seminar hosted at the Australian National University (ANU), Professor Danny Liu presented a compelling case for moving away from reactive, enforcement-based approaches and toward a model focused on possibility, pedagogy, and trust.
This article outlines the key ideas from the seminar and offers practical strategies for educators at ANU who are looking to navigate the complex intersection of AI, learning, and assessment.
Rethinking the Problem: Not Policing, but Learning
Rather than asking “How do we stop students from using AI?”, Professor Liu urges educators to ask “How do we ensure that students are still learning when they use AI?”
Recent surveys show that many students already use AI in assessments, even when its use is officially banned. More importantly, students often equate using AI with cheating, which fosters secrecy and prevents educators from engaging them in open, constructive conversations.
Instead of trying to enforce unenforceable bans, Liu argues for a fundamental shift: from policing to possibilities.
Introducing the CRAFT Framework
The CRAFT framework offers five practical areas of action to guide institutions, faculties, and individual educators in their response to AI:
| CRAFT Element | Focus Area |
| --- | --- |
| Culture | Move from fear and denial to a mindset of opportunity and experimentation |
| Rules | Create clear, realistic, forward-looking policies that educators can apply |
| Access | Ensure equitable access to safe, high-quality AI tools and infrastructure |
| Familiarity | Build comfort, literacy, and ethical understanding of AI among staff and students |
| Trust | Foster trust across the educational ecosystem, from students to the wider public |
Educators are encouraged to assess where they and their programs sit across each element, and to focus on what lies within their sphere of control (such as course design and personal teaching practice), rather than fixating on what is beyond their control (such as off-platform student use of AI).
Changing the Language of AI
Words like “permitted”, “approved”, or “authorised” imply that AI use can be monitored and controlled. In reality, unsupervised assessments make enforcement nearly impossible.
Instead, Liu recommends framing AI use in terms of:
- Helpful vs. unhelpful for learning
- Human-AI collaboration rather than replacement
- Transparency and accountability, rather than fear and suspicion
This language shift supports a more educationally productive mindset and helps students see AI as a learning tool, not just a shortcut or threat.
Implications for Assessment at ANU
A key message is that take-home assessments cannot be made “AI-proof”. Instead, assessments should be designed so that:
- Learning is required to succeed, even with AI tools.
- The use of AI becomes part of the learning process, not a way around it.
- If a student uses AI in a way that prevents learning, they may fail—not due to academic misconduct, but because they did not meet the learning outcomes.
Professor Liu also calls on educators to be more comfortable with the idea that some students may fail if they do not learn. In a post-AI world, failing students based on poor learning outcomes is not punitive; it is essential for maintaining the value and credibility of a degree.
Maintaining the Integrity and Relevance of Higher Education
Professor Liu argues that academic programs now need to uphold two key values:
- Integrity: ensuring students truly learn what they are meant to learn
- Relevance: preparing students to use AI effectively, ethically, and responsibly in the world beyond university
A degree must signal both that a graduate has mastered critical capabilities, and that they are ready to participate in a world where AI is deeply embedded in work and society.
Examples of AI-Positive Practice
The seminar included a range of examples where AI was used constructively:
- “Lane 2” (open, AI-inclusive) assessment in data visualisation: students were encouraged to use any AI tools to support their design and storytelling, with full transparency.
- AI as a critical friend: students used AI to test ideas, gather feedback, and reflect on their thinking.
- Staff using AI to draft rubrics: AI supported, but did not replace, academic judgment.
These examples show how AI can enhance both learning and teaching when thoughtfully integrated.
Building Trust and Community
Trust was a recurring theme. Students must trust that educators are using AI ethically and transparently. Educators must trust that students are engaging honestly. Employers and society must trust that university degrees reflect real learning.
If students use AI to pass without learning, and educators use AI to mark without understanding, trust breaks down. This scenario undermines not just individual courses but the credibility of higher education as a whole.
Universities must demonstrate that they are preparing students for the real world—and that means equipping them to use AI well, not pretending they won’t.
Final Reflections
Professor Liu concluded with this provocation: “What horizon are we preparing our students for?” Are we redesigning learning for the next two years, or for the world they’ll live and work in for the next 20?
It is not enough to police AI out of our classrooms. Instead, we must teach students how to live, learn, and thrive with it.
Resources and Further Reading
- CRAFT Whitepaper (APRU)
- Staff AI Guide (University of Sydney)
- Student AI Guide (University of Sydney)
- View/download presentation slides
- More support and ideas
This wrap-up post was created from the transcript of the workshop with the assistance of AI (ChatGPT).