Vulnerable assessments
Assessments that are vulnerable, or at risk from the emergence of AI, are those that:
- are not secured through methods such as invigilation, and
- involve tasks such as:
  - essay writing
  - multiple-choice quizzes
  - translation exercises
  - basic programming assessments
  - data analysis and calculations.
Such assessments cannot guarantee that students have not used AI in completing the task.
Only highly secure assessments that prohibit AI use (e.g., invigilated live oral assessments requiring students to respond to unforeseen prompts) can limit the misuse of AI. Note that securing every assessment in a course through invigilation is not recommended: it is resource-intensive and adds little value to students' learning.
Is my assessment vulnerable to AI?
To evaluate the risk of your assessment design:
- Test your assessment(s) with generative AI tools (e.g., Copilot, Adobe Firefly) and gauge the risk to academic integrity. For example, paste the assessment task instructions into the ANU Enterprise AI tool, Copilot, then enter follow-up prompts to explore how students might use AI and what outputs they could obtain (a scripted version of this probing workflow is sketched after the list below).
- Consider whether your assessment(s) fall into any of these general categories:
  - At risk. AI can generate an output of passable quality, and it would be difficult to find evidence of the student's learning or of the process they followed to complete the assessment.
    - Examples of assessments:
      - Essay tasks with generic, broad topics
      - Multiple-choice questions
      - Programming (e.g., code solutions)
      - Language translation
      - Presentation slides and scripts
      - Basic data analysis
      - Summary tasks based on readings or course material
      - Simple design tasks (e.g., designing logos)
  - Lower risk. AI can still generate outputs for these assessments, but its use becomes less relevant and less valuable to students, or its responses are not of passable quality. These assessments also require students to show evidence of their individual thought and learning processes.
    - Examples of assessments:
      - Assessment topics and text types that are more authentic and relate to specific contexts
      - Viva voces* and interactive oral assessments**
      - Assessments involving personal reflection and higher-order thinking skills (e.g., based on personal or course experiences)
      - Interactive and collaborative group projects and team-based learning
      - Project-based learning with submission checkpoints
      - Nested and staged tasks that build on one another (e.g., portfolio tasks)
      - Industry-related case studies from recent years
  - Not applicable. AI does not pose a threat to the academic integrity of these assessments because it cannot be used to complete them.
    - Examples of assessments:
      - Invigilated, timed practical exams in certain disciplines (e.g., Objective Structured Clinical Examinations in medical education)
      - Laboratory practicals
      - Work-integrated learning or work placements
      - Fieldwork
      - Live performances
      - Improvisation tasks (e.g., music)
      - Craftsmanship (e.g., pottery)
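If you want to test an assessment more systematically than pasting it into a chat window, the same probing workflow can be scripted. The sketch below is a minimal illustration only: it assumes access to an OpenAI-compatible chat API, and the endpoint URL, API key, model name, sample task, and follow-up prompts are all placeholder assumptions, not part of the ANU Copilot tool.

```python
# Minimal sketch of the AI-probing workflow described above.
# Assumption: an OpenAI-compatible chat endpoint is available to you.
# The base_url, api_key, model, task, and follow-up prompts below are
# placeholders; the ANU Enterprise Copilot tool itself is web-based.
from openai import OpenAI

client = OpenAI(base_url="https://your-endpoint.example/v1", api_key="YOUR_KEY")

# Paste your real assessment task instructions here.
assessment_task = (
    "Write a 1,500-word essay discussing the causes of the 2008 "
    "global financial crisis."
)

# Follow-up prompts a student might plausibly try after a first attempt.
follow_ups = [
    "Rewrite the answer in a more informal student voice.",
    "Add in-text citations and a short reference list.",
]

messages = [{"role": "user", "content": assessment_task}]
for extra in [None] + follow_ups:
    if extra is not None:
        messages.append({"role": "user", "content": extra})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
    )
    answer = response.choices[0].message.content
    # Keep the conversation going so follow-ups build on earlier output.
    messages.append({"role": "assistant", "content": answer})
    print("=== AI output ===")
    print(answer)
```

Judging each output against your marking rubric indicates whether the task currently sits in the "at risk" or "lower risk" category above.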
*A viva voce is an oral assessment in which students demonstrate their understanding in an unscripted environment through dialogic interaction with an examiner or with other students. A common setting for this type of assessment is a thesis defence.
**An interactive oral assessment, like a viva voce, gives students an opportunity to demonstrate their understanding, but in a more authentic setting that resembles a workplace environment.