This resource includes answers to some frequently asked questions about AI and its use in learning and teaching. It is a living document: the questions and answers may be updated and edited to adapt to the evolving landscape of AI. The content is intended as guidance, and links to ANU policies and procedures are provided where relevant.
Artificial Intelligence (AI) of various kinds is already integrated into many of our common daily tools, such as predictive text and grammar correction, image and speech recognition, “smart assistants”, car navigation systems and even video games. This kind of AI is typically referred to as “traditional”, as it follows predetermined rules and processes and excels at specified tasks with predictable outcomes.
“Generative AI” (Gen AI) differs from traditional forms, as it is “trained” on huge volumes of data to be adaptable and to generate creative and unpredictable outputs. This is achieved through the use of a “neural network” by which the AI tool learns the underlying patterns and associations in the data it is provided. It then uses this learning to create new images, video, text, music, programming code and even speech.
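To make this idea more concrete, here is a deliberately simplified sketch in Python. It “learns” which word tends to follow which in a short sample text, then generates new text from those patterns. Real generative AI uses neural networks with vast numbers of parameters rather than simple word-pair counting, so this toy example illustrates only the general principle of learning patterns from data and generating new output; the sample text is illustrative only.

```python
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat ran on the path"

# "Training": record which word tends to follow which (the learned patterns).
patterns = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    patterns[current_word].append(next_word)

# "Generation": create new text by sampling from the learned patterns.
word = "the"
output = [word]
for _ in range(8):
    followers = patterns[word]
    if not followers:
        break  # no learned pattern continues from this word
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))  # e.g. "the cat ran on the mat and the cat"
```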
The following tools are currently available with enterprise licences when accessed using an ANU account.
- Copilot Enterprise (formerly known as Bing Chat Enterprise) is available by signing in with an ANU email address. Copilot is a versatile AI assistant capable of various tasks such as information retrieval, text generation, language translation and image creation.
- Adobe Firefly is available by signing in with an ANU “u number” email address. Firefly is a generative AI image tool that allows users to create images from text and to perform various image processing tasks, such as “generative fill” (e.g. removing objects or adding new ones) and text effects. Firefly’s model is trained on the Adobe Stock dataset, meaning the content is either appropriately licensed or in the public domain where copyright has expired.
The main benefit of using Copilot Enterprise and Adobe Firefly with an ANU account is that personal and company data is protected, and neither platform uses user data for AI training purposes. To access the enterprise accounts, log in with a “u number” email address. This ensures that staff and students are not required to set up external accounts or expose their personal information to third parties. For more information, please refer to the Copilot Enterprise and Adobe Firefly access guides.
Access to Copilot Enterprise and Adobe Firefly is currently free to all staff using an ANU account. Students also have access to Copilot Enterprise with their ANU accounts, and students in some courses and disciplines have access to Adobe Firefly. Note that usage may still be constrained by technical limitations and content generation quotas specific to the tool you are using.
At present, educators should refer to the ANU AI Institutional Principles, which offer guidance on engaging with and applying AI in learning, teaching and research. Additionally, the Centre for Learning and Teaching (CLT) is developing a comprehensive guide specifically addressing GenAI use in assessment, which is expected to be published in the near future.
It is essential for all students to uphold the values of academic integrity and to ensure their submitted work is their own. Where other sources or AI tools are used with permission, this use must be acknowledged in their work.
Assessment instructions, guidance and boundaries (in line with policy) should also make clear whether AI use is permitted in course-related work. This can be modelled and practised in other, non-assessed tasks and activities in order to develop the understanding and skills that help ensure academic integrity. If AI is allowed in your course, provide specific instructions on how to acknowledge and declare the use of generative AI when it has been used as a tool in producing an assignment.
Using AI-generated content when not permitted, and claiming authorship without acknowledgement, constitutes a breach of academic integrity. There are overlaps between the use of generative AI and actions that are considered breaches of academic integrity (see ANU Types of Academic Misconduct). In particular:
- Ghost writing and contract cheating: when we submit work written or rewritten by another party, in part or whole, as our own.
- Plagiarism: when we submit words or ideas copied, in part or whole, from other sources without acknowledgement.
The ANU procedure for suspected misuse of generative AI is the same as for any academic misconduct, which includes giving the student an opportunity to respond. Details can be found on the staff-only pages on Academic Integrity.
For non-invigilated assessments, it is challenging to determine whether a student has used AI in completing their work. However, uploading student work to AI detection software may constitute a breach of student privacy, as students retain Intellectual Property ownership of their assignments, even when they have been submitted for assessment. Thus, for privacy and data security reasons, uploading student work directly into an AI platform without their consent is not permitted (see Generative AI and data governance).
Consider the reasons why you may need to prevent students from using AI, such as specific skills and knowledge that require assessment security (e.g. translation tasks). For assessments that require students to demonstrate knowledge and skills without AI use, give clear instructions that the use of AI is not permitted. In the medium to long term, where possible, try to provide opportunities in classroom or assessment tasks for students to develop AI proficiencies and digital capabilities. Also refer your students to these resources:
- Guide for students: best practice when using Generative AI
- ANU Library: Artificial Intelligence including generative AI
- Academic integrity
From an accessibility and inclusion perspective, many assistive technologies (AT) used by staff and students have AI-enabled functionality; for example, some screen readers can now produce an image description where alt text is not provided. If you are considering not permitting the use of AI for a specific assessment, it is important to review any provisions for the use of AI-enabled AT, to ensure the assessment remains equitable and does not discriminate against students who depend on those technologies.
Students retain Intellectual Property ownership of their assignments, even when they have been submitted for assessment. For privacy and data security reasons, uploading student work directly into an AI platform without their consent is not permitted, including for the purpose of feedback or marking.
The ANU Generative AI and data governance page provides further guidance on data governance. For more information and assistance with managing privacy and personal data, or to contact the ANU Privacy Officer, see the ANU Privacy page.
AI literacy is contextual and variable, and can be interpreted and developed in different ways depending on the discipline. Each educator’s content expertise is invaluable when guiding discussions with students about generative AI and its use in critical, ethical and authentic ways.
Promoting AI literacy should prioritise the development of knowledge and skills that enable people to make informed, ethical and responsible decisions about AI technologies and their use. This includes understanding the strengths and limitations of AI for specific tasks, and critically evaluating AI-generated information in order to identify potential bias and inaccuracies. As such, developing AI literacy becomes integral to developing academic literacy as a whole. It is important for educators to understand how to use AI in safe, ethical and responsible ways while setting a positive example for their students.
Emphasise the importance of academic integrity with students and its relation to how AI is used, e.g. that it should be a learning tool rather than a shortcut to complete work. You may also discuss or use AI in the classroom in ways that consider and explore the role and impact of AI on current and future professions in the discipline. An example of this is to ask students to generate AI images and critique the cultural stereotypes and biases that may appear, or to reflect on issues of copyright and originality in the creative arts.
The goal is not to instruct students in how to use specific AI tools but to teach them how to apply their knowledge, skills and understanding to critically and thoughtfully evaluate the presence and use of AI in their various contexts.
Take small steps and open up the conversation with students in the classroom. Use these conversations to acknowledge AI, set boundaries around use of AI in coursework, and clearly communicate your expectations and rationale for the use of AI in the course. Once you have established how you will use AI in the classroom, you can encourage students to use AI as a planning or discovery tool for a project or assessment.
For example, instead of designing a summative assessment that integrates AI, begin with low-stakes formative tasks or in-class activities that incorporate AI use. This can minimise the need to redesign your existing coursework. Once you and your students are more confident using AI, you can start planning future iterations of your course that incorporate AI in more sophisticated ways, e.g. as an assistive tool or a component of assessment.
Essays and other traditional assessments, such as quizzes and multiple-choice tests, may be more vulnerable to AI misuse. An essay task with a generic or broad topic is quite vulnerable, as is any task with a singular or highly prescriptive outcome, such as programming code, multiple-choice questions or direct language translation. Less vulnerable assessments include those with more authentic topics and tasks that relate to specific contexts, current industry data, bespoke data, specific case studies or personal experiences. In-person assessments, such as interviews and demonstrations, and collaborative tasks, such as group projects, are also less vulnerable.
Test your assessment design using a generative AI tool such as Copilot Enterprise to determine the risk to academic integrity. For example, you can input the assessment instructions into an ANU enterprise AI tool to identify the ways students might use AI and the outputs it may generate.
Some possible ways to redesign your task include:
- Less generic and less predictable assessment questions or prompts,
- Industry specific and current case studies and scenarios,
- Unique data, such as from primary research,
- Tasks requiring personal speculations, recommendations, justifications and reflection,
- Tasks with stages that build on one another and require synthesis, analysis and reflection across tasks.
Reflect also on the type of assessment you have chosen and the intrinsic skills and outcomes that you hope students will achieve, such as communication skills, critical thinking and the application of knowledge. Work within those needs to consider how students can still develop and demonstrate their learning in alternative ways.
The use of AI tools in learning and teaching is currently optional. However, as the prevalence of AI grows in both educational institutions and workplaces, it becomes more important to offer opportunities for students to critically analyse the role and impact of AI in their discipline-specific contexts and to understand the benefits and limitations of the technology in their current and future professions.
If such a situation arises, it is important to first understand the student’s specific concerns in regard to the use of AI. For instance, they might have questions about ethics or privacy, or they might not understand the purpose and relevance of AI to the task. As with any aspect of teaching, the goal is to create an environment in which students feel safe and supported while exploring their learning.
Ensure that the student understands the rationale for using AI technology, such as its relevance and benefit to the learning, or its authenticity in the field. Address privacy and ethical concerns by emphasising the importance of developing AI literacies and directing the student to available resources such as:
- Guide for students: best practice when using Generative AI
- Artificial Intelligence including generative AI
You can also explain other measures that have been taken to help alleviate these issues, such as Microsoft Copilot’s protected environment.
Finally, you may allow some modification of the task, provided that the necessary outcomes are still met. For example, you might allow the student to negotiate the degree of AI use they are comfortable with, or ask the student to critically reflect on their experience of using AI in that context.
After you have introduced Generative AI through incremental steps, and as educators and students become more comfortable with the technology, more integral and sophisticated implementations can be explored. To encourage this growth, be open and transparent with your students about why and how AI is being used. Have discussions about reliability, academic integrity, bias and intellectual property. Where applicable, discuss the authentic role of AI in study and the workplace.
When it comes to actual AI implementation, spend some time building capability with smaller, non-assessed activities using AI as a tool in class and for preparatory work. Students might, for example, utilise AI to generate prompts for investigation and discussion. You may even link the outcomes of these activities with discussions of ethics and academic integrity mentioned above.
With more time to develop your resources, classroom activities and assessment, you could embed AI in more practical ways. Here are some examples:
- Students design prototypes using images generated with Adobe Firefly and Copilot, based on their own research into a problem in their discipline area.
- Students evaluate and critique AI-generated images, text or code (e.g. critique a policy or recommendations written by AI).
- Students include AI as a virtual “group member” in their group assessment, documenting and critiquing the AI’s contribution.
Additionally, forthcoming resources from both the Centre for Learning and Teaching (CLT) and the Library will provide educators and students with more practical advice and support. Information on these resources will be published as it becomes available.
GenAI tools and technologies are also becoming increasingly prevalent and integrated into a range of applications and software that students may already be using, such as grammar checkers, word processors, search engines and digital assistants. When using services, tools and apps that are not endorsed or supported by ANU, it is important to consider equity, accessibility and privacy, as some services may require students to disclose personal information or to pay for access.
Using ANU-supported AI tools, such as Copilot Enterprise and Adobe Firefly with an ANU account, helps to ensure that personal and company data and intellectual property are protected, and that user data is not used for AI training purposes. It also ensures that staff and students who access these services are not required to set up external accounts or expose their personal information to third parties.
Generative AI can be viewed as part of a student’s academic toolbox and as one component in the learning process. As such, it is most beneficial when integrated in conjunction with other tools, methods and approaches for the enabling and enhancement of learning.
However, as the presence of AI products and AI-derived content grows more widespread in our daily lives, classrooms and workplaces, the development of effective AI literacy and competencies becomes more important. This can complement the development of students’ other vital skills and knowledge.
Of course, approaches and needs will vary. Like any tool, generative AI should be used with careful consideration of what is most appropriate for the task and context.
Generative AI models are trained on huge volumes of diverse data that is publicly available on the Internet. Typically, they are not trained on data that requires payment, a subscription or a limited licence to access. Adobe Firefly’s model is trained on a dataset of Adobe Stock images, meaning the content is either appropriately licensed or in the public domain where copyright has expired.
Training is an iterative process by which the AI model analyses its training data to identify, synthesise and learn patterns and structures in the information. When prompted by the user, an AI like Microsoft Copilot derives its answers from these complex patterns and structures. However, it does not typically have awareness of which specific sources contributed to its training on any particular topic. If it gathers information from the Internet or provides links as citations, it finds these in real time using Internet search by keyword and context.
It is also important to note that, as training is an intricate and time-consuming process, the data used for training is often not up to date. It may also have other deficiencies that affect the accuracy or completeness of the output. Thus, it is always necessary to engage with AI-generated content critically and thoughtfully, in an “AI literate” way.
The potential for inaccuracy or bias in the output of generative AI is a valid concern for both developers and users. Such AI models learn by identifying connections and patterns in the data we provide them, in a process called “training”. However, the AI model cannot understand the social or ethical implications of those connections and patterns; thus, any biases, omissions or other insufficiencies in the training data may also be represented in the output of the model.
At a user level, not much can be done about the specific training of an AI tool except by providing feedback to its developers. However, there are steps you can take to help minimise the potential for bias:
- Developing your own awareness of the problem
- Using more specific and inclusive prompts when instructing and questioning the AI (see the example below)
- Prompting the AI for alternative and varied responses
- Applying critical evaluation of the prompts being used and the output the AI provides
- Communicating clear guidelines for the ethical and responsible use of AI tools.
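For example, a generic image prompt such as “a photo of a scientist” leaves the model free to reproduce stereotypes from its training data, whereas a more specific and inclusive prompt such as “a photo of a diverse team of scientists of different ages and backgrounds collaborating in a laboratory” explicitly directs it towards more representative output. These prompts are illustrative only; adapt them to your own context.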
There is also an AI Prompting guide available which goes into more detail about many of these points.
The steps above help ensure that both students and educators can be more actively involved in developing and promoting necessary AI literacies and skills to deal with the inherent challenges and limitations of the AI tools they regularly use.