Artificial Intelligence (AI) is a rapidly evolving and widely available technology that is having a dramatic impact on many aspects of society, including education. Of particular relevance to education, generative AI in the form of Large Language Models (LLMs) can rapidly generate text, computer code and, increasingly, graphics in response to prompts. Used appropriately, LLMs can contribute positively to learning, but they have also been seen as a potential threat to current educational norms and to academic integrity.
At Oxford we want to support students and teaching staff to use AI ethically and appropriately in their work, and to regard AI literacy as a key skill. However, AI should not be treated as a shortcut for, or a replacement of, the individual effort needed to acquire the intellectual skills of a university graduate.
Oxford University contributed to and has adopted the Russell Group principles on the use of generative AI tools in education, which state that:
- Universities will support students and staff to become AI-literate
- Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience
- Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access
- Universities will ensure academic rigour and integrity are upheld
- Universities will work collaboratively to share best practice as the technology and its application in education evolve.
Consistent with the Russell Group principles, Oxford’s overarching position on the use of AI in teaching, learning and assessment is as follows:
- AI can be a supportive tool in learning, so long as its use is ethical and appropriate
- In some instances, academic staff, departments and colleges may give more detailed guidance on how they expect AI tools to be used (or not used) for different tasks or on specific assignments. Students should always follow the guidance of their tutors, supervisors and department or faculty
- Whenever AI is used, safeguards similar to those relating to plagiarism should be adopted: authors should never pass off ideas or text gleaned from AI as their own, and they should clearly acknowledge how AI has been used in the work
- Given that the output of LLMs can be incorrect or entirely fictitious, users of these tools must recognise that they retain responsibility for the accuracy of what they write.
Further guidance is available: