Principles for Generative AI in learning and teaching
This page sets out the principles which guide the use of Generative AI (GenAI) in any aspect of learning and teaching at the University of Sheffield.
Overview
The following principles are the University’s response to GenAI in learning and teaching. They inform the approach to GenAI within schools, programmes and modules.
These principles are intended to be relevant to a range of AI and GenAI tools and applications:
- AI in this context refers to assistive tools such as grammar checkers, translation tools and summarising tools.
- GenAI refers to tools that generate content, including but not limited to images, written text and code.
These principles support a positive approach to AI, help staff and students understand the limitations and challenges of such tools, and enable them to use GenAI tools appropriately, responsibly and ethically.
We encourage students to reflect on these principles when using GenAI tools as part of their studies or when preparing for and undertaking different forms of assessment.
Information for staff
University staff can find information about our approach to GenAI on the Staff hub (University login required).
A positive approach to AI
The University of Sheffield takes a positive approach to AI, and this means:
- Staff will be provided with appropriate development and training opportunities to support the embedding of AI literacy within the curriculum, taking a student-centric approach and engaging students in this process.
- All students will have the opportunity to develop their AI literacy during their time with the University.
- Clarity will be provided within assessment criteria regarding how AI can contribute to or detract from the achievement of learning objectives, including recognition for appropriate and responsible use.
- A proactive approach will be taken to engaging with employers and other external stakeholders, so that we best understand what they need from our future graduates.
- These activities are supported by central services including Elevate, 301 Academic Skills, the Library, ELTC, and the Careers and Employability Service.
The limitations of AI tools
We recognise the limitations of GenAI tools and will support students and staff to engage critically with the tools and their outputs, and this means:
- Recognising the potential for bias, as a result of existing biases within training datasets, and also the potential of AI tools to provide inaccurate information, false references or other hallucinations.
- Foregrounding the primacy of learning as a human process; taking a considered approach to how AI can be used as a tool to support learning, whilst also recognising when it might hinder the acquisition of key skills, knowledge and critical thinking.
- Recognising that the algorithms employed by many GenAI tools are hidden from the end user, and as such are not necessarily replicable, testable or understood.
Inclusivity and accessibility
We endorse the use of AI as an assistive technology, ensuring barriers to the use of AI tools are removed for all staff and students. This means:
- Due consideration is given to the equity of access to AI tools used in learning and teaching activities, ensuring that access does not come at any additional cost to students.
- All students and staff have access to Google Gemini (University login required) as the institutionally supported GenAI tool. Where possible, Gemini should be used to support learning and teaching activities.
- Digital accessibility (University login required) is a priority when integrating AI tools into teaching, recognising that some AI tools have accessibility issues. Where these are identified, accessibility statements will be used to manage expectations and suggest alternatives.
- We are aware that AI tools can benefit students who are neurodivergent, anxious or managing other invisible barriers to learning, and we engage with students to find out how these technologies could aid their learning and participation.
Ethical use of AI tools
We recognise the importance of ethical use of AI tools, including issues regarding environmental, social and economic factors. This means:
- Raising critical awareness of the potential environmental, social and economic impacts of using GenAI tools. This includes the consumption of electricity and other natural resources, as well as the exploitation of workers in the development of GenAI tools.
- Recognising that AI tools can perpetuate existing inequalities through the datasets they are trained on, the inputs provided by users, and the opaque way in which these tools arrive at a particular answer.
- Acknowledging that there are potential copyright implications in the use of GenAI tools, and that this remains an only partially understood and contested area. This includes the potential for AI to reuse original content without acknowledging its creators.
- Ensuring students and staff are familiar with the University ethics policy in relation to the use of AI tools.
Data protection and safe use of AI tools
We practise the safe use of AI tools, including recognition of issues regarding data, privacy and intellectual property implications. This means:
- Students and staff are educated about how personal data is processed or stored within AI tools.
- Ensuring staff and students are aware of, and have access to, training and guidance relating to information security and data protection in the context of AI use, recognising ways in which companies developing the tools may access and use this data.
- Ensuring staff and students are aware of how to safely use AI for research purposes, in accordance with the University’s research principles.
GenAI and academic integrity
GenAI can be a powerful tool to support learning, but it is important to understand the boundary between academic integrity and misconduct. We will support students to uphold academic integrity in their use of GenAI. This means:
- Setting clear expectations regarding the appropriate use of AI, with a specific focus on how these apply in assessments. These expectations are clearly communicated to students in programme and module handbooks and align with the academic misconduct policy principles and underpinning values.
- Providing a clear and consistent way for students to declare their use of AI tools within assessment activities. The Acknowledge, Describe, Evidence approach is provided as a model for this.
- Giving students the opportunity to engage in conversations about their use of AI tools, including as assistive technology, and to know where to go for guidance and support when needed (both school and central support).
- Where the use of AI is not encouraged or is prohibited, providing clarity on what this includes and why this decision has been made.
Next steps
In setting out the above principles, we recognise that the use of AI within learning and teaching is developing at pace. These principles will be revisited and updated regularly.
These principles were last updated on 25 July 2025.