Principles for Generative AI in learning and teaching

This page sets out the principles which guide the use of Generative AI in learning and teaching at the University of Sheffield.

Overview

The University’s response to Generative AI (GenAI) in learning and teaching is being guided by the following principles. 

These are intended to be relevant to a range of AI tools and applications, including, but not limited to, GenAI tools such as Google Gemini (formerly known as Google Bard).  They inform the approach to GenAI within schools, programmes and modules.

This fosters a positive approach to AI, helps staff and students understand the limitations and challenges of such tools, and enables them to use GenAI tools appropriately, responsibly and ethically.

Information for students and staff

Students and staff can find information about our approach to Generative AI on the Student and Staff hubs (University login required).

Student hub

Staff hub


A positive approach to AI

The University will take a positive approach to AI, and will ensure students and staff are AI literate, by:

  • Seeking opportunities to embed AI in teaching and assessment activities to develop the AI literacy of our students, and prepare them to use AI tools both during and after their time with the University.
  • Engaging with employers and other external stakeholders, so that we understand what they need from our future graduates.
  • Providing clarity within assessment criteria about how AI can contribute to or detract from achieving learning objectives and developing students’ academic and critical skills.
  • Taking an approach that is student-centric and informed by student voice, actively engaging students in developing how AI is used within learning and teaching activities and assessment.
  • Providing development opportunities to ensure staff and students are competent using AI tools (eg prompt engineering) and are also critically literate about the use of these tools and the results they produce. This will include signposting support offered by 301 Academic Skills Centre, the Library, English Language Teaching Centre, and the Careers and Employability Service. 

The limitations of AI tools

We recognise the limitations of GenAI tools and engage critically with the tools and their outputs. We will:

  • Recognise the potential for bias as a result of the biases within the datasets AI tools are trained on.
  • Recognise the potential of AI tools to provide inaccurate or misleading information, for example, false references or other forms of hallucination.
  • Acknowledge that the outputs from AI tools often lack anything beyond a surface-level engagement with the topic, and can lack the originality and creativity of human-generated responses.
  • Recognise that GenAI tools often lack any understanding of real world context, and have no inherent capacity to derive critical meaning or understanding.
  • Recognise that the algorithms employed by many GenAI tools are hidden from the end user, and as such are not necessarily replicable, testable or understood compared to other forms of academic analytical works (eg statistical tests).
  • Take a considered approach to how AI can be used as a tool to support learning whilst also recognising when it might hinder the acquisition of key skills and knowledge.

Equity of access

We will prioritise fair access to AI tools:

  • Consideration will be given to equity of access to the AI tools used in learning and teaching activities, ensuring these do not impose any additional cost on students.
  • Google Gemini is the University’s supported GenAI tool and provides equal levels of access to all students and staff. Where possible, Gemini should be used. However, there may be cases where another tool is more appropriate for a specific purpose. When this is the case, consideration will be given to how students access these tools: for example, whether different subscription levels could give access to different levels of functionality.
  • Where other tools are made available to students for specific purposes, appropriate processes will be followed to ensure they comply with data protection and information security policies. 
  • Where other tools are used, students will be made aware of how their personal data would be used, and alternative tools will be made available for students who do not wish to provide such data.
  • Digital accessibility is also considered when using AI tools in teaching. We recognise that some AI tools have accessibility issues, and will minimise the use of these tools and/or recommend alternative solutions.

Ethical use of AI tools

We will promote the ethical use of AI tools, including recognition of the data, privacy and intellectual property implications, by:

  • Ensuring students and staff know not to input any personal data into AI tools, recognising the potential ways in which the companies developing the tools may access and use this data.
  • Ensuring students and staff are aware of the implications of inputting research data into AI tools for analysis purposes.
  • Ensuring students and staff know not to input any commercially sensitive, research sensitive or information provided under non-disclosure agreements into AI tools and the dangers of third party access to such information.
  • Ensuring students and staff are familiar with the University ethics policy and how this relates to the use of AI tools.
  • Recognising that there are potential copyright implications in the use of GenAI tools, and that this is currently a poorly understood and contested area.
  • Ensuring students and staff are aware of and given access to training and guidance relating to information security and data protection, and understand how this relates specifically to the use of AI tools.
  • Raising awareness of the potential environmental, social and economic impacts that GenAI tools can cause. For example, the consumption of electricity and other natural resources, and the exploitation of workers in the development of AI tools.

Academic integrity and unfair means

We will consider the implications for academic integrity and unfair means by:

  • Providing clear expectations regarding the appropriate use of AI, with a specific focus on how this applies in assessments. This will be clearly communicated to students, for example, in programme and module handbooks and assessment criteria.
  • Giving a clear and consistent approach for students to declare their use of AI tools within assessment activities. The Acknowledge, Describe, Evidence approach is provided as a model for this.
  • Giving students the opportunity to engage in conversations about the appropriate use of AI tools, and to know where to go for guidance and support when needed (both within their school and from central services such as 301 Academic Skills Centre, the Library and the English Language Teaching Centre).
  • Giving consideration to where students are using AI tools as assistive technology, and ensuring this is appropriately supported.
  • Where the use of AI is not encouraged or is prohibited, providing clarity on what this includes and why this decision has been made.

When giving consideration to the above principles, we recognise that the use of AI within learning and teaching is developing at pace. These principles will be revisited and updated regularly.

These principles were last updated on 21 June 2024.

They will also be underpinned by central guidance and support.
