Principles for the use of Generative AI for productivity purposes
These principles outline the University’s approach to the responsible and effective use of Generative Artificial Intelligence (GenAI) tools to enhance productivity across all administrative aspects of University work, including professional services, research support and teaching-related support tasks.
Introduction
These principles support a positive approach to AI, help staff understand the limitations and challenges of such tools, and enable staff to use GenAI tools appropriately, responsibly and ethically.
Using GenAI is entirely optional. This technology is intended to be a supportive resource, not a mandatory requirement for any role. If you have ethical concerns, believe that a task requires human judgment, or simply prefer to use traditional methods, you are not required to use AI tools. We value your professional judgment and these tools are available to help enhance your work, not replace it.
Ethical and responsible use of AI tools
- Human oversight and accountability: AI tools can streamline tasks like calendar scheduling, email and document drafting, minute taking and document summarisation. However, these tools are intended to complement, not replace, human judgment and decision-making. Users remain ultimately responsible and accountable for the outcomes produced with AI assistance.
- Fairness and bias mitigation: Users must be aware of potential biases in AI outputs and take steps to mitigate them. Data used with AI tools should be diverse, representative, and ethically sourced.
- Transparency and explainability: Where appropriate, users should understand how AI tools generate their outputs and be able to explain the rationale behind decisions informed by AI.
- Privacy and data protection: All use of AI must comply with GDPR and the University's data protection policies. Sensitive or confidential information should not be entered into public or unregulated AI models. Users must exercise caution and ensure data is anonymised and personal identifiers are removed where necessary.
Accuracy and verification
- Fact-checking and validation: Outputs generated by AI tools should never be accepted without critical evaluation. Users must verify the accuracy of information, data, and content produced by AI, especially for factual claims, research findings, or official communications.
- Contextual understanding: AI tools may lack nuanced understanding of context. Users should apply their domain expertise to ensure AI-generated content is appropriate and relevant to the specific situation.
- Avoiding misinformation and disinformation: Users are responsible for ensuring that AI-generated content does not contribute to the spread of misinformation or disinformation.
Learning and development
AI tools can be used to enhance existing skills and develop new competencies, fostering innovation and efficiency. We encourage staff members to stay informed about advancements in AI technologies and best practices for their use.
The University will provide resources, training, and guidance to help staff understand and effectively utilise AI tools for productivity. Further resources and announcements for staff will follow in the coming months.
The following is currently available to staff and students:
- Elev-AI-te: Workshops on GenAI (open to all staff)
- Student guidance and resources on getting the most out of GenAI
Data protection and safe use of AI tools
- Data security: Users must adhere to the University's IT security policies when using AI tools, especially concerning the handling of University data.
- Software and tool approval: Only University-approved AI tools should be used for official University business. All staff have access to Google Gemini (Student hub access required) as the institutionally-supported GenAI tool. Where possible, Gemini should be used to support productivity activities. Where tools other than Google Gemini are made available to staff on a use-case basis, the New IT Solution Request Process (Staff hub access required) must be followed to ensure they comply with data protection and information security policies.
- Intellectual property and copyright: Users must respect intellectual property rights and copyright law when using AI tools, ensuring that content generated or analysed by AI does not infringe existing rights. Staff should be aware that the use of GenAI tools has potential copyright implications, and that this remains an only partially understood and contested area. This includes the potential for AI to reuse original content without acknowledging its creators.
Review and adaptation
These principles will be regularly reviewed and updated to reflect advancements in AI technology, evolving best practices, and changes in University policy or regulatory requirements.
We would value your feedback on the practical application of these principles so that we can continue to improve their effectiveness and relevance.
For further information about the use of Generative AI (GenAI) for teaching and learning, and for Research, please visit the following pages:
- Artificial intelligence in learning and teaching (Staff hub access required)
- Principles for using GenAI in Research and Innovation (Student hub access required)