Purpose and Intended Audience
This document provides initial guidelines for students, faculty, and staff on the use of Artificial Intelligence (AI) and Generative AI tools.
There are important considerations to keep in mind when using or experimenting with AI, including information security, data privacy, compliance, responsibility, and ethics. Generative AI is both transformative and disruptive. By adhering to these guidelines, the University of Ottawa can harness the benefits of these technologies more securely.
The University recognizes the need to define its risk tolerance for the use of AI. A balanced approach will be adopted, considering both the potential benefits and the associated risks. AI systems that could affect safety or fundamental rights, or that could introduce significant ethical issues, will be considered high risk.
AI is a rapidly evolving technology, and Information Technology will continue to monitor developments and incorporate feedback from the University community to update its guidelines accordingly.
What are AI and Generative AI
An AI system is an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy (NIST AI RMF 1.0).
Generative AI (GenAI) refers to artificial intelligence technology that synthesizes new text, audio, or visual imagery from large bodies of data in response to user prompts. GenAI models can be used in stand-alone applications such as ChatGPT, Microsoft Copilot, or Gemini, or incorporated into other applications such as internet search engines or word processors.
Principles
All parties designing, developing, deploying or using AI systems have a shared responsibility to determine whether the AI technology is an appropriate or necessary tool for the relevant context or purpose, and how to use it responsibly.
The decision to deploy or use an AI system should be based on each context of use, on a security and privacy risk assessment conducted by Information Security and the Access to Information and Privacy Office (AIPO), and on consideration of the principles outlined below:
Safety
Safety should be considered throughout the lifetime of an AI system to prevent failures or conditions that pose a risk of serious harm to human life, health, property, or the environment. Consult existing safety guidelines for services delivered by the University, and align with sector- or application-specific guidelines or standards.
Fairness and bias detection
To address issues such as harmful bias and discrimination in AI systems, we must all actively work to ensure the fairness of these systems. Bias can become ingrained in automated systems that help make decisions in our daily activities, and these systems can amplify and perpetuate biases that harm individuals, groups, communities, organizations, and society. AI systems and their outputs should empower all individuals while promoting equality and equity.
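As a concrete illustration, the sketch below shows one common bias check: comparing the rate of positive outcomes an automated system produces for each group. The data, group labels, and the four-fifths threshold are illustrative assumptions, not a University-endorsed method.

```python
# Minimal sketch of a demographic-parity check: compare the rate of
# positive outcomes an AI system produces for each group. The data,
# group labels, and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, decision) pairs, where decision is 0 or 1."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
# The "four-fifths rule" is one widely used heuristic: flag the system
# for review if any group's rate falls below 80% of the highest rate.
if min(rates.values()) / max(rates.values()) < 0.8:
    print(f"Potential disparate impact, review needed: rates = {rates}")
```

A check like this is only a starting point; fairness assessments should use multiple metrics and involve the people affected by the system.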
Transparency
Information about an AI system and its outputs should be available to the users interacting with or relying on the system. Users should be made explicitly aware when AI is being used, of its intended use and purpose, of the data collected, used, or disclosed, and of how decisions or outputs are created and by whom.
Accountability
In designing, developing, deploying, or using AI systems as part of individual and organizational activities, you are responsible for complying with the principles outlined herein, the University's policies and guidelines, and applicable regulatory requirements. Accountability for the outputs and decisions of AI systems rests with individuals and organizations, not with the automated system used to support these activities.
Consult the University’s Academic regulation A-4 - Academic Integrity and Academic Misconduct, the Academic Integrity website for students, and the Guidelines for Teaching with Generative Artificial Intelligence issued by the Teaching and Learning Support Service (TLSS).
Security
AI systems should employ mechanisms to protect against, respond to, and recover from security concerns (e.g., unauthorized access and use) related to the confidentiality, integrity, and availability of the system. Guidance from organizational policies and industry frameworks should be applied to the general security of the underlying software and hardware of AI systems; for example, existing user access rights and permissions to information and systems should be respected, and the necessary approvals from data owners should be obtained when implementing an AI solution.
Security risks can arise at any point in the system lifecycle or at any link in the supply chain, including the sources of the data, components, and software that compose the AI system. For example, if a data source is manipulated without detection, AI results and the decisions based on them could be negatively affected. In cases where AI systems guide human decision making, understanding the origin of the data and the steps of data processing helps demonstrate the trustworthiness of the results.
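As a simple illustration of supply-chain hygiene, the sketch below verifies a data file against a checksum recorded by its trusted source before the data is used; the file name and expected hash are hypothetical placeholders.

```python
# Minimal sketch: verify an input data file against a checksum recorded
# by the data owner, so undetected tampering in the supply chain is
# caught before the data influences AI outputs. "dataset.csv" and
# EXPECTED_SHA256 are hypothetical placeholders.
import hashlib

EXPECTED_SHA256 = "0" * 64  # the hash published by the trusted source

def file_sha256(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if file_sha256("dataset.csv") != EXPECTED_SHA256:
    raise RuntimeError("dataset.csv does not match its recorded checksum")
```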
Respect for privacy
Safeguards to protect personal information and mitigate privacy-related risks should guide the use of AI systems throughout the lifetime of the system. Privacy-enhancing technologies and data-minimization methods, such as de-identification, should be considered in the design, development, deployment, or use of any AI system.
Consult the AIPO’s Guide on reasonable use of artificial intelligence while protecting personal information for more information on how to meet the University’s privacy obligations.
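As a simple illustration of data minimization, the sketch below masks obvious direct identifiers in free text before it is shared with an external AI tool. The patterns and the nine-digit identifier format are assumptions for illustration only; real de-identification should rely on vetted tools and the AIPO guidance above.

```python
# Minimal de-identification sketch: strip obvious direct identifiers
# from free text before it leaves the institution. The regex patterns
# and the 9-digit "student number" format are illustrative assumptions;
# actual de-identification requires a vetted tool and a privacy review.
import re

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{9}\b"), "[ID]"),
    (re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"), "[PHONE]"),
]

def deidentify(text):
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(deidentify("Contact jane.doe@uottawa.ca, student 300123456, at 613-555-0199."))
# -> "Contact [EMAIL], student [ID], at [PHONE]."
```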
Reliability and validity
AI systems and their outputs should be assessed through ongoing testing and monitoring that confirms, throughout the lifetime of the system, that it is performing as intended under conditions of expected use. Failures can occur in both expected and unexpected settings and can cause harm to people, the University, or the University's resources. Efforts should prioritize minimizing potential negative impacts and may require human intervention, as in the sketch below.
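To make ongoing monitoring concrete, the sketch below tracks an AI system's accuracy over a rolling window of human-reviewed outputs and flags the system for review when performance degrades; the window size and threshold are illustrative assumptions, not prescribed values.

```python
# Minimal monitoring sketch: track an AI system's accuracy over a
# rolling window of reviewed outputs and escalate to a human when it
# drops below an agreed threshold. The window size and threshold are
# illustrative assumptions.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window=100, threshold=0.9):
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, prediction, ground_truth):
        self.results.append(prediction == ground_truth)

    def needs_review(self):
        if len(self.results) < self.results.maxlen:
            return False  # wait for a full window before judging
        return sum(self.results) / len(self.results) < self.threshold

monitor = AccuracyMonitor(window=10, threshold=0.9)
for pred, truth in [("a", "a")] * 8 + [("a", "b")] * 2:
    monitor.record(pred, truth)
if monitor.needs_review():
    print("Accuracy below threshold: route outputs to human review")
```

The key design point is that the monitor does not correct the system itself; it triggers human intervention, consistent with the accountability principle above.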