Security and Privacy Guidelines for the Usage, Procurement and Deployment of AI

For the purposes of this guideline, the term 'AI' broadly covers all forms of Artificial Intelligence, including traditional AI systems and emerging Generative AI (GenAI) technologies.

Date: 2024-05-24

Purpose and Intended Audience

This document has been developed as an initial guideline for students, faculty, and staff on the use of Artificial Intelligence (AI) and Generative AI tools.

There are important considerations to keep in mind when using or experimenting with AI, including information security, data privacy, compliance, responsibility, and ethics. Generative AI is both transformative and disruptive. By adhering to these guidelines, the University of Ottawa can harness its benefits more securely.

The University recognizes the need to define its risk tolerance for the use of AI. A balanced approach will be adopted, considering both the potential benefits and the associated risks. AI systems that could affect safety or fundamental rights, or that introduce significant ethical issues, will be considered high risk.

AI is a rapidly evolving technology, and Information Technology will continue to monitor developments and incorporate feedback from the University community to update its guidelines accordingly.

What is AI and Generative AI

An AI system is an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy (NIST AI RMF 1.0).

Generative AI refers to artificial intelligence technology that synthesizes new text, audio, or visual imagery from large bodies of data in response to user prompts. GenAI models can be used in stand-alone applications, such as ChatGPT, Microsoft Copilot, or Gemini, or incorporated into other applications, such as internet search engines or word processing applications.

Principles

All parties designing, developing, deploying or using AI systems have a shared responsibility to determine whether the AI technology is an appropriate or necessary tool for the relevant context or purpose, and how to use it responsibly.

The decision to deploy or use an AI system should be based on the specific context of use, on a security and privacy risk assessment conducted by Information Security and the Access to Information and Privacy Office (AIPO), and on consideration of the principles outlined below:

  • Safety

Safety should be considered throughout the lifetime of an AI system to prevent failures or conditions that pose a risk of serious harm to human life, health, property, or the environment. Consult the University’s existing safety guidelines for the services it delivers, and align with sector- or application-specific guidelines or standards.

  • Fairness and bias detection

To address issues such as harmful bias and discrimination in AI systems, we must all actively work to ensure the fairness of these systems. Bias can become ingrained in the automated systems that help make decisions in our daily activities; these systems can amplify and perpetuate biases that harm individuals, groups, communities, organizations, and society. AI systems and their outputs should empower all individuals, while promoting equality and equity.
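
One concrete way to put bias detection into practice is to compare outcomes across groups. The sketch below, a minimal illustration only, computes per-group selection rates and a disparate-impact ratio; the data, group labels, and 0.8 threshold are assumptions for demonstration, not University-prescribed values or methods.

```python
# Illustrative sketch: a disparate-impact check on a binary decision system.
# Group labels, sample data, and the 0.8 ("four-fifths") threshold are
# assumptions for demonstration only.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the positive-decision rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Example: admissions-style decisions (1 = accept) for two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(decisions, groups)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, assumed here
    print("Warning: selection rates differ markedly across groups; review for bias.")
```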

  • Transparency

Information about the AI system and its outputs should be available to users interacting with or using the system. Users should be made explicitly aware when AI is being used, of its intended use and purpose, of the data collected, used, or disclosed, and of the decisions or outputs created and by whom.

  • Accountability

In designing, developing, deploying or using AI systems as part of individual and organizational activities, you are responsible for complying with the principles outlined herein, the University’s policies and guidelines, and applicable regulatory requirements. Accountability for the outputs and decisions of AI systems rests with individuals and organizations, not with the automated system used to support these activities.

Consult the University’s Academic regulation A-4 - Academic Integrity and Academic Misconduct, the website for Academic Integrity for students, and the Guidelines for Teaching with Generative Artificial Intelligence issued by Teaching and Learning Support Service (TLSS).

  • Security

AI systems should employ mechanisms to protect against, respond to, or recover from security concerns (e.g. unauthorized access and use) related to the confidentiality, integrity, and availability of the system. Guidance from organizational policies and industry frameworks should be applied to the general security of the underlying software and hardware for AI systems (e.g. existing user access and permissions to information and systems should be respected and the necessary approvals from the data owners should be obtained when implementing an AI solution).
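
As a minimal illustration of respecting existing access controls, the sketch below checks a user’s permissions before an AI connector retrieves a document on their behalf. The ACL structure and function names are hypothetical placeholders, not an actual University API.

```python
# Hypothetical sketch: enforcing existing access controls before an AI
# connector retrieves a document on a user's behalf. The ACL store and
# helper names here are assumptions for illustration.

ACL = {  # document -> set of users allowed to read it (assumed structure)
    "budget-2024.xlsx": {"alice"},
    "course-outline.docx": {"alice", "bob"},
}

def fetch_for_ai(user: str, document: str) -> str:
    """Release a document to the AI tool only if the user can already read it."""
    if user not in ACL.get(document, set()):
        raise PermissionError(f"{user} is not authorized to access {document}")
    return f"<contents of {document}>"  # placeholder for the real retrieval

print(fetch_for_ai("bob", "course-outline.docx"))  # permitted
# fetch_for_ai("bob", "budget-2024.xlsx")          # would raise PermissionError
```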

Security risks can arise at any point in the system lifecycle or at any link in the supply chain, including the sources of data, components, and software that compose the AI system. For example, if a data source is manipulated without detection, AI results and the decisions based on them could be negatively affected. In cases where AI systems guide human decision making, understanding the origin of the data and the steps of data processing will help demonstrate the trustworthiness of the results.
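
One practical way to establish the origin and integrity of a data source is to verify a cryptographic digest published by the data owner before the data is used. The following is a minimal sketch; the file name is a placeholder, and for demonstration the "published" digest is computed locally rather than obtained from a provider.

```python
# Minimal sketch: verifying the integrity of a data source before it is used
# by an AI system. In practice the expected digest would come from the data
# owner or a trusted manifest, not be computed locally as it is here for demo.
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(path, expected_digest):
    """Refuse to proceed if the dataset does not match its published digest."""
    if sha256_of_file(path) != expected_digest:
        raise RuntimeError(f"Digest mismatch for {path}: possible tampering.")
    return True

# Demonstration with a locally created file (placeholder for a real dataset).
with open("training_data.csv", "w") as f:
    f.write("id,value\n1,42\n")
published = sha256_of_file("training_data.csv")  # stands in for the provider's digest
print(verify_dataset("training_data.csv", published))  # -> True
```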

  • Respect for privacy

Safeguards to protect personal information and mitigate privacy-related risks should guide the use of AI systems throughout the lifetime of the system. Privacy-enhancing technologies and data-minimization methods, such as de-identification, should be considered in the design, development, deployment or use of any AI system.

Consult the AIPO’s Guide on reasonable use of artificial intelligence while protecting personal information for more information on how to meet the University’s privacy obligations.
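
As a minimal illustration of de-identification, the sketch below masks a few common identifiers in free text before it is submitted to an external AI tool. The patterns are illustrative assumptions only; genuine de-identification requires broader coverage and formal review (note, for example, that personal names are not caught here).

```python
# A minimal de-identification sketch: masking common personal identifiers in
# free text before it is submitted to an external AI tool. The patterns are
# illustrative only and do not constitute complete de-identification.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),          # email addresses
    (re.compile(r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b"), "[PHONE]"),  # NA phone numbers
    (re.compile(r"\b\d{9}\b"), "[ID]"),                           # 9-digit IDs (assumed format)
]

def deidentify(text: str) -> str:
    """Replace matched identifiers with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Contact Jane at jane.doe@uottawa.ca or 613-555-0188 (student 123456789)."
print(deidentify(prompt))
# -> "Contact Jane at [EMAIL] or [PHONE] (student [ID])."
# Note: the name "Jane" is not caught, showing the limits of simple patterns.
```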

  • Reliability and validity

AI systems and their outputs should be assessed through ongoing testing and monitoring that confirms the system is performing as intended under conditions of expected use, throughout the lifetime of the system. Failures can occur in both expected and unexpected settings and can harm people, the University, or the University’s resources. Efforts should prioritize the minimization of potential negative impacts and may require human intervention.
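
A minimal sketch of such ongoing monitoring appears below: the system is periodically re-checked against a fixed reference set, and degraded performance is escalated for human review. The predict callable, reference data, and thresholds are all assumed placeholders.

```python
# Illustrative monitoring sketch: re-checking an AI system against a fixed
# reference set and flagging degradation for human review. The predict()
# callable, reference data, and thresholds are assumed placeholders.

def accuracy(predict, reference):
    """Fraction of reference (input, expected) pairs the system gets right."""
    correct = sum(1 for x, expected in reference if predict(x) == expected)
    return correct / len(reference)

def check_reliability(predict, reference, baseline=0.95, tolerance=0.05):
    """Return True if performance stays within tolerance of the baseline."""
    score = accuracy(predict, reference)
    if score < baseline - tolerance:
        # Escalate rather than silently continue: a human decides what to do.
        print(f"ALERT: accuracy {score:.2f} below expected {baseline:.2f}; "
              "route outputs for human review.")
        return False
    return True

# Example with a trivial stand-in "model":
reference = [(1, "odd"), (2, "even"), (3, "odd"), (4, "even")]
model = lambda x: "even" if x % 2 == 0 else "odd"
print(check_reliability(model, reference))  # -> True
```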

Responsibilities

All members of the University community
  • As a general rule, all members of the University community must take reasonable measures to safeguard the security of the University’s computer systems and networks, including AI-based systems. This includes implementing appropriate security controls to protect against malicious attacks and ensuring that users are informed of the risks associated with using the technology.
  • The adoption of AI tools introduces new considerations, particularly around the application of privacy principles. The default configurations of many AI tools tend to favor functionality, increasing the risk of inadvertently disclosing confidential or sensitive data to unauthorized individuals, or of such data being used to train AI models.
  • Users should avoid entering any information classified as Internal, Confidential, or Restricted, including non-public research data, into an AI tool; a minimal guard against this is sketched after this list. Data protection at the University of Ottawa is a shared responsibility under Policy 117.
  • Community members directly involved in the use of AI tools are responsible for identifying and mitigating the risks associated with its use and ensuring that appropriate safeguards are in place. Responsibilities include evaluating datasets to prevent or address biases, scrutinizing generated outcomes, and bolstering defenses against malicious attacks, among other critical tasks.
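
As referenced above, the sketch below illustrates a minimal client-side guard that refuses to send data labelled Internal, Confidential, or Restricted to an external AI tool. The classify function is a hypothetical placeholder; in practice, classification is determined by the data owner under Policy 117, not inferred from the text alone.

```python
# A minimal sketch of a client-side guard that blocks classified content from
# reaching an external AI tool. The labels follow Policy 117's terminology;
# the classify() function is a hypothetical placeholder for demonstration.

BLOCKED_CLASSIFICATIONS = {"Internal", "Confidential", "Restricted"}

def classify(text: str) -> str:
    """Placeholder: real classification comes from the data owner or a
    labelling process, not from keyword matching as shown here."""
    return "Confidential" if "grades" in text.lower() else "Public"

def submit_to_ai(text: str) -> str:
    label = classify(text)
    if label in BLOCKED_CLASSIFICATIONS:
        raise ValueError(f"Refusing to send {label} data to an external AI tool.")
    return f"(sent to AI tool) {text}"

print(submit_to_ai("Draft a welcome email for new students."))
# submit_to_ai("Summarize these student grades ...")  # would raise ValueError
```
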
System Business Owners and Product Managers

System business owners and product managers must submit a security and privacy assessment:

  • when procuring or deploying any AI tool.
  • when third-party suppliers add new AI functionalities to existing systems.
  • when planning to integrate an existing system with an AI tool (e.g. API, plug-ins, connectors, etc.).
Information Security and AIPO
  • The Chief Information Security Officer is responsible for monitoring risks related to the University’s information and IT assets, including AI tools and technologies.
  • The Information Security and AIPO offices work together to conduct Security and Privacy Assessments to identify vulnerabilities, evaluate the risks, and recommend mitigation measures in accordance with the University’s obligations.
Information Technology (IT)
  • Information Technology provides, implements and supports standard services and products, and advises the community in the use and deployment of any software technology, including AI-based software.
  • Information Technology is responsible for reducing the inherent risks associated with the implementation of new technologies by implementing processes and frameworks (e.g. architecture review board, project review committee, change approval board, Secure Software Development Lifecycle, etc.).
How to get support with AI tools

IT Solutions and the Architect Working Group (AWG) provide assistance and expert advice to help community members. You can open a request for support by visiting the IT Self-Service Centre.

Implementation and review

The University of Ottawa’s Chief Information Security Officer (CISO) is responsible for the implementation, review, and approval of this guideline, and for initiating a review as necessary, and at least annually, to ensure alignment with internal and external requirements and regulations.

Questions about this guideline may be referred to the Information Security Office.

References