When using AI, are you safeguarding trust and data privacy?
AI can create efficiencies and transform your day-to-day tasks. However, some AI tools save and repurpose data for future responses to the public. Check out these eight tips, based on the University of Ottawa’s AI security and privacy guidelines, for harnessing the benefits of AI safely.
- Ensure safety. Follow IT security practices and University guidelines for the safe use of information technology to prevent misuse or harmful consequences.
- Protect privacy. Avoid inputting sensitive or restricted data, such as student numbers, SINs, email addresses, intellectual property, and credit card numbers. The less information you include, the lower the risk of a privacy breach (see the illustrative sketch after this list).
- Review data control settings. Evaluate the tool’s privacy and security settings, including options to disable the tool’s ability to reuse your data to train its AI.
- Be aware of bias. Review the data sources and generated results for bias to ensure transparency, accountability, fairness and respect for privacy.
- Check for reliability and validity. Use systems that have positive reputations and consistently provide valid results, and review cited sources as a marker of reliability.
- Take accountability. Before sharing, make sure the information is accurate and appropriate and adheres to University policies and guidelines. Not all AI-generated content is accurate or factual.
- Uphold copyrights and academic integrity. Cite any material that was referenced or quoted. AI draws upon many Internet sources and can fabricate content. Let readers know where AI has been used.
- Reach out about concerns. Consult Information Security and the Access to Information and Privacy Office (AIPO) for security and privacy risk assessments before deploying an AI system, or for advice on recommended systems.
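
For readers who prepare prompts programmatically, the sketch below shows one possible way to screen out identifiers like those listed in the privacy tip before text is sent to an external AI tool. It is only an illustration, not an official University tool: the patterns, placeholder labels, and the `redact` function are assumptions, and simple pattern matching will not catch every form of personal or restricted information, so it does not replace the guidance in Policy 117 or from the AIPO.

```python
import re

# Hypothetical patterns for a few identifiers mentioned above.
# Real screening should follow Policy 117 and AIPO guidance.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "sin_or_student_number": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholders
    before the text is pasted into an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

if __name__ == "__main__":
    draft = "Please email jane.doe@uottawa.ca about student 123 456 789."
    print(redact(draft))
    # -> Please email [EMAIL REMOVED] about student [SIN_OR_STUDENT_NUMBER REMOVED].
```
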
Follow these tips to keep yourself and our University community safe. More information is available in Policy 117 on Information classification and handling and in the AIPO's Guide on responsible use of artificial intelligence.