At a recent conference for IT services in Canadian universities, the most in-demand keynote was given by a professor from New York who spoke about his long involvement in AI teaching and research. His description of what AI can really do today, and of what is underway, was interesting. He covered robotics and art, and analyzed real-life situations. My staff were so engrossed in the topic that we held two follow-up sessions to explore the implications. Concerns were raised about the ethics of AI and the potential security pitfalls of deploying AI operationally for certain types of applications.
A number of higher ed institutions have formulated statements and guidelines on AI systems such as ChatGPT, with reference to academic integrity. It is a natural progression for AI to move from research projects to applications in teaching and learning. What is less discussed is the use of AI in the operational activities of universities.
The most common application people talk about in this realm is the chatbot. Chatbots are considered mainstream, and few concerns have arisen from their usage. In everyday news, we read about AI's wonderful potential in health, environmental applications, business, and agriculture. So, what are the reservations?
At the core of the angst expressed about AI is how to get ahead of it and advance responsible, ethical use. The methodologies behind learning algorithms, sensors, and robots are a black box for most of us. The uneasy feeling about human-computer interaction that movies have long promoted doesn't seem to go away, regardless of the increasing expertise the AI field has demonstrated.
So, what should we do at the University of Ottawa? In various discussions with other CIOs in higher ed, most are taking a wait-and-see approach. Of course, the AI field in academia has existed since at least the 1980s. On the administrative side, however, things are less evolved. I believe that we should be more proactive. We can check whether the systems we buy have AI integrated in their code, ask potential vendors about their AI roadmap when they respond to an RFP, and develop a more prominent voice on campus to get ahead of the curve, before AI in systems becomes mainstream and the cat is out of the bag.
Cyber security could be greatly impacted by nefarious AI code. Once it is embedded in a system, cyber-attacks become a lot harder to deal with. An AI thinking group has formed organically among several of my staff members. I look forward to their discussions and to the guidelines they propose for Information Technology at the University.
I am interested in hearing what you think about AI, so feel free to send me an email to discuss it.