Professor Ian Kerr Appeared Before the House of Commons to Discuss PIPEDA

Yesterday, Professor Ian Kerr appeared before the House of Commons’ Standing Committee on Access to Information, Privacy and Ethics. The Committee is currently engaged in a review of the Personal Information Protection and Electronic Documents Act (PIPEDA).

Professor Ian Kerr, a member of the Centre, is a Full Professor at the University of Ottawa’s Faculty of Law, where he holds the Canada Research Chair in Ethics, Law & Technology.

  • His speaking notes are reproduced below.
  • The audio recording of the Tuesday hearing is available here.

Professors Florian Martin-Bariteau, Teresa Scassa, Valerie Steeves, and Michael Geist, members of the CLTS, as well as CIPPIC, also appeared before the Committee in previous weeks.


Speaking Notes

Mr. Chair, honorable members, good afternoon and thank you for the opportunity to appear before you today as part of your review of PIPEDA—a statute in desperate need of legal reform.

My name is Ian Kerr. I am a Professor at the University of Ottawa where I hold a unique four-way appointment in the Faculty of Law, the Faculty of Medicine, the School of Information Studies and the Department of Philosophy. For the past 17 years, I have held the Canada Research Chair in Ethics, Law & Technology. Canada Research Chairs are awarded to (quote) “outstanding researchers acknowledged by their peers as world leaders in their fields.” (end quote)

I come before you today in my personal capacity.

I would like to begin by reinforcing points that have already been made in previous testimony.

First, to put it colloquially, the call for stronger enforcement through order-making powers and the ability of the OPC to impose meaningful penalties, including fines, is by now a total no-brainer. As Michael Vonn of the BCCLA recently testified before you, “there is no longer any credible argument for retaining the so-called ombudsperson model.” This has already been acknowledged by Commissioner Therrien and former Commissioners Stoddart and Bernier, and it is fortified by testimony from Canadian jurisdictions that already have order-making powers, which Commissioners Clayton and McArthur have described as advantageous. Strong investigatory and order-making powers are a necessary component of effective privacy enforcement, especially in a global environment. Let’s get it done.

Second, I agree with Commissioner Stoddart and with overlapping testimony by Professor Valerie Steeves, both of whom have stated that PIPEDA’s language needs to be strengthened in a way that reasserts its orientation towards human rights. As Professor Steeves attests, privacy rights are not reducible to data protection, which itself is not reducible to a balancing of interests. Enshrining privacy as a human right, as PIPEDA does, reflects a profound and crucial set of underlying democratic values and commitments. Privacy rights are not merely a tradeoff for business or governmental convenience. PIPEDA needs stronger human rights language.

Finally, I also agree with those who recognize the need for meaningful consent. In particular, I agree with the four-point proposal put forward by Professor Michael Geist. I have a further proposal outside of my main submission, which I would like very much to share with you. I am hoping that one of you will jot this down and ask me about it during the question period.

Having reinforced these views, I will devote the majority of my remarks to two of the central themes raised by this study: transparency and meaningful consent. I will use this framing language to orient your thinking but, in truth, both of these concepts require expansion in light of dizzying technological progress.

When PIPEDA was enacted, the dominant privacy metaphor was George Orwell’s 1984—“Big Brother is watching you.” Strong privacy rights were seen as an antidote to the new possibility of dataveillance—the application of information technologies by government or industry to watch, track and monitor individuals by investigating the data trail they leave through their activities. Though perhaps no panacea, PIPEDA’s technology-neutral attempt to limit collection, use and disclosure was thought to be a sufficient corrective.

However, technological developments in the 17 years since PIPEDA go well beyond watching. Today, I will focus on a single example: the use of AI to perform risk assessment and delegated decision-making. The substitution of machines for humans shifts the metaphor away from the watchful eye of Big Brother, towards what Professor Daniel Solove has characterized as “a more thoughtless process of bureaucratic indifference, arbitrary errors, and dehumanization, a world where people feel powerless and vulnerable, without any meaningful form of participation in the collection and use of their information.” This isn’t George Orwell’s 1984—it’s Franz Kafka’s Trial of Joseph K. The world we now occupy, unlike the one in which PIPEDA was enacted, permits complex, inscrutable AI to make significant decisions that affect our life chances and opportunities. These decisions are often made with little to no input from the people they affect, and little to no explanation as to how or why they were made. Such decisions may be “unnerving, unfair, unsafe, unpredictable, unaccountable”—and unconstitutional. They interfere with fundamental rights, including the right to due process and even the presumption of innocence.

It is worth taking a moment to drill down with real life examples. IBM Watson is used by H&R Block to make expert decisions about people’s tax returns. At the same time, governments are using AI to determine who is cheating on their taxes. Big Law uses ROSS to help its clients avoid legal risk. Meanwhile, law enforcement agencies use similar applications to decide which individuals will commit crimes and which prisoners will reoffend. Banks use AI to decide who will default on a loan, universities to decide which students should be admitted, employers to decide who gets the job, and so on.

But here is the rub. These AIs are designed in ways that raise unique challenges for privacy. Many use machine learning to excel at decision-making; this means AIs can go beyond their original programming, to make “discoveries” in the data that human decision-makers would neither see nor understand.

This “emergent behavior” is what makes these AIs so useful. It also makes them inscrutable. Machine learning, Knowledge Discovery in Databases, and other AI techniques produce decision-making models that differ so radically from the way humans make decisions that they resist our ability to make sense of them. Ironically, AIs display great accuracy—but those who use them (even their programmers) often don’t know exactly how or why.

Permitting such decisions without any ability to understand them can eliminate the possibility of challenge that is essential to the rule of law. When an institution uses your personal information and data about you to decide that you don’t get a loan, that your neighborhood will undergo more police surveillance, that you don’t get to go to university, that you don’t get the job, or that you don’t get out of jail, and no one can explain those decisions in any meaningful way, such uses of your data interfere with your privacy rights.

I think this is the sort of reason that a number of experts have told you we need greater transparency or, as some have put it, “algorithmic transparency.” But it is my respectful submission that transparency does not go far enough. It is not enough for companies or governments to disclose what information has been collected or used. When AI decisions affect our life chances and opportunities, those who use the AIs have a duty to explain those decisions in a way that allows us to challenge the decision-making process itself. This is a basic privacy principle enshrined in data protection law worldwide.

I would therefore submit that PIPEDA requires a duty to explain decision-making by machines. A duty to explain addresses “transparency” and “meaningful consent” concerns but goes further in order to ensure fundamental rights to due process and the presumption of innocence. Arguably, such a duty is enshrined in the forthcoming EU GDPR (General Data Protection Regulation). Even if it is not, Canada should lead by enacting such a duty. I would go even further, following Article 22 of the EU GDPR, and suggest that PIPEDA should also enshrine “a right not to be subject to a decision that is based solely on automated processing.” PIPEDA was enacted to protect human beings from technological encroachment. Decision-making about people must therefore maintain meaningful human control. PIPEDA should prohibit fully automated decision-making that does not permit human understanding or intervention. And, to be clear, I make these submissions not to ensure EU “adequacy.” I make these submissions because they are necessary to protect privacy and human rights.

Mama raised me right. Among other things, she taught me that you don’t accept a dinner invitation and then complain to your hosts about what is being served. Mama’s gentle wisdom notwithstanding, I would like to conclude my remarks today by making two uncomfortable observations.

First, as I appear before you today, I think it is fair to say that my sense of déjà vu is not unwarranted. With the exception of a few new points (such as my submission in favour of a new duty to explain), much of what I have said, indeed much of what everyone who has appeared before you has said, has all been said before. Although many of the honourable members of this Committee are new to these issues, those who have done their homework will surely know that we have already done this dance in hearings on Bill S-4, Bill C-13, the Privacy Act, the hearings on Privacy and Social Media and, of course, the PIPEDA Review of 2006. And yet we have seen very little in the way of substantive legislative change. Although ongoing study is important, I say, with respect, that you are not Zamboni drivers. The time has come to stop circling round the same old ice. It is time now for you to make some much needed changes to the law.

Second, as I prepare for question period, I look around the table and all I see is men. Inexplicably, your Committee itself is composed entirely of men. Yes, I realize that you have called upon a number of women to testify during the course of these proceedings. This, of course, makes sense. After all, a significant majority of privacy professionals are women. Indeed, I think it is fair to say that global thought-leadership in the field of privacy is, by majority, the result of contributions made by women. So I find it astonishing and unjustifiable that you have no women on your Committee, a decision as incomprehensible as any made by an algorithm. And so, I feel compelled to close my remarks by making this observation a part of the public record.

Thank you for your careful attention and I look forward to your questions.