Open Dialogue

Abolish A.I. Proctoring

Evan Selinger in conversation with Chris Gilliard

This is Open Dialogue, an interview series from OneZero about technology and ethics.

During the pandemic, educational technology companies experienced a 900% increase in business once schools started shutting down campuses and restricting visitors. These companies swooped in with A.I.-infused software designed to prevent students from cheating. These proctoring algorithms can confirm who is taking an exam through facial verification. They can also monitor test-takers, scrutinizing their behavior for signs of irregularities that might indicate cheating, like looking away from the screen.

Critics contend the software promotes unfairness, invades privacy, and inflicts undue anxiety. The situation is so dire that the Electronic Privacy Information Center (EPIC), a public interest research organization, filed a complaint with the D.C. attorney general’s office against some leading companies: Proctorio, ProctorU, Honorlock, Examity, and Respondus.

Once the pandemic ends, everything won’t go back to normal. Controversial ed-tech software will remain an essential testing infrastructure if the fight against it doesn’t intensify. Crucially, we need to consider the long-term impacts of forcing students to conform to these systems. Lindsay Oliver, an activism project manager at the Electronic Frontier Foundation, worries that educational surveillance might have enduring negative repercussions. She’s concerned that students subjected to current forms of educational monitoring “may well be less likely to rebel against spyware deployed by their bosses at work or by abusive partners.”

To get to the root of these problems, take stock of the damage, and explore possible solutions, I’m thrilled to talk with Chris Gilliard, a professor of English at Macomb Community College. Chris is a leading voice in surveillance studies, and much of his work focuses on the intersections between privacy, civil liberties, race, class, and technology. Over the years, regular conversations with Chris about the disparate impacts of surveillance and the misalignments between corporate priorities and the values a liberal democracy should prioritize have profoundly influenced my thinking about technological design and technology policy.

Our conversation has been edited and condensed for clarity.

Evan: To create a context for our discussion of new proctoring methods, let’s begin by reflecting on an old-school approach. Teachers regularly test students, and it’s long been standard practice that when instructors give exams in classrooms, they don’t presume all students will follow the rules. Instead, they look around to discourage cheating.

Frequent glances, even quick ones, can deter bad behavior, like sharing answers and looking at cheat sheets. This inspection is a form of surveillance. So long as the scrutiny is fair, many consider it a legitimate approach to maintaining academic integrity. A key feature of fairness is that teachers distribute their attention without prejudice. If instructors zero in only on minority students, for example, their behavior suggests these groups deserve heightened scrutiny — that they’re less trustworthy than others.

Acknowledging there are many ways to proctor in-class tests poorly, let’s consider an ideal case. In this scenario, teachers observe students fairly in all of the essential ways. Is this appropriate surveillance?

Chris: I’m glad you’re beginning our conversation by referring to this observation as surveillance and not just teachers walking around the room. I often have to convince others of this point! Your ideal case, however, is less instructive than it seems on the surface. It sits at the intersection of ethics and pedagogy in a manner that few real-world scenarios reflect.

Evan: Interesting! I started with this question so we could establish a baseline for judging various approaches to proctoring. But since you’re resisting the hypothetical, you’re implying it’s better to address the issue with a different framing of the underlying problem. Why?

Chris: The scenario doesn’t mirror the assumptions people make in the real world when assessing job performance. If an attorney represents me, I hope they review the relevant case law before going to court. If I present a puzzling medical case to a doctor, I hope they read up on the diagnostic possibilities. In so many cases, we don’t judge whether a professional is knowledgeable or skilled based on the metric of how well they do on a closed-book exam. Instead, we make contextual assessments that factor in how well people do with a range of resources at their disposal.

Evan: I see what you mean. If you were to test me right now on many topics that I’ve written about, I’d do poorly. The content no longer resides in my short-term memory.

Chris: Right. I’m saying that for the hypothetical proctoring scenario to have the appearance of being ideal, you have to smuggle discredited pedagogical assumptions into the narrative. The testing occurring in the seemingly perfect case doesn’t represent sound evaluation.

Evan: Here’s another example, one that pivots us closer to the controversial instances of A.I. proctoring. Some teachers believe it’s okay to use software to prevent students taking exams on computers from searching the internet for answers. These instructors see this restriction as harmless — as merely a way to make taking tests on computers more like filling in answers on blue books and Scantron sheets.

I take it you disagree. Does this perspective also mistakenly presume that educational monitoring and educational assessment are two separate topics?

Chris: Yes, for starters. And if we dig a little further, we’ll find the software used for this seemingly benign purpose has discriminatory values embedded in it and that the companies selling it use exploitative terms of service.

Evan: Let’s talk about practical constraints. Due to pandemic conditions, teachers are increasing the number of graded assignments that students complete at home. In some cases, this doesn’t require any proctoring. As a humanities professor, I’m lucky. I can create customized writing assignments that are difficult to plagiarize and bespoke open-book exams that are sufficiently challenging. But other teachers will say they can’t do their jobs effectively without relying on closed-book tests. And they’ll further maintain that ensuring students take these exams honestly requires monitoring.

Do you believe the appeal to necessary surveillance is genuine given the underlying educational realities — the fact that teachers often have to meet curricular requirements that revolve around standardized tests and don’t have the time to grade more labor-intensive forms of assessment? I’m asking this question because the criterion of necessity is a fundamental ethical requirement for conducting surveillance. If surveillance doesn’t have to occur to solve a problem, people shouldn’t do it.

Chris: Even if I want to give the most generous response and position it from the perspective of someone who doesn’t share my philosophies and doesn’t know what I know about educational technology and the companies that sell their services, time still matters. There’s a difference between saying something was necessary when the pandemic began and making excuses for it now. Initially, people had to act quickly to deal with a new emergency. But we’re a year into the pandemic, and it’s clear that proctoring software is racist and ableist. Given these harms, there’s absolutely no valid reason to use it. All previous justifications are invalid.

Evan: Looks like we’re ready to address the current proctoring controversies. What approaches to educational monitoring are students, educators, and activists criticizing? And why are these techniques drawing condemnation for, among other things, reinforcing “white supremacy, sexism, ableism, and transphobia” and violating students’ privacy and civil rights?

Chris: The first problem, which at this point is widely publicized, is that these systems have issues recognizing dark faces — the faces of Black and Brown students who have to shine bright lights on themselves for the duration of exams just to be seen by algorithms.

Evan: Is facial verification used to prove that students are taking their own exams and not having someone else fill in the answers?

Chris: Right, the goal is to establish identity. And when the software doesn’t recognize dark-skinned students and refuses to authorize them to take exams, people have to find another way into a testing system. They usually have to call customer service and have a degrading conversation — say, I need help because the software won’t recognize me as a human being.

Evan: Are you suggesting there’s a tension between what ed-tech companies say they do and how they behave? On the one hand, these companies state they’re offering a responsible service because students have more than one way to prove they are who they claim to be. On the other hand, when we stop viewing this claim as an abstraction, we see students saddled with the undue burden of a dehumanizing authentication process.

Chris: Exactly. And this leads to other problems. Test-taking is stressful. Imagine how much more stress is added by having to prove you’re actually a person. Further stress then occurs when cameras track your eye and head movements to ensure you’re not looking for answers off-screen. These systems don’t adequately consider just how much diversity there is — for example, the fact that some people are disposed to looking around when thinking. To be honest, these systems spot cheating by using ableist norms of bodily presence that stigmatize and potentially punish a range of involuntary facial and eye movements.

Evan: What happens when an automated proctoring system detects behavior that it’s programmed to register as improper?

Chris: The behavior is flagged as suspicious.

Evan: Do instructors decide what to do when this happens?

Chris: For now. Let’s see what future versions of the technology are like if the trend of ed-tech automation continues to intensify.

Evan: Do you think factors like automation bias can adversely impact how instructors judge the students that algorithms flag? In other words, while current monitoring software keeps humans in the loop, can the design affordances shape their conclusions?

Chris: Of course, that’s a fundamental lesson from the history of technological development. Technology is never neutral.

Evan: What additional problems do these systems have?

Chris: Not everyone has reliable internet access or a digital camera. Beyond access issues, not everyone can take an exam in a quiet place.

Evan: Why is noise a relevant factor?

Chris: Imagine you’re a student taking an exam in a room where your siblings are playing or attending school online, or your parents or housemates are working, or your dog is moving around. Sudden and loud noises will naturally lead people to move their heads. Sometimes, through no fault of your own, you can’t tune out your environment. But this can be enough to get you flagged for suspiciously looking away from the camera.

Evan: What are the privacy implications?

Chris: Beyond the standard data access and storage issues, we should think about product development. Here’s a hypothetical worth considering. Some of the proctoring programs require students to scan their rooms to prove they’re not hiding answers. What if companies decide to analyze that information to study whether, for example, there’s a correlation between clutter and poor test performance? If this happens, students, who aren’t giving valid consent, will be forced to help companies develop A.I. systems that continue to make unfair correlations and predictions.

Evan: Is this situation any different from the following one? Companies that provide course management software analyze student behavior for many reasons, including product improvement.

Chris: It’s the same thing, which is why I have the same response to both situations — situations authorized in the terms of service. Companies shouldn’t be able to take something from me — whether it’s information about my face, information about my location, or information about what my room looks like — through coercion or duress. Doing so is unfair data extraction. That’s how the entire architecture of the commercial internet is optimized.

Evan: Given the perfect storm of problems you’ve discussed, are there studies that prove the proctoring software works?

Chris: If you mean studies that companies aren’t conducting in-house, then no, I’m not aware of any. Companies simply state their products are effective, and administrators accept their word. This pattern occurs throughout Silicon Valley. Why do people believe smart home surveillance cameras deter crime? They’re not persuaded by well-conducted studies. It’s because the idea has the veneer of a commonsense truth.

As a surveillance scholar, I see the recurring dogma as something like a self-justifying perpetual motion machine. Presume that surveillance works. And if it doesn’t, don’t revisit your guiding assumptions. Just continue to add more surveillance.
