University of Illinois engineering professor Matt West’s “Introductory Dynamics” course can have as many as 600 students enrolled in a single semester. Adequately assessing each one’s mastery of the material in a course that size is a challenging enough task in normal times, but it’s even harder these days. Like most college courses over the past eight months, West’s has been forced online by COVID-19.
When the pandemic struck in the middle of the spring semester this year, educators in similar situations around the world were forced to adapt quickly, seizing on any tools available that would allow them to teach remotely. One of the primary beneficiaries of that upheaval was the exam proctoring software industry—companies like Proctorio, Respondus, and ExamSoft, whose programs turn students’ computers into surveillance tools that track head and eye movements, mouse clicks, and other metrics to identify supposed cheaters. In a recent poll, 77 percent of responding colleges said they already use or are considering using proctoring software.
But the pandemic is also accelerating a shift in the opposite direction: Educators like West who object to proctoring software for ethical or logistical reasons are looking for alternative means to assess students in ways that are fair, protect privacy, and also result in better educational outcomes than high-stress, high-stakes exams.
“We didn’t want to have no exam security … but we didn’t want to go this full-on, you have to install spyware on your computer type of route,” West told Motherboard.
A few years ago, West and several colleagues at the University of Illinois developed PrairieLearn, a highly adaptable testing ecosystem, to help students practice material as often as they wanted. It has grown in importance on the campus, especially since the pandemic, and exemplifies how some institutions are using technology to focus on discouraging cheating before it happens, rather than penalizing students after the fact.
The program allows professors in virtually any discipline to design model problems, ranging from simple multiple choice questions to assignments that require students to draw technical blueprints or work with 3D graphics. PrairieLearn then generates randomized versions of those questions that are identical in degree of difficulty and the subject matter being tested, but unique in their wording and variables, and automatically grades them.
The randomized questions make it far harder for students to cheat by sharing answers, and the tool is particularly valuable for large courses, where it would be impossible for professors to develop different tests for each student.
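The core idea behind parameterized questions can be illustrated with a short sketch. This is not PrairieLearn's actual code; the question, variable ranges, and grading tolerance below are invented for illustration, but they show how a seeded random generator can produce per-student variants that are identical in difficulty yet unique in their numbers, and how each variant can be auto-graded against its own answer.

```python
import random

def generate_question(seed):
    """Generate one randomized instance of a parameterized question.

    Every instance tests the same concept (Newton's second law) at the
    same difficulty; only the numbers vary with the seed, so each
    student sees a unique but equivalent problem.
    """
    rng = random.Random(seed)
    mass = rng.randint(2, 20)    # kg
    accel = rng.randint(2, 10)   # m/s^2
    prompt = (f"A cart of mass {mass} kg accelerates at {accel} m/s^2. "
              f"What net force, in newtons, acts on it?")
    answer = mass * accel        # F = m * a
    return {"prompt": prompt, "answer": answer}

def grade(question, submitted, rel_tol=0.01):
    """Auto-grade a numeric submission against this instance's answer,
    accepting small rounding differences."""
    return abs(submitted - question["answer"]) <= rel_tol * abs(question["answer"])
```

Because the seed determines the instance, a student can regenerate the same variant to review it, while two students with different seeds cannot simply trade answers.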
About 14,000 University of Illinois students took 25,000 exams last semester using PrairieLearn, West said. To further ensure academic integrity, many of those exams were proctored over Zoom by humans, most of them graduate students working in the testing center, who verified that the correct person was taking each test.
The University of Illinois gives professors the choice between PrairieLearn and proctoring software like Proctorio, which tracks students’ head and eye movements through their webcams and logs every click of the mouse, stroke of the keyboard, and time spent on each question. Any “abnormality”—typing more or less than the class average or looking off into the distance, for example—may result in a student being flagged for suspicious behavior.
Students' preference for PrairieLearn is overwhelming: In one recent Introductory Dynamics class of 153 students, more than 76 percent strongly disagreed with the statement "I would prefer to use Proctorio or ProctorU" over the combination of PrairieLearn and Zoom, according to survey data West shared with Motherboard.
The tool’s success at the University of Illinois and the need to find remote learning solutions have prompted other schools, including the University of California, Berkeley, the University of Maryland, and the University of British Columbia, to begin using PrairieLearn.
The pandemic has accelerated existing trends in higher education that will likely impact students long after they return to their classrooms. One of them is the polarization among educators when it comes to the best way to assess students and prevent cheating, Noelle Lopez, assistant director for equity and inclusion at Harvard University’s Derek Bok Center for Teaching and Learning, told Motherboard.
“This is a classic debate between law-and-order—that comes from surveillance and making sure that there’s control, which is what this [proctoring] software offers in some way—and then a kind of model where there’s space for agency and trust” between students and faculty, she said.
Proctoring software companies aggressively push the narrative of rampant cheating on college campuses, citing statistics that suggest as many as 68 percent of undergraduates cheat at some point in their college career. Proctorio CEO Mike Olsen has even gone so far as to say that schools who don’t have anti-cheating measures like his software in place risk handing out “corona degrees” that employers won’t accept as valid.
But some of the premier schools on the continent have chosen to rely on simpler tools to prevent cheating: new forms of assessment and their honor codes.
Harvard strongly discourages the use of proctoring software in its undergraduate courses, instead suggesting that professors who still prefer timed, closed-book exams proctor the tests themselves over Zoom. And as the school’s student newspaper has reported, a growing number are opting to switch to alternate class models. Assessments like group projects, creating podcasts, and open-book tests are “coming up more as people are thinking about the job market that exists, and ways that students can be learning other kinds of skills,” Lopez said.
Stanford University and McGill University, in Montreal, both have outright bans on proctoring software. In Stanford’s honor code, the faculty body commits to its “confidence in the honor of its students by refraining from proctoring examinations … [and] will also avoid, as far as practicable, academic procedures that create temptations to violate the Honor Code.”
“Shifting to unproctored take-home examinations eliminated the need for special timing accommodations for students with disabilities and also allowed students to complete assessments at a time best for them, particularly if they were in a different time zone, had constraints due to living conditions, or caretaker responsibilities,” Shirley Cardenas, a spokeswoman for McGill University, told Motherboard.
Preventing, not penalizing
When the pandemic hit, U.C. Berkeley also decided to ban the use of proctoring software due to concerns about how it would impact the school’s diverse student body. Vice Chancellor for Undergraduate Education Catherine Koshland says she has been tracking the transition through surveys, pilot projects, and conversations with faculty.
Some professors were not happy with the choice and experienced elevated rates of cheating, she told Motherboard, but there were several trends: The highest rates of cheating appeared to be among students who planned to apply for medical school, as well as those in classes that were prerequisites for entry into majors with enrollment caps. The more competitive the environment and the more students believed their peers could be cheating, the more students cheated themselves.
Like other schools, U.C. Berkeley is working with faculty on strategies to reduce both the opportunities and the pressures that lead to cheating. That includes piloting a version of PrairieLearn, as well as encouraging instructors to stop grading on a curve and to start offering frequent, low-stakes tests rather than two make-or-break exams.
“We need to do everything we can to make environments where cheating just doesn’t happen in the first place,” West, from the University of Illinois, said. “It doesn’t have to be a massive change from what people are doing, but some of these key ingredients make a big difference as far as students and faculty happiness.”