Notre Dame Journal of Emerging Technology, Jan 2022
The widespread use of online proctoring software during the COVID-19 pandemic prompted a deluge of horror stories and generated significant public backlash. This Article offers a nuanced analysis of the online proctoring technosocial system, which includes not only proctoring software but also its implementation by educators and schools. Online proctoring software relies on controversial technologies – facial recognition, artificial intelligence, and biometric surveillance in intimate surroundings – without vendors or schools accounting for those technologies' biases or acknowledging the lack of evidence supporting the accuracy and efficacy of automated proctoring. This Article then examines legal and extra-legal levers that can encourage, or compel, companies and schools to adopt more responsible policies and to eschew unreliable and unproven automated tools. None of these levers, however, will cure the fundamental flaws of proctoring technologies, which should be deployed only after significant reform to ensure fairness and due process, and only in limited circumstances where their use will promote, rather than undermine, equity by expanding access to education.