We are now nearing the two-year mark of a global pandemic that has had a profound effect on every aspect of our lives. As students, educators, administrators, and researchers, we have had to adapt our academic practice in ways that blurred the lines between our public personas and our private lives. We have had to learn about and embrace various forms of technology in order to enable remote teaching, learning, and collaborations, all with little control over the scale and extent of the invasiveness made possible by these technologies. It is at this crucial conjuncture that we offer this Themed Issue of the Journal of Interactive Technology and Pedagogy on surveillance and educational technologies.
As we said in the call for papers for this issue: “The COVID-19 pandemic has served as a magnifying glass, revealing all the ways our systems are broken.” Indeed, social fault lines have been exposed and exacerbated not only in the harsh light of the pandemic response, but even more so through the ways many institutions chose to ignore the pandemic while hoping to continue with some version of business as usual. But, just as a magnifying glass reveals faults, it shows us opportunities for repair: we cannot simply fix what is broken; we must also work toward eliminating systems that are not “broken” but working as designed—to the detriment of marginalized and vulnerable populations. Thus, while we develop counternarratives and critiques, we can also draw on more expansive visions of abolition, which demand “that we change one thing, which is everything” (Gilmore 2018).
Bluntly, many of these surveillance systems and computational tools shouldn’t exist. The emergency circumstances under which people and institutions adopted them during the pandemic make such developments somewhat understandable. But now that we have a better understanding of the concrete consequences for our pedagogy and for our students’ lives, we have no justification for continuing to use these tools in these ways. We must change not only the punitive technology we use, but the educational mindset and broader world that rationalizes it.
This issue is not the first, nor will it be the last, collection of such critical work, but as we spend more time within this pandemic paradigm, we are accumulating clearer and stronger evidence and narratives of the harms surveillance-oriented educational technology brings, making it far harder for justice-oriented educators to excuse its use. The pandemic has crucially highlighted the need for consent, compassion, and care, and one of the striking things about many of the pieces in this issue is that they are self-reflective rather than purely analytical. Many of the authors situate themselves in a fraught system of monitoring and punishment and analyze or question their roles in bringing potentially harmful surveillance to bear on others, especially the students they are meant to nurture. It’s also notable how many of these projects address remote proctoring and Learning Management Systems (LMSs). For many institutions, the pandemic supercharged the already pervasive use of LMSs and proctoring systems in the transition to remote instruction. As the uptake of these tools and protocols increased, so did the outcry around the invasiveness and consequences of their use. Consider these pieces both as scholarly research, and as a call to action for justice, within and beyond education.
In “Back Doors, Trap Doors, and Fourth-Party Deals: How You End up with Harmful Academic Surveillance Technology on Your Campus without Even Knowing,” Autumm Caines and Sarah Silverman alert us to the dangers and complications of allowing fourth-party vendors access to institutional data through backdoors created by third-party relationships. With Proctorio as the primary example, they unpack these relationships in an accessible and clear way, while outlining the different kinds of fourth-party partnerships that institutions might unknowingly find themselves in. Caines and Silverman also lay out a harm index, a useful framework to measure the levels and scale of harms that remote proctoring services can cause. The authors include an example of their collaborative autoethnographic reflection, which provides a glimpse into the tedious but necessary steps needed to thwart corporate control over faculty and student data.
In “Resisting Surveillance, Practicing/Imagining the End of Grading,” Marianne Madoré, Anna Zeemont, Joaly Burgos, Jane Guskin, Hailey Lam, and Andréa Stella assert that grading systems are an element of larger systems of surveillance at educational institutions and that grading is incompatible with antiracist pedagogies. They offer a variety of experiences where they either individually or collectively operated against or outside the schema of grading, and push us to “reimagine the purpose of schooling” in light of these struggles.
For Issue 20, we also wanted to create a space for exploring issues around educational surveillance unconstrained by the formality of more traditional journal articles, so we invited submissions to our Views from the Field section. We are very pleased to present five thought-provoking pieces that critically engage with the experience of being surveilled by educational technology and the potential consequences of this surveillance for our collective wellbeing.
We start off with “Why Don’t You Trust Us?”, a compelling piece from undergraduate student Sinéad Doyle, who generously shares her own experience of being subjected to additional surveillance during the pandemic and how this sort of invasive surveillance can blur the lines between public and private in counterproductive ways. Lance Eaton’s “The New LMS Rule: Transparency Working Both Ways” imagines what it would look like if we turned the tables and gave students the same level of access to instructor activity on LMSs as these platforms give instructors to student activity, noting the power imbalances built into conventional LMSs. In “Pedagogy and the Expansion of Surveillance at the City University of New York,” Marc Kagan continues the exploration of the potentially insidious nature of LMSs by pointing out the dangers of allowing unfettered and unregulated administrative access to online courses, highlighting the potential role of labor organizations in challenging this threat. “Black Mirror Pedagogy: Dystopian Stories for Technoskeptical Imaginations,” by Daniel G. Krutka, Autumm Caines, Marie K. Heath, and K. Bret Staudt Willet, provides a way to help students interrogate their own techno-optimism through the use of Black Mirror-inspired speculative-fiction narrative building. And finally, Chris Miciek’s creative text, “Field Notes from the Education to Employment Pipeline: A Career Development Perspective,” gives us a bird’s-eye view of the history of the contested imbrication of education and labor-market requirements, highlighting the historical and ongoing processes wherein students are inured to the use of technological surveillance in readiness for workplace surveillance.
In addition to the pieces on surveillance in education, we are pleased to include two general-interest articles before we pause publication for our migration to a new publishing platform.
Our second general-interest article, “Poetry in Your Pocket: Streaming Playlists and the Pedagogy of Poetic Interpretation” by Stephen Grandchamp, shares how the use of Spotify playlists made poetry more accessible to students and helped to recontextualize poetry in a more contemporary setting. This approach helped students understand and participate in the shifting meaning and significance of poetry, and gives hope for those of us who find interpreting poetry a little intimidating.
We want to acknowledge the patient and incredible work that managing editor Patrick DeDauw and editorial assistant Chanta Palmer have done to keep us on track and wrangle the many moving pieces that needed to come together to produce this issue. Our deep gratitude to the members of the JITP editorial collective for all their behind-the-scenes work and support. We also want to acknowledge the reviewers who took time out of their busy schedules to provide valuable feedback to our authors, and to note that it has been a privilege to work with the authors to bring you Issue 20. We are deeply grateful that we were all able to come together during this pandemic to give shape and space to this important conversation, and we hope you will join us in doing what we can to ensure an equitable and surveillance-free educational future.
Chris Gilliard is a Visiting Research Fellow at the Harvard Kennedy School Shorenstein Center. His ideas have been featured in The New York Times, The Washington Post, Wired Magazine, The Chronicle of Higher Ed, and Vice Magazine. He is a member of the UCLA Center for Critical Internet Inquiry Scholars Council, and a member of the Surveillance Technology Oversight Project community advisory board.
sava saheli singh is an independent researcher who just completed a postdoctoral fellowship with the eQuality Project and the AI + Society Initiative, both at the University of Ottawa. She created the award-winning Screening Surveillance, a series of short, near-future speculative fiction films. This public education and knowledge translation project calls attention to the potential human consequences of big data surveillance. She co-produced the first three films as a postdoctoral fellow with the Surveillance Studies Centre at Queen’s University in Kingston, Ontario, and is currently in post-production on the fourth film in the series which she also co-wrote and co-produced. sava received her PhD from New York University’s Educational Communication and Technology program. As an interdisciplinary scholar, her current research interests include educational surveillance; digital labour and surveillance capitalism; restorative justice and abolition; speculative fiction; and critically examining the effects of technology and techno-utopianism on society.
A future with ubiquitous academic surveillance is not sealed, not yet. In this essay, I discuss how online proctoring companies sell their technology with stories inspired by the edtech imaginary. Higher education institutions, in turn, often repeat these narratives, as evidenced by the ways institutions frame the technology as neutral, convenient tools for facilitating assessments. I propose a possible path toward abolishing online proctoring by authoring counter-narratives. I identify two spaces for constructing counter-narratives. First, we can apply a cognitive perspective to policy implementation to shift individual educators’ understanding of online proctoring through dissonance-producing institutional resources. Second, we can build collective partnerships between administrators, staff, faculty, and students to achieve deep change in our assessment practices. This potential path forward is guided by dual commitments: to reject online proctoring and the intersectional harms endured by students forced to use the technology; and to uproot the underlying pedagogies of policing and punishment that support online proctoring and replace them with pedagogies of educational dignity. I end my essay with a call to adopt an abolitionist approach to ridding education of online proctoring. By exercising abolitionist principles of refusal and care, along with a rejection of reform as an acceptable middle ground, we can move closer to creating the kinds of learning environments and relationships that cultivate students’ educational dignity.
The story of online proctoring is difficult to disentangle from surveillance and policing. Companies with names like Honorlock and Respondus Monitor conjure images of a patriarchal panopticon. Then there’s Proctortrack’s origin story. The chief technology officer for Verificient Technologies, the company that developed Proctortrack, arrived at the idea for the online proctoring technology after working on a Transportation Security Administration (TSA) project that included searching video footage for facial expressions deemed abnormal (Singer 2015). A version of the TSA’s security theater, online proctoring is further evidence of Sasha Costanza-Chock’s observation that “The same cisnormative, racist, and ableist approach that is used to train the models of the millimeter wave scanners [used by the TSA] is now being used to develop AI in nearly every domain” (2020, 5). I worry that the mechanized dehumanization experienced by individuals from nondominant groups at airport security is now being normalized in education due to online proctoring.
The attempts to make prejudiced technology prosaic are facilitated by online proctoring companies and their commitment to an edtech imaginary and its powerful storytelling. Audrey Watters describes the edtech imaginary as a collection of “stories we invent to explain the necessity of technology, the promises of technology; the stories we use to describe how we got here and where we are headed” (2020). Read the statements from online proctoring CEOs and the claims made by companies on their websites, and you can see the edtech imaginary at work. Online proctoring is supposedly necessary because, in the words of ProctorU’s CEO, without it, cheating will increase and pose “a severe threat to all higher education” (Feathers and Rose 2020). The hollowness of the edtech imaginary is further illustrated in the diminishing story sold by Proctorio. Beginning in January 2019, the company promised institutions their “software eliminates human error [and] bias” (Proctorio 2019). The claim remained on the company’s homepage until April 2021. On April 19, 2021, the Federal Trade Commission warned companies not to claim their algorithms can erase bias (Jillson 2021). Within days, Proctorio’s promise of unbiased technology shrank to “Our software attempts to remove human bias and error” (Proctorio 2021). Visit the company website today, and you will find the edited sentence has disappeared. The edtech imaginary features many such revisionist narrators.
I want to consider the other elements of the edtech imaginary described by Watters: how we got here and where we might go. I’m trying to understand how institutions and people in power too often come to believe edtech’s glossy narratives about the past, present, and future. I’m also searching for the sites where we can share our counter-narratives. Alongside counter-narratives, I’m seeking ways we might uproot the pedagogies of policing and punishment that make online proctoring possible and replace them with pedagogies of educational dignity.
Educational dignity is critical for enacting a just present and future (Espinoza and Vossoughi 2014; Espinoza et al. 2020). Educational dignity is “the multifaceted sense of a person’s value generated via substantive intra- and inter-personal learning experiences that recognize and cultivate one’s mind, humanity, and potential” (Espinoza et al. 2020, 326). Online proctoring and its shallow definitions of learning are incompatible with educational dignity because of the technology’s hostility toward every individual forced beneath a webcam’s glare. The technology can harden internalized oppression, especially for nondominant students (Bali 2021), through its built-in racism and ableism. Further, online proctoring positions educators as police officers and students as criminals, straining inter-personal learning experiences. Online proctoring, I should note, is not the sole source of negative intra-personal and inter-personal learning experiences. Acknowledging its encoded opposition to educational dignity, however, can encourage us to view its abolishment as a part of a larger project to help educators develop and practice pedagogies of educational dignity.
I turn now to possible ways of conducting that larger project. I will review the institutionalization of online proctoring; describe the importance of how institutional resources frame online proctoring; offer a case for how to create deep change in the ways educators understand online proctoring and its alternatives; and conclude with a call to take an abolitionist approach to ridding online proctoring from education.
How Did We Get Here?
The critiques of online proctoring are numerous. Online proctoring replicates inadequate assessment methods (Leafstadt 2017). Online proctoring can exacerbate students’ anxiety, particularly for those who already experience high anxiety (Woldeab and Brothen 2019), which in turn can negatively affect students’ ability to demonstrate their learning (Eyler 2018). Online proctoring technology is racist (Feathers 2021; Swauger 2020); ableist (Brown 2020; Zhu 2021); and it invades students’ privacy (Cahn et al. 2020; Germain 2020). Put another way, online proctoring not only reinforces ineffective, harmful pedagogies; it’s also a deeply unethical technology.
Joining these critiques are the thousands of students who have documented and shared their experiences with online proctoring:
I know that I’m going to have to try a couple times before the camera recognizes me…I have a light beaming into my eyes for the entire exam…That’s hard when you’re actively trying not to look away, which could make it look like you’re cheating…[The software] is just not accurate. So I don’t know if it’s seeing things that aren’t there because of the pigment of my skin.
—Femi Yemi-Ese, student at the University of Texas at Austin (quoted in Caplan-Bricker 2021)
I’ve despised using this software…. On one occasion, I was “flagged” for movement and obscuring my eyes. I have trichotillomania triggered by my anxiety, which is why my hand was near my face. Explaining this to my professor was nightmarish.
—Bea, student at Tarrant County College (quoted in Retta 2020)
It’s really cruel to have students come to class and expect to learn, and then treat them, essentially, like criminals and make them install programs that look for all their information and force them to give tours of their home.
—Anonymous, student at the University of Washington (quoted in Hipolito 2020)
The chorus of student criticism has apparently not been enough to deter institutions and faculty from deploying the technology against students. For example, Proctorio’s CEO claimed his company helped to proctor 25 million exams at 1,000 institutions in 2020 (Harwell 2020).
To help explain the growth of such an apparently toxic technology, it’s important to note that institutional use of online proctoring predates the coronavirus pandemic. The existing institutional knowledge and resource infrastructure, combined with the coronavirus pandemic’s demands for quick and cheap solutions to complex teaching and learning problems, meant online proctoring could take root farther and faster than might otherwise have been the case. The upheaval also presented educational technology companies an opportunity to activate the edtech imaginary and present themselves as partners ready and able to assist institutions’ pivot to remote emergency teaching. In some cases, companies outside the proctoring market partnered with online proctoring companies, marketing their wares directly to educators for free (Top Hat 2020). The companies’ beneficence can be understood as an attempt to deepen their connections to institutions as well as to circumscribe what we imagine when we envision online learning and its possibilities.
Once a technology becomes well-established at an institution, it can be difficult to uproot (Arthur 1994). As a consequence of the pandemic, institutions have made substantial financial investments in online proctoring technology. The University of California at Santa Cruz, for instance, spent $200,000 for online proctoring in 2020–2021, and the institution’s leadership plans to continue to fund online proctoring (Harwood 2021). In addition to the monetary cost, institutions and their employees incurred a labor cost, too. Staff members had to learn how to use and support the technology. Faculty who decided to use the technology learned enough to do so, or relied on staff and graduate students to troubleshoot technical problems; those staff members and graduate students, in turn, either had to know how to fix the problems themselves or turned to the companies for help. And finally, students, who rarely have a say in the matter, learned how to use the technology if they wanted to pass a class.
The money and labor sunk into online proctoring move the institution, its employees, and its students further down the online proctoring path in a process of increasing returns (Pierson 2000) and software sedimentation (Weller 2020), so that change is difficult to contemplate, let alone implement. As we’ve seen, the edtech imaginary is invested in software sedimentation. In response to criticism, online proctoring CEOs have promised friendlier interfaces and faster loading times (Deighton 2021), design “upgrades” presumably meant to make online proctoring more acceptable and ready for further sedimentation.
Another sedimentation tactic used by online proctoring’s defenders is to argue students have long been surveilled (Global Silicon Valley 2021). Because surveilling students is not new, these advocates reason, contemporary warnings about academic surveillance are unfair. I read this argument as an attempt to make online proctoring more palatable—and thus more profitable—by conflating the technology with in-person proctoring. However, online proctoring is invasive in ways in-person proctoring is not (Fitzgerald 2021).
An in-person proctor does not demand to view a student’s bedroom. An in-person proctor is not an unflinching gaze trained to interpret students’ behavior through the singular lens of suspicion. When online proctoring executives and other adherents of online proctoring collapse the differences between in-person and online proctoring, they are reaching into the edtech imaginary. The story that emerges is a history of assessment practices meant to make their technology appear to be an uncontroversial extension of how students have always completed homework, quizzes, and tests. Do not trust online proctoring companies to be credible narrators. Their business depends on selling a specific tale of how we got here and where we should be going, and if nothing else, their public relations version of education history should be met with profound skepticism.
Elsewhere, online proctoring has been equated with older online learning technologies like “poorly recorded video lectures [and] inactive LMS discussion boards” (Selwyn et al. 2021, 13). I am concerned about the ways the edtech imaginary is succeeding in shaping the discourse and framing online proctoring as a misunderstood, humdrum technology. I do not want racist, ableist academic surveillance to be a practice educators and students shrug off as an unfortunate but necessary part of learning. I do agree with Selwyn et al. (2021) that online proctoring demands we “develop counter-narratives that push back against the imagining of public education as simply a ‘tech issue’” (14). Where and how these counter-narratives emerge is an urgent question.
From Neutrality to Dissonance
Before exploring online proctoring counter-narratives, I want to consider how higher education institutions normalize online proctoring. Of 100 randomly selected US and Canadian college and university websites chosen from a sample of 2,155, “none took a critical stance toward proctoring tools or addressed the ethics of student surveillance” (Kimmons and Veletsianos 2021). Official institutional policy appears to treat online proctoring tools as neutral educational technology. The finding is perhaps unsurprising. While exceptions do exist (e.g., “Proctoring and Equity” from the Center for Innovative Teaching and Learning at Indiana University Bloomington), institutions that have invested money and labor into bringing online proctoring to campus may be hesitant or unwilling to criticize the same technology on public-facing websites. Neutrality is therefore a strategic choice. And because education is politics (Nieto 1999; Shor and Freire 1987), neutrality is a political choice too, one that aligns institutions with online proctoring companies.
Disrupting this neutrality becomes even more difficult because educators and institutions, perhaps unaware of the technology’s harms, often provide students with guiding language written by the online proctoring companies themselves. For example, Respondus Monitor offers instructors a template titled “Using LockDown Browser and a Webcam for Online Exams,” which instructors can copy and paste into a syllabus (Respondus n.d.). The syllabus template suggests to students that they “[t]ake the exam in a well-lit room and avoid backlighting, such as sitting with your back to a window” (Respondus n.d.). Missing from this recommendation and others like it is the reason why students must be in a well-lit room, sometimes having to resort to shining a bright light directly into their faces (Chin 2021): because many online proctoring companies use facial recognition technology. Not only do these technologies struggle to detect dark skin (Simonite 2019), they are built using biased datasets, leading to racialization and dehumanization (Stevens and Keyes 2021).
Online proctoring companies also shape the perception of their harmful technology at the institutional level. Just as individual educators might depend on the companies for ways to describe to students how to use the technology, so too do institutional how-to resources and websites. Institutional support pages are too often little more than hyperlinks to help guides and video tutorials created by the companies. In addition, an institutional resource page might repackage a company’s recommendations to students, as when the Respondus Monitor syllabus language about lighting reappears on an institutional resource page warning students, “You may need to add more lighting to your workspace when using Respondus Monitor to ensure the program can recognize your face during the assessment” (Northwestern University n.d.). Once more, the reason why students need to add more lighting is glaringly absent. Online proctoring companies can continue to control the narrative about their technology as long as institutional resource pages are indistinguishable from the frequently asked questions websites produced by online proctoring companies. Thus, online proctoring companies have succeeded in making their technology appear benign by attempting to collapse the distinct differences between in-person and online proctoring. Companies have also benefited from instructors and institutions who frame the technology as neutral, often parroting company copy on syllabi and how-to webpages.
Taking lessons from a cognitive approach to learning and policy implementation can help explain why changing people’s understanding about online proctoring might be especially hard when the technology is presented in such a way that its functionality appears both commonplace and unambiguously advantageous.
We draw on prior knowledge and existing beliefs when interpreting new information (Bransford, Brown, and Cocking 2000). A problem arises because “New ideas either are understood as familiar ones, without sufficient attention to aspects that diverge from the familiar, or are integrated without restructuring of existing knowledge and beliefs, resulting in piecemeal changes in existing practice” (Spillane et al. 2002, 398). What does this mean for educators encountering online proctoring for the first (or fifth) time? When neutral or positive language masks the technology’s harms, then online proctoring can appear not to be so much a new idea but instead a logical, if imperfect, extension of an educator’s existing beliefs and practices.
That online proctoring can either be an outgrowth of, or seem an outgrowth of, existing beliefs and practices is evidence of a larger problem: the beliefs and practices themselves. Pedagogies of policing and punishment are the soil sustaining online proctoring. It’s not enough to weed out online proctoring. Instead, what we could use is a controlled burn.
To light a fire that removes online proctoring from higher education, start by revising institutional websites and resources to explicitly name and describe online proctoring’s harms. These revisions—these counter-narratives—need to produce cognitive dissonance in educators in order to disrupt the narrative of online proctoring as a necessary, innocuous technology. This dissonance can force educators to confront both the technology itself and the underlying beliefs about learning that help educators rationalize deploying academic surveillance against their students. A goal is to help educators “recognize an existing model as problematic and, then, to focus resources and support on attempts to make sense of the novel idea, restructuring existing beliefs and knowledge” (Spillane et al. 2002, 418). In other words, sparking a shift in a person’s thinking begins with illuminating the ways online proctoring is a problem both as a technical solution and as a pedagogical practice. A dissonance-producing institutional resource about online proctoring might look like Figure 1:
I recognize it’s unlikely the above resource will be adopted by institutions of higher education across the land. So I have another suggestion. Before providing a reader with installation directions and other troubleshooting tips, which is what many institutional resources do, the resource could prompt an educator to reflect on the technology and its effects by asking:
Do you believe students with dark skin should have to shine a bright light on their faces to be recognized as having a face by the online proctoring technology?
Do you believe diabetic students should be too afraid to check their blood sugar levels or eat a snack for fear of being labeled suspicious by the online proctoring technology?
Do you believe students should allow a stranger to have remote access to their personal computer?
Would you want to show a stranger your office or bedroom before an exam begins or while taking an exam?
Is your pedagogy founded on distrust, policing, and punishment?
Institutional resources about online proctoring may seem to play a small role in the larger conversation about the technology and its impacts on teaching and learning. However, understanding the resources as a vehicle the edtech imaginary uses to influence teaching and teachers themselves emphasizes the need to attend to how the resources frame online proctoring. Institutional resources about online proctoring can be understood as a policy technology—a technology about technology, if you will—or a means designed to implement policy. Other policy technologies include curricula and assessments (Spillane et al. 2019). The problem with institutional resources adopting a neutral or positive framing of online proctoring is thus twofold. First, as previously discussed, uncritical resources can produce, reinforce, and normalize academic surveillance and pedagogies of distrust, policing, and punishment by being assimilated into educators’ preexisting beliefs and practices.
A second damaging consequence exists. When an educator’s pedagogy is pushed toward policing and punishment, practices enabled in part by uncritical resources, their sense of themselves as a teacher risks being corrupted. Here Stephen Ball’s observation that “we do not do policy, policy does us” (2015, 307) helps to articulate why focusing on the resources’ language is so important. Because if our pedagogy is an outgrowth of our identities as educators, and policy shapes our sense of self, then a pedagogy of punishment wants us to become punishers. The policy “does us” by defining who we can be as teachers and who our students can be as learners. Recall that a student forced to submit to online proctoring felt like a criminal because the technology positions students as inherently suspicious. And if a policy of online proctoring transforms students into criminals, then it turns teachers into police officers—and cops, I believe, should be banned from campuses.
A Story of Reform
Overcoming online proctoring and the pedagogies that maintain its use might begin with the individual, but we improve the chances of abolishing the technology when we join together to unlearn harmful pedagogies and replace them with pedagogies of educational dignity. To grow pedagogies of educational dignity, we can couple a cognitive approach to policy implementation with a stance toward learning as a fundamentally social experience (Lave and Wenger 1991; Vygotsky 1978). Many educators concerned about online proctoring have realized the social nature of learning by organizing events to learn with and from one another. Examples of collective meaning making include the Teach-In #AgainstSurveillance (Gray 2020) and the #AnnotateEdTech events (Logan and Caines 2021). These online gatherings, while vital for building community and solidarity, may nonetheless struggle to bring about the systemic change at institutions many of us seek.
To accomplish change at scale—a favorite word, I know, of the edtech imaginary—the movement against online proctoring can address the depth of educators’ beliefs and practices; the sustainability of changes over time; the spread of changes throughout an institution; and a shift in ownership over the new ideas from external to internal sources (Coburn 2003). Remember that changing an individual’s understanding requires resources and support (Spillane et al. 2002). Combine these additional resources and support with the elements for achieving deep change at scale (Coburn 2003), and the project of ridding online proctoring from campuses appears daunting.
Nonetheless, we can turn to the efforts of a coalition of administrators and staff at the University of Michigan-Dearborn for an example of institutional change at scale. In March 2020, the University shifted to emergency remote teaching. At the same time, the Office of the Provost and deans decided to publicly oppose online proctoring, and though the administrators did not ban the technology, they did strongly recommend faculty not use it (Silverman et al. 2021). In the months that followed, the staff at the Hub for Teaching and Learning Resources (the Hub) worked to implement the reform through a combination of depth, sustainability, spread, and shift (Coburn 2003). See Table 1 for how the University tried to accomplish the different dimensions of reform implementation.
Dimension of Reform Implementation, and How Administrators and the Hub’s Staff at the University of Michigan-Dearborn Tried to Accomplish It

Depth: Changes to educators’ beliefs, educators’ interactions with students, and educators’ pedagogies.
– Provided individual consultations to faculty new to online teaching.
– Assisted faculty in developing authentic assessments.
– Hosted a virtual guest speaker, an expert in authentic assessments with a specialty in the STEM disciplines.
– Organized multiple faculty development programs throughout the year.

Sustainability: A long-term commitment to nurturing educators’ development over time.
– Hired two additional instructional designers on two-year contracts.
– Hired graders to help faculty in high-enrollment courses grade and provide feedback on more time-intensive authentic assessments.

Spread: The diffusion of reform-related pedagogical principles within a course, department, and institution.
– Provided individual or group email responses from the Hub, the Office of Digital Education, and the Office of the Provost with a consistent message that the decision to avoid online proctoring was due to student privacy and equity concerns.
– Silverman et al. (2021) acknowledge communications with faculty “could have been better wrapped into a cohesive, campus-wide message” (121) to improve spread.

Shift: Ownership of the reform transitions from an external reform to an internal reform, with authority for maintaining the change left to groups and individuals.
– Silverman et al. (2021) recommend designing experiences to “develop a shared critical digital literacy between instructors and students by discussing the ethical problems associated with remote proctoring and building a shared understanding of academic integrity in the digital age” (126).

Table 1. Applying Coburn’s (2003) concept of scale to the University of Michigan-Dearborn’s approach to shifting educators’ beliefs and practices regarding online proctoring.
We need counter-narratives. However, a strategy for abolishing online proctoring built only on counter-narratives risks ceding the terms of the debate to those set by the online proctoring companies. For this reason, we also need stories that aren’t defined solely in opposition to the likes of online proctoring CEOs and the edtech imaginary they’re entranced by. The coalition against online proctoring that emerged at the University of Michigan-Dearborn is an instructive example of one such alternative narrative, a story of how we might achieve deep change by developing a partnership organization founded on relationships (Logan 2020). What started as co-authoring a counter-narrative about online proctoring at the University became, over time, a new narrative about partnerships between administrators, staff, faculty, and students to develop equitable, authentic assessments.
The example set by the University of Michigan-Dearborn demonstrates that when administrators offer support and financial resources to reimagine teaching and learning, trusting staff and faculty along the way, resistance to and refusal of online proctoring can generate a community that rejects pedagogies of policing and embraces people and our immutable educational dignity.
Where Might We Go from Here?
The future of online proctoring is still being written. Appealing to institutions’ and students’ fears, online proctoring CEOs tell their tales of worthless coronavirus diplomas (Harwell 2020) tarnishing an institution’s brand and raising questions about a student’s employability. The narrative belongs to the larger playbook drawn up by the corporate education movement and its vision of learning as human capital development (Williamson 2017). I believe learning cannot be reduced to a data point to be quantified or a credential to be protected at all costs.
Online proctoring companies possess a paltry view of education that produces and reinforces pedagogies of punishment. When confronted with the intersectional damages inflicted upon students by their technology, online proctoring companies insist their products are necessary, claiming the technology is an engine of equity (Norris 2021). Yet as Chris Gilliard argues, “A better remote proctoring system isn’t on the way—it can’t be—because they are all built on the same faulty and invasive ideas…about pedagogy, surveillance, and control” (@hypervisible, April 6, 2021).
Online proctoring is not like in-person proctoring. Online proctoring is not like a badly lit lecture video or an underused discussion board. Online proctoring is a manifestation of what Ruha Benjamin calls the New Jim Code, or “the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era” (2019, 5–6). When institutions and educators frame online proctoring with market-based stories and their boogeymen, they risk being duped by online proctoring companies and their unreliable narrators selling dubious promises of objectivity and equity as evidence the technology works. In contrast, when the story of online proctoring is framed as the instantiation of the New Jim Code and its racist, ableist surveillance, then those who experience the technology’s inequities—students—emerge as trustworthy narrators with heartbreaking accounts of the humiliations they’ve had to endure. Their stories should be part of the evidence we use as we seek to rid online proctoring from schools.
Including online proctoring as part of the New Jim Code offers another possibility: that of the abolitionist imaginary. Abolitionist practices, suggested sava saheli singh (Pasquini 2021), can be a generative source of imagination, politics, organizing, and action in the struggle against online proctoring and other problematic educational technologies. Abolitionism’s emphasis on refusal alongside care and collectivity (Kaba 2021), for example, is essential if we are to develop pedagogies of educational dignity.
In addition, the fight against online proctoring takes on greater urgency when we understand online proctoring as the latest example of white supremacist surveillance technologies designed and deployed to police and punish. Like previous racializing information technologies used to surveil and control people (Browne 2015), online proctoring’s harms are experienced disproportionately by Black people as well as other nondominant populations. This long view of online proctoring is vital, for as Bettina Love notes, “An ahistorical understanding of oppression leads folx to believe that quick fixes to the system, such as more surveillance, more testing, and more punishment, will solve the issues of injustice and inequality” (2019, 92). It also means adopting the abolitionist stance that reform, even at the scale accomplished by the University of Michigan-Dearborn, cannot be where the story of online proctoring ends.
If the story of online proctoring is to end in freedom, we can start by telling counter-narratives and fashioning new narratives altogether. I am hopeful these stories will include accounts of honest institutional resources and websites. Of administrators who abandon online proctoring, despite paying for its false promises, and invest instead in providing support and funding for faculty and staff to develop authentic assessments. And I am hopeful we will share stories of lasting partnerships between educators and students, coalitions that accomplish deep change and grow pedagogies of educational dignity.
 An emphasis on authentic assessment is an essential element for building pedagogies of educational dignity. Authentic assessments are characterized by self-reflection and collaboration with others (Conrad and Openo 2018). Prioritizing self-reflection and embracing individuals’ genuine, complex selves can support educational dignity through intra-personal learning. Authentic assessment can also help students experience educational dignity through its frequent use of learning with and from other people, a crucial design choice upon which educational dignity relies (Espinoza et al. 2020).
Arthur, W. Brian. 1994. Increasing Returns and Path Dependence in the Economy. Ann Arbor: University of Michigan Press.
Ball, Stephen. 2015. “What Is Policy? 21 Years Later: Reflections on the Possibilities of Policy Research.” Discourse: Studies in the Cultural Politics of Education 36, no. 3: 306–313. https://doi.org/10.1080/01596306.2015.1015279.
Benjamin, Ruha. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK: Polity.
Bransford, John, Ann Brown, and Rodney Cocking, eds. 2000. How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press.
Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
Cahn, Albert, Caroline Magee, Eleni Manis, and Naz Akyol. 2020. “Snooping Where We Sleep: The Invasiveness and Bias of Remote Proctoring Services.” Surveillance Technology Oversight Project, November 11, 2020. https://www.stopspying.org/snooping.
Espinoza, Manuel, and Shirin Vossouhi. 2014. “Perceiving Learning Anew: Social Interaction, Dignity, and Educational Rights.” Harvard Educational Review 84, no. 3: 285–313.
Espinoza, Manuel, Shirin Vossouhi, Mike Rose, and Luis Poza. 2020. “Matters of Participation: Notes on the Study of Dignity and Learning.” Mind, Culture, and Activity 27, no. 4: 325–347. https://doi.org/10.1080/10749039.2020.1779304.
Eyler, Joshua. 2018. How Humans Learn: The Science and Stories Behind Effective College Teaching. Morgantown: West Virginia University Press.
Gilliard, Chris (@hypervisible). 2021. “A better remote proctoring system isn’t on the way—it can’t be—because they are all built on the same faulty and invasive ideas about pedagogy, surveillance, and control.” Tweet, April 6, 2021. https://twitter.com/hypervisible/status/1379501433763078145.
Pasquini, Laura. 2021. “Between the Chapters #25: Searching for the Commons in the Wasteland with @savasavasava & @audreywatters.” Between the Chapters podcast episode, April 29, 2021. https://share.transistor.fm/s/442cc432.
Pierson, Paul. 2000. “Increasing Returns, Path Dependence, and the Study of Politics.” The American Political Science Review 94, no. 2: 251–267. https://doi.org/10.2307/2586011.
Selwyn, Neil, Chris O’Neill, Gavin Smith, Mark Andrejevic, and Xin Gu. 2021. “A Necessary Evil? The Rise of Online Exam Proctoring in Australian Universities.” Media International Australia: 1–16. https://doi.org/10.1177/1329878X211005862.
Shor, Ira, and Paulo Freire. 1987. A Pedagogy for Liberation. Westport, CT: Bergin & Garvey.
Silverman, Sarah, Autumm Caines, Christopher Casey, Belen Garcia de Hurtado, Jessica Riviere, Alfonso Sintjago, and Carla Vecchiola. 2021. “What Happens When You Close the Door on Remote Proctoring? Moving Towards Authentic Assessments with a People-Centered Approach.” To Improve the Academy: A Journal of Educational Development 39, no. 3: 115–131. https://doi.org/10.3998/tia.17063888.0039.308.
Spillane, James P., Brian J. Reiser, and Todd Reimer. 2002. “Policy Implementation and Cognition: Reframing and Refocusing Implementation Research.” Review of Educational Research 72, no. 3: 387–431. https://doi.org/10.3102/00346543072003387.
Spillane, James P., Jennifer L. Seelig, Naomi L. Blaushild, David K. Cohen, and Donald J. Peurach. 2019. “Educational System Building in a Changing Educational Sector: Environment, Organization, and the Technical Core.” Educational Policy 33, no. 6: 846–881. https://doi.org/10.1177/0895904819866269.
Vygotsky, Lev Semyonovich. 1978. Mind in Society: The Development of Higher Psychological Processes. Michael Cole, Vera John-Steiner, Sylvia Scribner, and Ellen Souberman, eds. Cambridge: Harvard University Press.
Charles Logan is a PhD student in learning sciences at Northwestern University. A former high school English teacher and university educational technologist, his research interests include critical digital pedagogy, co-authoring counter-narratives to oppose sociotechnical and edtech imaginaries, and designing learning experiences to support educational dignity. He is on Twitter @charleswlogan.
In this paper we describe fourth-party vendor relationships between remote proctoring tools and other companies in higher education, with a specific focus on the remote proctoring company Proctorio. We unpack the problematic nature of such relationships in general but note that they are exacerbated when dealing with technologies as harmful as remote proctoring. Fourth-party relationships are particularly troublesome because those who work at institutions of higher learning are often unaware of their existence or can do little to impact or change them. We present a “harm index” reviewing literature around the harms of remote proctoring systems. We describe the nature of different types of fourth-party relationships and perform a content analysis of the partnerships listed on Proctorio’s website. We use an autoethnographic approach to share our experience as instructional designers at an institution that has taken steps to limit the use of remote proctoring, and our attempt (ultimately successful) to get the fourth-party integration of Proctorio removed from our learning management system’s integration with McGraw Hill Connect. The paper concludes with a discussion of the discourse and rhetoric used to rationalize this harmful technology, and our recommendations for how institutions might exert more control over fourth-party integrations with harmful surveillance technology.
The year is 2026, and Regional State University (RSU) has bounced back from the enrollment woes that resulted from the COVID-19 pandemic of 2020 and 2021. After intensive faculty development efforts during the pandemic, the majority of instructors on campus became skilled online instructors. RSU set itself apart during the pandemic by rejecting remote proctoring systems, issuing several strong recommendations against such tools to its faculty and offering faculty development opportunities around authentic assessments. RSU was featured in several national news articles for its stance on remote proctoring and celebrated for its attention to student privacy. New students were attracted to RSU because of its reputation for strong online programs and its commitment to student privacy and authentic assessments.
On her first attempt to use the technology, Aisha has to pass an identity check using facial recognition. She keeps getting an error that the software cannot detect her face and decides to call RSU’s technology help desk to see what the problem might be, but no one there seems to know about this software. They ask her how she got access to the system, and in retracing her steps she mentions that she started by clicking a link in the Learning Management System (LMS). So they transfer her to Summer, an administrator of the LMS. Summer tells Aisha that although she reached this proctoring system through the LMS, the school does not have any administrative access to it and cannot help her in any way; she will need to contact the remote proctoring company’s help line.
Little does Aisha know that Summer is a student privacy advocate who is shocked to hear that students are being subjected to a remote proctoring system. Summer begins the process of trying to figure out what is going on, but while waiting to hear back from the customer support rep at the math homework system, she sees that RSU is in the national news under the headline “RSU Student Humiliated.” The article features Aisha recounting her experience with the remote proctoring company’s help line that Summer referred her to, and it sounds horrible. They asked Aisha to jump through various hoops, including shining a light on her face, and even asked her to remove her hijab. Matters are made worse when, a week later, the proctoring company suffers a major data breach, potentially exposing a massive amount of student data on the dark web. Summer is at a complete loss for what to do. Protecting student privacy is part of her job, but the company violating students’ privacy has no real relationship or accountability to RSU. She finds a clause buried in the university’s contract with the math homework system saying that the company considers the contract binding for itself and for any partnerships it enters into. Summer wonders if this means the math homework company is legally responsible for the remote proctoring company; even so, it sure feels like the damage to student trust will fall back on the university.
The story above is a semi-fictional speculative account of potential harms that can come from fourth-party vendor relationships, in which outside companies partner with one another with little oversight by the educational institution. Ross (2017) explains how “speculative methods are particularly important for the study and analysis of digital education because of its rapidly changing nature, and the need to anticipate potential ‘unintended consequences’ of such rapid changes.” We based this story on real events that we experienced in our own professional context and the documented experiences of other students, faculty, and staff, which are reviewed in our remote proctoring Harm Index (Table 2). In this paper we will define fourth-party vendor relationships and investigate how the harms of remote proctoring technology are particularly pernicious in the context of a fourth-party deal.
We describe the nature of different types of fourth-party relationships with a focus on the remote proctoring company Proctorio’s partnerships. Our decision to focus on Proctorio is due to our practical experience working with the product through an integration with McGraw Hill Connect, a program that provides electronic textbooks and other learning materials. This paper adds to the larger body of literature that takes a critical view of educational technologies, arguing that these technologies often act as “mechanisms of economic capture, surveillance, and control” (Paris et al. 2021) and often work against the core pedagogical goals of educational institutions. Because institutions of higher education have a responsibility to treat students’ data responsibly and to monitor the data privacy practices of vendors with access to student data, we suggest that fourth-party partnerships obstruct educational institutions’ regular oversight of student information.
As remote proctoring technologies are relatively new and knowledge about them and their harms is rapidly evolving, the literature we have reviewed includes not just scholarly works but also news stories, especially accounts from students who have been exposed to remote proctoring technology in educational settings. Because we aim to define and detail the harms of these technologies, both generally and in the context of fourth-party deals, we found it necessary to focus on literature by and about students, faculty, and staff who had been negatively impacted by these technologies.
Our methodology includes content analysis of Proctorio’s website, help documentation, and publicly available information about their partnerships with other education technology companies (Hsieh and Shannon 2005). We analyze these materials to understand the nature of the relationships between Proctorio and its partner companies. The appendix outlines the documents analyzed in our content analysis.
We use collaborative autoethnography to analyze our experience identifying the existence of a fourth-party surveillance company operating on our university campus, and requesting that the third party restrict the fourth party from working with our campus users (faculty and students). Autoethnography is a method used to systematically analyze and interpret personal experiences (Ellis, Adams, and Bochner 2011). Collaborative autoethnography enables researchers to work together to combine their shared and personal narratives to make meaning out of experiences (Chang, Ngunjiri, and Hernandez 2016). Our collaborative autoethnography was constructed using alternating personal narratives, in which we engage in critical reflection on the experience of identifying and ultimately removing a fourth-party proctoring service from the campus where we are both employed as instructional designers. Our perspective is that of current instructional design and educational technology professionals in higher education, employed at the University of Michigan-Dearborn (a small, undergraduate-focused campus of the University of Michigan). Both of us were already opposed to remote proctoring for the reasons detailed in Table 2 before the events in our autoethnography unfolded, and we had participated in public conversation about the harms of remote proctoring on social media. We additionally co-authored a peer-reviewed paper with colleagues about authentic assessment as an alternative to remote proctoring and issued a press release with Fight for the Future about the paper, expressing our difficulties with the McGraw Hill partnership with Proctorio. These experiences shape our narrative in that we likely had more prior knowledge of the harms of remote proctoring and the potential roadblocks to removing it from our campus. As our narrative will show, the process was complex, frustrating, and time-consuming despite our prior knowledge and preparation.
Defining the Fourth-Party Relationship
Third-party services are common in the educational technology industry. For example, instead of building a platform on which to administer online courses, a university will often contract with a company to provide a Learning Management System (LMS). The company hosts and maintains the software, and the university pays a fee to use it. A fourth-party relationship comes about when vendor A, with whom the institution has an established relationship, partners with vendor B, with whom the institution does not have an established relationship (Aldoriso 2020). Language can be confusing here because vendor A may refer to its partners as third-party partners, since it sees itself as the originating party. This framing, however, erases the institution as the originating party.
Fourth-party partnerships almost always introduce a new feature or functionality to the third-party product. Depending on the product and where any particular individual exists in the university hierarchy, new features from this relationship may be perceived as desirable or neutral (Gogia 2021). However, these partnerships merit increased suspicion when the fourth-party product does not have an obvious connection to the third-party product, or does not have an educational value. While such spurious partnerships could be viewed as simple money-making opportunities, these relationships can have dangerous implications when they involve problematic technologies that have a history of harming students, such as remote proctoring, because these technologies make their way into classrooms with little institutional input or support.
Jones et al. (2020) argue that institutions of higher education are examples of “information fiduciaries” (Balkin 2016), meaning that they have a particular obligation to treat the data of their primary stakeholders (students) responsibly. Fourth-party agreements threaten educational institutions’ responsibility to their students in terms of data privacy because, as we describe above, they expose students and their information to technologies that may harm them or operate with bias.
Doorways Between Companies and Institutions
There are many examples of university-maintained systems that allow for the possibility of integration with third-party or fourth-party vendors. For example, Google Apps for Education allows for various integrations with its suite of tools, and Zoom has a marketplace where vendors can offer integrations. A university-maintained Learning Management System (LMS) has long offered the possibility of plug-ins, and more recently the Learning Tools Interoperability (LTI) protocol (Severance, Hanss, and Hardin 2010), for integrating tools. There is an important distinction between a “true” fourth-party partnership and an integration option with a university-maintained system. In instances where there is a possibility for integration, a university has some direct influence to work with the vendor to negotiate and navigate university policies around such integrations, for instance data privacy or procurement protocols. In some way (either through a dedicated employee or a university committee), the university works directly with the vendor. This is different from fourth-party relationships, which are inherently relationships between vendors and bypass the university entirely. It is also important to note that these kinds of integrations can act as the mechanism through which fourth-party vendors come in: once a third party is integrated, that third party can easily partner with a fourth party.
Relationships between a third-party and a fourth-party vendor can vary widely depending on the business partnership, but ultimately these relationships act as doors into the institution. Just as there are many different types of doors (French doors, sliding doors, split doors), each of which can be in multiple states (open, closed, ajar), there are many different kinds of relationships between third- and fourth-party vendors. It is important to understand these relationships because they can deeply alter the power dynamics at play. Below are several relevant examples of relationships that can apply to partnerships between educational technology companies:
Integration Possibility – an option for two technologies to work together when an institution has separate agreements with both of them. Often a precursor to a fourth-party integration but not necessarily a fourth-party integration in and of itself.
Free – the third party offers the fourth-party service or product for free within their system.
Freemium – the third party offers some basic functions of the fourth party’s service or product for free with an option to pay for more advanced features. The free product in the Freemium model may not be totally free for the user—they may “pay” using an alternative currency known as “mind share,” or the “development of awareness for the provider’s brand and the consideration for purchase of future commercial products and services” (Pujol 2010). The free version, while a product or service in itself priced at $0, also serves as a type of marketing device for the paid product.
Resell – the third party offers the fourth party’s service or product at a cost. The cost can be charged to the institution, but is also frequently placed on the student.
Identifying the Doors: A Content Analysis of Proctorio’s Fourth-Party Partnerships
The remote proctoring company Proctorio lists a number of partnerships with other companies on its website, which are all presented under the category of “assessment platforms.” Our content analysis of these partnerships is summarized in Table 1 and details of the sites reviewed can be found in the Appendix.
For each partner, we record the nature of its relationship to Proctorio (free, freemium, resell, integration), how the partner defines its business, pricing for students/institutions, and how the partner describes what Proctorio does.

Top Hat
– Nature of relationship to Proctorio: Student resell – Top Hat resells Proctorio by directly charging students.
– How partner defines their business: Active Learning Platform.
– Pricing for students/institutions: Each student pays $30 for access to TopHat Pro and then $10 per course for access to Proctorio.
– How partner describes what Proctorio does: “Partnership ensures higher education institutions transitioning to remote teaching can preserve the integrity of their tests and exams.”

McGraw Hill Connect
– Nature of relationship to Proctorio: Freemium/student resell – McGraw Hill Connect provides a freemium Proctorio plan directly to instructors and resells a more advanced product directly to students.
– How partner defines their business: Course Management and Adaptive Learning.
– Pricing for students/institutions: Free for all instructors to require for their students. Instructors can additionally require a premium option for $15 a semester, paid by students.
– How partner describes what Proctorio does: “You’re in control. Ensure your course’s academic integrity.”

Cirrus
– Nature of relationship to Proctorio: Integration possibility – integration is possible with a separate agreement with Proctorio.
– How partner defines their business: End-to-end assessment platform.
– Pricing for students/institutions: “To make use of Proctorio you should have an agreement with Proctorio. Once the agreement is final, Proctorio will share K&S Keys for Cirrus to setup. Once the K&S details are received by Cirrus it will take a maximum of 24 hours to setup.”
– How partner describes what Proctorio does: “Recordings of the exam sessions can be viewed and cheating behaviour is automatically flagged by the AI.”

Ans*
– Nature of relationship to Proctorio: Integration possibility – integration is possible with a separate agreement with Proctorio.
– How partner defines their business: “Ans* is designed to support paper, digital and hybrid examinations. With a click of a button, you can convert a digital test into a face-to-face exam and vice versa.”
– Pricing for students/institutions: “When a licence has been acquired, Ans* will support you in setting up the configuration by following these steps:…”
– How partner describes what Proctorio does: “By enabling the integration with Proctorio, the security of the administration of the exam is increased. Proctorio collects information that can be used to handle fraud procedures of your institution. With online proctoring, you’re able to administer exams in a more secure way anyplace, anywhere.” “Within the gradebook, suspicious behavior of the student is flagged.”

Ascend
– Nature of relationship to Proctorio: Integration possibility – Ascend can facilitate the integration without a separate agreement with Proctorio.
– How partner defines their business: Integrated software, assessment, and analytics solutions.
– How partner describes what Proctorio does: “Proctors are monitoring for odd or disruptive behavior. Do not engage in misconduct or disruption. If you do, you will be dismissed, and your exam will not be scored.”

Derivita
– Nature of relationship to Proctorio: Freemium – unclear how much is paid for extra features and who pays.
– How partner defines their business: “Derivita is a first of its kind STEM technology platform and complete computer algebra system.”
– Pricing for students/institutions: “Lockdown” at no cost, other features.
– How partner describes what Proctorio does: “Derivita has been fully integrated with Proctorio’s remote proctoring platform. This enables educators to administer STEM assignments within the LMS, using Derivita’s content and technology, while ensuring rigorous adherence to academic integrity standards.”

EvaExam
– Nature of relationship to Proctorio: Unknown – while the partnership with EvaExam is listed on Proctorio’s site, EvaExam’s site does not detail the cost structure or nature of the partnership.
– How partner describes what Proctorio does: “ID verification, recording video, audio, the participant’s screen, and any web traffic on the system used may be centrally controlled, automated, and are legally compliant with the additional Proctorio plugin.”

Questionmark
– Nature of relationship to Proctorio: Unknown, likely freemium – Questionmark offers an AI proctoring service called “record and review” that is facilitated by Proctorio, but also offers a live remote proctoring service.
– How partner describes what Proctorio does: “The automated system observes and records the exam session on video, for potential review later. The system flags potential anomalies, such as a second person on screen. When the system flags an anomaly, the customer can review it or send it to Questionmark for inspection. This makes it harder for a test-taker to cheat or to copy the exam questions to pass onto others.”

Table 1. Overview of Proctorio partnerships with “Assessment Platforms.”
The Proctorio website groups all of these partnerships together as “assessment platforms,” a decision which obscures not only the significant differences between the services these partners provide (e.g. textbooks, assessments, active learning) but also the nature of the partnership between each company and Proctorio. For example, at the time of this writing, Proctorio’s partnership with McGraw Hill Connect allows any instructor who has adopted a McGraw Hill text to enable Proctorio on student activities that are assigned through the Connect platform (and additionally to enable a paid premium product for students), while Proctorio can only be used on the Cirrus Assessment platform when an institution has a separate (paid) agreement with Proctorio—an agreement that likely requires some attention from someone at the university.
Loose Hinges: Inconsistent Messaging in Fourth Party Relationships
Fourth-party relationships can introduce confusion about the purpose and functionality of the products involved. Proctorio defines itself as a tool for promoting academic integrity, one that gives instructors information they can use to make their own decisions about whether cheating has occurred during a given assessment. However, because many instructors encounter and use Proctorio’s product through these other fourth-party products, they may receive all of their information about Proctorio through the help documentation of this other vendor. These Proctorio partners and resellers can describe the purpose of Proctorio’s product in their documentation in ways that are not entirely congruent with Proctorio’s stated purpose. For example, at the time of this writing, Cirrus Assessment’s help documentation page titled “Integration with Remote Proctoring from Proctorio” states that “Cheating behaviour is automatically detected by the AI” (see Appendix).
Although we strongly object to the idea that any behavior detected by an AI can represent “cheating behavior,” this statement from Cirrus is notably at odds even with Proctorio’s public statements about their product. On their FAQ page under the question “Does Proctorio utilize algorithmic decision making?” Proctorio states that “No,… Proctorio’s software does not perform any type of algorithmic decision making such as determining if a breach of exam integrity has occurred. All decisions regarding exam integrity are left up to the exam administrator or institution” (see Appendix).
Additionally, on the Proctorio FAQ page, in answer to the question “How do you decide what behaviour counts as ‘cheating’?”, Proctorio themselves state, “Only the exam administrator or the institution can dictate what type of behaviour they want to monitor over the course of an exam. Exam administrators will then review exam attempts to determine whether any flagged behavior was truly infringing on the integrity of the exam” (see Appendix). Moreover, Proctorio’s own “acceptable use policy” prohibits punishment of students based solely on Proctorio reports. The policy states that “Institutions and their representatives are prohibited from making any negative decisions regarding exam integrity or from imposing any other negative consequence or detriment on an End User based partly or entirely on Proctorio’s analysis” (see Appendix).
Miscommunications like these about what the product is for and how it can be used show how misleading fourth-party relationships can be. But while fourth-party partnerships are complex, they are not inherently bad. Returning to our speculative narrative from the introduction, the math homework system we mention could integrate a digital graphing calculator from an outside vendor to assist students with their work with little negative consequence. However, when fourth-party deals are made with companies that offer harmful products, these relationships become particularly problematic.
What’s the Harm?
Given that fourth-party integrations can expand the functionality of a third-party tool, often for free, what is the harm? Why would some schools not want free access to more functions? To explore this question, we review the literature on the harms caused by remote proctoring systems, which have been widely documented, especially since the beginning of the COVID-19 pandemic. Walker et al. (2020) outlined harms of remote proctoring technologies specific to nursing students, the nursing profession, and the public. To address the important question of “What is the harm?” we have indexed harms from a broad perspective to create our Harm Index of Remote Proctoring Systems, which we present in Table 2. Here we explore three different levels of potential harm: harm to the student, harm to the institution, and harm to larger society.
Harm to student – *
Harm to institution – +
Harm to larger society – ^
Some remote proctoring companies store student recordings for years (Gogia 2020)*.
Remote proctoring companies have experienced data breaches (Abrams 2020)*.
Teaching students to install software that undermines the integrity of their computing environment can ingrain poor data security habits (Fox Cahn et al. 2020)*^.
Does Not Prevent Cheating/There are better kinds of assessments
Students have found ways around proctoring software and publicize these hacks (Geiger 2021)+.
Real-life tasks are a better assessment of real-life skills – students are assessed in the most effective ways (Crosslin 2021; Feathers 2020b; Silverman et al. 2021)*.
Remote proctoring systems can raise student anxiety (Chin 2020)*.
Student performance can suffer when they have test anxiety (Woldeab and Brothen 2019)*.
Remote proctoring features may not be compatible with adaptive technologies such as screen readers (Office of Information Technology 2021)*+.
Basic access to these technologies is often difficult (Feathers 2020a)*+.
Bias – Race, Ability, Gender
Those with certain kinds of disabilities can trigger cheating flags through no fault of their own – tics, eye movements, self-massage, needing to go to the bathroom (Brown 2020)*+^.
Flagging these behaviors is a feature, not a bug, of AI proctoring systems – they are designed to look for “atypical” behavior (Patil and Bromwich 2020)*+^.
Algorithmic proctoring uses facial recognition/detection technology which can fail to recognize those with dark skin (Clark 2021)*+^.
Students can be locked out of exams if a face is not detected in the frame (Chin 2021)*.
Reaching out to support can lead to degrading practices to “troubleshoot the problem” like being asked to shine a light on your face (Caplan-Bricker 2021)*.
AI identification methods can be compromising for trans and non-binary students (Swauger 2020)*+.
Invasion of Privacy
Room scans are invasive and intrusive – they can reveal personal information the student doesn’t wish to share (Harwell 2020a)*.
Product features sometimes ask students to show parts of their bodies (their lap) in inappropriate ways (Harris 2020)*+.
Proctoring can cost upwards of $500,000 a year (Harwell 2020b)+.
The high costs of proctoring are borne by students or budget-squeezed institutions (Malone 2019; Wan 2019)+^.
Additional costs from the possibility of court cases and public relations (McKenzie 2021)+.
Proctoring services may not always be in compliance with state or local laws about student surveillance or collection of biometric data (Long 2021)*+^.
Human proctors are “alone” with students and may harass or otherwise harm them while the student is involved in course activities (Bhat 2021)*+.
Digital Divide/Digital Redlining
Many remote proctoring technologies require expensive hardware (laptop, webcam, microphone) that students may not have or software (a certain browser, a browser extension) that students may not consent to installing (Selinger and Gilliard 2021; Yun 2020)*.
Internet bandwidth is not the same everywhere and some students may struggle with connections (Flaherty 2021)*.
Larger harms to freedoms and society
Chilling effect on academic freedom – regarding research and choice in teaching
Australian researchers found that their research was hindered due to the litigious nature of proctoring companies and the larger negative climate around remote proctoring (Selwyn et al. 2021)+^.
Academic Integrity Researcher Phillip Dawson had to return grant funds because he could not find a remote proctoring company that would let him research their tool to see if it actually prevented cheating (CRADLEdeakin 2020)+^.
Some instructors are not given a choice about using this technology *+^.
Normalization of surveillance on students and faculty
Surveillance technologies are used in conjunction with human rights violations all over the world – proctoring normalizes surveillance for students (Fox Cahn et al. 2020)*^.
Short distance from surveillance of students in learning activities to surveillance of faculty during official university business (teaching, communications, etc.) (@hypervisible 2020)*+^.
Remote proctoring systems have implications for eroding student trust (Stewart 2020)*+^.
Table 2. Harm index of remote proctoring systems.
Closing the Backdoor
The following is an autoethnographic reflection between us (the two authors) about our experience discovering and investigating a fourth-party proctoring option at our university, which had a stated anti-proctoring technology stance (Silverman et al. 2021). These events took place between February 5th and May 15th of 2021, and it is important to note that this is how long it took to have removed a technology that was never vetted or approved by the university in any way. McGraw Hill’s partnership with Proctorio was established as early as February of 2020 (@mheducation 2020), potentially in response to the mass transition to online teaching during the COVID-19 pandemic. However, as this autoethnography will show, no one at the institution seemed to have any knowledge of it for a year. The partnership began to draw wider attention when a parent’s group published an open letter asking McGraw Hill to end their partnership with Proctorio (Ongweso 2020).
Autumm: In February of 2021 I was browsing Twitter when I came across conversations about a McGraw Hill Connect (MHConnect) partnership with Proctorio. I did some googling and came across their webpage outlining this partnership and stating that the proctoring options would be part of any text with a 2019 copyright or newer. Days prior, I had been in a Canvas administrators meeting for our institution where the MHConnect integration had been discussed. It was noted that the new integration was mostly a “pass through” over to the publisher’s private platform, which they controlled. I had thought this was a good thing at first, that perhaps it meant that less of our students’ data would be passing between systems, but when I saw the Proctorio partnership I realized it would mean that our students could be subjected to this harmful technology without much oversight from our institution. This upset me for personal reasons but also because our campus had specifically rejected remote proctoring since the beginning of the pandemic. I reached out to Sarah on Twitter DM to vent.
Sarah: When Autumm contacted me to tell me about the McGraw Hill Connect partnership with Proctorio and how it might affect our students, I was first and foremost angry. But then I immediately thought about our campus decision to reject remote proctoring (as Autumm mentions) and how that should carry some weight in our working relationship with McGraw Hill. After all, we are a customer and partner of theirs, and it is very common for customers of technology companies to ask to have the product configured to their liking (just take a look at several different universities’ instances of the Canvas LMS to see an example of edtech customization). I suggested to Autumm that we attempt to have Proctorio removed from the MHConnect platform for UM-D users, assuming that this would be an easy request to fulfill.
Autumm: When I contacted Sarah I was feeling that we were somewhat at a dead end because of the nature of the integration that I had heard about in the admin meeting days prior. It did not sound like other integrations I’d had experience with (having LMS admin experience from previous roles) that would bring outside functionality into the LMS—and over which an LMS admin would have more control. Rather, this integration just seemed to pass credentials over to another McGraw Hill–controlled platform. When we talked, Sarah questioned my assumptions and asked many questions about configurations and customizations that might be possible. She made good points, and though I felt it was a long shot, I reached out to our institution’s lead LMS administrator to point out this new partnership and ask whether customizations or a shut-off might be possible. He was not aware of the partnership but agreed that it was concerning and said that he would reach out to McGraw Hill. After waiting about two weeks and hearing nothing, I reached back out to him and he said he would reach out again. Two days later he wrote to me to say he had met with representatives of McGraw Hill who said that they would turn the integration off for our school, but that there were conditions: (1) they wanted to see if anyone was using the integration, and (2) they needed two weeks to turn it off. I was unhappy about the two-week waiting period, especially since we had already waited two weeks to initially get a response, and I never fully understood why it was required. Later that day one faculty member was identified as already using the integration, and McGraw Hill additionally requested an email from the associate provost to perform the shut-off. Arrangements were made to work with the faculty member who was using the system to find alternatives, and the official email from the associate provost requesting the shut-off went out.
However, a month later Sarah was working with a faculty member who was using MHConnect and found that the options for proctoring were still there.
Sarah: That particular faculty member hadn’t wanted to use the proctoring feature but noticed that it became available. I think it is important to note that it was a complete coincidence that I was working with an instructor who was using MHConnect and was thus able to verify that the proctoring feature had not been turned off. Absent my working relationship with this faculty member, none of the instructional designers on our campus would have access to Connect or any fourth-party tools connected to it. Autumm relayed to the LMS administrator and our associate provost that the proctoring feature had not been turned off, and they contacted McGraw Hill again. At this point, we were assuming there was some sort of technical misunderstanding. We eventually heard back that while McGraw Hill could turn off the integration for users who accessed Connect through the LMS, it could not turn it off for those who logged in directly through the Connect site. Late in April I tweeted, “Frustrating day for resisting surveillance and e-proctoring. Found out that McGraw Hill cannot disable the Proctorio integration in “Connect” for all our users. This integration is built on the presumption that eproctoring is an uncomplicated value-add to any course. It is not.” Evidently, someone from McGraw Hill saw this tweet and reached out to our associate provost by email, offering to have a Zoom call in which we could clarify the details of McGraw Hill’s position.
Sarah and Autumm: We accepted the offer for a Zoom call and decided to use it as an opportunity to better understand how Proctorio partners with McGraw Hill, in addition to discussing how Proctorio could be removed for our campus users. We discussed a plan for McGraw Hill to deactivate Proctorio for all our campus users, both those who log in through our LMS and those who log in through the McGraw Hill Connect site. We then inquired as to whether the Data Privacy Agreement (DPA) we had signed with McGraw Hill covered other partnerships that they chose to make, such as the one with Proctorio. They responded that they viewed the DPA as applicable to any other technology companies that they partner with, meaning that our original DPA covered Proctorio being used through McGraw Hill Connect by our users. They also reiterated that they did not want to force Proctorio on anyone, and that they were happy to pursue various avenues to restrict its use on our campus if that was what we desired. In response to our dissatisfaction that they had integrated Proctorio into our users’ accounts without informing us or asking permission, they maintained that if we had a campus policy against remote proctoring, it was primarily our responsibility to inform faculty of the policy and enforce it.
Discussion and Recommendations
We have described the problematic nature of current fourth-party partnerships, but there is potential for even more problematic future partnerships. Fourth-party partnerships may exist explicitly to circumvent campus decisions or policies (such as administrative policies or faculty governance) or to respond to budget and purchasing constraints (Gogia 2021). The loopholes created as part of these deals can do real harm. Considering how we speak about and rationalize such technologies is an important part of analyzing how they end up existing on our campuses.
Caines (2021) described a weaponization of care around surveillance technologies, which are sold and rationalized under a rhetoric of care, and gave specific examples from remote proctoring companies. Adjacent to this frame, we also see remote proctoring companies using what Herzog (2010) called the “banality of surveillance.” With this construct, we see companies making the case that the technology is essential and that, though it may not be perfect, we must suffer its drawbacks because the good outweighs the bad (McFarland 2021). Third parties that form relationships with remote proctoring companies also implicitly make the case that remote proctoring is essential (and harmless) by integrating it into their products without user or institutional consent. But the banality of surveillance around remote proctoring is nothing but smoke and mirrors, as our Harm Index (Table 2) shows. The harms these technologies inflict on students, institutions, and larger constructs such as academic freedom are very real. Additionally, multiple educators have pointed out that other, more authentic kinds of assessment do not even require exams (Crosslin 2021; Silverman et al. 2021).
Relationships between educational technology companies and educational institutions can be fraught. Often, there is a disconnect between how institutionally approved technologies are chosen and whether they support the kind of education the institution wants to provide (Cohn 2021). Our first recommendation for gaining more control over fourth-party integrations is for universities to consider not only which technologies to adopt, but also which technologies, and specifically which functionalities, they do not want to adopt. Methods and tools need to be created to evaluate not just the benefits but also the harms of technologies, in alignment with the university’s mission, strategic plan, or other guiding principles. The addition of new functionality to an existing educational technology tool through a fourth-party relationship can be particularly subtle, and it often comes with a techno-utopian sales rhetoric that fails to imagine that the addition could be anything but a good thing. Without considering where the institution’s boundaries lie regarding what technology is and is not acceptable, it is impossible to articulate objections to fourth-party relationships. Passing resolutions and offering recommendations against such technologies can go a long way toward limiting the use of harmful technology.
Our second recommendation is for institutions to leverage their influence as direct paying customers, or as the provider of a sales environment for these tools, to demand that the third party remove the surveillance functions provided by fourth-party companies. Enterprise systems are regularly configured to customers’ specifications; shutting off these integrations is technically possible. Success in getting a fourth-party integration removed may vary depending on the specifics of the partnership (as discussed above). We speculate that McGraw Hill was willing to grant our request for several reasons. For one, on balance it is better for them to retain us as a satisfied customer than to insist on proctoring functionality at our school. In addition, there is the name recognition of our institution and our relationship with our flagship university. Finally, it is important to note that we took steps to bring public attention to our situation: we wrote a peer-reviewed paper about our experiences and issued a press release (Fight for the Future 2021), which could have contributed to our ability to get this integration removed. While resisting harmful fourth-party integrations is a difficult, time-consuming, and unpredictable endeavor, we hope that other educators feel empowered to do so based on our experiences.
Fight for the Future. 2021. “University Advocates E-Proctoring Alternatives, but Struggles to Remove e-Proctoring Option from McGraw-Hill Connect Platform.” Fight for the Future. April 1, 2021. https://www.fightforthefuture.org.
Herzog, Todd. 2010. “The Banality of Surveillance: Michael Haneke’s Cache and Life after the End of Privacy.” Modern Austrian Literature 43 (2). Austrian Studies Association: 25–41.
Hsieh, Hsiu-Fang, and Sarah E. Shannon. 2005. “Three Approaches to Qualitative Content Analysis.” Qualitative Health Research 15, no. 9: 1277–88. doi:10.1177/1049732305276687.
@hypervisible. 2020. “End of Essay Brings up Important Point. There Are Countless Reasons Instructors Should Reject These Surveillance Systems, but One Is Self-Preservation. It’s Only a Matter of Time before These Systems Are Turned on Instructors—in Some Cases They Already Are. Https://T.Co/IPlGFAMm0V.” Tweet. Twitter. September 11, 2020. https://twitter.com/hypervisible/status/1304394596902993924.
Selwyn, Neil, Chris O’Neill, Gavin Smith, Mark Andrejevic, and Xin Gu. 2021. “A Necessary Evil? The Rise of Online Exam Proctoring in Australian Universities.” Media International Australia, April 1. doi:10.1177/1329878X211005862.
Severance, Charles, Ted Hanss, and Joseph Hardin. 2010. “IMS Learning Tools Interoperability: Enabling a Mash-up Approach to Teaching and Learning Tools.” Technology, Instruction, Cognition and Learning 7 (3–4): 245–62.
Silverman, Sarah, Autumm Caines, Christopher Casey, Belen Garcia de Hurtado, Jessica Riviere, Alfonso Sintjago, and Carla Vecchiola. 2021. “What Happens When You Close the Door on Remote Proctoring? Moving Toward Authentic Assessments with a People-Centered Approach.” To Improve the Academy: A Journal of Educational Development 39, no. 3.
Woldeab, Daniel, and Thomas Brothen. 2019. “21st Century Assessment: Online Proctoring, Test Anxiety, and Student Performance.” International Journal of E-Learning & Distance Education 34, no. 1. http://www.ijede.ca/index.php/jde/article/view/1106/1727.
Sarah Silverman is an instructional designer at the Hub for Teaching and Learning Resources at the University of Michigan–Dearborn. In addition to educational technology criticism, her interests include Universal Design for Learning and Disability Studies. A scientist by training, she received her PhD in Entomology from UC Davis and worked in teaching and learning support at UC Davis and UW Madison before coming to UM Dearborn. She currently resides in New Haven, CT.
Autumm Caines is an instructional designer at the University of Michigan–Dearborn. Autumm’s scholarly and research interests include blended/hybrid and online learning, open education, digital literacy/citizenship with a focus on equity and access, and online community development. This blend of interests has led to a concern about mounting ethical issues in educational technology and recent publications and presentations on topics concerning educational surveillance, student data collection, and remote proctoring. Autumm has taught honors students at small liberal arts colleges as well as traditional students, working professionals, and veterans at a regional public university. More at autumm.org.
This webtext presents the rationale, scaffolding, and instructions for an assignment intended for First-Year Writing (FYW) students: the Filter Bubble Narrative. We pose this assignment in response to Lyon’s (2017) call to analyze “soft surveillance” situations and Gilliard’s (2019) challenge to critically analyze platform-perpetuated surveillance norms with students. We suggest that social media is a particularly productive space to focus student attention on soft surveillance given social media’s ubiquitous presence in society and in students’ lives. Moreover, through their social media use, FYW students have developed an array of digital literacies (Selfe and Hawisher 2004) as prosumers (Beck 2017) that are so engrained in their everyday existences that they haven’t held them up for critical scrutiny (Vie 2008). Through Pariser’s (2012) concept of the “filter bubble,” students engage in scaffolded activities to visualize the effects of algorithmic surveillance and to trace and reassemble the data-driven identities that social media platforms have constructed for them based on their own user data. The final deliverable is a multimodal narrative through which students critically examine and lay claim to their own data in ways that may inform their future use of social media and open opportunities to confront soft surveillance.
David Lyon (2017) argued that we live in a surveillance culture, a way of living under continual watch “that everyday citizens comply with—willingly and wittingly, or not” (825). Lyon (2006) previously stressed that such a pervasively visible cultural existence extends beyond notions of the “surveillance state” and the “panopticon” to forms of seemingly “soft and subtle” surveillance that produce “docile bodies” (4). Drawing upon the work of Gary Marx (2003; 2015), Lyon (2017) argued that such “soft surveillance” is seemingly less invasive and may involve individuals willingly surrendering data, perhaps through “public displays of vulnerability” (832) that are common online via cookies, internet services providers (ISPs), and social media sites. Contemporary surveillance culture is therefore less out there and more everywhere, less spy guys and big brother and much more participatory and data-driven.
In higher education, scholars like Hyslop-Margison and Rochester (2016) and Collier and Ross (2020) have argued that surveillance has always existed through “data collection, assessment, and evaluation, shaping the intellectual work, and tracking the bodies and activities of students and teachers” (Collier and Ross 2020, 276). However, the COVID-19 pandemic has accelerated and contributed to the ways that academic activity is surveilled via proprietary learning management systems and audio/video conferencing software that track clicks and log-ins while simultaneously hoarding student/user data (Atteneder and Collini-Nocker 2020). Responding to and potentially resisting such prevalent surveillance, no matter how soft, therefore requires “a careful, critical, and cultural analysis of surveillance situations” (Lyon 2017, 836). However, as Gilliard’s (2019) “Privacy’s not an abstraction” stressed, “precisely because ideas about privacy have been undermined by tech platforms like Facebook and Google, it is sometimes difficult to have these discussions with students” (para. 16). We will argue that social media news feeds are just the kind of surveillance situations that need critical attention, in writing classrooms, in service of students’ critical digital literacies.
Critical Digital Literacies in the Age of Algorithmic Surveillance
Along with many other scholars writing about technology and classroom practice before us (Selber 2003; Selfe 1999; Takayoshi and Huot 2003; Vie 2008), we suggest that critical is a keyword for theory as well as for application in our networked, digital age, and one that does not emerge fortuitously from incorporating the latest digital technologies in classrooms. In fact, by incorporating technologies into our classrooms, we are often contributing to surveillance culture, as Collier and Ross (2020) note. A critical orientation, we argue, can help.
In “Critical Digital Pedagogy: a Definition,” Jesse Stommel (2014) defined critical pedagogy “as an approach to teaching and learning predicated on fostering agency and empowering learners (implicitly and explicitly critiquing oppressive power structures)” (para. 4). Critical digital pedagogy, he argued, stems from this foundation, but localizes the impact of instructor and student attention to the “nature and effects” of digital spaces and tools (Stommel 2014, para. 14). In adapting the aims of critical pedagogy to the digital, what emerges is a clear distinction between doing the digital in instrumental fashion (e.g., to develop X skill) and doing the digital critically (e.g., to transform one’s being through X). A critical digital literacies approach to surveillance might suggest:
a willingness to speculate that some of the surveillance roles we have come to accept could be otherwise, along with an acknowledgment that we are implicated in what Lyon terms ‘surveillance culture’ (2017) in education. What can we do with that knowledge, and what culture shifts can we collectively provoke? (Collier and Ross 2020, 276)
As Selber (2004) and Noble (2018) have argued, digital technologies and platforms are made by humans who have their own biases and intentions, and those same biases and intentions may become part of the architecture of the technology itself—regardless of intent or visibility. Other scholars, like Haas (1996) and O’Hara et al. (2002), therefore cautioned against perpetuating what is often called “The Technology Myth” by calling teacher-scholars to look critically “at the technology itself” instead of through it (Haas 1996, xi). Without a critical perspective, students and instructors may fail to question the politics, ideologies, and rhetorical effects of their digital tools, spaces, and skills, what Selber (2004) defined as critical literacy in a digital age. We argue that there may be no better space to engage students in critical digital practice than the online spaces they visit daily, often multiple times per hour: social media news feeds.
Social Media News Feeds as a Space for Critical Digital Practice
In a report for Pew Research Center titled “Social Media Outpaces Print Newspapers in the U.S. as a News Source,” Elisa Shearer (2018) revealed that 18-to-29-year-olds are four times as likely as those aged 65 and older to go to social media for news. Social media applications, which are frequently accessed via mobile devices, are therefore incredibly popular with college-age students (Lutkewitte 2016) and should be seen for what they are: “technology gateways,” or the primary places where users practice digital literacies (Selfe and Hawisher 2004, 84). However, as Vie (2008) argued, even frequent users may still need to further develop “critical technological literacy skills” (10) since “comfort with technology does not imply … they can understand and critique technology’s societal effects” (12). In order to open up awareness and areas of resistance, we suggest that students be introduced to, and offered opportunities to interrogate, the ways in which their self-selected, or curricularly mandated, technologies surveil them. Here, we aim to focus their attention on the ways they are softly surveilled via algorithms operating behind the scenes of their social media platforms. Gilliard (2019) cautioned that “the logic of digital platforms … treats people’s data as raw material to be extracted” and put to use for a variety of purposes—malicious, benign, and in-between. Moreover, Beck (2017) argued that it has become normative for social media applications, and the companies that control them, to employ algorithmic surveillance to track all user data and personalize experiences based on that data. Indeed, these seemingly invisible mechanisms further “soften” attitudes toward surveillance, which may result in users sharing personal details so publicly on social media (Marx 2015; Lyon 2017).
One consequence of algorithmic surveillance on social media is what Pariser (2012) coined the “filter bubble.” Filter bubbles are created through algorithmic content curation, which reverberates users’ pre-existing beliefs, tastes, and attitudes back to them on their own feeds, isolating users from diverse viewpoints and content (Nguyen et al. 2014, 677). For example, YouTube recommends videos we might like, Facebook feeds us advertisements for apparel that is just our style, and Google rank-orders search results—all based on our own user data. In many ways, the ideas and information we consume are “dictated and imposed on us” by algorithms that limit our access to information and constrain our agency (Frank et al. 2019, Synopsis section). After all, as Beck (2017) argued, the filter bubbles curated by algorithmic surveillance constitute an “invisible digital identity” for individuals (45). And as Hayles (1999) argued, our identities are hybridized and may be seen as “an amalgam, a collection of heterogeneous components, a material-informational entity whose boundaries undergo continuous construction and reconstruction” (3). This suggests that an individual’s online activity and interaction with other digital actors in online spaces, which results in the algorithmic curation of a unique filter bubble, is a material instantiation of their embodied identity(ies).
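To make the curation mechanism concrete, consider a deliberately simplified sketch. The code below is a hypothetical toy model, not any platform’s actual recommender: it ranks candidate posts purely by how often a user has already clicked on each topic, so topics the user has never engaged with are pushed out of the visible feed entirely.

```python
from collections import Counter

def rank_feed(candidate_posts, click_history, k=3):
    """Rank candidate posts by how often the user has already clicked
    posts on the same topic -- a crude stand-in for engagement-based
    curation. Returns only the top k posts, i.e., the 'bubble'."""
    topic_weights = Counter(post["topic"] for post in click_history)
    ranked = sorted(candidate_posts,
                    key=lambda p: topic_weights[p["topic"]],
                    reverse=True)
    return ranked[:k]

# A user whose click history is mostly sneaker content...
history = [{"topic": "sneakers"}] * 4 + [{"topic": "politics"}]
candidates = [
    {"id": 1, "topic": "sneakers"},
    {"id": 2, "topic": "gardening"},
    {"id": 3, "topic": "sneakers"},
    {"id": 4, "topic": "politics"},
    {"id": 5, "topic": "science"},
]

feed = rank_feed(candidates, history)
# ...receives a feed dominated by sneakers; "gardening" and "science",
# topics the user has never clicked, never surface at all.
```

Even in this few-line caricature, the feedback loop is visible: what the user has clicked determines what the user is shown, which in turn shapes what the user can click next. Real platforms layer far more data (location, purchases, social graph) onto far more opaque models, but the narrowing dynamic is the same.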
We therefore maintain that turning students’ attention to their own filter bubbles on social media, a space where they may have already developed an array of literacies, allows them to attempt to reconcile the distinction between their digital literacies and critical digital literacies as part of reassembling their data with their body. Indeed, the difference between digital literacies and critical digital literacies is particularly pronounced in social media spaces. After all, social media are themselves sites of converging roles and agencies, where users are both producer and consumer (Beck 2017) and, as Jenkins (2006) suggested, sites “where the power of the media producer and the power of the media consumer interact in unpredictable ways” (2). We ask, then, as William Hart-Davidson did in his foreword to the 2017 edited collection Social Writing/Social Media: Publics, Presentations, and Pedagogies, “What if we took it [SM] seriously?” (xiii). What if instructors acted intentionally to shift students from instrumental users and information consumers to critical thinkers about social media? What opportunities for agency might be revealed through concerted and critical attention to how they are algorithmically surveilled and reconstituted?
As Rheingold (2012) suggested, students who know what the tools are doing and “know what to do with the tools at hand stand a better chance of resisting enclosure” (218). For us, a critical digital pedagogy that fosters critical digital literacies is the antidote to the “enclosure” Rheingold references and a way to more holistically and critically understand agency online. Noble’s (2018) term algorithmic oppression also offers insight into the deleterious effects of unchecked algorithmic curation: in the case of Google search in particular, “technology ecosystems… are structuring narratives about Black women and girls” in ways that deepen inequality and reinforce harmful stereotypes (33). Jenkins (2006), too, noted that in networked systems “not all participants are created equal” and that corporations have more power than individual consumers (3).
How, then, can students develop the critical literacies to resist or subvert the market-driven forces that seek to disempower them and render their algorithmic identities invisible? Beck (2017) suggested that writing classrooms are a valuable space in which to try, as “[o]ften times writing courses provide students with the means to consider possibilities for positive change to policy, procedure, and values—all with the power to enact such change through writing” (38). In other words, working with students to trace the online footprints and activities that contribute to the curation of their filter bubbles may offer students the opportunity to look critically at their digital practices through their own digital practices. Our interventions amidst corporate-controlled, algorithmic agents will be imperfect, but Hayles (1999) and Latour (2007) have nevertheless stressed that our informational lives are materially part of our identity, and that we do have opportunities for transforming our networked agency. Though “our lives, relationships, memories, fantasies, desires also flow across media channels” (Jenkins 2006, 17), creating data that gets funneled through algorithms for corporate or partisan profit, we can intervene. More importantly, perhaps, so can our students.
One place to begin is to reunite our digital fingerprints and our bodies through narrative, through storytelling. Hayles (1999) argued for “us[ing] the resources of narrative itself, particularly its resistance to various forms of abstraction and disembodiment” (22). We agree and have developed the Filter Bubble Narrative assignment sequence to put theory into practice. We use the term narrative in a capacious sense that recognizes the agency and positionality a writer has to arrange events or data, to tell a story, and the connective, reflective tissue that makes narrative a structure for meaning-making and future action. By investigating and storifying the effects of algorithmic curation and soft surveillance, we defragment our identity and construct a hybrid, a Haylesian posthuman assembled from a Latourian tracing. In short, through the Filter Bubble Narrative assignment sequence, we hope to offer students opportunities to act to create an embodied, expansive identity, one that is both designable and pre-designed as an interaction between humans and algorithms.
In order to encourage students to critically interrogate these filter bubbles, and therefore how they’re algorithmically surveilled online, this webtext presents a scaffolded assignment, the Filter Bubble Narrative, as an example of how instructors and students might put soft surveillance under a microscope. However, unlike the hotly debated Kate Klonick assignment that involved gathering data from non-consenting research subjects conversing in public places (see Klonick’s New York Times op-ed “A ‘Creepy’ Assignment: Pay Attention to What Strangers Reveal in Public”), our assignment and its scaffolding invite students to investigate the technologies that they already use and that surveil them, “willingly and wittingly, or not” (Lyon 2017, 825). We think this practice is superior to “reproducing the conditions of privacy violations” that Hutchinson and Gilliard argue against and that are enacted in assignments that involve others, especially without their knowing consent (as cited in Gilliard 2019, para. 9). However, we recognize that some students may not use social media at all, and we do not support the mandatory creation of social media accounts for academic purposes. Therefore, alternative assignments should be made available, as needed.
The Filter Bubble Narrative Assignment Sequence
Taken together, the assignments in this sequence aim to develop students’ critical digital literacies surrounding surveillance by creating opportunities for students to pay attention to the invisible algorithms that surveil them and personalize the information and advertising they see on their social media feeds, ultimately creating filter bubbles. Students will also be encouraged to investigate opportunities for agency within their filter bubbles through narrative and technical interventions like disabling geolocation within apps, adjusting privacy settings, and seeking out divergent points of view, among other strategies.
The assignment sequence culminates in a multimodal writing assignment, the Filter Bubble Narrative (see Appendix A). The choice to call this project a filter bubble narrative is meant to create some intertextuality between existing first-year writing (FYW) courses that may ask students to write literacy narratives, a common FYW narrative genre included in many of our colleagues’ courses and textbooks. Doing so will hopefully allow instructors to find familiar ground from which to intentionally modify more traditional assignments and to intentionally develop their critical digital pedagogies as well as their students’ critical digital literacies.
Given the widespread move to online and hybrid modes of instruction in higher education due to the COVID-19 pandemic, we intentionally designed our Filter Bubble unit for online delivery via discussion boards, though this is not strictly necessary. And though we outline a multi-week sequence of low-stakes assignments as scaffolding for the Filter Bubble Narrative, we also anticipate that instructors will modify the timeline and assignments to suit local teaching and learning contexts. Finally, in addition to fostering critical digital literacies, these assignments take into consideration the WPA’s (2014) Outcomes Statement for First-Year Writing, the guidelines Scott Warnock (2009) outlines in Teaching Writing Online, and a variety of scholarly voices that recognize opportunities for multimodal composition are essential to developing twenty-first–century literacies (Alexander and Rhodes 2014; Cope, Kalantzis and the New London Group 2000; Palmeri 2012; Yeh 2018).
Scaffolding the filter bubble narrative
During the first week of the Filter Bubble unit, students first read Genesea M. Carter and Aurora Matzke’s (2017) chapter “The More Digital Technology the Better” in the open textbook Bad Ideas About Writing and then submit a low-stakes summary/response entry in their digital writing journals. Additionally, students watch the preview episode (5:12) of Crash Course Navigating Digital Information hosted by John Green on YouTube (CrashCourse 2018). This ten-video course was created in partnership with MediaWise, The Poynter Institute, and The Stanford History Education Group. Then, students engage in an asynchronous discussion board structured by the following questions:
(Q1.) John Green from Crash Course suggests that we each experience the internet a little differently, that content is “personalized and customized” for us. What do you make of that? How is the information that you consume online personalized for you? Do you see this personalization as a form of surveillance? Or not?
(Q2.) Co-authors Genesea M. Carter and Aurora Matzke define digital literacy as “students’ ability to understand and use digital devices and information streams effectively and ethically” (321). Let’s interrogate that definition a bit, making it more particular. What constitutes “effective” and/or “ethical” understanding and use?
After answering the prescribed questions, students conclude their post with their own question about the video or chapter for their classmates to answer, as replying to two or more students is a requirement for most discussion boards.
During the second week, students watch the social media episode (16:51) of the Crash Course Navigating Digital Information series (CrashCourse 2019). After watching, students submit a low-stakes mapping activity in their digital writing journals where they map what’s in their bubble by taking screenshots of the news stories, advertisements, and top-level posts they encounter in their social media feeds. Then, students engage in an asynchronous discussion board structured by the following questions:
(Q1.) Given what you found from investigating the kinds of news stories, advertisements, and top-level posts in your social media feeds, what parts of your identity are in your filter bubble? Where do you see your interests? For example, Jessica sees a lot of ads for ethically made children’s clothing, Rothy’s sustainably made shoes, and YouTube Master Classes about writing. It seems that her filter bubble is constructed in part from her identity as an environmentalist and writing professor. Joel, on the other hand, sees ads for Star Wars merchandise and solar panel incentive programs, suggesting his filter bubble is constructed from his identity as a Star Wars fan and homeowner that needs a new roof.
(Q2.) What parts of your identity, if any, are not represented in your filter bubble?
(Q3.) How do you feel about what’s there, what’s not, and how that personalization came to be? How is your identity represented similarly or differently across digital sites and physical places?
As mentioned previously, students conclude their post with their own question about the video or discussion board topic for their classmates to answer.
In the first half of the third week, students read the Filter Bubble Narrative assignment sheet (see Appendix A) and engage in a first thoughts discussion, a practice adapted from Ben Graydon at Daytona State College. Here, students respond to one or more of the following questions after reading the Filter Bubble Narrative assignment sheet:
(Q1.) Connect the writing task described in the project instructions with one or more of your past writing experiences. When have you written something like this in the past? How was this previous piece of writing similar or different?
(Q2.) Ask a question or questions about the project instructions. Is there anything that doesn’t make sense? That you would like your instructor and classmates to help you better understand?
(Q3.) Describe your current plans for this project. How are you going to get started (explain your ideas to a friend, make an outline, just start writing, etc.)? What previously completed class activities and content might you draw on as you compose this project? What upcoming activities might help you compose this project?
In the second half of the third week, students begin knitting together the story of their filter bubble. Additionally, they engage in an asynchronous discussion board structured by the following question:
(Q1.) What can you do to take a more active role in constructing your identity and “ethically” and “effectively” (Carter and Matzke 2017, 321) navigating your information feeds?
As mentioned previously, students conclude their post with their own question, but for this discussion board topic we offer this alternative:
(Q2.) If you’d like recommendations from your classmates about steps you can take within your apps and/or feeds and pages that might diversify or productively challenge your current information landscape, let us know. If you’d rather we not send you recommendations, that’s okay, too. Go ahead and ask any other topic-related question you’ve got.
The fourth week is spent composing a full-length draft of the Filter Bubble Narrative, which students submit to a peer review discussion board for peer feedback and to an assignment folder for instructor feedback at the beginning of the fifth week.
During the fifth week, while peer review is in progress and the instructor reviews drafts, students submit a low-stakes reflection in their digital writing journals that investigates how their ideas about digital literacy have changed (or not), especially in relation to the definition provided by Carter and Matzke (2017) about effective and ethical use of digital technologies (321), as well as what they’ve learned about themselves, about surveillance, and about composing multimodally.
Limitations & risks
We acknowledge that the Filter Bubble Narrative comes with certain limitations and risks. First, while we suggest that this assignment and its scaffolding may offer potential pathways for students to develop critical digital literacies that may result in further awareness of, and even resistance to, forms of soft surveillance, we are also aware that those practices may be ultimately out of reach. After all, as various scholars discussed above have noted (see Beck 2017; Gilliard 2019; Noble 2018), social media platforms frequently take action to purposefully obscure their very mechanisms of surveillance, which is part of the process of softening resistance (Lyon 2006; 2017; Marx 2003; 2015). Without careful critical attention to such processes, instructors and students may be misled into seeing this assignment as a transaction of skills sufficient to resist all forms of soft surveillance. While students may become more aware of and deliberate about how they perceive or interact with their filter bubble, this does not render the surveillance, or those who conduct it, inert.
Second, some students may be unable or unwilling to draw on their own social media use for this assignment. As we mentioned in an earlier section, not all students engage with social media, and others may have broader concerns with privacy. After all, parts of the assignment and its scaffolding, as described above, ask students to disclose information about their own social media use—information they may wish to keep private from their instructors and classmates. Students should therefore be reminded that they do not have to disclose any information they do not wish to, and they should be guided through alternative assignment designs (e.g., fictionalizing their filter bubble contents).
We’ve offered the Filter Bubble unit as one way to smooth the journey from an instructor’s critical digital pedagogy to students’ critical digital literacies. Rather than merely sketching this assignment for Journal of Interactive Technology and Pedagogy readers, we wanted to offer a student-directed deliverable, an assignment sheet (see Appendix A), as a way to recognize that “documents do things,” as Judith Enriquez (2020) argued in “The Documents We Teach By.” The things that documents do are many and varied. Our teaching materials are a material representation of our teaching and learning values and of our identities as critical digital pedagogues. And, perhaps most importantly, they have rhetorical effects on our students. Thus, it’s important that we offer student-centered instantiations of critical digital pedagogy along with scholarly-ish prose aimed at other teacher-scholars. Moreover, as students engage with this assignment, we hope to be able to offer information about its efficacy with regard to critical digital literacies. Further, student reflections about this assignment are needed and forthcoming, as are notes about alterations we’ll make based on student-instructor collaborations.
In closing, just as we must look at technologies instead of through them in order to perceive soft surveillance and engender critical digital literacies, we must do the same with our teaching documents (Enriquez 2020). We hope that our Filter Bubble Narrative deliverable is a teaching and learning document that instructors can critically look at in order to consider ways to work together with students to reassemble a richer and more critical understanding of online identities within our algorithmically curated social media news feeds. Beyond understanding, we also hope that teachers and students will act to mitigate soft surveillance and filter bubble effects and to become ethical agents with (and even developers of) algorithmic technologies.
Alexander, Jonathan, and Jacqueline Rhodes. 2014. On Multimodality: New Media in Composition Studies. Urbana: Conference on College Composition and Communication/National Council of Teachers of English.
Atteneder, Helena, and Bernhard Collini-Nocker. 2020. “Under Control: Audio/Video Conferencing Systems Feed ‘Surveillance Capitalism’ with Students’ Data.” In 2020 13th CMI Conference on Cybersecurity and Privacy (CMI) – Digital Transformation – Potentials and Challenges (51275), 1–7. https://doi.org/10.1109/CMI51275.2020.9322736.
Beck, Estee. 2017. “Sustaining Critical Literacies in the Digital Information Age: The Rhetoric of Sharing, Prosumerism, and Digital Algorithmic Surveillance.” In Social Writing/Social Media: Publics, Presentations, and Pedagogies, edited by Douglas Walls and Stephanie Vie, 37–51. Fort Collins: The WAC Clearinghouse and University Press of Colorado.
Haas, Christina. 1996. Writing Technology: Studies on the Materiality of Literacy. Mahwah: L. Erlbaum Associates.
Hart-Davidson, William. 2017. “Availability Matters (and So Does This Book): A Foreword.” In Social Writing/Social Media: Publics, Presentations, and Pedagogies, edited by Douglas Walls and Stephanie Vie, ix–xiii. Fort Collins: The WAC Clearinghouse and University Press of Colorado.
Hayles, Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
Hyslop-Margison, Emery, and Ramonia Rochester. 2016. “Assessment or Surveillance? Panopticism and Higher Education.” Philosophical Inquiry in Education 24, no. 1: 102–109.
Jenkins, Henry. 2006. Convergence Culture: Where Old and New Media Collide. New York: New York University Press.
Nguyen, Tien T., Pik-Mai Hui, F. Maxwell Harper, Loren Terveen, and Joseph A. Konstan. 2014. “Exploring the Filter Bubble: The Effect of Using Recommender Systems on Content Diversity.” In Proceedings of the 23rd International Conference on World Wide Web, 677–86.
Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
O’Hara, Kenton, Alex Taylor, William Newman, and Abigail J. Sellen. 2002. “Understanding the Materiality of Writing from Multiple Sources.” International Journal of Human-Computer Studies 56, no. 3: 269–305. https://doi.org/10.1006/ijhc.2001.0525.
Palmeri, Jason. 2012. Remixing Composition: A History of Multimodal Writing Pedagogy. Carbondale: Southern Illinois University Press.
Pariser, Eli. 2012. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. New York: Penguin Publishing Group.
Rheingold, Howard. 2012. “Participative Pedagogy for a Literacy of Literacies.” In The Participatory Cultures Handbook, edited by A. Delwiche & J. J. Henderson, 215–19. London: Routledge.
Selber, Stuart A. 2004. Multiliteracies for a Digital Age. Carbondale: Southern Illinois University Press.
Selfe, Cynthia L. 1999. “Technology and Literacy: A Story about the Perils of Not Paying Attention.” College Composition and Communication 50, no. 3: 411–36. https://doi.org/10.2307/358859.
Selfe, Cynthia L., and Gail E. Hawisher. 2004. Literate Lives in the Information Age: Narratives of Literacy from the United States. Mahwah: Lawrence Erlbaum Associates.
In “Social Media: Crash Course Navigating Digital Information,” host John Green says filter bubbles mean “we are surrounded by voices we already know and [are] unable to hear from those we don’t” (8:36). We can also think of filter bubbles as echo chambers that reverberate our existing beliefs, tastes, and attitudes.
Let’s read just a bit more about filter bubbles on Wikipedia, which is a solid site for general, introductory information about almost anything. Please skim this article now: Wikipedia on Filter bubbles.
Next, please watch the following TED talk by Eli Pariser, who coined the term “filter bubble”: Beware Online Filter Bubbles. It’s about 9 minutes long.
Whaddya think? Pariser defines the term “filter bubble” like this: “your filter bubble is your own personal, unique universe of information that you live in online. And what’s in your filter bubble depends on who you are, and it depends on what you do. But the thing is that you don’t decide what gets in. And more importantly, you don’t actually see what gets edited out” (4:06). Additionally, Pariser offers a visual depiction of filter bubbles (at 4:33). Here, the media corporations around the circle are curating, or selecting, what information you encounter on your social media feeds. You see only what’s inside as you passively scroll and click. You’re in a filter bubble. This is in contrast to all the information that you could see on the Web, as represented by the colorful circles that lie outside of the algorithms’ restrictive membrane. Since your filter bubble is unique to you, and created based on your clicking, buying, and browsing data, we might say that it represents part of who you are, part of your identity, both online and offline.
For example, when John Green illustrates his otherwise invisible filter bubble (12:15), we see a particular collection of activities, topics, beliefs, and values; we see parts of his identity (See Figure 1 below).
The algorithms running behind Green’s social media feeds personalize his online experience so that the advertising, news stories, and shared content Green encounters hold his attention, a valuable commodity for advertisers and groups or corporations pushing particular angles. I wonder, what’s in your filter bubble? And how does what’s in there represent who you are, your identity, both online and off?
Further, what might you do, as Eli Pariser and John Green both mention in their respective videos, to affect what’s in your bubble in ways that help you move toward your best future self, the aspirational version of yourself (5:12), instead of in ways that reinforce your “more impulsive, present selves” (5:15)? The goal of this project is to investigate and tell the story of your filter bubble as a representation of your identity and to reflect (and maybe act) upon what you find.
Your Filter Bubble Narrative should tell the story of your filter bubble as a reflection of your identity, both online and off. In composing this story, you should
Describe what’s in your filter bubble and how that’s connected to your interests, values, and beliefs on and offline (or not);
Discuss how you feel about algorithmic personalization, in general, and your specific filter bubble as a representation of your identity;
Sketch out what, if anything, you might do in the future to affect what’s in your filter bubble and/or how you might “ethically” and “effectively” (Carter and Matzke 2017, 321) navigate what’s in there using the strategies Green and Pariser discuss in their videos, as well other strategies you use or have heard about.
You’ll need to make this story multimodal, which means that in addition to alphabetic writing, you should use at least one other mode of communication. For example, you might communicate using images, video, and/or sound. You can create these texts yourself or use (and cite) items from the Web or elsewhere. Please include at least 500 words of written text and at least 3 visual or audio elements. As for the audience and genre, you have some flexibility here. You might want to write your piece for an undergraduate publication like Young Scholars in Writing or Stylus, UCF’s journal of first-year writing. Alternatively, you might write for Medium, a web-based publishing platform where your piece might be tagged #technology #digitalliteracy #self. Or maybe you’re thinking of starting your own blog and this could be your first entry. In any case, you want to consider the audience your publication site addresses (beyond your classmates and me) as you compose.
About the Authors
Jessica Kester is a Professor of English in the School of Humanities and Communication and the Quanta-Honors College at Daytona State College (DSC). She also co-founded and coordinated a Writing Across the Curriculum and Writing in the Disciplines program (WAC/WID) at DSC from 2013 until 2019. Her work has previously appeared in Across the Disciplines and Currents in Teaching and Learning.
Joel Schneier is a Lecturer and Composition Coordinator at the University of Central Florida in the Department of Writing & Rhetoric. He earned a PhD in Communication, Rhetoric, & Digital Media from North Carolina State University in 2019. His research focuses on the intersections of digital literacies, mobile communication, writing, and sociolinguistics, and he has published in Frontiers in Artificial Intelligence, New Media & Society, and Mobile Media & Communication, among others.