In a time of global crisis, universities have struggled to preserve academic integrity, and have turned to software companies for a solution. As a result, students are left to decide what they value more—their education, or their privacy.
As coronavirus case counts rose, universities around the world shut their doors and prepared to navigate uncharted waters. During this time, many educators and administrators had one main problem on their minds: “How do we prevent cheating?”
This is an interesting question, yet I can’t help but wonder in response, “Why have institutions assumed the worst of their students? Why do they assume the group of young adults they selected by hand would value their education so little that they would exploit the online environment to cheat? Why aren’t these institutions asking how they can best support their students through a time of global uncertainty?” And moreover, “Why have institutions decided to turn to software companies to address this pedagogical problem?”
During the pandemic, I’ve witnessed educators struggle to adjust to teaching remotely. Some have been open about how this has been a dramatic shift for everyone and that we need to be compassionate with ourselves and others in these trying times. Others have approached online education with an expectation that they will be operating “business as usual.” This expectation has fallen flat in countless ways, eroding the relationships that educators and students have worked so hard to cultivate.
Additionally, when universities and professors mandate the use of online proctoring services, they leave their students in the midst of a moral and financial quandary: “Should I allow my privacy to be violated in order to complete a course that may be required for my degree?” and, arguably more important, “Do I have any other choice?” Backed into a corner and knee-deep in loans, students typically have no alternative if a professor requires the use of online surveillance tech. This is problematic for a number of reasons.
Throughout my education, I’ve developed some habits to cope with my disabilities. I sit in the front row, so I won’t become distracted or overstimulated by the activities of other students. I take handwritten notes, so that my migraines aren’t triggered by staring at a computer screen all day. I take my exams in the distraction-reduced testing room in the Disability Resource Center. I don’t complete assignments in my bedroom, in an attempt to prevent my work from worsening my insomnia.
In my first online class session in March 2020, it became abundantly clear that these coping strategies had become obsolete overnight, when my professor simply said, “I expect everyone to have their cameras on at all times during class, so that I know you’re paying attention.” This request seemed innocent enough, but as the video feeds of ninety students flicked on one by one, I felt utterly overwhelmed.
I was frustrated because if I continued learning the way I knew worked for me, my professor, based on his view from my camera, would assume I wasn’t paying attention. I was listening and looking down at my notebook, taking handwritten notes for fear of becoming distracted and consequently missing important material. Knowing that I would have to sit through four hours of back-to-back lectures, I would avoid looking at my computer screen to prevent myself from developing a migraine and from becoming overstimulated by the view of all my peers—something I never had to deal with sitting in the front row. Sure enough, I was accused of looking at my phone during class. Of course, I corrected my professor and explained the situation, but it was appalling to me that doing what was necessary for me to learn was even a topic of discussion, especially since these same behaviors would never have been questioned in an in-person environment.
As the pandemic persisted and exams came around, it became clear that problems like these were just the tip of the iceberg. They often stemmed from the criteria that remote proctoring services employ in a shallow attempt to identify cheating and protect academic integrity. The problem is further exacerbated by the fact that these services were not developed for a diverse student body with a diverse set of needs. The student body is not made up exclusively of the wealthy, white, and privileged. Yet this is the demographic that online proctoring services and surveillance technologies have been developed to cater to, and students find themselves having to justify their own disabilities, living situations, and financial standings to professors in ways they would never be expected to in person.
Students today wear many hats and typically are not just academics and scholars focusing solely on education. Some are single mothers who can’t afford to send their toddlers to daycare because they were laid off at the start of the pandemic. So they are flagged as cheating by their online proctoring service because they had to step away from their computer to comfort a crying child during their exam. Some students live in multi-family households and are flagged as cheating because they can’t control how loud their environment is or how many people are walking around in the background. Some have ADHD and get flagged for cheating because they fidget too much during testing. Some students of color are flagged simply for having a darker skin tone, which the software struggles to identify. The list goes on.
As if it weren’t difficult enough to focus on an exam in one’s home, it’s insulting that students are frequently accused of cheating simply for doing their best to cope with an online environment. The added anxiety of being watched, not even by a human proctor but by a computer, takes a significant toll on students’ mental health in a time of global trauma: students worry that every yawn, itch, readjustment, and eye movement will be one too many, jeopardizing their academic careers.
But let’s take a step back. Even in on-campus learning situations, students are supervised. From security cameras, to campus police, to ID cards, to professors and administration keeping a watchful eye over the student body—students are under constant surveillance. So why is this any different? Why are students causing such a fuss when it comes to surveillance in an online environment?
I believe the largest distinction is between our public and private worlds. There are very few public spaces nowadays where you aren’t under at least one form of surveillance, and while we may not like it, there are benefits that balance it out, such as access to public facilities like gyms, libraries, or universities. Meanwhile, our homes are our private worlds, where we don’t have to worry about being watched or judged and can truly relax. So when universities attempt to mandate surveillance technology in our own homes under the guise of academic integrity, the tradeoff no longer seems worth it, and the intrusion feels especially acute. Students gain no safety or facility benefits, yet are expected to subject themselves to surveillance in their own private worlds on the grounds that they are not trusted.
To exacerbate the problem, universities and software companies are often insufficiently transparent about how student data is used and stored. This leaves students feeling even more vulnerable when they are already being stripped of their privacy. From a very young age, we are taught not to talk to strangers. If a stranger approached me and insisted on being let into my home to take pictures and record audio for an undisclosed purpose, I would be incredibly apprehensive. I feel the same way about online proctoring, because really, it’s no different. There is no guarantee that our data will only be seen by our professors, and no way to know whether it has been. It’s incredibly unnerving.
In that same vein, while it’s not as appalling as online proctoring, professors requiring the use of video cameras in online courses still grants them access to our private worlds. It’s nice to see everyone’s faces and feel a sense of community among peers, but it’s negligent to assume this doesn’t alter the professional relationship between professors and students. Over the course of the pandemic, I’ve felt a sense of familiarity with my professors that I never had in person, and I think that comes from us having some access to each other’s private worlds. We can virtually meet each other’s pets and see each other’s family members walking around. This kind of exposure leads to a much more casual learning environment, which isn’t necessarily a positive or negative thing, though I’m sure everyone has their own preference. What I do find problematic is when students are expected to have their cameras on, increasing their vulnerability and familiarity, while the strict, professional environment of campus remains. When this occurs, the student-professor relationship feels completely lost, because there is a lack of trust about what happens when the cameras are off.
I’m of the opinion that students should decide whether or not to use their cameras. To ease concerns of cheating, professors could assign papers instead of tests and reflections instead of quizzes when possible. At the very least, universities should provide in-person, socially distant proctoring for students who don’t feel comfortable using online proctoring software. For thousands of dollars per semester, students should not be forced to sacrifice the privacy of their own homes just to get their education.
About the Author
Sinéad Doyle is a student pursuing her Bachelor of Arts in Media Studies and Production at Temple University in Philadelphia. She is currently the post-production producer and editing coordinator for Queer Temple, a television program about the LGBTQIA+ community in Philadelphia. Sinéad has extensive experience in podcast production as the voiceover artist, assistant producer, and assistant editor for The Quell Foundation’s Lift the Mask—Voices of Heroes in the Silent Pandemic, a limited-series podcast covering mental health amongst frontline healthcare professionals during the COVID-19 pandemic. She also has experience with social media development and copywriting.
This piece reflects on the asymmetrical power relationship between students and instructors in any given learning management system by considering what it would look like if students had the same level of data access as instructors, and how that might change instructor practices. The piece also explores how the author, as an instructional designer and instructor, has perpetuated some of the more problematic LMS practices when it comes to data tracking. Finally, the article proposes that it is in higher education’s best interest to rethink LMSs by rethinking access, giving students more control over their data as one means of pushing back against the overabundance of surveillance in modern society.
Dear Dean Cobblepot,
I just received my grade for last week’s assignment for Professor Crane’s psychology course. The LMS reports that he spent 3 minutes grading me and that I earned a 64%. The LMS also shows that he spent an average of 10 minutes per assignment, but I somehow only got 3 minutes. I also noticed that he spent 30 seconds looking at my discussion post, even though it was 500 words—meanwhile, he spent an average of 3 minutes reading per post. Finally, I saw that while grading my assignment (for all of 3 minutes) he also spent time in other browser tabs, including LabDepot.com and SpiritHalloween.com. Given all of this, I’m concerned about how adequately Professor Crane is treating me and my work in his course. I have attached a full report of all his interactions with my data in the LMS for further clarification.
Thank you
Barbara G.
If that email feels uncomfortable or unfair or makes you wonder if a student can understand the idiosyncrasies of grading enough to know that time-on-task does not equate with quality, then we should ask whether the reverse is equitable and fair. After all, what assumptions, misdirections, and conclusions are we drawing when we attribute meaning to our students via their data profiles (two separate and distinct things) in a learning management system (LMS)?
Many of us are worried and uncertain about the degree to which Facebook, Google, and numerous companies we don’t even know are wheeling and dealing in our data. These companies use this data to manipulate us by controlling what we see, what we have access to, and what those in our network experience while also, at times, ignoring how these tools can amplify the worst in us. However, if we are frustrated and angered by these practices, then we must also reflect on just how much we perpetuate those very same practices by ignoring or even encouraging the use of data gathering and tracking in our institutional systems in general and our digital learning environments more specifically.
Just as Facebook’s data-tracking can supposedly help us see our friends, data tracking in digital learning environments supposedly helps us see what students are doing. But how accurate is it, and how often do we mistake the data for the person? Of course, we need some data to inform research, clarify our understanding of the world, and follow student progress, but that data is often obtained with direct consent and with clarity about how it will be used. Can any instructor genuinely promise that the data their students are generating within these LMSs and through third-party vendors such as e-textbook sites was obtained with clear consent and used in transparent and specific ways that students agreed to?
When I started writing this piece, it was late 2019 and Instructure, known for its LMS, Canvas, was in the process of being sold to Bravo, a private equity group. People in the industry such as Phil Hill and Michael Feldstein saw the purchase as a data grab, with many wondering what kind of behavioral surplus of student data would be used in the next iterations of edtech quackery. When I returned to this piece after seeing the call for this journal issue, I had the recent Dartmouth Medical School academic honesty scandal on my mind. As I finished writing this piece in early June, I read that Dartmouth had dropped its accusations against seventeen students for cheating. The accusations stemmed from Canvas data logs that administrators interpreted as evidence of academic dishonesty. Students were informed of the academic charges and given 48 hours to respond; while the institution had full access to their data logs, the students themselves were denied access when trying to prove their innocence.
The power imbalances embedded in technology, between students and instructors and between instructors and institutions, were already concerning prior to COVID. But, like many things, the pandemic increased that imbalance through a form of disaster capitalism that had much of higher education more worried about preventing cheating (whatever we mean by that) than caring for and supporting students during a time of intense stress and uncertainty. We frame the LMS as our “virtual classroom,” and yet it allows us glimpses into our students’ lives that would be unforgivable violations of privacy were we to do the same thing in the classroom.
I’m only slightly amazed at how unquestioningly I, like those administrators and educators at Dartmouth, took to using the LMS to watch, judge, and control my students, and how easy it was to justify my decision. The last couple of years have given me much to think about in terms of how my role as instructor and instructional designer helped to perpetuate some of these imbalances. I have uncritically encouraged the use of learning management systems at colleges and universities, and I have leveraged the LMS as a tool of power rather than one of learning; now I am left wondering how well it can be used for learning without the widespread potential for its use as a tool of power.
I have routinely failed, yet still aspire to pass, the Jesse Stommel 4-word pedagogy test: “Start by trusting students.” I own that a good portion of that failure is the reproduction of harm that I inherited. Yet what the LMS has offered to me and many others is a Faustian bargain that promises efficiency and productivity at the cost of respect and privacy. Such problems and limitations are often left to instructors to discover and advocate against, as institutions and administrators rarely commit to a large financial investment such as an LMS and say, “by the way, there are problems.” They are too often pulled by the demands of internal and external entities to create proof of teaching and learning—the kind of proof that comes in the form of charts and numbers, the kind of proof that LMSs are nearly perfectly engineered to create.
The opportunities for privacy creep afforded by an LMS are hard to resist. For me, that creep manifested in a desire not just to verify my students’ statements, but to trap them. I could ask, “Who has done the reading that I put up on Blackboard?” and, as they raise their hands, I could ask, “Are you sure that’s your answer?” I could then pull up a report and reveal “the truth” of their statements: “How could you read something if you haven’t accessed it on Blackboard?” The real “truth” is that there are many ways to effectively answer that, but my ego and the lure of “evidence” would keep me from seeing them. And that’s the trick of technology—it gives us data or “hard facts” that undermine and erase the messiness that is humans and learning. It presents a decontextualized “objective fact” that allows us to be productively punitive rather than intentionally curious about understanding what prevents our students from doing the things we ask of them.
There are many ways we can use an LMS to check on and control our students. For example, we can keep information and knowledge from them until we decide they are ready to go forward; we can see how much time they spend in different spaces in the LMS, and watch every click of their mouse. With each of these, we can make myriad inferences about what they are or aren’t doing. Yet, often, when we’re at that point, we’re looking for “a reason” to explain something we believe to be true (e.g. plagiarism, cheating, not doing the reading, not working “hard” enough—whatever we mean by that).
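To make the inference problem concrete, consider a minimal, hypothetical sketch of how a “time spent” report might be derived from raw click logs. This is only an illustration: the event names and data are invented, and no particular LMS’s actual reporting code is implied. The naive logic treats the gap between consecutive clicks as time “in” a space:

```python
from datetime import datetime

# Hypothetical click log for one student (all names and times invented).
events = [
    ("2021-03-01 09:00:00", "course_home"),
    ("2021-03-01 09:01:00", "week_3_reading"),
    ("2021-03-01 09:02:00", "discussion_board"),
    ("2021-03-01 10:02:00", "logout"),
]

FMT = "%Y-%m-%d %H:%M:%S"

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two timestamps."""
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    return delta.total_seconds() / 60

# Naive inference: time "in" a space = gap until the next click.
for (t1, page), (t2, _) in zip(events, events[1:]):
    print(f"{page}: {minutes_between(t1, t2):.0f} min")

# This prints 1 min for the reading and 60 min for the discussion board.
# But the student may have printed the reading and studied it for an
# hour, or simply left the tab open: the log records clicks, not
# attention, effort, or learning.
```

The arithmetic is trivial; the meaning we attach to it is not.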
So many of these features are easily accessed—simple buttons or sections labeled “Reports.” There are no secret levels to navigate to or special requests to unlock; these reports are part and parcel of the features we are encouraged to use in our LMSs. There are no prompts that ask us if tracking students is something we should reconsider; nothing to make us pause and question our motives. We are compelled to produce reports about students’ work that we wouldn’t produce in the physical classroom, at least not without significant invasion of privacy.
On the other hand, students have no option for privacy in these environments—like elsewhere in the technological landscape of higher education, they are not individuals with agency, they are data. They are stuck with how the instructor chooses to organize the course in the LMS, and they have no control over how they are represented in the data.
I wonder why institutions would willingly encourage near-unquestioning authoritative power for instructors over students’ actions in their LMSs that they would not allow in the classroom. Can we even imagine a physical classroom where these things occur? An instructor looks over the shoulders of students to make sure they spend the appropriate amount of time on each page. Another times each student with a stopwatch to track all sorts of things they do with the learning materials. One professor hooks students up to an eye-tracking device so they can see exactly what each student sees. The level of tracking allowed, and how instructors or institutions can leverage it, should raise reasonable concerns, starting with how we think of the LMS. Thinking of an LMS as a “virtual classroom” obscures the level of surveillance and control an LMS affords us.
I would like to propose a new rule for all LMSs: equal transparency at all levels of use. For everything an instructor can see about a student, the student should be able to see about the instructor. But let us not stop there—students should be able to see and instantly call up reports about anyone within the organization who has come into contact with their profile or data.
Why do I propose this radical transparency? Because we have to be better. If we are individually worried about the state of digital technology in the world at large and how much of our data is controlled by others, we have to show students there are other ways and that they deserve better; they deserve to be citizens in and not objects of the LMS.
Would instructors change their practices and actions if students could track them? If students could see how much time instructors spend in LMSs, and even how much time an instructor spends on their work, it’s not hard to imagine the concerns they would raise with instructors and the administration. If emails like the one that starts this piece became more common, would that change how instructors act? If being held to the same standards of scrutiny as students would change instructors’ behaviors, then we have to ask whether what we are doing is right. It’s easy to imagine, then, that faculty would start to game the system in ways that are eerily similar to what our students do when we are fixated on making them jump through our hoops.
As a rule, it’s not practical or even possible at most institutions with their current LMSs. But as a mental exercise for instructors and administrators to think through when exploring the data and reporting features of LMSs, it has merit. There seems to be an inevitability to the data mining and hoarding that institutions and third-party vendors are conducting.
On one side, the tendrils of surveillance capitalism permeate nearly any digital software being sold to institutions, whether focused on applicants, students, alumni, faculty, or staff. On the other side, boards of directors, accrediting agencies, and public agencies demand more complex proof that an institution does what it claims.
Instructors, instructional designers, and to a degree even administrators are limited in how much say or sway they have in this data grab. But we can be more intentional in how we use student data and more forthcoming in letting students know that it can be used in ways beyond our individual control. We can find opportunities to push back, or at least to make known the problems and limitations of using machines as oracles to divine the truth of our students’ minds.
All technologies have tradeoffs—I certainly recognize that fact—and yet I have to wonder how we can mitigate the tradeoff of treating students (and instructors, for that matter) as data in this evolving technological landscape. This ongoing conversation should encourage instructors and administrators to consider how an LMS acquires its data on students, and whether it is in students’ long-term interest to create that data—or, more problematically, to have third-party vendors create, maintain, and use it—without any responsibility to students or any ability for students to control their own data.
Lance Eaton is the Director of Digital Pedagogy at College Unbound, a part-time instructor at North Shore Community College and Southern New Hampshire University, and a PhD student at the University of Massachusetts Boston, with a dissertation focusing on how scholars engage in academic piracy. He has given talks, written, and presented at conferences on open access, academic piracy, open pedagogy, hybrid flexible learning, and digital service-learning. His musings, reflections, and ramblings can be found on his blog, http://www.ByAnyOtherNerd.com, and on Twitter: @leaton01.
I report on administrators’ surveillance of online courses at CUNY’s School of Professional Studies. Department Chairs and course observers have been given the ability to examine the whole body of work uploaded to a faculty member’s Blackboard site, including grading, student work, and timeliness, in some cases for the entire semester. The bond of trust between faculty and student is also compromised by this third-party scrutiny of potentially personal material. Faculty unions and governance bodies need to identify similar surveillance on their own campuses and demand limitations on constant scrutiny. Unfortunately, students will find it harder to assert privacy rights.
The COVID crisis vastly expanded the number of online courses at the City University of New York, and many of those courses will stay online in the future. Faculty, who generally had taught their courses with minimal scrutiny from their supervisors, have had to establish comprehensive Blackboard sites which are potentially open to a high degree of surveillance by Department Chairs and higher-level administrators. Students, who were previously “surveilled” only by their professor, can now be tracked by many others with administrative access. The union that represents both tenure-track and contingent CUNY faculty has been attempting to address that problem, with mixed success. Contractual safeguards were negotiated in 2019, but policing that language across twenty-two campuses and hundreds of academic departments and programs is difficult. Here, I report on one campus that has vastly expanded surveillance, and on the efforts of the author and the union he represents to bring those practices to a halt.
The School of Professional Studies is an unusual CUNY campus, housed on leased floors in two office buildings in midtown Manhattan. The vast majority of its courses have always been taught online, almost exclusively by contingent “part-time” faculty with few job security protections.[1] A small number are guaranteed a minimum of two courses a semester for three-year blocks. All the rest are reappointed either yearly or semester-to-semester. Thus, the threat of losing one’s job for any reason (or for no reason) is very present, and the dangers inherent in surveillance serve to increase that threat.
Unlike tenure-track faculty, who are also evaluated on their research, publications, and service obligations, the work of contingent “Adjunct” faculty is assessed almost exclusively on the quality of their teaching.[2] At CUNY, in a traditional classroom setting, a contractually stipulated protocol mandates a single classroom observation each semester by a designated observer, followed by a “post-observation conference,” which usually also discusses the overall goals of the course; subsequently, a written report is sent to the Department Chair, who is responsible for hiring and firing. In addition, informal discussions between the Department Chair and Adjunct during the semester might add to the overall evaluation.
But that observation can look quite different in an online setting, whether the class is taught “synchronously”—with real-time interactions between faculty and students—or asynchronously, perhaps with taped lectures and certainly a profusion of online assignments. That’s because the observer—and potentially, any supervisor or administrator—has the ability to examine the whole body of work uploaded to the faculty member’s Blackboard site. That might include, for example, an entire semester’s lectures, all the assignments, and all test and quiz questions. All responses to student work, grading schemes, speed of grading, assignment grades, and the number of students current on their work, falling behind, or failing the course are also evident in perusing a Blackboard site.
My union, the Professional Staff Congress-AFT, fortunately attempted to address the proliferation of online teaching before the pandemic. In 2019, contract negotiations with CUNY resulted in protocols for online observations. At their heart was the directive that “For teaching observations of online or partially online courses, the parties intend to replicate as closely as possible the longstanding teaching observation practices established pursuant to this Agreement”—that is, to follow the prior procedures for in-class observations. More specifically, for synchronous courses, the observer would only have access to the course for the duration of one class period. For asynchronous courses, the observer would be granted 48-hour access. Crucially, in either case, the observer would only have “limited access to the course platform, usually defined as ‘student’ or ‘guest’ access but in no event ‘instructor’ or ‘administrator’ access” (“Memorandum”). That is, the observer could only see what a typical student could see—perhaps that week’s lecture, assignment, and quiz—not the whole semester’s worth of postings, and certainly not all the management of the course, the progress of students, or student posts. That all seems clear enough.
This type of contract language seems like a minimum necessary protection of faculty, and should be a standard protocol in all online courses.
This language protects student privacy as well. Students know that the professor, and no one else, has access to their work. When the observer “enters” the online space, students are informed of the duration of that access and its limitations, just as they are implicitly informed when an observer walks into a physical classroom. When sensitive topics are broached, students should have control over, and knowledge of, whom they are talking to. In a classroom setting, an observer only sees homework being handed back to students; they shouldn’t be able to read that homework when a class moves online.
However, SPS has utilized—in the union’s view, in violation of the contract—two loopholes to enable complete scrutiny and surveillance. First, it has given all observers “Grader” status. SPS has justified this procedure by noting that the only designations specifically prohibited are “Instructor” and “Administrator.” That is true; but “Graders” do not have “limited access.” Rather, they have the same “view” access as instructors themselves; the main difference is that a Grader cannot change or edit the existing Blackboard posts or change the course controls. So, an SPS observer of an asynchronous course has forty-eight hours to fully examine every aspect of a Blackboard site. The observer also has the ability to read all student posts and homework, thus breaching the implicit bond of student-faculty trust.
As SPS has itself conceded, there are “more than 40 roles available in Blackboard.” Clearly the contract could not have enumerated them all, or any new ones that Blackboard might unilaterally create between rounds of contract negotiations. But SPS asserts the right to act unilaterally—and, indeed, this is the unfortunate state of labor-management relations in the United States, even when a union contract is in place: management plays offense, the union defense. Because we must now file a contractual grievance and wait (perhaps a year) for a hearing in front of a neutral third party, our proposed remedy is broad. In our grievance, alongside asking for an end to the practice, we assert that SPS has so poisoned the observation process that any firings (or “non-reappointments,” as they are called in the oh-so-polite parlance of academia) are inevitably tainted and must be reversed.[3] Sometime in 2022, we will see if an arbitrator agrees and orders an end to SPS’s overly ambitious interpretation of the contract language.
Second, and even more pernicious in our view, SPS has granted its Academic Directors (ADs)—the equivalent of Department Chairs—semester-long “Instructor” status. The stated reason is that, in extremis—say, if the faculty member fell deathly ill—an AD might need to communicate with the students directly. That is certainly a worthy goal, easily addressed in the moment of extremity. Instead, the AD can look into the class all semester, without the faculty member—or the students—even knowing when and what is being perused. Forty-eight hours of unlimited monitoring are bad enough; here, the AD can watch the class unfold over the entire semester. Our grievance remedy is thus the same, but this does not address potential harm to students. Everything they write, no matter how confidential the subject matter—sexual orientation, criminal acts as either victim or perpetrator, immigration status, political beliefs, etc.—is now subject to third-party scrutiny. Alternatively, if that potential invasion of privacy is revealed at the beginning of the semester, they may feel the need to limit their speech as a matter of self-protection.
Even with the COVID pandemic on the wane, online classes will likely persist, creating openings for surveillance of course sites that threatens the academic freedom and contractual rights of faculty (particularly contingent faculty) and potentially violates the privacy rights of students. Addressing these threats will require unions to bargain collectively for strong and clear contractual language regulating the process of online course observations, and to police those safeguards on an ongoing basis. Meanwhile, faculty must be educated about the risks of inappropriate surveillance of their courses and how to identify it in their learning management systems, and must stand ready to assert their rights to fair evaluations of online teaching. For students, however, and in a non-union environment, all these tasks are more difficult, and the effects of surveillance even more pernicious.
Notes
[1] “Part-time” is CUNY’s contractual term for all non-tenure-track faculty. Some teach only one or two courses a year. Others have a credit load higher than “full-time” tenure-track faculty. See CBA Article 15.2, “Workload for part-time members of the Instructional Staff” and “Additional Side Letters and Agreements: Teaching Load Reduction Agreement.” For contingency, see CBA Article 9, “Appointment and Reappointment” and Article 10, “Schedule for Notification of Reappointment and Non-Reappointment.” All at https://psc-cuny.org/cuny-contract.
[2] Tenure-track faculty are contractually mandated to receive an annual evaluation which includes assessment of “Classroom instruction and related activities; Administrative assignments; Research; Scholarly writing; Departmental, college and university assignments; Student guidance; Course and curricula development; Creative works in individual’s discipline; Public and professional activities in field of specialty” (“CUNY Contract” Article 18). Only a small subset of Adjuncts receive such an evaluation; their union suggests they should request one if they are eligible for a relatively rare three-year appointment (Clarion Staff). On the other hand, Adjuncts “shall be observed for a full classroom period” once a semester, while tenure-track faculty “may be observed once each semester” (“CUNY Contract” Article 18).
[3] In legal parlance, referring to arrests and prosecutions thrown out if the evidence was based on an illegal search or seizure, this is known as “the fruit of the poisonous tree.”
Marc Kagan is a graduate student in the History Department at CUNY’s Graduate Center, where he is writing his dissertation on NYC’s Transport Workers Union Local 100. He spent two years as a Professional Staff Congress grievance counselor for the Graduate Center, the School of Professional Studies, and several other CUNY campuses.
New technologies are introduced into people’s lives today at a rate unprecedented in human history. The benefits of technologies and the onslaught of corporate messaging can result in a pervasive techno-optimism that leaves people unaware of the downsides or collateral effects of technologies until harms are already done. With the show Black Mirror as muse, we open by imagining the story of Oya, a first-year college student unwittingly trapped by educational “innovations.” After reviewing examples of technological resistance from antiquity to Black women scholars today, we then propose two activities educators can employ to engage students’ technoskeptical imagining. First, we developed a MadLibs activity that employs play as a means to creatively speculate about technologies. Second, we offer a fill-in-the-blank creative writing activity that builds on the MadLibs activity while giving students more flexibility in crafting their own dystopian stories. We hope this approach and these activities can work toward protecting those who are most vulnerable to the harms of technologies.
Introduction
Meet Oya, a first-year college student at a new venture-capital-backed school located on the campus of Alvara College, a traditional liberal arts college. Oya is not a typical undergraduate student; they have been targeted by Petra Capital’s recruitment team to supplement the traditional demographics of the college’s student body. As part of The Alvara Personalized Experience (TAPE), they live in a dormitory specifically built for students recruited through this special strategy.
Just as Oya settles into their dorm room on the first day, the door opens and a 30-year-old woman begins to move in and unpack her things. Oya learns that their new roommate, Barbara, is an important component of TAPE. Barbara is Oya’s assigned success guardian. In this role, Barbara will observe and document everything Oya does and everywhere they go. Barbara will offer suggestions to Oya about what they can do to improve their college experience, including recommendations about diet, sleep, study habits, time management, and even social opportunities on campus. Oya does not have to follow these prompts, but Barbara will report Oya’s choices to their professors and the financial aid office.
Oya’s story is fictional and may seem outlandish. The idea of a personalized college experience enhanced by a “success guardian” following a young undergraduate student to monitor and report their every action may seem absurdly intrusive and disruptive. However, many schools have deployed surveillance technologies that perform similar functions in the name of student success. Surveillance activities that would feel invasive and even creepy if conducted in person were popularized and normalized by Google and Facebook (Zuboff 2019), and these practices increasingly creep into “smart” technologies (i.e., Internet of Things) and educational technologies. The expanding tentacles of surveillance have only tightened their grip since so many institutions and people were pushed online during the COVID-19 pandemic. As students, workers, and educators become further habituated to these digital systems, it is harder for them to critically evaluate the risks and harms that can come from such “personalization.”
While tech creators make techno-utopian promises about what educational technologies can deliver, legislators and regulators have done little to protect people against their negative effects. Policy and legal reforms around the collection of student data have been proposed—and in some cases already implemented—but as Caines and Glass (2019, 94–95) warned, “While laws and internal policies are critical, they take time to develop, and in that time new models and practices come forward to bypass proposed and existing regulations.” Users of these technologies—including teachers and students—are often left to fend for themselves. Few people will read and interpret Terms of Service (ToS) that are often written to obfuscate more than inform (see, e.g., Lindh and Nolin 2016). Few users of new technologies will research collateral effects. Simply put, the cards are stacked against us.
As a result, educators need pedagogical approaches, tools, and assessments to work alongside students in making decisions about technologies in their individual, civic, and educational lives. In this paper, we discuss the development of two educational activities that use dystopian fiction as a device for helping students develop technoskeptical imaginations.
History
Contemplating and confronting ethical issues around technologies is not new. Humans have long resisted new technologies which they believe impinge on their values, livelihoods, or very lives. Plato wrote of the god Thamus, who evaluated technologies and rejected writing as a technology that would result in a “conceit of wisdom instead of real wisdom” (Postman 1992, 4). The Luddites of nineteenth-century England rejected textile machinery that threatened their craft (Jones 2013). The science fiction genre has long speculated on the possible harms of technologies, and the recent Black Mirror show has offered particularly vivid visions of technological dystopia (Conley and Burroughs 2020; Fiesler 2018). The critique of technologies is not reserved solely to the world of science fiction, but has been taken up by academics as well. For instance, nearly a half-century ago, Bunge (1975) coined the term technoethics in his call for technologists to be more aware of the social implications of their inventions.
The field of technoethics also has a more embodied tradition, grounded in the work of Black feminist scholars who have challenged algorithms of oppression (Noble 2018), discriminatory design (Benjamin 2019), and biased facial recognition (Buolamwini and Gebru 2018) that amplify and sustain anti-Black racism and sexism in society. Amrute (2019) challenged top-down models of technoethics by calling for attunements that attend to techno-affects, centering the bodies and knowledge of those most vulnerable to—or targeted by—technological harm.
An embodied technoethics perspective is particularly critical for our authorship team of four white scholars working from the relative comfort of academic spaces. We recognize that our intersectional positionalities in a sexist, racist, classist, and ableist society require us to listen to, and support, those who may face the disproportionate negative impacts of technologies. Technologies in education, as well as the educational practices surrounding their integration, often uphold whiteness and perpetuate structural injustices (Heath and Segal 2021). How can educators help students see the ways technologies extend, amplify, or create social problems?
As Geraldine Forsberg (2017, 232) argued, “Questions can help break the power that technologies have over us. Questions can help us critique the technological bluffs that are being communicated through advertisements, political and scientific discourse and education.” Building on the work already done in the field, three authors of this paper (Krutka, Heath, and Staudt Willet 2019) proposed technoethical questions that educational technology scholars and practitioners could use to investigate and interrogate technologies with students:
Was this technology designed ethically and is it used ethically?
Are laws that apply to our use of this technology just?
Does this technology afford or constrain democracy and justice for all people and groups?
Are the ways the developers profit from this technology ethical?
What are unintended and unobvious problems to which this technology might contribute?
In what ways does this technology afford and constrain learning opportunities about technologies?
In the past two years, in collaboration with students in our classes, we have conducted technoethical audits of Google’s suite of apps (Krutka, Smits, and Willhelm 2021), and of educators’ use of Google Classroom during the COVID-19 pandemic (Gleason and Heath 2021). In response to reading the public accounts of this research, Autumm Caines adapted the tool into an online format to help faculty conduct self-directed technoethical audits of educational technologies.
Through sharing our experiences in conducting these technoethical audits, our authorship team eventually agreed that asking these technoethical questions of students did not always generate the deep, critical thinking about technologies we sought. These uneven results may partially be attributed to the techno-optimism (Postman 1992) and technocentrism (Papert 1988) that are pervasive in the U.S. We therefore sought out other approaches that could challenge students and teachers to confront such narratives of technological progress.
Dystopian Storytelling about Technology
Building on our technoethical questions and with Black Mirror as our muse, we sought to identify activities that might more readily spur students’ technoskeptical imaginations. The show Black Mirror is a “sci-fi anthology series [that] explores a twisted, high-tech near-future where humanity’s greatest innovations and darkest instincts collide” (Netflix n.d.). Episodes address technoethical topics in digital censorship, virtual reality gaming, and artificially intelligent toys, among others. In societies where technology is often equated with progress (Benjamin 2019; Jones 2013; Krutka 2018; Postman 1992), Black Mirror disrupts such narratives and creates space to question how technology should be limited or even banned.
Educators have drawn inspiration from Black Mirror, and dystopian fiction more broadly, to develop educational approaches and activities. For instance, Emanuelle Burton, Judy Goldsmith, and Nicholas Mattei (2018) responded to the difficulties of teaching ethics in the computer science curriculum by using science fiction as a powerful pedagogical tool. Casey Fiesler (2018) detailed her use of Black Mirror to help college students “think through different possibilities” for technology in the future. Episodes served as launching points for her students to engage in “creative speculation” about ethical issues that arose from the plots of the shows and to consider existing or possible laws (Fiesler 2018). The Screening Surveillance project (2019) from the Surveillance Studies Center “is a short film series that uses near future fiction storytelling based on research to highlight potential social and privacy issues that arise as a result of big data surveillance.” sava saheli singh, who conceptualized and produced the series, partnered with educators on multiple occasions to incorporate the work of dystopian fiction with the intention of addressing contemporary technoethical issues. From the perspective of the 2040s, Felicitas Macgilchrist, Heidrun Allert, and Anne Bruch (2020, 77) imagined “a kind of social science fiction to speculate on how technology will have been used in schools, and what this means for how future student-subjects will have been addressed in the future past of the 2020s.” This type of imagining played out malignant alternative futures for educational technologies where students would be “smooth users,” “digital nomads,” or ecological humans embedded in “collective agency.”
Here we describe two activities designed for education students, but adaptable for others, that encourage technoskeptical imagination around technologies in general and edtech specifically. This scholarly experiment has proved promising in our initial exploratory teaching.
MadLibs Activity
Building on the work of Krutka, Heath, and Staudt Willet (2019), and seeking ways to encourage educators to engage with technoethical questions, we incorporate a construct of play to inspire technoskeptical imagining. Although technoskeptical thinking can be rewarding, continued consideration of systemic inequities and injustices can be emotionally draining. Play can be a powerful means to disrupt power hierarchies, challenge authority, and encourage agency, particularly for youth whose intersecting identities are marginalized (Yoon 2021).
Through this playful lens, we created a dystopian MadLibs activity (see Table 1). MadLibs is a two-person children’s word game that was traditionally produced in hard copy books and employed a phrasal template. The phrasal template is a story with several missing words that are defined grammatically or descriptively. For instance, a blank (i.e., missing word) could be labeled as needing a verb, noun, or even type of plant to complete the sentence. One player reads out loud the label of the blank and the second player (who cannot see the context of the story) provides answers. These answers are plugged into the story, which results in a funny, amusing, and often absurd tale.
In adapting MadLibs as an educational warm-up activity to spark technoskeptical imaginations, we embraced the notion of absurdity. In preparation, we wrote out the frame of a dystopian story with missing details. However, instead of missing grammatical items, we left blank the specifics of a company or technology, as well as the functions of the technology. We designed the MadLibs activity to be delivered during a synchronous instructional session when the blanks could be crowdsourced from students. The instructor needs to plan for activities in which students can participate for a few minutes while a facilitator plugs the crowdsourced elements into the dystopian story, accounting for verb tense and grammatical flow, and then reads the story aloud to students.
Although the story is written with a more serious and dystopian plot, the final story still contains elements of absurdity, because students did not know the narrative context when they chose the missing elements. The reading of the final, somewhat farcical story can be met with amusement. This levity can then be followed by a more serious discussion where students interrogate connections between the MadLibs story and their lived experiences with technology. As a result, the MadLibs activity is a warm-up to the Fill-in-the-Blank Creative Writing activity where students engage in writing dystopian fiction.
MadLibs Play
Company =
Company slogan =
Group with institutional power (plural) =
Think of what the technology does generally, not just for you, when thinking of these three functions:
Function #1 of technology (beginning with verb ending in “ing”) =
Function #2 of technology (beginning with present tense verb) =
Function #3 of technology (beginning with present tense verb) =
After many controversies where citizens have accused us of doublespeak, [COMPANY] wants to remind you of our mission: [COMPANY SLOGAN]. Some people say that profits get in the way of our mission to make the world a better place. Many critics have called our product a weapon of oppression. Do not listen to these un-American troublemakers who are only jealous of our immense success!
These critics claim that [GROUP WITH INSTITUTIONAL POWER] will use our product to harm those under their control by [FUNCTION #1 OF TECH]. Some critics even say they feel intimidated by the ability of the technology to [FUNCTION #2 OF TECH]. But aren’t [GROUP WITH POWER] also just trying to make the world a better place? Meanwhile, the jealous critics claim that [GROUP WITH POWER] are using the technology to [FUNCTION #3 OF TECH] and that is causing social problems. But come on! Let the free market decide! If people did not love [COMPANY], then we would not be enjoying such incredible success. Technology is progress, and progress is good!
Table 1. MadLibs play.
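For facilitators running the activity online, the mechanical step of plugging crowdsourced answers into the story frame is easy to automate. Below is a minimal sketch, offered only as an illustration: the template excerpt, placeholder names, and sample responses are hypothetical, and a facilitator would still smooth verb tense and grammatical flow by hand, as described above.

```python
# Minimal sketch: filling a phrasal template with crowdsourced elements.
# The excerpt is adapted from Table 1; the placeholder names and sample
# responses below are hypothetical.

TEMPLATE = (
    "{company} wants to remind you of our mission: {slogan}. "
    "Critics claim that {group} will use our product to harm those under "
    "their control by {function_1}. Some even say they feel intimidated "
    "by the ability of the technology to {function_2}. But come on! "
    "Technology is progress, and progress is good!"
)

def fill_template(responses: dict[str, str]) -> str:
    """Plug the crowdsourced elements into the story frame."""
    return TEMPLATE.format(**responses)

print(fill_template({
    "company": "OmniLearn",                       # crowdsourced company
    "slogan": "Connecting every learner, everywhere",
    "group": "school administrators",
    "function_1": "logging every keystroke",
    "function_2": "predict student behavior",
}))
```

Because the contributors never see the surrounding narrative, the assembled story retains the absurdity that makes the warm-up work.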
Fill-in-the-Blank Creative Writing Activity
After completing the MadLibs activity, students are prompted to deepen their technoskeptical imagining by creating and writing their own dystopian fiction. Offering participants a prompt, particularly in a one-off workshop, can provide a provocation to begin a story. To scaffold the activity, we created another phrasal template as part of the design of a Fill-in-the-Blank Creative Writing activity. This activity is facilitated through a series of Google Docs that all students or participants are able to edit directly. The Fill-in-the-Blank activity can be completed individually or in small groups. Like the MadLibs activity, parts of the dystopian story are missing; however, unlike the MadLibs activity, students can see the entire frame of the story. Missing elements, again, are not grammatical in nature but are instead elements of the story, such as the “name of technology/company” and “group with power/group without power.” We recommend students be given free rein in this activity. That is, the use of the phrasal template does not have to be required; rather, it is provided as a prompt as needed. After completing their stories, students are asked to evaluate the narratives they wrote using the analytical tool developed by Krutka, Heath, and Staudt Willet (2019). We envision that this Fill-in-the-Blank Creative Writing activity could also be conducted asynchronously, where students would sit with the prompt (or develop their own) over a longer period of time.
Dystopian Storytelling Activity
Welcome to this semi-true technology dystopia storytelling activity. Dystopian storytelling can help us to imagine some of the harms that technology can bring, while at the same time making it okay for us to embellish a little. If you have watched or read any speculative or science fiction, you know it is best when there are some elements of truth to it – think about your favorite episodes of the show Black Mirror.
Below we have started you off with a dystopian fiction prompt with some elements missing – you will find these missing elements in all caps in brackets. The idea is for you to replace these items as prompted with items of your own devising – which might be true but could also just come from your imagination. For instance, you could replace [TECHNOLOGY] with Facebook, social media, Zoom, or even a toaster, but you should stick with that choice and try to make the story make sense as you continue to write. Feel free to search for technology company websites and steal their rhetoric and the way they talk about themselves for things like the motto or stated intention. If you don’t like the story arc, feel free to change the text – make this story your own.
One note – depending on the technology you choose, the name of the tech may be the same as the name of the company (e.g., Zoom or Facebook), or it could differ; for instance, Google is actually owned by Alphabet. Again, make this story your own, and if little details bog you down, just write them out.
Many people today use [TECHNOLOGY] to [EXPLAIN WHAT TECHNOLOGY ALLOWS PEOPLE TO DO]. It has become very popular and many humans use [TECHNOLOGY]. [COMPANY] even explains that [THE COMPANY MOTTO OR STATED INTENTION]. However, we have come from the future to tell you [TECHNOLOGY] is not a tool, but a weapon intended to hurt people!
We have learned that [GROUP WITH POWER] is using [TECHNOLOGY] to harm [A VULNERABLE GROUP] by [EXPLAIN HOW A GROUP WITH POWER IS USING THE TECHNOLOGY TO HARM A VULNERABLE GROUP]. Beyond these obviously intentional harms, [TECHNOLOGY] is even causing collateral damage that is worsening [NAME SYSTEMIC INEQUALITY OR HARM] by [EXPLAIN HOW IT IS MAKING THAT SYSTEMIC INEQUALITY OR HARM WORSE].
If the use of this technology continues then this could lead to the long-term destruction of [EXPLAIN WHAT COULD BE PERMANENTLY DESTROYED]. [COMPANY] is even trying to trick people into thinking they’re changing their ways by pushing for legislation that [DESCRIBE LAWS THAT ALLOW FOR CONTINUED ABUSE BUT GIVE THE APPEARANCE OF MAKING CHANGE].
And it is all about profits for [COMPANY]! We discovered that they are making money by [EXPLAIN HOW THE COMPANY PROFITS FROM THEIR WEAPON]. They’re also exploiting [NAME GROUP THAT IS EXPLOITED SUCH AS WORKERS OR USERS] by [IDENTIFY ACTION OF TECHNOLOGY THAT CAUSES HARM], and harming the environment by [EXPLAIN HARMS TO ENVIRONMENT]. The consequences are widespread! We hope you can stop the evil use of [TECHNOLOGY] before it’s too late!
Table 2. Dystopian storytelling activity.
Next Steps
Revisiting Oya, envision a scenario in which their experience did not include a human success guardian but instead the surveillance technologies to which many students are already subjected. How might Oya’s situation have been different if they had practiced developing their technoskeptical imagining? Armed with the ability to imagine something more than utopian rhetoric, Oya sees the harmful outcomes that could result from surveillance technologies. Oya is then prepared to ask questions and look for ways to democratize the technology, rather than letting it control them. They ask the stakeholders (e.g., student services offices, professors) issuing the technology to also imagine negative consequences. Oya also takes the time to read critiques of the company from technology journalists and digital rights activists to better understand their context, purpose, and profit models. They talk with classmates and family members back home, and Oya writes about technoethical concerns to inform a larger audience about risks and dangers. Finally, Oya organizes a local chapter of a digital rights group so they are better equipped to challenge multinational technology corporations and their own school.
Evaluating technology from an ethical perspective is difficult. Corporate sales pitches are ubiquitous. For many of us, our livelihoods depend on our use of such tools. We must therefore reflect on our own lived experiences and those of the people around us. Potential harms often lie beneath the surface. Embracing technoskeptical imagination and creative power offers a step towards helping students protect themselves in their use of technological tools. If educators aim to stop harms in the present and mitigate risks in the future, we might raise technoethical consciousness through dystopian storytelling.
References
Buolamwini, Joy, and Timnit Gebru. 2018. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81:77–91.
Burton, Emanuelle, Judy Goldsmith, and Nicholas Mattei. 2018. “How to Teach Computer Ethics through Science Fiction.” Communications of the ACM 61, no. 8: 54–64. https://doi.org/10.1145/3154485.
Conley, Donovan, and Benjamin Burroughs. 2020. "Bandersnatched: Infrastructure and Acquiescence in Black Mirror." Critical Studies in Media Communication 37, no. 2: 120–132. https://doi.org/10.1080/15295036.2020.1718173.
Forsberg, Geraldine E. 2017. "Teaching Technoethics from a Media Ecology Perspective." Explorations in Media Ecology 16, no. 2–3: 227–237.
Gleason, Benjamin, and Marie K. Heath. 2021. “Injustice Embedded in Google Classroom and Google Meet: A Techno-ethical Audit of Remote Educational Technologies.” Italian Journal of Educational Technology 29, no. 2: 26–41. https://doi.org/10.17471/2499-4324/1209.
Heath, Marie K., and Pamela Segal. 2021. “What Pre-Service Teacher Technology Integration Conceals and Reveals: ‘Colorblind’ Technology in Schools.” Computers & Education 170 (September): article 104225. https://doi.org/10.1016/j.compedu.2021.104225.
Jones, Steven E. 2013. Against Technology: From the Luddites to Neo-Luddism. New York: Routledge.
Krutka, Daniel G., Marie K. Heath, and K. Bret Staudt Willet. 2019. "Foregrounding Technoethics: Toward Critical Perspectives in Technology and Teacher Education." Journal of Technology and Teacher Education 27, no. 4 (October): 555–574. https://www.learntechlib.org/primary/p/208235/.
Krutka, Daniel G., Ryan M. Smits, and Troy A. Willhelm. 2021. “Don’t Be Evil: Should We Use Google in Schools?” TechTrends 65 (July): 1–11.
Lindh, Maria, and Jan Nolin. 2016. “Information We Collect: Surveillance and Privacy in the Implementation of Google Apps for Education.” European Educational Research Journal 15, no. 6: 644–663.
Macgilchrist, Felicitas, Heidrun Allert, and Anne Bruch. 2020. “Students and Society in the 2020s. Three Future ‘Histories’ of Education and Technology.” Learning, Media and Technology 45, no. 1: 76–89. https://doi.org/10.1080/17439884.2019.1656235.
Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
Papert, Seymour. 1988. "A Critique of Technocentrism in Thinking about the School of the Future." In Children in the Information Age, edited by Blagovest Sendov and Ivan Stanchev, 3–18. New York: Pergamon Press. https://doi.org/10.1016/B978-0-08-036464-3.50006-5.
Postman, Neil. 1992. Technopoly: The Surrender of Culture to Technology. New York: Vintage.
Yoon, Haeny S. 2021. “Stars, Rainbows, and Michael Myers: The Carnivalesque Intersection of Play and Horror in Kindergarteners’ (Trade)marking and (Copy)writing.” Teachers College Record 123, no. 3: 1–22.
Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.
About the Authors
Daniel G. Krutka (he/him/his) is a human, probably too tethered to his smartphone, but human nonetheless. He is a former high school teacher and his current job is Associate Professor of Social Studies Education at the University of North Texas. He researches intersections of technology, democracy, and social studies. You can listen to him host educators and researchers on the Visions of Education podcast (VisionsOfEd.com) or amplify his retweets at @dankrutka.
Autumm Caines (she/her/hers) is an instructional designer at the University of Michigan–Dearborn. Autumm's scholarly and research interests include blended/hybrid and online learning, open education, digital literacy/citizenship with a focus on equity and access, and online community development. This blend of interests has led to a concern about mounting ethical issues in educational technology and, recently, to publications and presentations on educational surveillance, student data collection, and remote proctoring. Autumm has taught honors students at a small liberal arts college as well as traditional students, working professionals, and veterans at a regional public university. More at autumm.org.
Marie K. Heath (she/her/hers) is an Assistant Professor of Educational Technology at Loyola University Maryland. Prior to her work in higher education, Marie taught high school social studies in Baltimore County Public Schools. Her work in public schools informs her commitment to education that promotes a robust and multi-racial democracy through liberatory education. Marie’s research focuses on the intersection of education, civic engagement, and technology to foster social change. Her scholarship interrogates educational technology, confronts White supremacy, and advocates for teacher activism.
K. Bret Staudt Willet (he/him/his) is an Assistant Professor of Instructional Systems & Learning Technologies at Florida State University. Bret’s research investigates self-directed learning through social media. He has several ongoing projects related to this research area. First, he examines networked learning in online communities, such as those hosted by Twitter and Reddit. Second, he studies how new teachers expand their professional support systems during their induction transition. Third, he explores the connections between informal learning and invisible labor. Learn more on his website, bretsw.com.
The path from classroom to workplace is short. Along the way it passes through hiring processes, which demonstrate to candidates an organization's true values. The concept and practice of education surveillance technology closely parallel those of hiring technologies and of workplace surveillance technologies. Moreover, these new education surveillance technologies should be seen as part of a pipeline preparing current students to accept a future of work defined solely by the values and perspectives of technology firms and corporate leaders. The history of education in industrial economies has always been tied closely to, if not dictated by, the needs of industry. That remains true today, and it means any attention paid to education surveillance technology must acknowledge that today's students are intended by the system to be tomorrow's employees, and that it is not just skills but temperament that educational organizations teach and evaluate. A high school student inured to being surveilled in education, even excited about it because it takes the form of support and instant feedback, will expect the same or more from their college experience. As that student prepares to enter the workforce, they will encounter artificial intelligence/machine learning (AI/ML) driven résumé review and interview training systems designed to help them succeed against AI/ML driven applicant tracking systems (ATSs) and virtual interview assessments. Once hired, they will face increasing levels of virtual management and monitoring. Just like in school.
What comes next is not a traditional scholarly article. It is, as the title suggests, a summary of observations made over nearly thirty years in higher education spaces, most of which focused on career development, hiring, and their related technologies. Links are not definitive pieces of scholarship; they were selected to give readers a sense of the conversations at large and to help orient readers on topics that may be removed from their own immediate experiences.
K–12: A Brief History
Industrialists wanted more profit
And workers protesting could chuck it
So saboteurs did
To bust up the rig
And history calls them techphobic
What is the point of education? Specifically, what is the point of education in the minds of the different stakeholders—students, faculty, and administrators? What has shaped and is shaping that context?
In industrial and post-industrial economies, education operates to prepare a skilled workforce, giving a competitive edge both to organizations within a given economy and to economies in rivalry with one another. In the United States, the government's creation of land-grant institutions followed a perceived need to equip the population with new and different skills to meet expanding industry needs. Alongside the land-grant institutions and the changes they ushered in came the development, forms, and practices of K–12 education. From the start of public education to the very late 20th century, the K–12 format was much the same across the country: rows of desks with a teacher at the front delivering lessons and tasks for students to carry out while they sat quiet and attentive, with recess and PE classes providing some physical outlet for children and teens otherwise confined. Alternative learning typically meant shop class and vocational-technical school for high schoolers bound for a trade. The workplaces of that era were similar. Office spaces were rows of desks, later cube farms, often with a supervisor nearby but set apart from the workers. Factory floors were not physically arranged like schools, but they relied on the same authority structures to ensure productivity. As the 20th century closed, open classrooms and reduced structure mirrored open floor plans and other management innovations. Why? Industry needed talent, needed workers who were more creative and more flexible, to match the shift toward quarterly earnings and constant competitiveness. The fascination with, and cultural embrace of, Frederick W. Taylor's scientific management and its virtue of efficiency have not waned. Its effects on our education priorities and designs remain evident.
Taylor’s morality resets
What moderns consider true assets
Efficiency’s grand
Human life gets panned
So elites can corner the markets
The most visible pressure on higher education institutions comes from industry leaders complaining that college students these days are not prepared to enter the workforce. The myriad articles in the business and mainstream press criticizing the supposed inadequacies of higher education regularly remind us of this. What these industry leaders mean is that college students do not appear ready on day one to do the full job. There are a number of problems with this perspective that too rarely get called out. First, the sheer size of most businesses means these leaders are remarkably disconnected from hiring and orientation processes; their interpretation comes from spreadsheets and reports passed up the management chain. Are those reports missing or obscuring other factors? Do these leaders' organizations have good, robust orientation programs and managers with the time, training, energy, and commitment to onboard a new hire? Could there be other internal factors CEOs might miss, or obscure? Why have we seen an increase in these complaints over the past 15–20 years? Does it have anything to do with the increase in organizations outsourcing cost centers, like training and development units?
(Note that this time frame runs parallel to the rising cost of college degrees. As a degree becomes more essential for entering the workforce at even a decent wage, and while most US wages have remained stagnant for the past 30 years after adjusting for inflation despite rising productivity and rising profits for senior leaders, employers want to shift costs from internal training and development onto students paying rising tuition, often at state schools whose budget support has been cut by their governments.)
These complaints from corporate leadership contain a critical error. Students, especially those from the oft-maligned liberal arts and humanities majors, actually possess the skills employers say they value most. The problem students face, and that employers misinterpret, lies in how those experiences and skills are acknowledged and communicated. Of course, employers who still favor a narrow selection of majors in their hiring filters contribute to the problem by screening qualified students out of many positions. Hiring a history major with strong communication, collaboration, and leadership skills and running them through a six-week intensive on Java or C++ would probably prove more effective in the long term than hiring a CS major with no clue how to work well with others or hold a productive conversation with someone in marketing or R&D. Unfortunately, that's not how the hiring process usually goes.
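To see how such a filter discards qualified candidates, consider a minimal sketch, written in Python, of the kind of naive major-and-keyword screen an ATS might apply. Everything here is hypothetical: the field names, the accepted majors, and the required keywords are invented for illustration and do not describe any actual vendor's product.

```python
# A minimal sketch of a naive ATS-style screening filter.
# All field names, majors, and rules are hypothetical illustrations,
# not the logic of any actual applicant tracking system.

ACCEPTED_MAJORS = {"computer science", "software engineering"}  # narrow by design
REQUIRED_KEYWORDS = {"java", "c++"}

def passes_screen(candidate: dict) -> bool:
    """Return True if the candidate survives the automated screen."""
    major_ok = candidate["major"].lower() in ACCEPTED_MAJORS
    resume_words = {w.lower().strip(",.;") for w in candidate["resume_text"].split()}
    return major_ok and bool(REQUIRED_KEYWORDS & resume_words)

candidates = [
    {"name": "A", "major": "Computer Science",
     "resume_text": "Java C++ coursework, solo capstone project"},
    {"name": "B", "major": "History",
     "resume_text": "Led a 12-person research team; strong writing and collaboration"},
]

for c in candidates:
    print(c["name"], "passes" if passes_screen(c) else "screened out")
```

The filter never sees the qualities employers claim to want; it sees only set membership, so the history major with demonstrated leadership never reaches a human reviewer.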
The launch of education tech firms like Coursera, and of companies like Google offering certificates for the skills they value today with the promise of a job in the short term, adds another vector of pressure on higher education institutions. While competition is lauded, often by business leaders who take pains to reduce competition for their own enterprises, the entrance of these players into the education market further shifts the concept of education from something broadly beneficial for a lifetime to a time-sensitive commodity necessary for a slightly better pay rate. One offered, it should be noted, by those in an altogether higher pay grade, with a vested interest in a workforce beholden to their products. Does that Google certificate lose value as soon as you drive it off the lot?
Higher education typically measures success through First Destination surveys or similar instruments. By design, these surveys treat employment in a field related to a graduate's degree as the highest measure of success for the institution. What's measured indicates what's valued. This creates an awkward position if one considers the discrepancy between majors and the significantly larger set of possible fields of employment. Entering graduate school is also acceptable: a safe handoff to another institution that still demonstrates progression towards employment. Never mind that the market is glutted with newly minted PhDs who have received little training or support for a non-academic future they did not envision when they entered, even though that is statistically the most likely outcome of all their efforts.
For those First Destination surveys, graduate employment information gets collected, and the bigger the names, the better. Career centers must tout the big names hiring the school's students so that admissions can impress prospective students and their families with the narrative that tuition here leads to a real job with a real paycheck. Schools building lazy rivers and luxury apartments for student housing may receive the media attention around admission and recruitment tactics, but this pitch about employment says more about higher education's priorities and its inability to resist the expectations of industry.
Career Centers and the College to Work Pipeline
Enter into this mix career centers, historically siloed and under-resourced. Originally conceived as placement offices during the heyday of the GI Bill (and still referred to as such by many in higher education senior leadership), these departments have labored over the past 20 years to redefine themselves to match the changing economic landscape students will enter upon graduation. They take on a varied mix of responsibilities: managing experiential education programs (internships and co-ops), running programming on leadership development and career readiness competencies, hosting large and small events like job fairs, career fairs, and employer information sessions, operating job boards, providing a broad range of career counseling and advising services, reviewing and coaching students on every component of the hiring process, and connecting students with alumni through networking tools and events. In an environment where professional staff-to-student ratios can be 2:2,000 or worse, efficiency drives many decisions. Automation increasingly becomes the road to efficiency, with a growing number of third-party vendors offering AI/ML-based tools to stand in for activities like basic résumé review and video interview feedback.
Two aspects of this development bear particular mention. First, these new tools replace historically human-to-human interactions in what is often regarded, and talked about, as a particularly human process. Fit is a two-way street in hiring, and the best way for a candidate to determine whether an organization even approximates its PR is through interaction with the people who work there. This replacement of human interaction conditions students to accept that computers are, and should be, their first line of instruction, inquiry, and engagement. Second, the creators of these tools see a hiring process that replaces human gatekeepers with code, and they have responded by giving candidates technology to maintain a sense of parity with employers' applicant tracking systems (ATSs), online skills and personality assessments, and video interview systems. Meanwhile, applicants have less exposure to the humans on the other side, even as AI advocates pitch these tools as increasing the human side of our work. Often career centers must pay for these products while their staff and the students they serve become part of the training data that enriches the vendor.
A futurist delivering a keynote at a recent conference for university career development professionals unintentionally captured the moment. After laying out a future that offers decreasing stability for employees, a future in which the majority of professional workers are gig workers rather than employees, and a nod to growing rates of mental health issues, especially anxiety, her self-proclaimed pro-human take was advice (good, quality advice) on how practitioners could prepare students for this hostile job market. The inevitability of it all was assumed throughout and never questioned.
Hiring tech’s pitch
Improve human addition
Through human subtraction
Beyond the hiring process, new employees may find themselves directed not to their supervisor or HR representative for help acclimating to a new workplace, but to an AI chatbot. Algorithms are now being developed and deployed to monitor, measure, and assess employee productivity and to feed reports to managers for annual reviews. Even now, code embedded in Microsoft Outlook can monitor tasks, suggest follow-ups and basic actions, and support employee management by sending reports on time spent on activities, response times, and more.
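To illustrate how little machinery such monitoring requires, here is a minimal Python sketch that aggregates response times from an imagined activity log into a per-employee report. The log schema, the threshold, and the "flagged" label are all assumptions made for this sketch; this is not Microsoft's telemetry, only the general pattern such products imply.

```python
# A hedged, minimal sketch of productivity-metric aggregation.
# The log schema, threshold, and "flagged" label are hypothetical;
# this shows the general pattern, not any vendor's implementation.
from statistics import mean

# (employee, minutes_to_respond) pairs from an imagined activity log
activity_log = [
    ("amara", 12), ("amara", 45), ("amara", 8),
    ("ben", 90), ("ben", 240), ("ben", 75),
]

SLOW_THRESHOLD_MINUTES = 60  # an arbitrary cutoff chosen for the example

def response_report(log):
    """Average each employee's response time and flag 'slow' responders."""
    per_employee = {}
    for name, minutes in log:
        per_employee.setdefault(name, []).append(minutes)
    return {
        name: {"avg_minutes": round(mean(times), 1),
               "flagged": mean(times) > SLOW_THRESHOLD_MINUTES}
        for name, times in per_employee.items()
    }

print(response_report(activity_log))
# {'amara': {'avg_minutes': 21.7, 'flagged': False},
#  'ben': {'avg_minutes': 135.0, 'flagged': True}}
```

A few lines of aggregation are enough to turn ordinary email latency into a disciplinary signal, stripped of every contextual reason a reply might take longer.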
Mario Savio Was Right in 1964; He's Right Today
Human resources
Industrialized cyborg
Expendable meat
In the critical conversation around surveillance technology in education, we must acknowledge its location within a larger set of industry-driven values around employment. The tension between educating well-rounded citizens and training future workers is as old as public education in the United States. The latter almost always wins, except for those already possessing privilege and access to elite institutions, where conversations about purpose in a career sound like inheritances, not taunts. Ultimately, this mindset of humans as primarily cogs in the machine will undermine or even negate important undertakings like Diversity, Equity, and Inclusion (DEI) initiatives. Free Speech Movement founder Mario Savio's 1964 protest speech against a machine-like university is, if anything, more applicable today:
"There's a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can't take part! You can't even passively take part! And you've got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus, and you've got to make it stop! And you've got to indicate to the people who run it, to the people who own it, that unless you're free, the machine will be prevented from working at all!"
As long as the human-as-resource mindset of Western industry dominates the conversation around employment, it will inform and shape the nature, tools, and forms of pedagogy. Schools will continue to be perceived and treated as refineries for raw materials rather than as a civic good for the development of human beings.
About the Author
Chris Miciek began building the first 100% online career center in the US in 2002, pioneering the use of online technology and social media to provide career development advising and instruction for university students. Since then he has presented at state, regional, and national conferences on emerging technology, including creating a keynote panel for the Midwest Association of Colleges and Employers (MWACE) in 2007 and leading the National Association of Colleges and Employers (NACE) Tech Summit in 2009. He authored a chapter for the NACE Case Study Guide and has created podcasts, articles, and webinars on technology in career services and hiring. Chris served on the NACE Principles and Ethics Committee during 2020–21 and has co-chaired the Eastern Association of Colleges and Employers (EACE) Technology Committee since 2019.