
Back in a Flash: Critical Making Pedagogies to Counter Technological Obsolescence

Abstract

This article centers on the issue of teaching digital humanities work in the face of technological obsolescence. Because this is a wide-ranging topic, the article focuses in particular on teaching electronic literature in light of the loss of Flash, for which Adobe will end support at the end of 2020. The challenge of preserving access to electronic literature in the face of technological obsolescence is integral to the field, which has already taken up a number of initiatives to preserve electronic literature as its technological substrates of hardware and software become obsolete, with the recent NEH-funded AfterFlash project responding directly to the need to preserve Flash-based texts. As useful as these and similar projects of e-literary preservation are, they focus on preservation through one traditional modality of humanities work: saving a record of works so that they may be engaged, played, read, or otherwise consumed by an audience, a focus that loses the works’ poetics that are effected through materiality, physicality, and (often) interaction. This article argues that obsolescence may be most effectively countered through pedagogies that combine the consumption of the e-textual record with the critical making of texts that share similar poetics, as the process of making allows students to engage with material, physical poetics. The loss of Flash is a dual loss of both a set of consumable e-texts and a tool for making e-texts; the article thus provides a case study of teaching with Stepworks, a complementary program for making e-poetic texts in the place of Flash.

Technological obsolescence—the phenomenon by which technologies are rendered outdated as soon as they are updated—is a persistent, familiar problem when teaching with, and through, digital technologies. Whether planned (a model of designed obsolescence in which hardware and software are updated to be incompatible with older systems, thus forcing the consumer to purchase updated models) or perceived (a form of obsolescence in which a given technology becomes culturally, if not functionally, obsolete), teaching in a 21st-century ubicomp environment necessitates strategies for working with our own and our students’ “obsolete,” outdated, or otherwise incompatible technologies. These strategies may take any number of forms: taking time to help students install, emulate, or otherwise troubleshoot old software; securing lab space with machines that can run certain programs; embracing collaborative models of work; designing flexible assignments to account for technological limitations…the list could go on.

But in Spring 2020, as campuses around the globe went fully remote, online, and often asynchronous in response to the COVID-19 pandemic, many of us met the limits of these strategies. Tactics like situated collaboration, shared machinery, or lab space became impossible to employ in the face of this sudden temporal obsolescence, as our working, updated technologies became functionally “obsolete” within the time-boundedness of that moment. In my own case, the effects of this moment were most acutely felt in my Introduction to the Digital Humanities class, where the move to remote teaching coincided with our electronic literature unit, a unit where the combined effects of technological obsolescence and proprietary platforms already necessitate adopting strategies so that students may equitably engage the primary texts. For me, primary among these strategies are sharing machinery to experience texts together, guiding students through emulator or software installation as appropriate, and adopting collaborative critical making pedagogies to promote material engagement with texts. In Spring 2020, though, as my class moved to a remote, asynchronous model, each of these strategies was directly challenged.

I felt this challenge most keenly in trying to teach Brian Kim Stefans’s The Dreamlife of Letters (2000) and Amaranth Borsuk and Brad Bouse’s Between Page and Screen (2012), two works that, though archived and freely available via the Electronic Literature Collection and therefore ideal for accessing from any device with an internet connection, were built in Flash and so require the Flash Player plug-in to run. Since Adobe’s announcement in 2017 that it would stop supporting Flash Player by the end of 2020, fewer and fewer students enter my class with machines prepared to run Flash-based works, either because they have not installed the plug-in or because they are on mobile devices, which have never been compatible with Flash—a point that, Anastasia Salter and John Murray (2014) argue, has largely contributed to Adobe’s decision. The COVID-19 move to remote, online instruction effectively meant that students with compatible hardware had to navigate the plug-in installation on their own (a process requiring a critical level of digital literacy), while those without compatible hardware were unable to access these works except as images or recordings. In the Spring 2020 teaching environment, then, the temporal obsolescence brought on by COVID-19 acted as a harbinger for the eventual inaccessibility of e-literary work that requires Flash to run.

The problem space that I occupy here is this: teaching electronic literature in the undergraduate classroom as its digital material substrates move ever closer to obsolescence, an impossible race against time that speeds up each year as students enter my classroom with machines increasingly incompatible with older software and platforms. Specifically, I focus on the challenge of teaching e-literature in the undergraduate digital humanities classroom while facing the loss of Flash and its attendant digital poetics—a focus informed by my own disciplinary expertise, but through which I offer pedagogical responses and strategies that are useful beyond this scope.

Central to my argument here is that teaching e-literary practice and poetics in the face of technological obsolescence is most effective through an approach that interweaves the practice-based critical making of e-literary texts with the more traditional humanities practices of reading, playing, or otherwise consuming texts. Critical making pedagogy, however, is not without its critiques, particularly as it may promote inequitable classroom environments. For this reason, I start with a discussion of critical making as feminist, digital humanities pedagogy. From there, I turn more explicitly to the immediate problem at hand: the impending loss of Flash. I address this problem first by highlighting some of the work already being done to maintain access to e-literary works built in Flash, and second by pointing to some of the limits of these projects, as they prioritize maintaining Flash-based works for readerly consumption, even though the loss of Flash is also the loss of an amateur-friendly, low-stakes coding environment for “writing” digital texts. Acknowledging this dual loss brings me to a short case study of teaching with Stepworks, a contemporary, web-based platform that is ideal for creating interactive digital texts with poetics similar to those created in Flash. As such, it is a platform through which we can effectively use critical making pedagogies to combat the technological obsolescence of Flash. I close by briefly expanding from this case study to reflect on some of the ways critical making pedagogies may help combat the loss of e-literary praxes beyond those indicative of and popularized by Flash.

Critical Making Pedagogy As DH Feminist Praxis

Critical making is a mode of humanities work that merges theory with practice by engaging the ways physical, hands-on practices like making, doing, tinkering, experimenting, or creating constitute forms of thinking, learning, analysis, or critique. Strongly associated with the digital humanities, given the field’s own relationship to intellectual practices like coding and fabrication, critical making argues methodologically that intellectual activity can take forms other than writing, as coding, tinkering, building, or fabricating represent more than just “rote” or “automatic” technical work (Endres 2017). As critical making asserts, such physical practices constitute complex forms of intellectual activity. In other words, it takes to task what Bill Endres calls “an accident of available technologies” that has conferred upon writing its status as “the gold standard for measuring scholarly production” (2017).

Critical making scholarship can thus take a number of forms, as Jentery Sayers’s edited collection Making Things and Drawing Boundaries (2017) showcases. For example, as a challenge to and reflection on the invasiveness of contemporary personal devices, Allison Burtch and Eric Rosenthal offer Mic Jammer, a device that transmits ultrasonic signals that, when pointed towards a smartphone, “de-sense that phone’s microphone,” to “give people the confidence to know that their smartphones are non-invasively muted” (Burtch and Rosenthal 2017). Meanwhile, Nina Belojevic’s Glitch Console, a hacked or “circuit-bent” Nintendo Entertainment System, “links the consumer culture of video game platforms to issues of labor, exploitation, and the environment” through a play-experience riddled with glitches (Belojevic 2017). Moving away from fabrication, Anne Balsamo, Dale MacDonald, and Jon Winet’s AIDS Quilt Touch (AQT) Virtual Quilt Browser is a kind of preservation-through-remediation project that provides users with opportunities to virtually interact with the AIDS Memorial Quilt (Balsamo, MacDonald, and Winet 2017). Finally, Kim A. Brillante Knight’s Fashioning Circuits disrupts our cultural assumptions about digital media and technology, particularly as they are informed by gendered hierarchy; this project uses “wearable media as a lens to consider the social and cultural valences of bodies and identities in relation to fashion, technology, labor practices, and craft and maker cultures” (Knight 2017).

Each of these examples of critical making projects highlights the ways critical making disrupts the primacy of writing in intellectual activities. However, they are also entangled in one of the most frequently lobbed, and not insignificant, critiques of critical making as a form of Digital Humanities (DH) praxis: that it reinforces the exclusionary position that DH is only for practitioners who code, make, or build. This connection is made explicit by Endres, as he frames his argument that building is a form of literacy through a mild defense of Stephen Ramsay’s 2011 MLA conference presentation, which argued that building was fundamental to the digital humanities. Ramsay’s presentation was widely critiqued for being exclusionary (Endres 2017), as it promoted a form of gatekeeping in the digital humanities that, in turn, reinforces traditional academic hierarchies of gender, race, class, and tenure status. As feminist DH in particular has shown, arguments that “to do DH, you must (also) code or build” imagine a scholar who, in the first place, has had the emotional, intellectual, financial, and temporal means to acquire skillsets that are not part of traditional humanities education, and who, in the second place, has the institutional protection against precarity to produce work in modes outside “the gold standard” of writing (Endres 2017).

Critical making as DH praxis, then, finds itself in a complicated bind: on the one hand, it effectively challenges academic hierarchies and scholarly traditions that equate writing with intellectual work; on the other hand, as it replaces writing with coding-adjacent practices like fabrication, building, or simply “making,”[1] it potentially reinforces the exclusionary logic that makes coding the price of entry into the digital humanities “big tent” (Svensson 2012).[2] Endres, however, raises an important point that this critique largely fails to account for: that “building has been [and continues to be] generally excluded from tenure and promotion guidelines in the humanities” (Endres 2017). That is, while we should perhaps not let DH completely off the hook for the field’s internalized exclusivity and the ways critical making praxis may be commandeered in service of that exclusivity, examining academic institutions more broadly, of which DH is only a part, reveals that writing and its attendant systems of peer review and impact factors remain the exclusionary technology, gatekeeping from within.

To offer a single point of “anec-data”:[3] in my own scholarly upbringing in the digital humanities, I have regularly been advised by more senior scholars in the field (particularly other white women and women of color) to take on only those coding, programming, building, or other making projects that I can also support with a traditional written, peer-reviewed article—advice that responds explicitly to writing’s primacy as a metric of academic research output, and implicitly to academia’s general valuation of research above activities like teaching, mentorship, or service. It is also worth noting that this advice was regularly offered with the clear-eyed reminder that, as valuable as critical making work is, ultimately the farther a scholar or practitioner appears from the cisgender, heterosexual, able-bodied, white, male default, the more likely it is that their critical making work will be challenged when it comes to tenure, promotion, or job competitiveness. While a single point of anec-data hardly indicates a pattern, the wider system of academic value that I sketch here is well known and well documented. It would be a disservice, then, not to acknowledge the space that traditional, peer-reviewed academic writing occupies within this system.

Following Endres, my own experience, and systemic patterns across academia, I would argue that even though critical making can promote exclusionary practices in the digital humanities, the work that this methodological intervention does to disrupt technological hierarchies and exclusionary systems—including and especially, writing—outweighs the work that it may do to reinforce other hierarchies and systems. Indeed, I would go one step further to add my voice to existing arguments, implicit in the projects cited above and explicit throughout Jacqueline Wernimont and Elizabeth Losh’s edited collection, Bodies of Information: Intersectional Feminism and the Digital Humanities (2018), that critical making is feminist praxis, not least for the ways it contributes to feminism’s long-standing project of disrupting, challenging, and even breaking language and writing.[4]

The efficacy of critical making’s feminist intervention becomes even more evident, and I would argue more powerful, when it enters the classroom as undergraduate pedagogy. Following critical making scholarship, critical making pedagogy similarly disrupts the primacy of the written text for intellectual work. Students are invited to demonstrate their learning through means other than a written paper, in a student learning assessment model that aligns with other progressive, student-centered practices like the un-essay.[5] Because research in digital and progressive pedagogy already highlights the ways practices like the un-essay benefit student learning, I will focus here on the particular ways critical making pedagogy in the undergraduate digital humanities classroom operates as feminist praxis for disrupting heteropatriarchal assumptions about technology. For this, I will pull primarily from feminist principles of and about technology, as outlined by FemTechNet and by Catherine D’Ignazio and Lauren Klein’s Data Feminism, and from my experiences teaching undergraduate digital humanities classes through and with critical making pedagogies.

In their White Paper on Transforming Higher Education with Distributed Open Collaborative Courses, FemTechNet unequivocally states that “effective pedagogy reflects feminist principles” (FemTechNet White Paper Committee 2013), and the first and perhaps most consistent place that critical making pedagogies respond to this charge is in the ways that critical making makes labor visible, a value of intersectional feminism broadly and of data feminism specifically (D’Ignazio and Klein 2020). Issues of invisible labor have long been central to feminist projects, so it is no surprise that this would also be a central concern in response to Silicon Valley’s techno-libertarian ethos, which blackboxes digital technologies as “labor free” for both end-users and tech workers. When students participate in critical making projects that relate to digital technologies—projects that may range from building a website or programming an interactive game, to fabricating through 3D printing or physical circuitry—they are forced to confront the labor behind (and rendered literally invisible in) software and hardware. This confrontation typically manifests as frustration, often felt in and expressed through their bodies, necessitating an engagement with affect, emotion, care, and, often, collaboration in this work—all feminist technologies cited both in FemTechNet’s manifesto (FemTechNet 2012) and as core principles of data feminism (D’Ignazio and Klein 2020). Similarly, as students learn methods and practices for their critical making projects, they inevitably find themselves facing messiness, chaos, even fragments of broken things, and only occasionally is this “ordered” or “cleaned” by the time they submit their final deliverable.
Besides recalling FemTechNet’s argument that “making a mess… [is a] feminist technolog[y]” (FemTechNet 2012), the physical and intellectual messiness of critical making pedagogy also requires a shift in values away from the “deliverable” output, and towards the process of making, building, or learning itself. At times, this shift in value toward process can manifest as a celebration of play, as the tinkering, experimentation, chaos, and messiness of critical making transform into a play-space. While I am hesitant to argue too forcefully for the playfulness of critical making in the classroom (not least for the ways play is inequitably distributed in academic and technological systems), Shira Chess has recently and compellingly argued for the need to recalibrate play as a feminist technology, so I name it here as an additional, potential effect of critical making pedagogy (Chess 2020). Whether it transforms to play or not, however, the shift in value away from a deliverable output is always a powerful disruption of the masculinist, capitalist narratives of “progress” and “value” that undergird technological, academic work.

These are, of course, also the narratives primarily responsible for the profitability of obsolescence, which brings us back to my primary thesis and central focus: the efficacy of adopting critical making pedagogies to counter the effects of technological obsolescence in electronic literature. Because technological obsolescence is a core concern of electronic literature, it will be worth spending some time and space to examine this relationship more deeply, and address some of the ways the field is already at work countering loss-through-obsolescence. Indeed, some of this work already anticipates possibilities for critical making to counter this loss.

Preservation and E-lit

As stated, technological obsolescence is the phenomenon whereby technologies become outdated (obsolete) as they are updated. Historically, technological obsolescence has occurred in concert with developments in computing that radically altered what computers can do (e.g., the move from primarily textual processing to graphics processing power) or how they are used (e.g., with the advent of programming languages, or the development of the graphical user interface). However, as digital technologies have developed into the lucrative consumer market of today, the phenomenon has become driven more heavily by the pursuit of profit through consumer behavior. Consider, for instance, iOS updates that no longer work on older iPhone models, or new hardware models that do not fit old plugs, USB ports, or headphones. In each case, updates to the technology force consumers to purchase newer models, even if their old ones otherwise function properly.

In the field of electronic literature, obsolescence requires attention from two perspectives: the readerly and the writerly. From the former, obsolescence threatens future access to e-literary texts, so the field must regularly engage with preservation strategies; from the latter, it requires that the field regularly engage with new technologies for “writing” e-literary texts, as new platforms and programs both result from and in others’ obsolescence. E-lit thus occupies both sides of the obsolescence coin: on the one side, holding onto the outdated to ensure preservation and access; on the other, embracing the updated, as new platforms, programs, and hardware prompt the field’s innovation. Much of the field’s focus is on combating obsolescence through attention to outdated (or, as in the case of Flash, soon-to-be-outdated) platforms, and on maintaining these works for future audiences. However, this attention only weakly (if at all) accounts for the flipside of the writerly, where obsolescence also threatens the loss of craft or practice in e-literary creation; this is where, I argue, critical making as e-literary pedagogy is especially productive as a counter-force to loss. First, though, it will be worth looking more closely at the ways obsolescence and e-literature are integrated with one another.

Maintaining access to obsolete e-literature is, and has been, a central concern of the field in large part because the field’s own history “is inextricably tied to the history of computing, networking, and their social adoption” (Flores 2019). A brief overview of e-literature’s “generations” effectively illustrates this relationship. The first generation of e-lit consists primarily of pre-web, text-heavy works developed between 1952 and 1995—dates that span the commercial availability of computing devices and the development of language-based coding, to the advent and adoption of the web (Flores 2019). Spanning over forty years, this period also includes the period 1980–1995, which is “dominated by stand-alone narrative hypertexts,” many of which “will no longer play on contemporary operating systems” (Hayles and House 2020, my italics). The second generation, beginning in 1995, spans the rise of personal, home computing and the web, and is characterized by web-based, interactive, and multimedia works, many of which were developed in Flash (Flores 2019). In 2005, the web of 1995 shifted into the platform-based, social “web 2.0” that we recognize and use today. Leonardo Flores uses this year to mark the advent of what he argues is the third generation of e-lit, which “accounts for a massive scale of born digital work produced by and for contemporary audiences for whom digital media has become naturalized” (Flores 2019). In the Electronic Literature Organization (ELO)’s Teaching Electronic Literature initiative, N. Katherine Hayles and Ryan House characterize third-generation e-lit, specifically, as “works written for cell phones” in contrast to “web works displayed on tablets and computer screens” (Hayles and House 2020).

Though Flores includes activities like storytelling through gifs and the poetics of memes in his characterization of third-generation e-lit, framing this moment through cell phones is helpful for thinking about the centrality of computing development, and its attendant obsolescence, to the field. In the first place, pointing to work developed for cell phones immediately brings to mind challenges of access due to Apple’s and Android’s competitive app markets. Here there is, on the one hand, the challenge of apps developed for only one of these hardware-based platforms and so inaccessible from the other; on the other hand, there are the continuous operating system updates that, in turn, require continuous app updates, which regularly result in apps seemingly, and sometimes literally, becoming obsolete overnight. In the second place, the 1995–2015 generation of “web works displayed on tablets and computer screens” is, as noted, a generation characterized by the rising ubiquity of Flash—a platform that was central to both user-generated webtexts and the e-literary practice of this time (Salter and Murray 2014). As noted, Flash-based works currently face their own impending obsolescence due to Adobe’s removal of support, a decision that Salter and Murray argue results directly from the rise of cell phones and/as smartphones, which do not support Flash. Thus, cell phones and “works created for cell phones” once again demonstrate the intricate relationship between computing history, technological development, and e-literature.

As this brief history demonstrates, the challenge of ensuring access to e-lit in the face of technological obsolescence is absolutely integral to the field, as it ultimately ensures that new generations of e-lit scholars can access primary texts. Indeed, it is a central concern behind one of the field’s most visible projects: the Electronic Literature Collection (ELC). As of this writing, the ELC comprises three curated volumes of electronic literature, freely available on the web and regularly maintained by the ELO. In addition to expected content like work descriptions and authors’/artists’ statements, each work indexed in the ELC is accompanied by notes for access; these may include links, files for download, required software and hardware, or emulators, as appropriate. The ELC thus operates as an invaluable resource both for preserving access to e-lit in general and for ensuring access to e-lit texts for teaching undergraduates. Most of the work in the ELC is presented either in its original experiential, playable form or as recorded images or videos when experiencing the work directly is untenable for some reason (as with, for instance, locative texts, which require their users to be in specific locations to access the text). However, some of the work is presented with materials that encourage a critical making approach—a move in direct conversation with my argument that critical making plays an important pedagogical role in experiencing and even preserving e-literary practice, even and especially when the text itself cannot be experienced or practiced directly. Borsuk and Bouse’s Between Page and Screen, an augmented reality work that requires both a physical artist’s book of 2D barcodes specifically designed for the project and a Flash-based web app that enables the computer’s camera to “read” these barcodes, offers a particularly strong example of this.

Between Page and Screen is indexed in the third volume of the ELC. In addition to the standard information about the work and its authors, the entry’s “Begin” button contains an option to “DIY Physical Book,” which takes the user to a page on the work’s website offering 1) a web-based tool called “Epistles” for writing their own text and linking it to a particular barcode; and 2) a guide for printing and binding their own small chapbook of barcodes, which can then be held up and read through the project’s camera app. In this way, users who may be unable to access the physical book of barcodes that powers Between Page and Screen are still offered an experiential, material engagement with the text through their own critical making practices. Engaging the text in this way allows users not only to physically experience the kinetics and aesthetics of the augmented reality text, but also to engage the materiality and interaction poetics at the heart of the piece—precisely those poetics that are lost when the only available access to a text is a recording to be consumed. At the same time, engaging the text through the critically made chapbook prompts a material confrontation between the analog and the digital as complementary, even intimate, information technologies. Of course, in this case it is (perhaps ironically) not the anticipated analog technology that is least accessible; rather, it is the digital complement, the Flash-based program that allows the camera to read the analog barcodes, that is soon to be inaccessible.

Complementing the Electronic Literature Collections are the preservation efforts underway at media archaeology labs around the country, most notably the Electronic Literature Lab (ELL) directed by Dene Grigar at Washington State University, Vancouver. Currently, the ELL is working on preserving Flash-based works of electronic literature through the NEH-sponsored AfterFlash project. AfterFlash will preserve 447 works through a combined process in which researchers:

1) preserve the works with Webrecorder, developed by Rhizome, that emulates the browser for which the works were published, 2) make the works accessible with newly generated URLs with six points of access, and 3) document the metadata of these works in various scholarly databases so that information about them is available to scholars (Slocum et al. 2019).

Without a doubt, this is an exceptional project of preservation for ensuring some kind of access to Flash-based works of electronic literature, even after Adobe ends their support and maintenance of the software. In particular, the use of Webrecorder to capture the works means that the preservation will not just be a recorded “walk-through” of the text, but will capture the interactivity—an important poetic of this moment in e-literary practice.

As exceptional as this preservation project is, however, it is focused entirely (and not incorrectly) on preserving works so that they may be experienced by future readers who do not have access to Flash. But what of preserving Flash as a program particularly suited to making interactive, multimedia webtexts? As Salter and Murray argue, a major part of the platform’s success and influence on turn-of-the-century web aesthetics, web arts, and electronic literature has to do with its low barrier to entry for creating interactive, multimedia works, even for users who were not coders or who were new to programming (Salter and Murray 2014). Pedagogically speaking, the amateur focus of Flash also meant that it was particularly well suited to teaching digital poetics through critical making. In the first place, it upheld the work of feminist digital humanities to disrupt and resist the primacy of original, complex codework in the digital humanities and (more specifically) electronic literature. In this way, it could operate as an ideal tool for feminist critical making pedagogies, both by promoting alternatives to writing for intellectual work and by resisting the exclusivity behind prioritizing original, complex codework. In the second place, it allowed students to tinker with the particularities and subtleties of digital poetics—things like interaction, animation, kinetics, and visual / spatial design—without getting overwhelmed by the complexities and specifications of code. As the end of 2020, and with it Adobe’s support for Flash, looms large, the question becomes all the more urgent: if critical making offers an effective, feminist pedagogical model for teaching electronic literature in the face of technological obsolescence, how can we maintain these practices in our undergraduate teaching in a post-Flash world?

Case Study: Stepworks

In response to this question, I propose Stepworks. Created by Erik Loyer in 2017, Stepworks is a web-based platform for creating single-touch interactive texts that centers on the primary metaphor of staged, musical performance, an appropriate metaphor that resonates with traditions of e-literature and e-poetry that similarly conceptualize these texts in terms of performance. Stepworks performances are powered by Stepwise XML, also developed by Loyer, but the platform does not require creators to work directly in the XML to make interactive texts. Instead, the program in its current iteration interfaces directly with Google Sheets—a point that, while positive for ease of use and support for collaboration (features I discuss more fully in what follows), does introduce a problematic reliance on Google’s corporate decisions for maintaining access to, and the workability of, Stepworks and its texts. In the Stepworks spreadsheet, each column is a “character,” named across the first row, while the cells in each subsequent row contain what that character performs—the text, image, code, sound, or other content associated with that character. Though characters are often named entities that speak in text, they can also be things like instructions for use, metadata about the piece, musical instruments that perform sound and pitch, or a “pulse,” a special character that defines a rhythm for the textual performance. Finally, each cell contains the content that will be performed with a single-click interaction, and cells are performed in the order they appear down the rows of the spreadsheet. Students can therefore easily and quickly experiment with the different effects of single-click interactive texts, which can perform at the syllable, word, or even phrase level, depending simply on how much content they place in each cell.
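The spreadsheet-as-score convention described above can be sketched programmatically. The following Python snippet is a minimal, hypothetical illustration of how a sheet of this kind might map onto a sequence of single-click “steps”; the function name, data layout, and dialogue are my own illustrative assumptions, not Stepworks code or the actual Stepwise XML format.

```python
# Hypothetical sketch: a Stepworks-style sheet as a list of rows.
# The first row names the characters (columns); each later cell holds
# the content that character performs on one click, read down the rows.

def steps_from_sheet(rows):
    """Flatten a character-column sheet into an ordered list of
    (character, content) steps, skipping empty cells."""
    characters = rows[0]
    performance = []
    for row in rows[1:]:
        for character, content in zip(characters, row):
            if content:  # an empty cell performs nothing
                performance.append((character, content))
    return performance

# Two characters trading lines, one cell (one click) at a time.
sheet = [
    ["Abbott", "Costello"],
    ["Who's on first,", ""],
    ["", "Who?"],
    ["Exactly.", ""],
]

for character, content in steps_from_sheet(sheet):
    print(f"{character}: {content}")
```

Placing a full line, a word, or a single syllable in each cell is what shifts the granularity of the performance, as the figures below illustrate with line-level and syllable-level cells.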

Figures 1, 2, and 3 illustrate some of these effects. Figure 1, a gif of a Stepworks text where each line of Abbott and Costello’s “Who’s on First” occupies a single cell, showcases the effects of performing full lines of text with each interaction.

The gif shows text appearing in alternating lines, one above the other, as each character speaks. This text reads: “Strange as it may seem, they give ball players nowadays very peculiar names / Funny names? / Nicknames, nicknames. / Now on the St. Louis team we have Who’s on first, What’s on second, I Don’t Know is on third-- / That’s what I want to find out / I want you to tell me the names of the fellows on the St. Louis team. / I’m telling you / Who’s on first, What’s on second, I Don’t Know is on third / You know the fellows’ names? / Yes.”
Figure 1. An animated gif showing part of a Stepworks performance created by Erik Loyer that remediates the script of Abbott and Costello’s comedy sketch, “Who’s On First.” The gif was created by the author through a screen recording of the Stepworks performance.

Figure 2, a gif from a Stepworks text based on Lin-Manuel Miranda’s Hamilton, showcases the effects of breaking up each cell by syllable and allowing the player to perform the song at their own pace.

The text appears on a three-by-three grid, and in each space of the grid the text gets a different color, indicating a different character. The gif begins with text in the center position that reads “but just you wait, just you wait…” and in the bottom right that reads “A voice saying.” Then text appears in each position on the grid around the center saying “Alex, you gotta fend for yourself,” and the bottom right space reads “He started retreatin’ and readin’ every treatise on the shelf.” The top left then presents text reading “There would have been nothin’ left to do for someone less astute.”
Figure 2. An animated gif showing part of a Stepworks performance created by Erik Loyer that remediates lyrics from Lin-Manuel Miranda’s Broadway musical Hamilton. The gif was created by the author through a screen recording of the Stepworks performance.

Finally, Figure 3 is taken from Unlimited Greatness, a text that remediates, as single words, Nike’s ad telling the story of Serena Williams. In this text, the effects of filling each cell with a single word are on display.

There is text that appears one word at a time in the center of the screen, which reads “Compton / sister / outsider / pro / #304 / winner / top 10 / Paris / London / New York / Melbourne / #1 / injured / struggling.”
Figure 3. An animated gif showing part of a Stepworks performance called Unlimited Greatness created by Erik Loyer that remediates a Nike ad about Serena Williams’s tennis career originally created by the design firm Wieden+Kennedy. The gif was created by the author through a screen recording of the Stepworks performance.

Pedagogically speaking, this range of illustrative content allows students to quickly grasp the different effects of manipulating textual performance in an interactive piece. At the same time, the ease of working in the spreadsheet offers them an effective place from which to experiment in their own work. For example, reflecting on the process of creating a Stepworks performance based on a favorite song, one student describes placing words purposefully into the spreadsheet, breaking up the entries by word and eventually by syllable, rather than by line:

I made them [the words] pop up one by one instead of the words popping up all together using the [&] symbol. I also had words that had multiple syllables in the song, so I broke those up to how they fit in the song. So, if the song had a 2-syllable word, then I broke it up into 2 beats.

Although the spreadsheet is not named explicitly, the student describes a creative process of purposefully engaging with the different effects of syllables, words, and lines, opting for a word- and syllable-based division in the spreadsheet to affect how the song’s remediation is performed (see Figure 4, the middle column of the top row).

Besides offering the spreadsheet as an accessible space in which to build interactive texts, Stepworks comes equipped with pre-set “stages” on which to perform those texts. Currently there are seven stages available for a Stepworks performance, and each of them offers a different look and feel for the text. For instance, the Hamilton piece discussed above (Figure 2) is performed on Vocal Grid, a stage which assigns each character a specific color and position on a grid that will grow to accommodate the addition of more characters; Who’s On First, by contrast, is performed on Layer Cake, a stage that displays character dialogue in stacked rows. As the stages offer a default set design for font, color, textual behavior, and background, they (like the spreadsheets) reduce the barrier-to-entry for creating and experimenting with kinetic, e-poetic works. Once a text is built in the spreadsheet and the spreadsheet published to the web, it may be directly loaded into any stage and performed; a simple page reload in the browser will update the performance with any changes to the spreadsheet, thereby supporting an iterative design process. The user-facing stage and design-facing spreadsheet are integrated with one another so that students may tinker, fiddle, experiment, and learn, moving between the perspective of the audience member and that of the designer.

In my own classes, it is at this stage of the process that additional, less tangible benefits of critical making pedagogy have come to light, especially in my most common Stepworks assignment: challenging students to remediate a favorite song or poem into a Stepworks performance. When students reach the stage of loading and reloading their spreadsheets into different stages in Stepworks, they also often begin exploring different performance effects for their remediated texts, regularly going beyond any requirements of the assignment to learn more about working with and in Stepworks to achieve the effects they want. Of this creative experience, one student writes:

In order to accurately portray the message of the song, I had to take time and sing the song to myself multiple times to figure out the best way to group the words together to appear on screen. Once this was completed, I had an issue where words would appear out of order because I didn’t fully understand the +1 (or any number for that matter) after a word. I thought that this meant to just add extra time to the duration of the word or syllable. I later figured out that it acted as a timeline of when to appear rather than the duration of the word’s appearance. Once I figured this out, it then came down to figuring out the best timing of the words appearing on each screen to match the original song.

Here the student’s reflection indicates a few important things. First, it articulates the student’s own iterative design process, as they move between the designer’s and the audience’s experiential positions to create the “best timing of the words” performed on screen. This same movement guides the student toward figuring out some more advanced Stepworks syntax: working with a “Pulse” character to effect a tempo or rhythm in the piece by delaying certain words’ appearances on screen (the reference to “+1”). As the assignment required an attention to rhythm but did not specify the necessity of using the Pulse character to create one, this student’s choice to work with the Pulse character points to both a self-guided movement beyond the requirements of the assignment and a developing poetic sensitivity to words and texts as rhythmic bodies that effect meaning through performance. In Stepworks, rhythm can be created by working down the spreadsheet’s rows (as in the Hamilton or Unlimited Greatness pieces in Figures 2 and 3); however, this rhythm is reliant on the player’s interactive choices and personal sensitivity to the poetics. Using a pulse to effect rhythm overrides the player’s choice and assigns a rhythm to the contents within a single cell, performing them on a set delay following the user’s single click.[6] Working with the pulse, then, the student is letting their creative and critical ownership of their poetic design lead them to a direct confrontation with the culturally-conditioned paradox of user control and freedom in interactive environments. This point is explicitly evoked later in the reflection, as the student expands on the effects of the pulse, writing:

The timing of the words appearing on the screen … also enhance the impact and significance of certain words. My favorite example is how one word … “Hallelujah,” can be stretched out to take up 8 units of time, signifying the importance of just that one word
(see Figure 4, the leftmost text in the second row).
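The student’s realization that a number like “+1” marks when a syllable appears on a shared timeline, rather than how long it lasts, can be sketched in a few lines of Python. The cell syntax and beat values below are simplified assumptions for illustration only; they do not reproduce Stepwise’s actual grammar.

```python
# Simplified illustration of pulse-timed content within a single cell:
# each token is tagged with the beat at which it appears after the
# user's single click (a timeline position, not a duration).

def schedule(cell_tokens):
    """Map each beat to the tokens that appear on that beat."""
    timeline = {}
    for token, beat in cell_tokens:
        timeline.setdefault(beat, []).append(token)
    return timeline

# "Hallelujah" stretched across 8 units of time, one syllable every
# two beats, all triggered by one click:
cell = [("Hal", 0), ("le", 2), ("lu", 4), ("jah", 6)]
print(schedule(cell))  # {0: ['Hal'], 2: ['le'], 4: ['lu'], 6: ['jah']}
```

Misreading the beat value as a duration, as the student initially did, would shuffle the apparent order of words; reading it as a timeline position keeps the syllables in sync with the pulse.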

Indeed, across my courses, students have demonstrated a willingness to go beyond the specifications of this assignment in order to fully realize their poetic visions. Besides the Pulse, students often explore syntax for “sampling” the spreadsheet’s contents to perform a kind of remix, customize fonts or colors in the display, or add multimedia like images, gifs, or sound. As Stepworks supports this kind of work, it simultaneously supports students’ hands-on learning of digital poetics, especially those popularized by works created with Flash.

Finally, I’d like to point to one more aspect of Stepworks that makes it an ideal platform for teaching e-literature through feminist critical making pedagogies. Because of its integration with Google Sheets, Stepworks supports collaboration unfettered by distance or synchronicity. Speaking from my own classes and experiences teaching with Stepworks—particularly the Spring 2020 class—this is where the program really excels pedagogically, as it opens a space for teaching e-poetry through sharing ideas and poetic decisions, creating and experimenting “together,” and supporting students to learn from and with one another, even when the shared situatedness of the classroom is inaccessible. A powerful pedagogical practice in its own right, collaboration is also a core feminist technology, central to feminist praxis across disciplines, and a cornerstone of digital humanities work broadly speaking. Indeed, it is one of the tenets of the digital humanities that has contributed to the field’s disruption of, and challenge to, expectations of humanities scholarship.

As an illustration of what can be done in the (virtual, asynchronous) classroom through a collaborative Stepworks piece, I will end this case study with a piece my students created to close out the Spring 2020 semester—a moment that, to echo the opening of this article, threw us into a space of technological inaccess, of temporal obsolescence. Unable to access many works of Flash-based e-lit that had been on the original syllabus, we turned to critical making through Stepworks as our primary—in some cases, only—mode of engaging with e-poetics, particularly of interaction and kinetics. Working from a single Google spreadsheet, each student took a character-column and added a poem, song, or lyric that was, in some way, meaningful to them in that moment. The resulting performance (Figure 4) appears as a multi-voiced, found text; watching or playing it, it is almost as if you are in a room, surrounded by others, sharing a moment of speaking, hearing, watching, and performing poetry.

Figure 4. A recording of a collaborative Stepworks piece of found poetry built by students in Sarah Laiola’s Spring 2020 Introduction to the Digital Humanities class. The piece presents text on a three-by-three grid, and each cell of the grid performs the text of a different poem or song chosen by the students.

Conclusion

Technological obsolescence will undoubtedly continue to present a challenge for teaching digital texts and electronic literatures. As our systems update into outdatedness, we face the loss of both readerly access to existing texts and writerly access to creative potentialities. Flash, which, as of this writing, is facing its final days, is just one example of this cycle and its effects on electronic and digital literatures. As I have shown here, however, even as platforms (like Flash) become obsolete on contemporary machines, critical making through newer platforms with low barriers to entry, like Stepworks, can offer productive counters to this loss, particularly from the writerly perspective of, in this case, kinetic poetics. Indeed, this approach can enhance the teaching of e-literature and digital textuality more broadly, for other inaccessible platforms. For instance, Twine, a free platform for creating text-based games that runs in contemporary browsers, is a productive space for teaching hypertextual storytelling in place of Storyspace, which, though still functional on contemporary systems, is cost-prohibitive at nearly $150; similarly, Kate Compton’s Tracery offers an opportunity for first-time coders to create simple text generators, in contrast to comparatively more advanced languages like JavaScript, which require such focus on the code that the lesson on the poetics of textual generation may be lost on overwhelmed students. Whatever the case may be, critical making through low-cost, easy-to-use contemporary platforms offers a robust space for teaching e-literary and digital media creation that maintains a feminist, inclusive, and equitable classroom ethos to counter technological obsolescence.

Notes

[1] Throughout this article, I purposely use different capitalizations when referencing the digital humanities. Capital DH, as used here, indicates the formalized field as such, while lowercase digital humanities refers to broad practices of digital humanities work that are not always recognizable as part of the field.

[2] Although I cite Svensson here, the term is taken from the 2011 MLA conference theme. Svensson, however, theorizes the big tent in this piece.

[3] Anecdotal data.

[4] For example, see Hélène Cixous’s écriture feminine, or feminist language-oriented poetry such as that by Susan Howe.

[5] The unessay is a form of student assessment that replaces the traditional essay with something else, often a creative work of any media or material. For examples see Cate Denial’s blog reflecting on the assignment, Hayley Brazier and Heidi Kaufman blog post “Defining the Unessay,” or Ryan Cordell’s assignment in his Technologies of Text class.

[6] As a reminder, Stepworks texts are single-click interactions, where the entire contents of a cell in the spreadsheet are performed on a click.

Bibliography

Balsamo, Anne, Dale MacDonald, and John Winet. 2017. “AIDS Quilt Touch: Virtual Quilt Browser.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/acee4e20-acf5-4eb3-bb98-0ebaa5c10aaa#ch32.

Belojevic, Nina. 2017. “Glitch Console.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/b1dbaeab-f4ad-4738-9064-8ea740289503#ch20.

Bouse, Brad. 2012. Between Page and Screen. New York: Siglio Press. https://www.betweenpageandscreen.com.

Burtch, Allison, and Eric Rosenthal. 2017. “Mic Jammer.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/f8dcc03c-2bc3-499a-8c1b-1d17f4098400#ch11.

Chess, Shira. 2020. Play Like a Feminist. Playful Thinking. Cambridge: MIT Press.

D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. Ideas. Cambridge: MIT Press.

Grigar, Dene, and the ELL Team. 2020. “Electronic Literature Lab.” Electronic Literature Lab. Accessed June 30. http://dtc-wsuv.org/wp/ell/.

Endres, Bill. 2017. “A Literacy of Building: Making in the Digital Humanities.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/2acf33b9-ac0f-4411-8e8f-552bb711e87c#ch04.

FemTechNet. 2012. “Manifesto.” FemTechNet. https://femtechnet.org/publications/manifesto/.

FemtechNet White Paper Committee. 2013. “Transforming Higher Education with Distributed Open Collaborative Courses (DOOCS): Feminist Pedagogies and Networked Learning.” FemTechNet. https://femtechnet.org/about/white-paper/.

Flores, Leonardo. 2019. “Third Generation Electronic Literature.” Electronic Book Review, April. https://doi.org/10.7273/axyj-3574.

Hayles, N. Katherine. 2006. “The Time of Digital Poetry: From Object to Event.” In New Media Poetics: Contexts, Technotexts, and Theories, edited by Adalaide Morris and Thomas Swiss, 181–210. Cambridge, Massachusetts: MIT Press.

Hayles, N. Katherine, and Ryan House. 2020. “How to Teach Electronic Literature.” Teaching Electronic Literature.

Knight, Kim A. Brillante. 2017. “Fashioning Circuits, 2011–Present.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/05613b02-ec93-4588-98e0-36cd4336e7a0#ch26.

Losh, Elizabeth, and Jacqueline Wernimont, eds. 2018. Bodies of Information: Intersectional Feminism and Digital Humanities. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/projects/bodies-of-information.

Salter, Anastasia, and John Murray. 2014. Flash: Building the Interactive Web. Cambridge: MIT Press.

Sayers, Jentery, ed. 2017. Making Things and Drawing Boundaries: Experiments in the Digital Humanities. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/projects/making-things-and-drawing-boundaries.

Slocum, Holly, Nicholas Schiller, Dragan Espenschied, and Greg Philbrook. 2019. “After Flash: Proposal for the 2019 National Endowment for the Humanities, Humanities Collections and Reference Resources, Implementation Grant.” Electronic Literature Lab. http://dtc-wsuv.org/wp/ell/afterflash/.

Stefans, Brian Kim. 2000. The Dreamlife of Letters. Electronic Literature Collection: Volume 1. https://collection.eliterature.org/1/works/stefans__the_dreamlife_of_letters.html.

Svensson, Patrik. 2012. “Beyond the Big Tent.” In Debates in the Digital Humanities, edited by Matthew K. Gold. Minneapolis: University of Minnesota Press.

Wardrip-Fruin, Noah. 2003. “From Instrumental Texts to Textual Instruments.” In Streaming Worlds. Melbourne, Australia. http://www.hyperfiction.org/texts/textualInstrumentsShort.pdf.

About the Author

Sarah Whitcomb Laiola is an assistant professor of Digital Culture and Design at Coastal Carolina University, where she also serves as coordinator for the Digital Culture and Design program. She holds a PhD in English from the University of California, Riverside, and specializes in teaching new media poetics, visual culture, and contemporary digital technoculture. Her most recent peer-reviewed publications appear in Electronic Book Review (Nov 2020), Syllabus (May 2020), Hyperrhiz (Sept 2019), Criticism (Jan 2019), and American Quarterly (Sept 2018). She will be joining the JITP Editorial Collective in 2021.


Make-Ready: Fabricating a Bibliographic Community

Abstract

The 3Dhotbed project was established in 2016 with the goal of leveraging maker culture to enhance the study of material culture. An acronym for 3D-Printed History of the Book Education, the project extends the tradition of experiential learning set forth by the bibliographical press movement through the development of open-access teaching tools. After successfully implementing its initial curricular offerings, the project developed an engaged community of practice across the United States, Canada, and the United Kingdom, whose members have since called for further development. This paper reports upon recent efforts to answer these demands through the design of a community-populated repository of 3D-printable teaching tools for those engaging in bibliographical instruction and research. Among the key findings are the demonstration that the same pedagogical methods that made the bibliographical press movement successful are now applicable more broadly throughout the expanding discipline of book history, and that 3D technologies, distributed by a digital library platform, are ideal for providing open access to the tools that can promote this pedagogy on a global scale. By outlining the theoretical grounding for the project’s work and charting the hurdles and successes the group has encountered in furthering these efforts in the burgeoning bibliographical maker movement, the 3Dhotbed project is well positioned to serve as a model for others seeking to utilize 3D technologies, digital library infrastructure, and/or multiple institutional partnerships in the design of interactive pedagogy.

Introduction

In the mid-twentieth century, a new approach to research and instruction surrounding the history of the book emerged, which advocated for a hands-on mode of inquiry complementing the traditional work performed in a reading room.[1] The bibliographical press movement, as it came to be known, demonstrated that first-hand experience with the tools and methods of, say, an Elizabethan pressroom allowed researchers to put into practice the book history they theorized. In recent decades, the movement’s continued success has hinged largely upon access to appropriate equipment with which students and researchers can conduct these experiential experiments. The reality today is that many more instructors would like to offer an immersive approach to book history pedagogy than have access to the proper equipment to do so. However, with the proliferation of affordable 3D technologies and advancements in digital library distribution, there is a new opportunity to democratize and significantly enhance access to this and other forms of instruction. This paper reports on efforts of the 3Dhotbed project (3D-printed History of the Book Education) to build a community-populated repository of open-access, 3D-printable teaching tools for those engaging in bibliographical instruction and research. Following a brief analysis of how the project has evolved from decades of application in book history-related instruction, the paper will outline 3Dhotbed’s work toward establishing a community 3D data repository and fostering a community of practice. Beyond the realm of book history, these findings will be relevant to those developing projects using 3D technologies, or anyone working on a digital project that relies upon workflows distributed among partners at many institutions.

An Overview of the Bibliographical Press Movement

A crucial element in the expansion of bibliography during the last century was the advent of the bibliographical press movement. Defined by one of its key proponents, Philip Gaskell, as “a workshop or laboratory which is carried on chiefly for the purpose of demonstrating and investigating the printing techniques of the past by means of setting type by hand, and of printing from it on a simple press,” these pedagogical experiments have provided an ideal environment for scholars seeking to understand the printed texts they study (Gaskell 1965, 1). The movement appeared in the wake of a call-to-action made in 1913 by R. B. McKerrow, which has served as a refrain for book historians over the last century: “It would, I think, be an excellent thing if all who propose to edit an Elizabethan work from contemporary printed texts could be set to compose a sheet or two in as exact facsimile as possible of some Elizabethan octavo or quarto, and to print it on a press constructed on the Elizabethan model” (McKerrow 1913, 220). As he postulated, with even a rudimentary experiential lesson in historic printing, students “would have constantly and clearly before their minds all the processes through which the matter of the work before them has passed, from its first being written down by the pen of its author to its appearance in the finished volume, and would know when and how mistakes are likely to arise” (McKerrow 1913, 220). It would be some twenty years before McKerrow’s course of study was brought to fruition with the establishment of a bibliographical press by Hugh Smith at University College, London in 1934. By 1964, at least twenty-five such presses had been established across three continents (Gaskell 1965, 7–13).

In practice, the work McKerrow’s successors have embarked upon with these bibliographical presses has served to further both pedagogy and research. The hands-on approach encourages students to internalize the processes, forging an intimate knowledge of them that better informs subsequent analysis of texts. In the past several decades, experiential learning theorists have confirmed Gaskell’s adage that “there is no better way of recognizing and understanding [printers’] mistakes than that of making them oneself” (Gaskell 1965, 3). As Sydney Shep has written:

By applying technical knowledge to the creation of works which challenge the relationship between structure and meaning, form and function, sequentiality and process, text and reader, students achieve not only a profound understanding of nature and behaviour of the materials they are handling, but an awareness of the interrelationship of book production methods and their interpretive consequences. (Shep 2006, 39–40)

The continued proliferation of these bibliographical presses stands as a testament to the efficacy of an experiential approach that focuses on the process to better understand the product of printing. This presswork has formalized a methodology through which book historians have established and tested hypotheses regarding how books were printed in the hand-press period. Indeed, much of the foundational work done during the twentieth century to investigate the history of the book relied upon the reverse engineering of historical methods in bibliographical press environments (Shep 2006, 39). However, such advances have not come without significant logistical obstacles. Notably, the acquisition of proper equipment has been a perennial challenge for those wishing to take up this work.

Writing in the mid-twentieth century, Philip Gaskell described how he relied on the charity of his colleagues at the Cambridge University Press, who lent him their castoff equipment to start his operations. However, the two nineteenth-century iron hand-presses he received differed significantly from those used in the Elizabethan period. Gaskell acknowledges, “Ideally we should try to imitate the ways of the hand-press period as closely as we can, and the establishments which have built replicas of old common presses are certainly nearest to that ideal” (Gaskell 1965, 3). Modern iron hand-presses such as those given to him mimicked the operations of an Elizabethan common press well enough to foster an understanding of early printing, so long as one understood the historical concessions being made.

Concerns regarding how to procure the right equipment pervaded the early decades of the bibliographical press movement. In his article advocating for an experiential approach, Gaskell included a census of current bibliographical presses and took pains to list the equipment each held, paying particular attention to the make and model of the press and how it was acquired (Gaskell 1965, 7–13). Notably, only one of these was fashioned before the nineteenth century, with most dating to the late nineteenth or early twentieth centuries. Today, even locating latter-day printing equipment proves to be a struggle. Steven Escar Smith has commented upon the increasing difficulty of doing so as the printing industry continues to evolve, adding that “the scarcity and age of surviving tools and equipment, especially in regard to the hand press period, makes them too precious for student use” (Smith 2006, 35). This being the case, one must resort to custom fabrication in order to facilitate the kind of hands-on experiential learning that Gaskell and his colleagues advocated. In today’s market, where securing even a few cases of type can be prohibitively expensive for many instructors, such a prospect is a nonstarter. In order for the bibliographical press movement to grow and expand in the twenty-first century, an updated approach must be adopted.

New Solutions for Old Problems: 3D Technologies in Book History Instruction

The 3Dhotbed project was devised to alleviate the difficulties that would-be instructors in book history encounter when trying to develop experiential learning curricula. Using a complement of 3D scanning and modeling techniques, this multi-institutional collaboration has created highly accurate, 3D-printable replicas that are freely available for download from an online repository. The project is positioned as a confluence of book history, maker culture, and the digital humanities, and is grounded by core values such as openness, transparency, and collaboration (Spiro 2012). Succinctly put, the project proposes that (a) the same methods that made the bibliographical press movement successful in the instruction of early modern western printing can and should be applied more broadly throughout the expanding discipline of book history; and (b) 3D technologies, distributed via a digital library platform, are ideal for providing open access to the tools that can promote this pedagogy on a global scale.

Extending the practice of relying upon custom fabrication to build multi-faceted collections of book history teaching tools, we envision a future for bibliographical instruction that embraces additive manufacturing to create hands-on teaching opportunities. The project aspires to continue the spirit of the bibliographical press movement, which was marked by experimentation as inquiry and a commitment to experiential learning, within burgeoning digital spaces. By leveraging the sustainable, open-access digital library at the University of North Texas (UNT), the project democratizes access to a growing body of 3D-printable teaching tools and the knowledge they engender beyond often isolated and institutionally-bound physical classrooms. Ultimately, 3Dhotbed envisions an international bibliographic community whose members not only benefit from the availability and replicability of these tools, but also contribute their own datasets and teaching aids to the project.

3Dhotbed: From Product to Project

The 3Dhotbed team’s initial goal was developing a typecasting toolkit to teach the punch-matrix system for producing moveable metal type. The team captured 3D scans of working replicas from the collection of the Book History Workshop at Texas A&M University as the basis for the toolkit’s design. The workshop’s own wood-and-metal replicas were painstakingly fabricated by experts in analytical bibliography and early modern material culture, and have been used in hands-on typographical instruction for almost two decades.

Rotating gif of a 3D-modeled hand mould.
Figure 1. Rotating gif of Moveable Hand Mould Side A. One piece of a two-part mould used for casting hand-made type. Slid together, the two parts of the mould create a cavity that adjusts for varying letter widths. A typecaster inserts a matrix for a specific letter into the hand mould and fills the resulting cavity with molten type metal, creating a piece of type (Jacobs, McIntosh, and O’Sullivan 2017).

Successfully developing the typecasting toolkit was an iterative process that included scanning, processing, and modeling the exemplar wood-and-metal replicas into functional 3D-printable tools.[2] The resulting toolkit served as a proof-of-concept, not only for the creation and use of 3D tools in hands-on book history instruction, but also for the formation of an active community of practice invested in their use and further development (Jacobs, McIntosh, and O’Sullivan 2018). Since being launched in the summer of 2017, the 3Dhotbed collection has been used more than 3,100 times at institutions across the United States, Canada, and the United Kingdom.

In developing the first toolkit, the project illustrated the potential pedagogical impact of harnessing maker culture to advance the study of material culture. As Dale Dougherty, founder of Make magazine, framed the term, a “maker” is someone deeply invested in do-it-yourself (DIY) activities, who engages creatively with the physicality of an object and thinks critically about the ideas that have informed it. This definition casts the maker as more akin to a tinkerer than an inventor (Dougherty 2012, 11–14). As a tool designed for classroom use, the first 3Dhotbed toolkit includes several distinct pieces, which the end-user must interpret, physically assemble, and enact in order to fully understand. Inspired by the maker movement, this personal engagement with the design and mechanical operations of tools of the fifteenth century, as mediated through those of the twenty-first century, encourages a student to actively construct their knowledge of the punch-matrix system (Sheridan et al. 2014, 507).

The team members immediately integrated the typecasting toolkit into curricula across their respective institutions. At the University of North Texas, Jacobs partnered with Dr. Dahlia Porter to incorporate hands-on book history instruction into a graduate-level course focusing on bibliography and historical research methods. The required readings introduced students to the theoretical processes of type design, typecasting, typesetting, imposition, and hand-press printing, but many struggled to connect those processes to the resulting physical texts they were assigned to analyze prior to their visit. In a post-instruction survey, Dr. Porter stated, “It is only when students model the processes themselves that they fully understand the relationship between the technical aspects of book making and the book in its final, printed and bound form.” At Texas A&M University, where an experiential book history program was already established, O’Sullivan incorporated the typecasting toolkit in support of more ad-hoc educational outreach, such as informal instruction and community events. Doing so allowed detailed bibliographic instruction to be brought into spaces where it might not otherwise appear, illustrating the immediate benefits of portability and adaptability provided by using 3D models outside a pressroom environment.

From the outset, 3Dhotbed was envisioned as a community resource, and the team has repeatedly solicited feedback and guidance from end-users. For example, the team hosted a Book History Maker Fair in 2016, attracting a varied group of students, faculty, and community members. Attendees widely reported that handling these tools was helpful in crafting their understanding of historical printing practices, and expressed an eagerness to attend similar events in the future. Following the project’s official debut at the RBMS Conference in summer 2017, instructors contacted the team to discuss how they were successfully incorporating the typecasting toolkits in their book history classrooms and offered ideas for additional tools to be developed.[3] Simultaneously, scholars involved in similar work reached out to the team with datasets they had generated, hoping to make them publicly available for teaching and research (see Figure 2). This international group of scholars, students, and special collections librarians offered innovative use cases, additional content, and suggestions regarding areas in which to grow, which encouraged the team to develop the 3Dhotbed project from a stand-alone toolkit into a multi-faceted digital collection.

A key feature of this second phase is the involvement of this community—not only in the reception and use of 3Dhotbed tools, but in their creation and description as well. These continuing endeavors have developed into networked and peer-led learning experiences. In moving to this community-acquisition model, the project draws from a wider selection of available artifacts to digitize and include, as well as a broader array of expertise in the history of the book. Moving toward such a model also created an opportunity for the project to benefit from the perspective of independent designers, book history enthusiasts, and makers who work outside the academy.

The success of this distributed model depends on three things: the use of a trusted digital library for the data, the development of logical workflows with clear directives for ingesting and describing community-generated data, and considerations for responsibly stewarding datasets of collections owned by partner institutions. Rather than create an unsustainable, purpose-built repository to host the datasets, 3Dhotbed leverages the existing supported system at UNT Libraries. This digital library encourages community partnerships and has a successful history of facilitating distributed workflows for thorough content description (McIntosh, Mangum, and Phillips 2017). The team built upon this existing structure and developed additional tools to facilitate varying levels of collaboration. This model enables the 3Dhotbed project to explore new partnerships within the international bibliographic community while promising sustainability and open access.

Community-Generated Data in Theory

The twenty-first century has witnessed a boom in the development and use of 3D technologies across both public and private sectors. In 2014, Erica Rosenfeld Halverson and Kimberly M. Sheridan pointed to the makerspace.com directory as listing more than 100 makerspaces worldwide, citing this as an indicator of the maker movement’s expansion (Halverson and Sheridan 2014, 503). As of this writing, six years later, the same directory lists nearly ten times that figure.[4] With this massive growth in available resources has also come a great expansion in the types of projects and products incorporating 3D technologies and similar maker tools. In some cases, the focus of activities taking place in a makerspace is on the making itself, with the end-product being only incidental—the relic of a constructive process and physical expression of more abstract ideas (Ratto 2011; Hertz 2016). In others, however, the process is driven toward a specific goal, an end-product that might possess future research, instruction, or commercial value. Such activities include custom fabrication, prototyping, and 3D digitization. In these instances, where the deliverable could be considered a work of enduring value, particular attention must be given not only to the design and manufacturing of the product, but also to its metadata and sustainable long-term storage (Boyer et al. 2017).

In the humanities, where 3D projects have begun to proliferate, we must develop methodologies to guide work of enduring value and ensure its rigor and validity. As in any digitization project, it is important to understand what information has been lost between the original artifact and the resulting 3D model when incorporating the model into research and instruction. The field of book history is well prepared to utilize 3D models, as scholars in the discipline have regularly turned to functional replicas to inform their inquiry where the original historical artifacts were not available (Gaskell 1965, 1–2; Samuelson and Morrow 2015, 90–95).

The 3Dhotbed team strives for full transparency in this regard, communicating where concessions to historical accuracy have been made, and attending in particular to the anticipated learning outcomes of each dataset. For instance, the 3Dhotbed typecasting teaching toolkit cannot be used to cast molten metal type when 3D printed from plastic polymers, but is nevertheless effective in communicating the mechanics of how a punch, matrix, and hand-mould may be used to produce multiple identical pieces of movable type.

These decisions are an integral part of producing a detailed 3D model. Most are inconspicuous, but cumulatively they have a real impact on the finished product. Understanding where and how these decisions are made has been one significant outcome of this project. With the progression toward a collaborative model, there is a communal responsibility to make informed recommendations as to how the scanning or modeling ought to be carried out, ensuring a consistent level of quality across the digital assets ingested into the repository. Likewise, it is essential that those who download, print, and use future replicas are able to evaluate historical concessions for themselves. An analogous concern is encountered by anyone conducting research with a microfilmed or digitized copy of a book or newspaper (Correa 2017, 178). These surrogates provide much of a book’s content (e.g. the printed text), but they cannot convey all of the information contained therein (e.g. bibliographical features such as watermarks or fore-edge paintings)—a fact that has limited their utility to book historians.

By contrast, 3D models can enter a repository as complex objects accompanied by supplemental files and other forms of metadata. Through these we have the ability—and the ethical responsibility—to contextualize the digital surrogate with full catalog descriptions, details regarding the scanning and post-processing, and photographic documentation of the original artifact (Manžuch 2017, 9–10). To facilitate the fabrication and use of 3Dhotbed models, information on how best to 3D print these tools is included within the metadata. For example, woodblock facsimiles require a granularity in printing not afforded by fused deposition modeling (FDM) printers, the most common 3D printer type among hobbyists and publicly available makerspaces. Rather, these tools are better served by resin printing, a process suited to the production of finely detailed models. By identifying the appropriate 3D printing process, these supplemental data help mitigate frustration and wasted resources on the part of the end-user.

Side angle view of a 3D-scanned leafy plant woodblock.
Figure 2. Due to its fine detail, the Mattioli woodblock requires resin-printing (Sweitzer-Lamme 2016).

Community-Generated Data in Production

Expanding the collection to include data contributed from a broader community of practice is the ideal way to continue the original mission of the 3Dhotbed project. By including objects from a variety of international printing traditions, the collection encourages the study of the book over the course of millennia, beyond its history in western Europe. Doing so also affords the opportunity to include a more diverse range of expertise for the contribution of metadata, scholarship, and teaching materials in concert with the data itself. However, in expanding the repository, it was imperative that the team institute workflows to ensure this phase of the project adhered to the same principles of transparency and accuracy for both data and metadata. Thus, this phase signals a shift for 3Dhotbed from acting solely as content generator to acting as content curator as well.

This shift introduced unanticipated hurdles not encountered in previous additions to the 3Dhotbed collection. While some were specific to the objects themselves, other broader issues were the byproduct of putting a community model into production. These included describing uncataloged and/or foreign-language materials and managing the institutional relationships that come with hosting others’ data. The first community-generated teaching tool added to the 3Dhotbed project illustrates some of these challenges. Like the typecasting toolkit, it was fashioned in response to a distinct pedagogical demand. While developing the curriculum for a history class covering Qing history through Manchu sources at UCLA, Devin Fitzgerald (an instructor in the English Department) contacted UCLA Library Special Collections staff to arrange a class visit. During their visit, University Archivist Heather Briston introduced the group to a complete set of thirty-six xylographic woodblocks carved during the eighteenth century to print miracle stories, or texts based on the ancient Buddhist Diamond Sutra. Due to their age and fragility, the Diamond Sutra blocks are no longer suitable for inking and printing. In response to the instructor’s desire to facilitate a hands-on learning experience for his students, Coordinator of Scholarly Innovation Technologies Doug Daniels led a project in the UCLA Lux Lab to digitize all thirty-six blocks and create high-quality, resin-printed facsimiles that could be inked and printed to recreate the original text. Working with Fitzgerald and Daniels, the 3Dhotbed team ingested one of these digitized blocks into the UNT Digital Library as a tool for those in the broader community interested in studying and teaching xylography and eastern printing practices (Daniels 2017).

Rotating gif of a carved xylographic woodblock illustrating on one half a standing teacher and three seated students, and, on the other, Chinese characters.
Figure 3. 3D dataset model of a xylographic woodblock containing pages 31–32 of “An Illustrated Explanation of Miraculous Response Appended to the Heart Sutra and Diamond Sutra,” an ancient Buddhist text published by Zhu Xuzeng 朱續曾 in 1798 (Daniels 2017).

Like many items from complex artifactual collections, the Diamond Sutra woodblock had not yet been cataloged by its home institution, due in part to a foreign-language barrier. This limited the 3Dhotbed team’s ability to mine existing metadata to describe the derivative 3D model. However, the team collaborated with the instructor, a specialist in both European and East Asian bibliography, to describe the woodblock data in both English and Chinese. The addition of the Diamond Sutra woodblock model to the growing corpus of the 3Dhotbed project is an exciting development, as it enhances access to educational tools that help decenter the often anglophone focus of book history instruction. Developing a system for community-generated description in which scholars are encouraged to describe foreign-language materials is central to building an international bibliographic community of practice. Additionally, the project’s website supplements the digital collection and creates a space where scholars can host valuable pedagogical materials to help contextualize the teaching tools within multiple fields of study.[5]

Integrating the Diamond Sutra woodblock into the collection also helped refine workflows that will be necessary for taking on larger partnerships in the future. For example, the team realized that community-generated data necessitates community-generated description. Facilitated by the infrastructure already in place in the UNT Digital Library, 3Dhotbed provides two methods for description via distributed workflows, allowing content generators to provide detailed metadata for objects within their area of expertise. These two methods are the direct-access description approach used for larger, more complex projects and a mediated description approach for smaller projects.

The direct-access description approach leverages the partnership model already implemented by the UNT Digital Library. UNT Digital Projects staff create metadata editor accounts for individuals and institutions submitting data. The describers then have access, either pre- or post-ingestion, to edit the individual records for items uploaded to the collections under their purview through the UNT Digital Library metadata editor platform. The editing interface was built in-house and includes embedded metadata input guidelines as well as various examples to guide standards-compliant data formatting. The platform’s low barrier of entry eases the training required for new metadata creators and makes it flexible enough for partners with varying backgrounds in description. Additionally, users are able to update descriptions continuously, meaning partners can expand an item’s description based on future cataloging projects or other developing knowledge.

Despite the platform’s low barrier of entry, partners still require some base-level training in order to provide standards-compliant description in the metadata editing platform. For ad-hoc partnerships, the 3Dhotbed project therefore developed a mediated description approach suited to smaller-scale and discrete datasets. The project team replicates the fields required for standards-compliant description in a Google document, along with guidelines and examples that mimic the metadata editing platform. Project partners populate the document with descriptive information for their 3D data, often with guidance from the project team. The team then formats the metadata as needed for compliance and ingests it, together with the data for the object itself, into the digital library.

These foundations offer numerous possibilities for small- and large-scale partnerships with individual content generators as well as institutions. Flexible workflows provide multiple modes of entry for potential partners, scaffolding the iterative steps from acquisition to description, and enable various levels of investment depending on the nature of the project. Additionally, both approaches ensure item description adheres to the metadata standards required for all items ingested into the UNT Digital Library, regardless of format.

With the possibility of varying levels of collaboration comes the added challenge of managing relationships with varied partners. The team anticipated the potential sensitivity of hosting data generated by other makers, which may duplicate digital collections held in other repositories. Fortunately, this is another area in which the 3Dhotbed project benefits from the existing infrastructure in place within the UNT Digital Library and its partner site, the Portal to Texas History. The Portal models itself as a “gateway … to primary source materials” and provides access to collections and data from hundreds of content partners to provide “a vibrant, growing collection of resources” (Portal, n.d.). It mitigates conflict over attribution by establishing clear ownership and credit for each hosted item. The growth of the 3Dhotbed project into a community-generated resource hinges upon carefully navigating these relationships with our core principles of openness, transparency, and collaboration.

The Bibliographical Maker Movement

With the aid of twenty-first century tools and digital library infrastructure, the 3Dhotbed project creates broad and open access to the previously rarefied opportunity to work with models of the tools and materials used in historical book production. Moreover, the future success of 3Dhotbed is not solely based on the volume, diversity, or rarity of individual items, but also on the ability of the platform to put these items in conversation. Distributed workflows facilitate the creation of an innovative scholarly portal, which has inspired a community around book history instruction in digital spaces while also making possible new facets of original research on these aggregated materials. As this collection of teaching toolkits continues to grow, the analytical practices set forth in the mid-twentieth century are expanded and refined. With the inclusion of a wider array of objects and tools, this bibliographical maker movement reflects the global reach of book history as a discipline. The project strives not only to make examples of these global book traditions more readily available, but also to enable diversity in experiential book history instruction. The Diamond Sutra woodblock now hosted in the digital repository provides users the opportunity to view a full 3D scan of the original woodblock and download the data necessary to 3D print a facsimile. By virtue of its inclusion in the 3Dhotbed project, the Diamond Sutra is accompanied by a video demonstrating the processes of carving, inking, and printing woodblocks in various time periods, further enhancing the data’s potential as a teaching tool.[6]

In moving toward a community-led project in the second phase, 3Dhotbed has sought mutual investment from the international bibliographic community in developing and expanding a growing corpus of datasets to facilitate their diverse research. In addition to affordably fabricated teaching tools, the 3Dhotbed project will now include high-resolution 3D models of various typographical and technical collections items related to book history, contributed by these new members of our community. As the repository continues to grow, it will digitally collect and preserve 3D models of tools and collections items that are physically distributed across the globe. Thus, the 3Dhotbed project will become a valuable research portal in its own right—a total greater than the sum of its parts that will facilitate research and instruction across institutions and between disciplines to further our understanding of global book history.

Notes

[1] Authorship of this article was shared in equal collaboration; attribution is thus listed alphabetically.
[2] For more detail on the team’s toolkit development process, see Jacobs, McIntosh, and O’Sullivan 2018.
[3] Audio of this presentation is available at https://alair.ala.org/handle/11213/8581.
[4] The current directory is located at https://makerspaces.make.co/.
[5] For examples, see https://www.3dhotbed.info/instructionalresources.
[6] See https://www.3dhotbed.info/blog/2020/10/5/printing-with-woodblocks.

Bibliography

Boyer, Doug M., Gregg F. Gunnell, Seth Kaufman, and Timothy M. McGeary. 2017. “MorphoSource: Archiving and Sharing 3-D Digital Specimen Data.” The Paleontological Society Papers 22: 157–181.

Correa, Dale J. 2017. “Digitization: Does It Always Improve Access to Rare Books and Special Collections?” Preservation, Digital Technology & Culture 45, no. 4: 177–179. https://doi.org/10.1515/pdtc-2016-0026.

Daniels, Doug. 2017. “Diamond Sutra [Dataset: Jin gang jing xin jing gan ying tu shuo 金剛經心經感應圖說 pp. 31–32].” University of North Texas Libraries, Digital Library. https://digital.library.unt.edu/ark:/67531/metadc1259405/.

Dougherty, Dale. 2012. “The Maker Movement.” Innovations 7, no. 3: 11–14.

Gaskell, Philip. 1965. “The Bibliographical Press Movement.” Journal of the Printing Historical Society 1: 1–13.

Halverson, Erica Rosenfeld, and Kimberly M. Sheridan. 2014. “The Maker Movement in Education.” Harvard Educational Review 84, no. 4: 495–504.

Hertz, Garnet. 2016. “What is Critical Making?” Current 7. https://current.ecuad.ca/what-is-critical-making.

Jacobs, Courtney, Marcia McIntosh, and Kevin M. O’Sullivan. 2017. “Dataset: Moveable Type Hand Mould Side A.” University of North Texas Libraries, Digital Library. https://digital.library.unt.edu/ark:/67531/metadc967385/.

———. 2018. “Making Book History: Engaging Maker Culture and 3D Technologies to Extend Bibliographical Pedagogy.” RBM 19, no. 1: 1–11.

Manžuch, Zinaida. 2017. “Ethical Issues in Digitization Of Cultural Heritage.” Journal of Contemporary Archival Studies 4, no. 2: Article 4. http://elischolar.library.yale.edu/jcas/vol4/iss2/4.

McIntosh, Marcia, Jacob Mangum, and Mark E. Phillips. 2017. “A Model for Surfacing Hidden Collections: The Rescuing Texas History Mini-Grant Program at The University of North Texas Libraries.” The Reading Room 2, no. 2: 39–59. https://digital.library.unt.edu/ark:/67531/metadc987474/.

McKerrow, Ronald B. 1913. “Notes on Bibliographical Evidence for Literary Students and Editors of English Works of the Sixteenth and Seventeenth Centuries.” Transactions of the Bibliographical Society 12: 213–318.

Portal to Texas History. n.d. “Overview.” The Portal to Texas History. Accessed October 14, 2019. https://texashistory.unt.edu/about/portal/.

Ratto, Matt. 2011. “Critical Making: Conceptual and Material Studies in Technology and Social Life.” The Information Society 27, no. 4: 252–260.

Samuelson, Todd, and Christopher L. Morrow. 2015. “Empirical Bibliography: A Decade of Book History at Texas A&M.” The Papers of the Bibliographical Society of America 109, no. 1: 83–109.

Shep, Sydney J. 2006. “Bookends: Towards a Poetics of Material Form.” In Teaching Bibliography, Textual Criticism and Book History, edited by Ann R. Hawkins, 38–43. London: Pickering & Chatto.

Sheridan, Kimberly M., Erica Rosenfeld Halverson, Breanne K. Litts, Lisa Brahms, Lynette Jacobs-Priebe, and Trevor Owens. 2014. “Learning in the Making: A Comparative Case Study of Three Makerspaces.” Harvard Educational Review 84, no. 4: 505–531.

Smith, Steven Escar. 2006. “‘A Clear and Lively Comprehension’: The History and Influence of the Bibliographical Laboratory.” In Teaching Bibliography, Textual Criticism and Book History, edited by Ann R. Hawkins, 32–37. London: Pickering & Chatto.

Spiro, Lisa. 2012. “This is Why We Fight: Defining the Values of the Digital Humanities.” In Debates in the Digital Humanities, edited by Matthew K. Gold. Minneapolis: University of Minnesota Press. http://dhdebates.gc.cuny.edu/debates/text/13.

Sweitzer-Lamme, Jon. 2016. “Dataset: Mattioli Woodblock, [art original] 457. Malva maior altera.” University of North Texas Libraries, Digital Library. Accessed October 14, 2019. https://digital.library.unt.edu/ark:/67531/metadc1477779/.

About the Authors

Courtney “Jet” Jacobs is the Head of Public Services, Outreach, and Community Engagement for UCLA Library Special Collections where she oversees the department’s research and reference services, instruction, exhibits, programming and community events, as well as the Center for Primary Research and Training. She holds a BA in English from Ohio State University and an MSLIS from Syracuse University. In partnership with teaching faculty, she has developed and delivered multiple lectures and workshops on book history and printing history utilizing primary source materials in support of course curriculum.

Marcia McIntosh first began working in digital collection development as an undergraduate student at Washington University in St. Louis. After earning a BA in English and African & African American Studies, she went on to earn an MS in Information Studies from the University of Texas at Austin’s School of Information. She is currently the Digital Production Librarian at the University of North Texas where she assists in the coordination, management, and training necessary to create digital projects.

Kevin M. O’Sullivan is the Curator of Rare Books & Manuscripts at Texas A&M University, where he also serves as Director of the Book History Workshop. Prior to this, he was awarded an IMLS Preservation Administration Fellowship to work at Yale University Libraries as part of the Laura Bush 21st Century Librarian Program. He holds a BA in English from the University of Notre Dame, and received his MS from the School of Information with a concurrent Certificate of Advanced Study from the Kilgarlin Center for the Preservation of the Historical Record, both at the University of Texas at Austin. Currently, he is working toward his PhD in English at Texas A&M University.

A computer-rendered pile of black question marks on a black background is punctuated by three scattered orange question marks that glow.

Using Wikipedia in the Composition Classroom and Beyond: Encyclopedic “Neutrality,” Social Inequality, and Failure as Subversion

Abstract

Instructors who use Wikipedia in the classroom typically focus on teaching students how to adopt the encyclopedia’s content practices so that they can improve their writing style and research skills, and conclude with an Edit-A-Thon that invites them to address Wikipedia’s social inequalities by writing entries about minority groups. Yet these approaches do not sufficiently highlight how Wikipedia’s social inequalities function at the level of language itself. In this article, I outline a pedagogical approach that invites students to examine the ways that language and politics shape, and are shaped by, each other on Wikipedia. In the case of my Spring 2020 class, my approach encouraged students to examine the relationship between Wikipedia’s content policies and white supremacy, as well as Wikipedia’s claims to neutrality. I also draw on the Edit-A-Thon that I organized at the end of the unit to show how instructors can extend a critical engagement with Wikipedia by building in moments of failure, in addition to success. In the process, my pedagogical approach reminds instructors—especially in composition and writing studies—that it is impossible to teach writing decoupled from the politics of language.

Wikipedia has become a popular educational tool over the last two decades, especially in the fields of composition and writing studies. The online encyclopedia’s “anyone-can-edit” ethos emphasizes the collective production of informative writing for public audiences, and instructors have found that they can use it to teach students about writing processes such as citation, collaboration, drafting, editing, research, and revision, in addition to stressing topics such as audience, tone, and voice (Purdy 2009, 2010; Hood 2009; Vetter, McDowell, and Stewart 2019; Xing and Vetter 2020). Composition courses that use Wikipedia have thus begun to follow a similar pattern. Students learn about Wikipedia’s history, examine the way its three content policies (neutral point of view [NPOV], no original research, and verifiability) govern how entries are written and what research sources are cited, and discuss the advantages and limits of Wikipedia’s open and anonymous community of volunteer contributors. Then, as a final assignment, instructors often ask students to edit an existing Wikipedia entry or write their own. By contrast, instructors in fields like cultural studies, feminism, and postcolonialism foreground Wikipedia’s social inequalities by asking students to examine how its largely white and male corps of volunteer editors has produced a regrettable lack of coverage of women and people of color (Edwards 2015; Pratesi, Miller, and Sutton 2019; Rotramel, Parmer, and Oliveira 2019; Montez 2017; Koh and Risam n.d.). When they ask students to edit or write Wikipedia entries, these instructors also invite students to focus on minority groups or underrepresented topics, thus transforming the typical final assignment into one that mirrors the Edit-A-Thons hosted by activist groups like Art + Feminism.

The socially conscious concerns that instructors in cultural studies, feminism, and postcolonialism have raised are compelling because they foreground Wikipedia’s power dynamics. When constructing my own first-year undergraduate writing course at the University of Virginia, then, I sought to combine these concerns with the general approach instructors in composition and writing studies are using. In the Fall 2019 iteration of my course, my students learned about topics like collaborative writing and citation, in addition to examining academic and journalistic articles about the encyclopedia’s racial and gender inequalities. The unit concluded with a two-day Edit-A-Thon focused on African American culture and history. The results seemed fabulous: my brilliant students produced almost 20,000 words on Wikipedia, and created four new entries—one about Harlem’s Frederick Douglass Book Center and three about various anti-slavery periodicals.[1] In their reflection papers, many conveyed that Edit-A-Thons could help minority groups and topics acquire greater visibility, and argued that the encyclopedia’s online format accelerates and democratizes knowledge production.

Yet, as an instructor, I felt that I had failed to sufficiently emphasize how Wikipedia’s content policies also played a role in producing the encyclopedia’s social inequalities. Although I had devoted a few classes to those policies, the approaches I adapted from the composition and cultural studies fields meant that my students only learned how to adopt those policies—not how to critically interrogate them. The articles we read also obscured how these policies relate to the encyclopedia’s social inequalities because scholars and journalists often conceptualize such inequalities in terms of proportion, describing how there is more or less information about this particular race or that particular gender (Lapowsky 2015; Cassano 2015; Ford 2011; Graham 2011; John 2011). Naturally, then, that is how our students learn to frame the issue, too—especially when the Edit-A-Thons we organize for them focus on adding (or subtracting) content rather than investigating how Wikipedia’s inequalities also occur through the way the encyclopedia governs language. Similar concerns have been raised by feminist instructors like Leigh Gruwell, who has found that Wikipedia’s policies “exclude and silence feminist ways of knowing and writing” and argued that current pedagogical models have not yet found ways to invoke Wikipedia critically (Gruwell 2015).[2]

What, then, might a pedagogical model that does invoke Wikipedia critically look like? I sought to respond to this question by creating a new learning goal for the Wikipedia unit in the Spring 2020 iteration of my course. This time around, I would continue to encourage my students to use Wikipedia’s content policies to deepen their understanding of the typical topics in a composition course, but I would also invite them to examine how those policies create—and then conceal—inequalities that occur at the linguistic level. In this particular unit, we concentrated on how various writers had used Wikipedia’s content policies to reinscribe white supremacy in an entry about UVa’s history. The unit concluded with an Edit-A-Thon where students conducted research on historical materials from UVa’s Albert and Shirley Small Special Collections Library to produce a Wikipedia page about the history of student activism at UVa. This approach did not yield the flashy, tweet-worthy results I saw in the Fall. But it is—to my mind—much more important, not only because it is influenced by postcolonial theorists such as Gayatri Spivak, who has demonstrated that neutrality or “objectivity” is impossible to achieve in language, but also because it prompted my students to discuss how language and politics shape, and are shaped by, each other. In the process, this approach also reminds instructors—especially in composition and writing studies—to recognize that it is impossible to teach writing decoupled from the politics of language. Indeed, Jiawei Xing and Matthew A. Vetter’s recent survey of 113 instructors who use Wikipedia in their classrooms reveals that they did so to develop their students’ digital communication, research, critical thinking, and writing skills, but only 40% of those instructors prompted their students to engage with the encyclopedia’s social inequalities as well (Xing and Vetter 2020). 
While the study’s participant pool is small and not all the instructors in that pool teach composition and writing courses, the results remain valuable because they suggest that current pedagogical models generally do not ask students to examine the social inequalities that Wikipedia’s content policies produce. This article therefore outlines an approach that I used to invite my students to explore the relationship between language and social inequalities on Wikipedia, with the hope that other instructors may improve upon, and then interweave, this approach into existing Wikipedia-based courses today.

Given that this introduction (and the argument that follows) stresses a set of understudied issues in Wikipedia, however, my overall insistence that we should continue using Wikipedia in our classrooms may admittedly seem odd. Wouldn’t it make more sense, some might ask, to support those who have argued that we should stop using Wikipedia altogether? Perhaps—but I would have to be a fool to encourage my students to disavow an enormously popular online platform that is amassing knowledge at a faster rate than any other encyclopedia in history, averages roughly twenty billion views a month, and shows no signs of slowing down (“Wikimedia Statistics – All Wikis” n.d.). Like all large-scale projects, the encyclopedia contains problems—but, as instructors, we would do better to equip our students with the skills to address such problems when they arise. The pedagogical approach that I describe in this paper empowers our students to identify problems directly embedded in Wikipedia’s linguistic structures, rather than studying demographic data about the encyclopedia alone. Only when these internal dynamics are grasped can the next generation begin to truly reinvent one of the world’s most important platforms in the ways that they desire.

1. Wikipedia’s Neutrality Problem

Wikipedia’s three interdependent content policies—no original research, verifiability, and neutral point of view—offer a rich opportunity for students to critically engage with the encyclopedia. Neutral point of view is the most fundamental of the three—indeed, the Wikipedia community deems it non-negotiable—and defines it as follows:

Neutral point of view (NPOV) … means representing fairly, proportionately, and, as far as possible, without editorial bias, all the significant views that have been published by reliable sources about the topic … [it means] carefully and critically analyzing a variety of reliable sources and then attempting to convey to the reader the information contained in them fairly, proportionally … including all verifiable points of view. (“Wikipedia: Neutral Point of View” 2020)

Brief guidelines like “avoid stating opinions as facts” and “prefer nonjudgmental language” (“Wikipedia: Neutral Point of View” 2020) follow this definition. My students in both semesters fixated on these points, and on the overall importance of eschewing “editorial bias,” when engaging with NPOV for the first time—and for good reason. A writing style that seems to promise fact alone is particularly alluring to a generation that has grown up on fake news and photoshopped Instagram bodies. It is no surprise, then, that my students responded enthusiastically to the first writing exercise I assigned, which asked them to pick a quotidian object and describe it from what they understood to be a neutral point of view as defined by Wikipedia. The resulting pieces were well-written. When I ran my eyes over careful descriptions of lamps, pillows, and stuffed animals, I glimpsed what Purdy and the composition studies cadre have asserted: that writing for Wikipedia does, indeed, provoke students to write clearly and concisely, and to pay closer attention to grammar and syntax.

Afterwards, however, I asked my students to consider the other part of NPOV’s definition: that the writer should proportionally articulate multiple perspectives about a topic (“Wikipedia: Neutral Point of View” 2020). A Wikipedia entry about our planet, for example, would include fringe theories claiming the Earth is flat—but a writer practicing NPOV would presumably ensure that these claims do not carry what Wikipedians describe as “undue weight” over the scientific sources which demonstrate that the Earth is round. Interestingly, the Wikipedia community’s weighing rhetoric associates the NPOV policy with the archetypal symbol of justice: the scales. Wikipedians do not merely summarize information. By adopting NPOV, they appear to summarize information in the fairest way. They weigh out different perspectives and, like Lady Justice, their insistence on avoiding editorial bias seems to ensure that they, too, are metaphorically “blindfolded” to maintain impartiality.

Yet, my students and I saw how NPOV’s “weighing” process, and Wikipedia’s broader claims to neutrality, quickly unraveled when we compared a Wikipedia entry to another scholarly text about the same subject. Comparing and contrasting texts is a standard pedagogical strategy, but the exercise—when raised in relation to Wikipedia—is often used to emphasize how encyclopedic language differs from fiction, news, or other writing genres, rather than provoking a critical engagement with Wikipedia’s content policies. In my Spring 2020 course, then, I shifted the purpose of this exercise. This time around, we compared and contrasted two documents—UVa’s Wikipedia page and Lisa Woolfork’s “‘This Class of Persons’: When UVa’s White Supremacist Past Meets Its Future”—to study the limits of Wikipedia’s NPOV policy.

Both documents construct two very different narratives to describe UVa’s history. My students and I discovered that their differences are most obvious when they discuss why Thomas Jefferson established UVa in Charlottesville, and the role that enslaved labor played in constructing the university:

Wikipedia: In 1817, three Presidents (Jefferson, James Monroe, and James Madison) and Chief Justice of the United States Supreme Court John Marshall joined 24 other dignitaries at a meeting held in the Mountain Top Tavern at Rockfish Gap. After some deliberation, they selected nearby Charlottesville as the site of the new University of Virginia. [24] (“University of Virginia” 2020)

Woolfork: On August 1, 1818, the commissioners for the University of Virginia met at a tavern in the Rockfish Gap on the Blue Ridge. The assembled men had been charged to write a proposal … also known as the Rockfish Gap Report. … The commissioners were also committed to finding the ideal geographical location for this undertaking [the university]. Three choices were identified as the most propitious venues: Lexington in Rockbridge County, Staunton in Augusta County, and Central College (Charlottesville) in Albemarle County. … The deciding factor that led the commissioners to choose Albemarle County as the site for the university was exclusively its proximity to white people. The commissioners observed, “It was the degree of the centrality to the white population of the state which alone then constituted the important point of comparison between these places: and the board … are of the opinion that the central point of the white population of the state is nearer to the central college….” (Woolfork 2018, 99–100)

Wikipedia: Like many of its peers, the university owned slaves who helped build the campus. They also served students and professors. The university’s first classes met on March 7, 1825. (“University of Virginia” 2020)

Woolfork: For the first fifty years of its existence, the university relied on enslaved labor in a variety of positions. In addition, enslaved workers were tasked to serve students personally. … Jefferson believed that allowing students to bring their personal slaves to college would be a corrosive influence. … [F]aculty members, however, and the university itself owned or leased enslaved people. (Woolfork 2018, 101)

Table 1. Comparison of Wikipedia and Woolfork on why Thomas Jefferson established UVa in Charlottesville, and the role that enslaved labor played in constructing the university

Although the two Wikipedia extracts “avoid stating opinions as facts,” they expose how NPOV’s requirement that a writer weigh out different perspectives to represent all views “fairly, proportionately, and, as far as possible” is precisely where neutrality breaks down. In the first pair of extracts, the Wikipedia entry gives scant information about why Jefferson selected Charlottesville. Woolfork’s research, however, outlines that what contributors summarized as “some deliberation” was, in fact, a discussion about locating the university in a predominantly white area. The Wikipedia entry cites source number 24 to support the summary, but the link leads to a Shenandoah National Park guide that highlights Rockfish Gap’s location, instead of providing information about the meeting. Woolfork’s article, by contrast, carefully peruses the Rockfish Gap Report, which was produced in that meeting.

One could argue, as one of my students did, that perhaps Wikipedia’s contributors had not thought to investigate why Jefferson chose Charlottesville, and therefore did not know of the Rockfish Gap Report’s existence—and that is precisely the point. The Wikipedia entry’s inclusion of all three Presidents and the Chief Justice suggests that, when “weighing” different sources and pursuing a range of perspectives about the university’s history, previous contributors decided—whether knowingly or unconsciously—that describing who was at the meeting was a more important viewpoint. They fleshed out a particular strand of detail that would cement the university’s links to American nationalism, rather than inquire how and why Charlottesville was chosen. An entry that looks comprehensive, balanced, well-cited, and “neutral,” then, inevitably prioritizes certain types of information according to the lines of inquiry its contributors choose to expand upon.

The second pair of extracts continues to reveal the fractures in the NPOV policy. Although Woolfork’s research shows that the university relied on enslaved labor for its first fifty years, the only mention of slavery in the 10,000-word Wikipedia entry is buried within the three sentences I copied above, which undercuts NPOV’s claims to proportionality. Moreover, the first sentence carefully frames the university’s previous ownership of slaves as usual practice (“like many of its peers”). It is revealing that this sentence does not focus, as the rest of its paragraph does, on UVa alone, but expands outward to include all universities when conveying this specific fact about slavery. Interestingly, these facts about enslaved labor also come before the sentence about the university’s first day of classes. This means that the entry, which has so far proceeded in chronological fashion, suddenly experiences a temporal warp. It places the reader within the swing of the university’s academic life when it conveys that students and professors benefitted from enslaved labor, only to pull the reader backwards to the first day of classes in the next sentence, as though it were resetting the clock.

I want to stress that the purpose of this exercise was not to examine whether Woolfork’s article is “better” or “truer” than the Wikipedia entry, nor was it an opportunity to undercut the writers of either piece. Rather, the more complex concept my students grappled with was how the article and the entry demonstrate why the true/false—or neutral/biased—binaries that Wikipedia’s content policies rely on are themselves flawed. One could argue that both pieces about UVa are “true,” but the point is that they are slanted differently. The Wikipedia entry falls along an exclusively white axis, while Woolfork’s piece falls along multiple axes—Black and white—and demonstrates how both are actually intertwined due to the university’s reliance on enslaved labor. From a pedagogical standpoint, then, this exercise pushed my students in two areas often unexplored in Wikipedia assignments.

First, it demonstrated to my students that although phrases like “editorial bias” in Wikipedia’s NPOV guidelines presuppose an occasion where writing is impartial and unadulterated, such neutrality does not—and cannot—exist. Instructors in composition studies often ask students to practice NPOV writing for Wikipedia to improve their prose. This process, however, mistakenly conveys that neutrality is an adoptable position even though the comparative exercise I outlined above demonstrates neutrality’s impossibility.

Second, the comparative exercise also demonstrated to my students that Wikipedia’s inequalities occur at the linguistic level as much as the demographic level. Instructors in cultural studies frequently host Edit-A-Thons for their students to increase content about minority cultures and groups on Wikipedia, but this does not address the larger problem embedded in NPOV’s “weighing” of different perspectives. The guidelines state that Wikipedians must weigh perspectives proportionally—but determining what proportionality is to begin with is up to the contributors, as evinced by the two Wikipedia extracts I outlined. Every time a writer weighs different sources and perspectives to write an entry, what they are really doing is slanting their entry along certain axes of different angles, shapes, shades, and sizes. In the articles my students read, the most common axis Wikipedians use, whether knowingly or unconsciously, is one that centers white history, white involvement, and white readers. For example, as my students later discovered in Wikipedia’s “Talk” page for the entry about UVa, when two editors were discussing whether the university’s history of enslaved labor rather than its honor code should be mentioned in the entry’s lead, one editor claimed that the enslaved labor was not necessarily “the most critical information that readers need to know” (“University of Virginia” 2020).[3] Which readers? Who do Wikipedians have in mind when they use that phrase? In this instance, we see how “weighing” different perspectives not only leads one to elevate one piece of information over another, but also one type of reader over another.

As instructors, we need to raise these questions about audience, perspective, and voice in Wikipedia for our students. It is not so much that we have not covered these topics: we just haven’t sufficiently asked our students to engage with the social implications of these topics, like race (and, as Gruwell has said so cogently, gender). One way to begin doing so is by inflecting our pedagogical approaches with the discoveries in fields such as postcolonial studies and critical race studies. For example, my pedagogical emphasis on the impossibility of neutrality as I have outlined it above is partially indebted to critics like Gayatri Spivak. Her work has challenged western critics’ tendency to appear as though they are speaking from a neutral and objective perspective, and demonstrated how these claims conceal the ways that such critics represent and re-present a subject in oppressive ways (Spivak 1995). Although her scholarship is rooted in deconstructionism and postcolonial theory, her concerns about objectivity’s relationship to white western oppression intersect with US-based critical race theory, where topics like objectivity are central. Indeed, as Michael Omi and Howard Winant have explained, racism in the United States proliferated when figures like Dr. Samuel Morton established so-called “objective” biological measures like cranial capacity to devalue the Black community while elevating the white one (Omi and Winant 1994).

I mention these critics not to argue that one must necessarily introduce a piece of advanced critical race theory or postcolonial theory to our students when using Wikipedia in the composition classroom (although this would of course be a welcome addition for whoever wishes to do so). After all, I never set Spivak’s “Can The Subaltern Speak?” as reading for my students. But what she revealed to me about the impossibility of neutrality in that famous paper prompted me to ask my students about Wikipedia’s NPOV policy in our class discussions and during our comparative exercise, rather than taking that policy for granted and inviting my students to adopt it. If instructors judiciously inflect their pedagogical practices with the viewpoints that critical race theory and postcolonial theory provide, then we can put ourselves and our students in a better position to see how digital writing on sites like Wikipedia is not exempt from the dynamics of power and oppression that exist offline. Other areas in critical race theory and postcolonial theory can also be brought to bear on Wikipedia, and I invite others to uncover those additional links. Disciplinary boundaries have inadvertently created the impression that discoveries in postcolonialism or critical race theory should concern only the scholars working within those fields, but the acute sensitivity towards power, marginalization, and oppression that these fields exhibit means that the viewpoints their scholars develop are relevant to any instructor who desires to foster a more socially conscious classroom.

2. The Edit-a-Thon: Failure as Subversion

Composition classes that use Wikipedia usually conclude with an assignment where students are invited to write their own entry. In cultural studies courses in particular, students address the lack of content about minority cultures or groups by participating in a themed Edit-A-Thon organized by their instructor. These events mirror the Edit-A-Thons hosted by social justice organizations and activism groups outside of the university. Such groups usually plan Edit-A-Thons in ways that guarantee maximum success for the participants because many are generously volunteering their time. Moreover, for many participants, these Edit-A-Thons are the first time they write for Wikipedia, and if the goal is to inspire them to continue writing after the event, then it is crucial that their initial encounter with this process is user-friendly, positive, and productive. This is why these events frequently offer detailed tutorials on adopting Wikipedia’s content policies, and provide pre-screened secondary source materials that adhere to Wikipedia’s prohibition on “original research” (writing about topics for which no reliable, published sources exist) and its requirement of “verifiability” (citing sources that are reliable). Indeed, these thoughtful components at the Art + Feminism Edit-A-Thon event I attended a few years ago at the Guggenheim Museum ensured that I had a smooth and intellectually stimulating experience when I approached Wikipedia as a volunteer writer for the first time. It was precisely because this early experience was so rewarding that Wikipedia leapt to the forefront of my mind when I became an instructor and was searching for ways to expand student interest in writing.

It is because I am now approaching Wikipedia as an instructor rather than a first-time volunteer writer, however, that I believe we can amplify critical engagement with the encyclopedia if we set aside “success” as an end goal. Of course, there is no reason why one cannot have critical engagement and success as dual goals, but when I was organizing the Edit-a-Thon in my class, I noticed that building in small instances of “failure” enriched the encounters that my students had with Wikipedia’s content policies.

The encyclopedia stipulates that one should not write about organizations or institutions in which one is enrolled or employed, so I could not invite my students to edit the entry about UVa’s history itself. Instead, I invited them to create a new entry about the history of student activism at UVa using materials at our library.[4] When I was compiling secondary sources for my students, however, I was more liberal with this list in the Spring than I was in the Fall. Wikipedians have long preferred secondary sources like articles in peer-reviewed journals, books published by university presses or other respected publishing houses, and mainstream newspapers (“Wikipedia: No Original Research” 2020), a preference that ensures writers typically center academic knowledge when building entries about their topic. Thus, like the many social justice and non-profit organizations who host Edit-A-Thons, for Fall 2019 I pre-screened and curated sources that adhered to Wikipedia’s policies so that my students could easily draw from them for the Edit-A-Thon.

In Spring 2020, however, I invited my students to work with a range of primary and secondary sources—meaning that some historical documents, like posters, zines, and other paraphernalia, either required different reading methods than academically written secondary sources or were impossible to cite because writing about them would constitute “original research.” Experiencing the failure to assimilate documents and forms of knowledge that are not articulated as published texts can help students interrogate Wikipedia’s lines of inclusion and exclusion, rather than simply taking them for granted. For example, during one particularly memorable conversation with a student who was studying hand-made posters belonging to student activist groups that protested during UVa’s May Days strikes in 1970, she said that she knew she couldn’t cite the posters or their contents, but asked: “Isn’t this history, too? Doesn’t this count?”

By the end of the Spring Edit-a-Thon, my students produced roughly the same amount of content as the Fall class, but their reflection papers suggested that they had engaged with Wikipedia from a more nuanced perspective. As one student explained, a Wikipedia entry may contain features that signal professional expertise, like clear and formal prose or a thick list of references drawn from books published by university presses and peer-reviewed journals, but still exclude or misconstrue a significant chunk of history without seeming to do so.

A small proportion of my students, however, could not entirely overcome one particular limitation. Some continued describing Wikipedia’s writing style as neutral even after asserting, in previous pages of their essays, that neutrality in writing was impossible. It is possible that this dissonance occurred accidentally, or because such students had not yet developed a vocabulary to describe what that style was even when they knew it was not neutral. My sense, however, is that this dissonance may also reflect the broader (and, perhaps, predominantly white) desire for the fantasy of impartiality that Wikipedia’s policies promise. Even if it is more accurate to accept that neutrality does not exist on the encyclopedia, this knowledge may create discomfort because it highlights how one has always already taken up a position on a given topic even when one believes one has been writing “neutrally” about it, especially when that topic is related to race. Grasping this particular point is perhaps the largest challenge facing both the student and the instructor in the pedagogical approach I have outlined.

Notes

[1] The results for the Fall 2019 Edit-A-Thon are here. The results for the Spring 2020 Edit-A-Thon are here.

[2] Some of those in composition studies, like Paula Patch and Matthew A. Vetter, partially address Gruwell’s call. Patch, for example, has constructed a framework for critically evaluating Wikipedia that prompts students to focus on authorship credibility, reliability, interface design, and navigation (Patch 2010), often by comparing various Wikipedia entries to other scholarly texts online or in print. By contrast, Vetter’s unit on Appalachian topics on Wikipedia focused on the negative representations of the region within a larger course that sought to examine the way that Appalachia is continually marginalized in mainstream media culture (Vetter 2018).

[3] An extract from the Talk page conversation:

Natureium removed the mention of slavery from the lead as undue. I don’t see why that fact would be undue, but the dozens and dozens of other facts in the lead are not. I mean, the university is known for its student-run honor code? Seriously? (None of the sources in the section on that topic seem to prove that fact.) In addition, I see language in the lead such as “UVA’s academic strength is broad”—if there’s work to be done on the lead, it should not be in the removal of a foundation built with slave labor. If anything, it balances out what is otherwise little more than a jubilation of UvA that could have been written by the PR department. Drmies (talk) 17:40, 18 September 2018 (UTC)

I think this is an area where we run into a difficult problem that plagues projects like Wikipedia that strive to reflect and summarize extant sources without publishing original research. As a higher ed scholar I agree that an objective summary of this (and several other U.S.) university [sic] would prominently include this information. However, our core policies restrict us from inserting our own personal and professional judgments into articles when those judgments are not also matched by reliable sources. So we can’t do this until we have a significant number of other sources that also do this. (I previously worked at a research center that had a somewhat similar stance where the director explained this kind of work to some of us as “we follow the leading edge, we don’t make it.”) ElKevbo (talk) 19:36, 18 September 2018 (UTC)

But here, UvA has acknowledged it, no? Drmies (talk) 20:52, 18 September 2018 (UTC)

Yes and there are certainly enough reliable sources to include this information in the article. But to include the information in the lede is to assert that it’s the most critical information that readers need to know about this subject and that is a very high bar that a handful of self-published sources are highly unlikely to cross. Do scholars and other authors include “was built by slaves” when they first introduce this topic or summarize it? If not, we should not do so. ElKevbo (talk) 21:27, 18 September 2018 (UTC)

[4] The COVID-19 pandemic, which began toward the end of our Edit-A-Thon, meant that my students and I were unable to clean up the draft page and sufficiently converse with other editors who had raised various concerns about notability and conflict of interest, so it is not yet published on Wikipedia. We hope to complete this soon. In the meantime, I want to note that had the pandemic not occurred, I would have presented the concerns of these external editors to my students, and used their comments as another opportunity to learn more about the way that Wikipedia prioritizes certain types of knowledge. The first concern was the belief that the history of student activism at UVa was not a notable enough topic for Wikipedia because there was not enough general news coverage about it. Although another editor later refuted this claim, the impulse to rely on news coverage to determine whether a topic was notable enough is interesting within the context of student activism and of social justice protests more broadly. Activist movements are galvanized by the very premise that a particular minority group or issue has not yet been taken seriously by those in power, or by the majority of a population. Some protests, like the Black Lives Matter movement and Hong Kong’s “Revolution of our Times,” have gained enough news coverage across the globe to count as notable topics. Does that mean, however, that protests on a smaller scale, and with less coverage, are somehow less important?

The second concern about conflict of interest also raises another question: Does the conflict of interest policy prevent us (and others) from fulfilling UVa’s institutional responsibility to personally confront our university’s close relationships to enslaved labor, white supremacy, and colonization, and foreground the activist groups and initiatives within UVa that have tried to dismantle these relationships? If so, will—or should—Wikipedia’s policy change to accommodate circumstances like this? These are questions that I wish I had the opportunity to pose to my students.

Bibliography

Cassano, Jay. 2015. “Black History Matters, So Why Is Wikipedia Missing So Much Of It?” Fast Company, January 29, 2015. https://www.fastcompany.com/3041572/black-history-matters-so-why-is-wikipedia-missing-so-much-of-it.

Edwards, Jennifer C. 2015. “Wiki Women: Bringing Women Into Wikipedia through Activism and Pedagogy.” The History Teacher 48, no. 3: 409–36.

Ford, Heather. 2011. “The Missing Wikipedians.” In Critical Point of View: A Wikipedia Reader, edited by Geert Lovink and Nathaniel Tkacz, 258–68. Amsterdam: Institute of Networked Cultures.

Graham, Mark. 2011. “Palimpsests and the Politics of Exclusion.” In Critical Point of View: A Wikipedia Reader, edited by Geert Lovink and Nathaniel Tkacz, 269–82. Amsterdam: Institute of Networked Cultures.

Gruwell, Leigh. 2015. “Wikipedia’s Politics of Exclusion: Gender, Epistemology, and Feminist Rhetorical (In)Action.” Computers and Composition 37: 117–31.

Hood, Carra Leah. 2009. “Editing Out Obscenity: Wikipedia and Writing Pedagogy.” Computers and Composition Online. http://cconlinejournal.org/wiki_hood/wikipedia_in_composition.html.

John, Gautam. 2011. “Wikipedia In India: Past, Present, Future.” In Critical Point of View: A Wikipedia Reader, edited by Geert Lovink and Nathaniel Tkacz, 283–87. Amsterdam: Institute of Networked Cultures.

Koh, Adeline, and Roopika Risam. n.d. “The Rewriting Wikipedia Project.” Postcolonial Digital Humanities (blog). Accessed July 3, 2020. https://dhpoco.org/rewriting-wikipedia/.

Lapowsky, Issie. 2015. “Meet the Editors Fighting Racism and Sexism on Wikipedia.” Wired, March 5, 2015. https://www.wired.com/2015/03/wikipedia-sexism/.

Montez, Noe. 2017. “Decolonizing Wikipedia through Advocacy and Activism: The Latina/o Theatre Wikiturgy Project.” Theatre Topics 27, no. 1: 1–9.

Omi, Michael, and Howard Winant. 1994. Racial Formation in the United States. 2nd ed. New York: Routledge.

Patch, Paula. 2010. “Meeting Student Writers Where They Are: Using Wikipedia to Teach Responsible Scholarship.” Teaching English in the Two-Year College 37, no. 3: 278–85.

Pratesi, Angela, Wendy Miller, and Elizabeth Sutton. 2019. “Democratizing Knowledge: Using Wikipedia for Inclusive Teaching and Research in Four Undergraduate Classes.” Radical Teacher: A Socialist, Feminist, and Anti-Racist Journal on the Theory and Practice of Teaching 114: 22–34.

Purdy, James. 2009. “When the Tenets of Composition Go Public: A Study of Writing in Wikipedia.” College Composition and Communication 61, no. 2: 351–73.

———. 2010. “The Changing Space of Research: Web 2.0 and the Integration of Research and Writing Environments.” Computers and Composition 27: 48–58.

Rotramel, Ariella, Rebecca Parmer, and Rose Oliveira. 2019. “Engaging Women’s History through Collaborative Archival Wikipedia Projects.” The Journal of Interactive Technology and Pedagogy 14 (January). https://jitp.commons.gc.cuny.edu/engaging-womens-history-through-collaborative-archival-wikipedia-projects/.

Spivak, Gayatri Chakravorty. 1995. “Can the Subaltern Speak?” In The Post-Colonial Studies Reader, edited by Bill Ashcroft, Gareth Griffiths, and Helen Tiffin, 28–37. 2nd ed. Oxford: Routledge.

Vetter, Matthew A. 2018. “Teaching Wikipedia: Appalachian Rhetoric and the Encyclopedic Politics of Representation.” College English 80, no. 5: 397–422.

Vetter, Matthew A., Zachary J. McDowell, and Mahala Stewart. 2019. “From Opportunities to Outcomes: The Wikipedia-Based Writing Assignment.” Computers and Composition 59: 53–64.

Wikipedia. 2020. “University of Virginia.” https://en.wikipedia.org/w/index.php?title=University_of_Virginia&oldid=965972109.

Wikipedia. n.d. “Wikimedia Statistics – All Wikis.” Accessed October 13, 2020. https://stats.wikimedia.org/#/all-projects.

Wikipedia. 2020. “Wikipedia:Neutral Point of View.” https://en.wikipedia.org/w/index.php?title=Wikipedia:Neutral_point_of_view&oldid=962777774.

Wikipedia. 2020. “Wikipedia:No Original Research.” https://en.wikipedia.org/w/index.php?title=Wikipedia:No_original_research&oldid=966320689.

Woolfork, Lisa. 2018. “‘This Class of Persons’: When UVA’s White Supremacist Past Meets Its Future.” In Charlottesville 2017: The Legacy of Race and Inequity, edited by Claudrena Harold and Louis Nelson. Charlottesville: UVA Press.

Xing, Jiawei, and Matthew A. Vetter. 2020. “Editing for Equity: Understanding Instructor Motivations for Integrating Cross-Disciplinary Wikipedia Assignments.” First Monday 25, no. 6. https://doi.org/10.5210/fm.v25i6.10575.

Acknowledgments

My thanks must go first to John Modica, my wonderful friend and peer. I am so grateful for his insightful suggestions and constant support when I was planning this Wikipedia unit, for agreeing to pair up his students with mine for the ensuing Spring 2020 Edit-A-Thon and for one of our discussion sessions, and for introducing me to Lisa Woolfork’s excellent article when I was searching for a text for the compare and contrast exercise. I am also indebted to UVa’s Wikimedian-in-Residence, Lane Rasberry, and UVa Library’s librarians and staff—Krystal Appiah, Maggie Nunley, and Molly Schwartzburg—for their help when I hosted my Edit-A-Thons; Michael Mandiberg and Linda Marci for their detailed and rigorous readers’ comments; John Maynard for his smart feedback; Brandon Walsh for his encouragement from start to finish; Kelly Hammond, Elizabeth Alsop, and the editorial staff at JITP; UVa’s Writing and Rhetoric Program for their support; and all of my ENWR students in Fall 2019 and Spring 2020, and John Modica’s Spring 2020 ENWR students as well.

About the Author

Cherrie Kwok is a PhD Candidate and an Elizabeth Arendall Tilney and Schuyler Merritt Tilney Jefferson Fellow at the University of Virginia. She is also the Graduate English Students Association (GESA) representative to UVa’s Writing and Rhetoric Program for the 2020–21 academic year. Her interests include global Anglophone literatures (especially from the Victorian period onwards), digital humanities, poetry, and postcolonialism, and her dissertation examines the relationship between anti-imperialism and late-Victorian Decadence in the poetry and prose of a set of writers from Black America, the Caribbean, China, and India. Find out more about her here.

A sepia-toned stereoscopic image from the turn of the twentieth century depicts a woman in a drawing room, herself looking into a stereoscope.

Interdisciplinarity and Teamwork in Virtual Reality Design

Abstract

Virtual Reality Design has been co-taught annually at Vanderbilt University since 2017 by professors Bobby Bodenheimer (Computer Science) and Ole Molvig (History, Communications of Science and Technology). This paper discusses the pedagogical and logistical strategies employed during the creation, execution, and subsequent reorganization of this course through multiple offerings. This paper also demonstrates the methods and challenges of designing a team-based project course that is fundamentally structured around interdisciplinarity and group work.

Introduction

What is virtual reality? What can it do? What can’t it do? What is it good/bad for? These are some of the many questions we ask on the first day of our course, Virtual Reality Design (Virtual Reality for Interdisciplinary Applications from 2017–2018). Since 2017, professors Ole Molvig of the History Department and Bobby Bodenheimer of Computer Science and Electrical Engineering have been co-teaching this course annually to roughly 50 students at a time. With each offering of the course, we have significantly revamped our underlying pedagogical goals and strategies based upon student feedback, the learning literature, and our own experiences. What began as a course about virtual reality has become a course about interdisciplinary teamwork.

Both of those terms, interdisciplinarity and teamwork, have become deeply woven into our effort. While a computer scientist and a historian teach the course, up to ten faculty mentors from across the university participate as “clients.” The course counts toward the computer science major’s project-class requirement, but nearly half the enrolled students are not CS majors. Agile design and group mechanics require organizational and communication skills above all else. And the projects themselves, as shown below, vary widely in topic and demands, requiring flexibility, creativity, programming, artistry, and most significantly, collaboration.

This focus on interdisciplinary teamwork, and not just in the classroom, has led to a significant, if unexpected, outcome: the crystallization of a substantial community of faculty and students engaging in virtual reality-related research from a wealth of disciplinary viewpoints. Equipment purchased for the course remains active and available throughout campus. Teaching projects have grown into research questions and collaborations. A significant research cluster in digital cultural heritage was formed not as a result of, but in synergy with, the community of class mentors, instructors, and students.

Evolution of the Course

Prior to offering the joint course, both Bodenheimer (CS) and Molvig (History) had offered single-discipline, VR-based courses.

From the Computer Science side, Bodenheimer had taught a full three-credit course on virtual reality to computer science students. In lecture and pedagogy, this course took a fairly standard approach to the material for a one-semester course, as laid out by the Burdea and Coiffet textbook or the more recent (and more applicable) LaValle textbook (LaValle 2017). Topically, the course covered such material as virtual reality hardware, displays, sensors, geometric modeling, three-dimensional transformations, stereoscopic viewing, visual perception, tracking, and the evaluation of virtual reality experiences. The goal of the course was to teach the computer science students to analyze, design, and develop a complex software system in response to a set of computing requirements and project specifications that included usability and networking. The course was also project-based, with teams of students completing the projects. Thus it focused on collaborative learning, and teamwork skills were taught as part of the curriculum, since significant research shows that these skills are best explicitly taught and do not emerge spontaneously (Kozlowski and Ilgen 2006). This practice allowed a project of significant complexity to be designed and implemented over the course of the semester, giving a practical focus to most of the topics covered in the lectures.

From History, Molvig offered an additional one-credit “lab course” option for students attached to a survey of The Scientific Revolution. This lab option offered students the opportunity to explore the creation of, and meaning behind, historically informed reconstructions or simulations. The lab gave students their first exposure to a nascent technology, alongside a narrative context in which to guide their explorations. Simultaneously with this course offering, Vanderbilt was increasing its commitment to the digital humanities, and this course allowed both its instructor and students to study the contours of this discipline as well. While this first offering of a digital lab experience lacked the firm technical grounding and prior coding experience of the computer science offering, the shared topical focus (the scientific revolution) made for boldly creative and ambitious projects within a given conceptual space.

Centering Interdisciplinarity

Unlike Bodenheimer, Molvig did not have a career-long commitment to the study of virtual reality. Molvig’s interest in VR comes rather from a science studies approach to emergent technology. And in 2016, VR was one of the highest profile and most accessible emergent technologies (alongside others such as artificial intelligence, machine learning, CRISPR, and blockchain). For Molvig, emergent technologies can be pithily described as those technologies that are about to go mainstream, that many people think are likely to be of great significance, but no one is completely certain when, for whom, how, or really even if, this will happen.

For VR, then, in an academic setting, these questions look like this: For which fields is VR best suited? Up to that point, it was reasonably common in computer science and psychology, and relatively rare elsewhere. How might VR be integrated into the teaching and research of other fields? How similar or dissimilar are the needs and challenges of these different disciplines’ pedagogical and research contexts?

Perhaps most importantly, how do we answer these questions? Our primary pedagogical approach crystallized around two fundamental questions:

  1. How can virtual reality inform the teaching and research of discipline X?
  2. How can discipline X inform the development of virtual reality experiences?

Our efforts to answer these questions led to the core feature that has defined our Virtual Reality Design course since its inception: interdisciplinarity. Rather than decide for whom VR is most relevant, we attempted to test it out as broadly as possible, in collaboration with as many scholars as possible.

Our course is co-taught by a computer scientist and a humanist. Furthermore, we invite faculty from across campus to serve as “clients,” each with a real-world, discipline-specific problem toward which virtual reality may be applicable. While Molvig and Bodenheimer focused on both questions, our faculty mentors focused on question 1: Is VR surgery simulation an effective tool? Can interactive, immersive 3D museums provide users new forms of engagement with cultural artifacts? How can VR and photogrammetry impact the availability of remote archeological sites? We will discuss select projects below, but as of our third offering of this course, we have had twenty-one different faculty serve as clients, representing twelve different departments or schools, ranging from art history to pediatrics and chemistry to education. A full list of the twenty-four unique projects may be found in Appendix 1.

At the time of course planning, Vanderbilt began a program of University Courses, encouraging co-taught, cross-disciplinary teaching experiments and incentivizing each with a small budget, which allowed us to purchase the hardware necessary to offer the course. One of our stated outcomes was to increase access to VR hardware, and we have intentionally housed the purchased equipment throughout campus. Currently, most VR hardware available for campus use is the product of this course. Over time, purchases from our course have established 10 VR workstations across three different campus locations (Digital Humanities Center, The Wond’ry Innovation Center, and the School of Engineering Computer Lab). Our standard setup has been the Oculus Rift S paired with desktop PCs with minimum specs of 16GB RAM and GTX 1080 GPUs.

As we envisioned the design of the joint, team-taught, and highly interdisciplinary course, several course design questions presented themselves. In our first iteration of the course, we lectured on a condensed and more accessible version of the computer science virtual reality class. Bodenheimer, the computer science instructor, covered most of the same topics as before but at a more general level, focusing on how the concepts were implemented in Unity rather than on the more theoretical perspective of the prior offering. Likewise, Molvig brought with him several tools of his discipline: a set of shared readings (such as the novel Ready Player One (Cline 2012)) and a response essay on the moral and social implications of VR. The class was even separated for two lectures, allowing Bodenheimer to lecture in more detail on C#, and Molvig to offer strategies on how to avoid C# entirely within Unity.

Subsequent offerings of the course, however, allowed us to abandon most of this structure and to significantly revise the format. Our experience with how the projects and student teams worked and struggled led us to re-evaluate the format of the course. Best practices in teaching and learning recommend active, collaborative learning where students learn from their peers (Kuh et al. 2006). Thus, we adopted a structured format more conducive to teamwork, based on Agile (Pope-Ruark 2017). Agile is a framework and set of practices originally created for software development but with much wider applicability today. It can be implemented as a structure in the classroom with a set of openly available tools that allow students to articulate, manage, and visualize a set of goals for a particular purpose, in our case the creation of a virtual experience tailored to their client’s specific research. The challenge for us, as instructors, was to develop methods to properly instrument the Agile methods so that the groups in our class can be evaluated on their use of them and receive feedback that helps them improve their practices. This challenge is ongoing. Agile methods are thus used in our class to help teams accomplish their collaborative goals and to teach them teamwork practices.
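As a concrete, if minimal, sketch of the kind of tracking such openly available tools provide, the fragment below models a sprint backlog whose tasks move through states, plus the burndown count an instructor might inspect at a bi-weekly review. The task names and states are invented for illustration and are not drawn from any specific Agile tool the course used.

```python
# Minimal sprint-board sketch (illustrative; not any specific Agile tool).
SPRINT_STATES = ("todo", "doing", "done")

class SprintBoard:
    def __init__(self, tasks):
        # Every task starts in "todo" at sprint planning.
        self.tasks = {task: "todo" for task in tasks}

    def move(self, task, state):
        if state not in SPRINT_STATES:
            raise ValueError(f"unknown state: {state}")
        self.tasks[task] = state

    def remaining(self):
        """Burndown metric: count of tasks not yet done."""
        return sum(1 for s in self.tasks.values() if s != "done")

# Hypothetical tasks for a project team's two-week sprint.
board = SprintBoard(["import protein mesh", "grab interaction", "score UI"])
board.move("grab interaction", "done")
```

At each sprint review, `board.remaining()` gives a simple, inspectable measure of progress against the goals the team articulated at planning.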

Course Structure

We presume no prior experience with VR, the Unity3D engine, or C# for either the CS or non-CS students. Therefore the first third of the course is mainly focused on introducing those topics, primarily through lecture, demonstration, and a series of cumulative “daily challenges.” By the end of this first section of the course, all students are familiar with the common tools and practices, and capable of creating VR environments upon which they can act directly through the physics engine as well as in a predetermined, or scripted, manner. During the second third of the course, students begin working together on their group projects in earnest, while continuing to develop their skills through continued individual challenges, which culminate in an individual project due at the section’s end. For the second and third sections of the course, all group work incorporates aspects of the Agile method described above, with weekly in-class group standups and a graded, bi-weekly sprint review conducted before the entire class. The final section of the course is devoted entirely to the completion of the final group project, culminating in an open “demo day” held during final examinations, which has proven quite popular.

Three-fifths of our students are upper level computer science students fulfilling a “project course” major requirement, while two-fifths of our students can be from any major except computer science. Each project team is composed of roughly five students with a similar overall ratio, and we tend to have about 50 students per offering. This distribution and size are enforced at registration because of the popularity of the CS major and demand for project courses in it. The typical CS student’s experience will involve at least three semesters of programming in Java and C++, but usually no knowledge of computer graphics or C#, the programming language used by Unity, our virtual reality platform. The non-CS students’ experience is more varied, but currently does not typically involve any coding experience. To construct the teams, we solicit bids from the students for their “top three” projects and “who they would like to work with.” The instructors then attempt to match students and teams so that everyone gets something that they want.
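As a rough illustration of that matching step, the sketch below greedily fills project slots by preference rank so that as many students as possible receive a ranked choice. The student and project names and capacities are hypothetical, and the real assignment is done by hand, also weighing partner preferences and the CS/non-CS ratio, so this is an analogy rather than our actual procedure.

```python
# Greedy bid-matching sketch (hypothetical names; the real process is manual).
def assign_teams(bids, capacity):
    """bids: {student: [proj1, proj2, proj3]} in preference order.
    capacity: {project: open team slots}. Returns {student: project}."""
    slots = dict(capacity)
    assignment = {}
    # Pass over preference ranks: grant first choices, then seconds, then thirds.
    for rank in range(3):
        for student, prefs in bids.items():
            if student in assignment or rank >= len(prefs):
                continue
            project = prefs[rank]
            if slots.get(project, 0) > 0:
                assignment[student] = project
                slots[project] -= 1
    return assignment

bids = {"ana": ["chem", "geo"], "ben": ["chem", "geo"], "cai": ["geo", "chem"]}
assignment = assign_teams(bids, {"chem": 1, "geo": 2})
```

Here "ben" loses the contested first-choice slot but still receives his second choice, the kind of "everyone gets something that they want" outcome the manual matching aims for.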

It is a fundamental assertion of this course that all members of a team so constructed can contribute meaningfully and substantially to the project. As it is perhaps obvious what the CS students contribute, it is important to understand what the non-CS students contribute. First, Unity is a sophisticated development platform that is quite usable, and, as mentioned, we spend significant course time teaching the class to use it. There is nothing to prevent someone from learning to code in C# using Unity. However, not everyone taking our class wants to be a coder, but they are interested in technology and using technical tools. Everyone can build models and design scenes in Unity. Also, these projects must be robust. Testing that incremental progress works and is integrated well into the whole project is key not only to the project’s success as a product, but also to the team’s grade. We also require that the teams produce documentation about their progress, and interact with their faculty mentor about design goals. These outward-facing aspects of the project are key to the project’s success and often done by the non-CS students. Each project also typically requires unique coding, and in our experience the best projects are ones in which the students specialize into roles, as each project typically requires a significant amount of work. The Agile framework is key here, as it provides a structure for the roles and a way of tracking progress in each of them.

Since each project is varied, setting appropriate targets and evaluating progress at each review is one of the most significant ongoing challenges faced by the instructors.

Projects

A full list of the twenty-four projects may be found in Appendix 1.

Below are short descriptions and video walkthroughs of four distinctive projects that capture the depth, breadth, and originality fostered by our emphasis on interdisciplinarity in all aspects of the course design and teaching.

Example Project: Protein Modeling

The motivation for this project, mentored by Chemistry Professor Jens Meiler, came from a problem common to structural chemistry: the inherent difficulty of visualizing 3D objects. For this prototype, we aimed to model how simple proteins and molecules composed of a few tens of atoms interact and “fit” together. In drug design and discovery, this issue is of critical importance and can require significant amounts of computation (Allison et al. 2014). These interactions are often dominated by short-range van der Waals forces, although determining the correct configuration for the proteins to bind is challenging. This project illustrated that difficulty by letting people explore binding proteins together. Two graspable proteins were given in an immersive environment, and users attempted to fit them together. As they fit together, a score showing how well they fit was displayed. This score was computed based on an energy function incorporating van der Waals attractive and repulsive potentials. The goal was to get the minimum score possible. The proteins and the energy equation were provided by the project mentor, although the students implemented a van der Waals simulator within Unity for this project. Figure 1 shows an example from the immersive virtual environment. The critical features of this project worth noting are that the molecules are three-dimensional structures that are asymmetric. Viewing them with proper depth perception is necessary to get an idea of their true shape. It would be difficult to recreate this simulation with the same effectiveness using desktop displays and interactions.
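For readers unfamiliar with such scoring functions, a standard way to combine van der Waals attractive and repulsive terms is the Lennard-Jones 12-6 potential. The sketch below is an illustration in Python rather than the students' actual Unity/C# implementation, and the atom coordinates and epsilon/sigma parameters are placeholders, not the mentor's real energy equation.

```python
# Illustrative Lennard-Jones 12-6 scoring sketch (not the course's Unity code;
# epsilon, sigma, and the toy coordinates below are hypothetical).
import math

def lj_energy(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones potential: repulsive r^-12 term, attractive r^-6 term."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def binding_score(protein_a, protein_b):
    """Sum pairwise energies between two atom lists [(x, y, z), ...].
    Lower scores indicate a better fit, as in the simulation."""
    total = 0.0
    for a in protein_a:
        for b in protein_b:
            total += lj_energy(math.dist(a, b))
    return total

# Two single-atom toy "proteins": a pair near the potential minimum scores
# lower (better) than a pair pushed into the short-range repulsive wall.
near = binding_score([(0.0, 0.0, 0.0)], [(1.12, 0.0, 0.0)])
far = binding_score([(0.0, 0.0, 0.0)], [(0.8, 0.0, 0.0)])
```

The "minimize the score" game mechanic then amounts to users physically searching this energy landscape for a low-scoring configuration.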

While issues of efficiency and effectiveness in chemical pedagogy drove our mentor’s interest, the student creators and demo day users were drawn to this project for its elements of science communication and gamification. By providing a running “high score” and providing a timed element, users were motivated to interact with the objects and experience far longer than with a 2D or static 3D visualization. One student member of this group did possess subject matter familiarity which helped incorporate the energy function into the experience.

[archiveorg figure-1 width=640 height=480 frameborder=0 webkitallowfullscreen=true mozallowfullscreen=true]
Figure 1. Two proteins shown within the simulation. The larger protein on the left is the target protein to which the smaller protein (right) should be properly fit. A menu containing the score is shown past the proteins. Proteins may be grabbed, moved, and rotated using the virtual reality controllers. Embedded video: Figure 1. Two proteins shown within the simulation. The larger protein on the left is the target protein to which the smaller protein (right) should be properly fit. A menu containing the score is shown past the proteins. Proteins may be grabbed, moved, and rotated using the virtual reality controllers.

Example Project: Vectors of Textual Movement in Medieval Cyprus

Professor of French Lynn Ramey served as the mentor for this project. Unlike most other mentors, Prof. Ramey had a long history of using Unity3D and game technologies in both her research and teaching. Her goal in working with us was to recreate an existing prototype in virtual reality and determine the added value of visual immersion and hand-tracked interactivity. This project created a game that simulates how stories might change during transmission and retelling (Amer et al. 2018; Ramey et al. 2019). The crusader Kingdom of Cyprus served as a waypoint between East and West during the years 1192 to 1489. This game focuses on the early period and looks at how elements of stories from The Thousand and One Nights might have morphed and changed to please the sensibilities and tastes of different audiences. In the game, the user tells stories to agents within the game, ideally gaining storytelling experience and learning the individual preferences of the agents. After gaining enough experience, the user can gain entry to the King’s palace and tell a story to the King, with the goal of impressing him. During gameplay, the user must journey through the Kingdom of Cyprus to find agents to tell stories to.

This project was very successful at showcasing the advantages of an interdisciplinary approach. As perhaps the project closest to a traditional video game, it constantly reminded both faculty and students of the interplay between technical and creative decisions. However, this was not simply an “adaptation” of a finished cultural work into a new medium, but rather an active exploration of an open humanities research project asking how, why, when, and for whom stories are told. No student member of this group majored in the mentor’s discipline.

This project is ongoing, and more information can be found here: https://medievalstorytelling.org.

A video walkthrough of the game can be seen below.

[archiveorg figure-2_20201130 width=640 height=480 frameborder=0 webkitallowfullscreen=true mozallowfullscreen=true]
Figure 2. Video walk-through of gameplay. Embedded video: Figure 2. Video walk-through of medieval storytelling project gameplay. Video shows gameplay in the main screen, with a small inset filming the user in a VR headset. Gameplay shows the goal and user interface by which players tell stories and explore a medieval village. Scenes include a market, a castle, and a village environment.

Example Project: Interactive Geometry for K–8 Mathematical Visualization

In this project, Corey Brady, Professor of Education, challenged our students to take full advantage of the physical presence offered by virtual environments and build an interactive space where children can directly experience “mathematical dimensionality.” Inspired by recent research (Kobiela et al. 2019; Brady et al. 2019) examining physical geometrical creation in two dimensions (think paint, brushes, and squeegees), the students created a brightly lit and colored virtual room, where the user is initially presented with a single point in space. Via user input, the point can be stretched into a line, the line into a plane, and the plane into a solid (rectangles, cylinders, and prisms). While doing so, bar graph visualizations of length, width, height, surface area, and volume are updated in real time as the user increases or decreases the object along its various axes.
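The real-time readouts described above reduce to simple geometry updates: whenever the user stretches an axis, the measurements are recomputed from the current dimensions. The fragment below is an illustrative Python sketch of those formulas, not the group's Unity code, and the shape functions are named by us for clarity.

```python
# Illustrative measurement-readout sketch (not the project's Unity/C# code).
import math

def box_metrics(length, width, height):
    """Surface area and volume of a rectangular solid."""
    surface = 2 * (length * width + length * height + width * height)
    volume = length * width * height
    return surface, volume

def cylinder_metrics(radius, height):
    """Surface area and volume of a right circular cylinder."""
    surface = 2 * math.pi * radius * (radius + height)
    volume = math.pi * radius ** 2 * height
    return surface, volume

# Stretching a unit cube along one axis: the bar graphs would show volume
# doubling while surface area grows more slowly.
s1, v1 = box_metrics(1, 1, 1)
s2, v2 = box_metrics(2, 1, 1)
```

Seeing that volume scales with the stretched dimension faster than surface area does is exactly the kind of embodied intuition the virtual room is meant to convey.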

Virtual reality as an education tool has proven very popular, both among our students and in industry. No student member of this group specialized in education, but all members had, of course, firsthand experience learning these concepts themselves as children. The opportunity to reimagine a nearly universal learning process was a significant draw for this project. After this course offering, Brady and Molvig have begun a collaboration to expand its utility.

A video demonstration of the project can be seen below.

[archiveorg figure-3_202011 width=640 height=480 frameborder=0 webkitallowfullscreen=true mozallowfullscreen=true]
Figure 3. User manipulates the x, y, and z axes of a rectangle. Real-time calculations of surface area and volume are shown in the background. Embedded video: Figure 3. Video demonstration of geometry visualization project gameplay. User manipulates the x, y, and z axes of a various shapes, including regular polygons and conic sections. Real-time calculations of surface area and volume are shown in the background.

Example Project: Re-digitizing Stereograms

For this project, Molvig led a team to bring nineteenth-century stereographic images into 21st century technology. Invented by Charles Wheatstone in 1838 and later improved by David Brewster, stereograms are nearly identical paired photographs that, when viewed through a binocular display, are perceived by the viewer as a single “3D image” [1], often with an effect of striking realism. For this reason, stereoscopy is often referred to as “Victorian VR.” Hundreds of thousands of scanned, digitized stereo-pair photos exist in archives and online collections; however, it is currently extremely difficult to view them as intended, in stereoscopic 3D. Molvig’s goal was to create a generalizable stereogram viewer capable of bringing stereo-pair images from remote archives for viewing within a modern VR headset.
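The core geometric step of such a viewer can be sketched simply: split each side-by-side scan into per-eye crops, then render the left crop to the left eye and the right crop to the right eye. The Python sketch below illustrates only that splitting step under the assumption of a perfectly centered scan; real archival scans need per-image alignment, and this is not the project's actual (and now ML-assisted) pipeline.

```python
# Illustrative per-eye crop computation for a side-by-side stereo pair
# (assumes a centered scan; not the project's actual import pipeline).
def stereo_crops(width, height):
    """Return (left_eye_box, right_eye_box) as (x0, y0, x1, y1) pixel
    rectangles for a side-by-side stereo pair of the given dimensions."""
    half = width // 2
    left_eye = (0, 0, half, height)
    right_eye = (half, 0, half * 2, height)
    return left_eye, right_eye

# A hypothetical 1600x800 scan yields two 800x800 per-eye images, one
# texture per eye in the VR headset.
left, right = stereo_crops(1600, 800)
```

The hard part the students faced, and the reason the ongoing viewer uses machine learning, is that archival scans are rarely centered or uniformly bordered, so the crop boxes must be detected per image rather than computed from the dimensions alone.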

Student interest quickly coalesced around two sets of remarkable stereoscopic anatomical atlases: the Edinburgh Stereoscopic Atlas of Anatomy (1905) and the Bassett Collection of Stereoscopic Images of Human Anatomy from the Stanford Medical Library. Driven by student interest, the 2019 project branched into a VR alternative to wet-lab or flat 2D medical anatomy imagery. This project remains ongoing, as does Molvig’s original generalized stereo viewer, which now includes a machine-learning-based algorithm to automate the import and segmentation of any stereo-pair photograph.

Two demonstrations of the stereoview player are below: the first shows medical anatomy images, the second stereo photos taken during the American Civil War. All images appear in stereoscopic depth when viewed in the headset.

[archiveorg figure-4_202011 width=640 height=480 frameborder=0 webkitallowfullscreen=true mozallowfullscreen=true]
Figure 4. Demonstration of anatomy stereoscopic viewer. Images from the Bassett Collection of Stereoscopic Images of Human Anatomy, Stanford Medical Library. Embedded video: Figure 4. Video demonstration of medical anatomy stereoscopic viewer project gameplay. User selects and relocates various stereoscopic images of cranial anatomy. Images from the Bassett Collection of Stereoscopic Images of Human Anatomy, Stanford Medical Library.
[archiveorg figure-5_202011 width=640 height=480 frameborder=0 webkitallowfullscreen=true mozallowfullscreen=true]
Figure 5. Demonstration of Civil War stereoviews. Images from the Robert N. Dennis collection of stereoscopic views, New York Public Library Digital Collection. Embedded video: Figure 5. Video demonstration of Civil War stereoview project gameplay. User selects and relocates various stereoscopic images taken during the American Civil War. Images depict scenes from battlefields, army encampments, and war material preparations. Images from the Robert N. Dennis collection of stereoscopic views, New York Public Library Digital Collection.

Challenges

This course has numerous challenges, both inside and outside of the classroom, and we have by no means solved them all.

Institutional

Securing support for co-teaching is not always easy. We began offering this course under a Provost-level initiative to encourage ambitious teaching collaborations across disciplines. This initiative made it straightforward to get our co-teaching efforts counted by our Deans, and provided some financial support for the needed hardware purchases. However, that initiative was for three course offerings, which we have now completed. Moving forward, we will need to negotiate our course with our Deans.

We rely heavily on invested faculty mentors to provide the best subject matter expertise. So far we have had no trouble finding volunteers, and the growing community of VR-engaged faculty has been one of the greatest personal benefits, but as VR becomes less novel, we may experience a falloff in interest.

Interdisciplinarity

This is both the most rewarding and most challenging aspect of this course. Securing student buy-in on the value of interdisciplinary teamwork is our most consistent struggle. In particular, these issues arise around the uneven distribution of C# experience and preconceived notions of what type of work is “real” or “hard.” To mitigate these issues, we devote significant time during the first month of the course to exposing everyone to all aspects of VR project development (technical and non-technical), and we require the adoption of “roles” within each project to clarify responsibilities and distribute the workload.

Cost

Virtual reality is a rapidly evolving field, with frequent hardware updates and changing requirements. We will need to secure new funding to significantly expand or update our current equipment.

Conclusions and Lessons Learned

Virtual reality technology is more accessible than ever, but it is not as accessible as one might wish in a pedagogical setting. It is difficult to create even moderately rich and sophisticated environments without the development expertise gleaned through exposure to the computer science curriculum. A problem thus arises on two fronts. First, exposure to the computer science curriculum at the depth currently required to develop compelling virtual reality applications should ideally not be required of everyone. Unfortunately, the state of the art of our tools currently makes this necessary. Second, those who study computer science and virtual reality focus on building the tools and technology of virtual reality, the theories and algorithms integral to virtual reality, and the integration of these into effective virtual reality systems. Our class represents a compromise solution to the accessibility problem by shifting the focus away from development of tools and technology toward collaboration and teamwork in service of building an application.

Our class is an introduction to virtual reality in the sense that students see the capabilities, and the limitations, of modern commodity-level virtual reality equipment and software. They leave the class understanding what types of virtual worlds are easy to create, and what types of worlds are difficult to create. From the perspective of digital humanities, our course is a leveraged introduction to a technology at the forefront of application to the humanities. Students are exposed to a humanities-centered approach to this technology through interaction with their project mentors.

In terms of the material that we, the instructors, focus most on in class, our class is about teamwork and problem-solving with people one has not chosen to work with. We present this latter skill as one essential to a college education, whether for practical reasons, e.g., that is what students will face in the workforce (Lingard & Barkataki 2011), or from theoretical perspectives on how students learn best (Vygotsky 1978). The interdisciplinarity that is a core feature of the course is presented as a fact of the modern workforce. Successful interdisciplinary teams are able to communicate and coordinate effectively with one another, and we emphasize frameworks that allow these things to happen.

Within the broader Vanderbilt curriculum, the course satisfies different curricular requirements. For CS students, the course satisfies a requirement that they participate in a group design experience as part of their major requirements. The interdisciplinary nature of the group is not a major requirement, but is viewed as an advantage, since it is likely that most CS majors will be part of interdisciplinary teams during their future careers. For non-CS students, the course currently satisfies the requirements of the Communication of Science and Technology major and minor.[2]

Over the three iterations of this course, we have learned that team teaching an interdisciplinary project course is not trivial. In particular, it requires more effort than each professor lecturing on their own specialty and expecting effective learning to emerge from the two different streams. That expectation was closer to what we did in the first offering of this course, where we quickly perceived that this format was neither the most engaging for the students nor the most effective pedagogy for what we wanted to accomplish. The essence of the course is creating teams that use broadly accessible technology to build engaging virtual worlds. We have reorganized our lecture and pedagogical practices to support this core. In doing this, each of us brings to the class our own knowledge and expertise on how best to accomplish that goal, and thus the students experience something closer to two views on the same problem. While we are still iteratively refining this approach, we believe it is more successful.

Agile methods (Pope-Ruark 2017) have become an essential part of our course. They allow us to judge the progress of the projects better and to spot bottlenecks more quickly. They incentivize students to work consistently on the project over the course of the semester rather than trying to build everything at the end in a mad rush of effort. By requiring students to mark their progress on burndown charts, the students gain a better visualization of the work remaining to be accomplished. Project boards associated with Agile can provide insight into the relative distribution of work within the group, ideally allowing us to influence group dynamics before serious tensions arise.
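
The bookkeeping behind a burndown chart is simple arithmetic: remaining work at each check-in is the total estimate minus the work completed so far. A minimal sketch, with invented story-point figures (nothing below is drawn from an actual course project):

```python
# Compute burndown-chart data points: remaining work after each check-in.
# The task totals and weekly completion figures are hypothetical examples.

def burndown(total_points, completed_per_checkin):
    """Return the remaining work after each check-in."""
    remaining = []
    left = total_points
    for done in completed_per_checkin:
        left -= done
        remaining.append(left)
    return remaining

# A team with 40 estimated story points, reporting at five weekly check-ins.
ideal = [40 - 40 * i / 5 for i in range(1, 6)]  # straight-line target pace
actual = burndown(40, [4, 6, 10, 12, 8])        # reported progress

print(ideal)   # [32.0, 24.0, 16.0, 8.0, 0.0]
print(actual)  # [36, 30, 20, 8, 0]
```

Comparing the `actual` series against the `ideal` straight line is what makes the remaining workload visible at a glance: points above the line signal a team falling behind early rather than at the end-of-semester rush.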

This latter effort is a work in progress, however. A limitation of the course as it currently exists is that we need to do a better job evaluating teams (Hughes & Jones 2011). Currently our student evaluations rely too heavily on the final outcome of the project and not enough on the effectiveness of the teamwork within the team. Evaluating teamwork, however, has seemed cumbersome, and the best way to give meaningful feedback to improve teamwork practices is something we are still exploring. If we improved this practice, we could give students more refined feedback throughout the semester on their individual and group performance, and use that as a springboard to teach better team practices. Better team practices would likely result in increased quality of the final projects.

Notes

[1] These images are not truly three-dimensional, as they cannot be rotated or peered behind. Rather, two images are created precisely to fool the brain into perceiving depth in a single combined image.
[2] https://as.vanderbilt.edu/cst/. There is currently no digital humanities major or minor at Vanderbilt.

References

Allison, Brittany, Steven Combs, Sam DeLuca, Gordon Lemmon, Laura Mizoue, and Jens Meiler. 2014. “Computational Design of Protein–Small Molecule Interfaces.” Journal of Structural Biology 185, no. 2: 193–202.

Amer, Sahar, and Lynn Ramey. 2018. “Teaching the Global Middle Ages with Technology.” Parergon: Journal of the Australian and New Zealand Association for Medieval and Early Modern Studies 35: 179–91.

Brady, Corey, and Richard Lehrer. 2020. “Sweeping Area Across Physical and Virtual Environments.” Digital Experiences in Mathematics Education: 1–33. https://link.springer.com/article/10.1007/s40751-020-00076-2.

Cline, Ernest. 2012. Ready Player One. New York: Broadway Books.

Hughes, Richard L., and Steven K. Jones. 2011. “Developing and Assessing College Student Teamwork Skills.” New Directions for Institutional Research 149: 53–64.

Kobiela, Marta, and Richard Lehrer. 2019. “Supporting Dynamic Conceptions of Area and its Measure.” Mathematical Thinking and Learning: 1–29.

Kozlowski, Steve W.J., and Daniel R. Ilgen. 2006. “Enhancing the Effectiveness of Work Groups and Teams.” Psychological Science in the Public Interest 7, no.3: 77–124.

Kuh, George D., Jillian Kinzie, Jennifer A. Buckley, Brian K. Bridges, and John C. Hayek. 2006. What Matters to Student Success: A Review of the Literature. Vol. 8. Washington, DC: National Postsecondary Education Cooperative.

LaValle, Steve. 2017. Virtual Reality. Cambridge, UK: Cambridge University Press.

Lingard, Robert, and Shan Barkataki. 2011. “Teaching Teamwork in Engineering and Computer Science.” 2011 Frontiers in Education Conference. Institute of Electrical and Electronics Engineers.

Pope-Ruark, Rebecca. 2017. Agile Faculty: Practical Strategies for Managing Research, Service, and Teaching. Chicago: University of Chicago Press.

Ramey, Lynn, David Neville, Sahar Amer, et al. 2019. “Revisioning the Global Middle Ages: Immersive Environments for Teaching Medieval Languages and Culture.” Digital Philology 8: 86–104.

Takala, Tuukka M., Lauri Malmi, Roberto Pugliese, and Tapio Takala. 2016. “Empowering Students to Create Better Virtual Reality Applications: A Longitudinal Study of a VR Capstone Course.” Informatics in Education 15, no. 2: 287–317.

Vygotsky, Lev S. 1978. Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.

Zimmerman, Guy W., and Dena E. Eber. 2001. “When Worlds Collide!: An Interdisciplinary Course in Virtual-Reality Art.” ACM SIGCSE Bulletin 33, no. 1.

Appendix 1: Complete Project List

Project Title (Mentor, Field, Year(s))

  1. Aristotelian Physics Simulation (Molvig, History of Science, 2017, 2018).
  2. Virtual Excavation (Wernke, Archeology, 2017, 2018).
  3. Aech’s Basement: scene from Ready Player One (Clayton, English, 2017).
  4. Singing with Avatar (Reiser, Psychology, 2017).
  5. Visualizing Breathing: interactive biometric data (Birdee, Medicine, 2017).
  6. Memory Palace (Kunda, Computer Science, 2017).
  7. Centennial Park (Lee, Art History, 2017).
  8. Stereograms (Peters, Computer Science, 2017).
  9. Medieval Storytelling (Ramey, French, 2017, 2018, 2019).
  10. VR locomotion (Bodenheimer, Computer Science, 2017).
  11. 3D chemistry (Meiler, Chemistry, 2018).
  12. Data Visualization (Berger, Computer Science, 2018).
  13. Adversarial Maze (Narasimham and Bodenheimer, Computer Science, 2018).
  14. Operating Room Tool Assembly (Schoenecker, Medicine, 2018).
  15. Autism Spectrum Disorder: table building simulation (Sarkar, Mechanical Engineering, 2019).
  16. Brain Flow Visualization (Oguz, Computer Science, 2019).
  17. Interactive Geometry (Brady, Learning Sciences, 2019).
  18. Jekyll and Hyde (Clayton, English, 2019).
  19. fMRI Brain Activation (Chang, Computer Science, 2019).
  20. Virtual Museum (Robinson, Art History, 2019).
  21. Peripersonal Space (Bodenheimer, Computer Science, 2019).
  22. Solar System Simulation (Weintraub, Astronomy, 2019).
  23. Accessing Stereograms (Molvig, History, 2019).

About the Authors

Ole Molvig is an assistant professor in the Department of History and the Program in Communication of Science and Technology. He explores the interactions among science, technology, and culture from 16th-century cosmology to modern emergent technologies like virtual reality or artificial intelligence. He received his Ph.D. in the History of Science from Princeton University.

Bobby Bodenheimer is a professor in the Department of Electrical Engineering and Computer Science at Vanderbilt University. He also holds an appointment in the Department of Psychology and Human Development. His research examines virtual and augmented reality, specifically how people act, perceive, locomote, and navigate in virtual and augmented environments. He is the recipient of an NSF CAREER award and received his Ph.D. from the California Institute of Technology.

A spiral of books on library shelves appears almost as though a pie chart.
5

Supporting Data Visualization Services in Academic Libraries

Abstract

Data visualization in libraries is not a part of traditional forms of research support, but is an emerging area that is increasingly important given the growing prominence of data in, and as a form of, scholarship. In an era of misinformation, visual and data literacy are necessary skills for the responsible consumption and production of data visualizations and the communication of research results. This article summarizes the findings of Visualizing the Future, an IMLS National Forum Grant (RE-73-18-0059-18) project to develop a literacy-based instructional and research agenda for library and information professionals, with the aim of creating a community of praxis focused on data visualization. The grant aims to create a diverse community that will advance data visualization instruction and use beyond hands-on, technology-based tutorials toward a nuanced, critical understanding of visualization as a research product and form of expression. This article will outline the need for data visualization support in libraries, review environmental scans on data visualization in libraries, emphasize the need for a focus on the people involved in data visualization in libraries, discuss the components necessary to set up these services, and conclude with the literacies associated with supporting data visualization.

Introduction

Now, more than ever, accurately assessing information is crucially important to discourse, both public and academic. Universities play an important role in teaching students how to understand and generate information. But at many institutions, learning how to effectively communicate findings from the research process is considered idiosyncratic for each field or the express domain of a particular department (e.g., applied mathematics or journalism). Data visualization is the use of spatial elements and graphical properties to display and analyze information, and this practice may follow disciplinary customs. However, there are many commonalities in how we visualize information and data, and the academic library, at the heart of the university, can play a significant role in teaching these skills. In the following article, we identify a number of challenges in teaching complex technological and methodological skills like visualization and outline a rationale for, and a strategy to implement, these types of services in academic libraries. However, the same argument can be made for any academic support unit, whether college, library, or independently based.

Why Do We Need Data Visualization Support in Libraries?

In many ways the argument for developing data visualization services in libraries mirrors the discussion surrounding the inclusion and extension of digital scholarship support services throughout universities. In academic settings, libraries serve as a natural hub for services that can be used by many departments and fields. Often, data visualization (like GIS or text-mining) expertise is tucked away in a particular academic department, making it difficult for students and researchers from different fields to access it.

As libraries already play a key role in advocacy for information literacy and ethics, they may also serve as unaffiliated, central places to gain basic competencies in associated information and data skills. Training patrons how to accurately analyze, assess, and create data visualizations is a natural enhancement to this role. Building competencies in these areas will aid patrons in their own understanding and use of complex visualizations. It may also help to create a robust learning community and knowledge base around this form of visual communication.

In an age of “fake news” and “post-truth politics,” visual literacy, data literacy, and data visualization have become exceedingly important. Without knowing the ways that data can be manipulated, patrons are less capable of assessing the utility of the information being displayed or making informed decisions about the visual story being told. Presently, many academic libraries are investing resources in data services and subscriptions. Training students, faculty, and researchers in ways of effectively visualizing these data sources increases their use and utility. Finally, having data visualization skills within the library also comes with an operational advantage, allowing more effective sharing of data about the library.

We are the Visualizing the Future Symposia, an Institute of Museum and Library Services National Forum Grant-funded group created to develop instructional and research materials on data visualization for library professionals and a community of practice around data visualization. The grant was designed to address the lack of community around data visualization in libraries. More information about the grant is available at the Visualizing the Future website. While we have only included the names of the three main authors, this work is a product of the entire cohort, which includes: Delores Carlito, David Christensen, Ryan Clement, Sally Gore, Tess Grynoch, Jo Klein, Dorothy Ogdon, Megan Ozeran, Alisa Rod, Andrzej Rutkowski, Cass Wilkinson Saldaña, Amy Sonnichsen, and Angela Zoss.

We are currently halfway through our grant work and, in addition to providing publicly available resources for teaching visualization, are also in the process of synthesizing and collecting shared insights into developing and providing data visualization instruction. The present article represents some of the key findings of our grant work.

Current Environment

In order to identify some broad data visualization needs and values, we reviewed three environmental scans. The first was carried out at Duke University by Angela Zoss (2018), one of the co-investigators on the grant, based on a survey that received 36 responses from 30 separate institutions. The second, by S.K. Van Poolen (2017), focuses on an overview of the discipline and includes results from a survey of Big Ten Academic Alliance institutions and others. The final report, by Ilka Datig for Primary Research Group Inc. (2019), provides a number of in-depth case studies. While none of the studies claim to provide an exhaustive list of every person or institution providing data visualization support in libraries, in combination they provide an effective overview of the state of the field.

Institutions

The combined environmental scans represent around thirty-five institutions, primarily academic libraries in the United States. However, the Zoss survey also includes data from the Australian National University, a number of Canadian universities, and the World Bank Group. The universities represented vary greatly in size and include large research institutions, such as the University of California Los Angeles, and small liberal arts schools, such as Middlebury and Carleton College.

Some appointments were full-time, while others reported visualization as a part of other job responsibilities. In the Zoss survey, roughly 33% of respondents reported the word “visualization” in their job title.

Types of activities

The combined scans include a variety of services and activities. According to the Zoss survey, the two most common activities (i.e., the activities the most respondents reported engaging in) were providing consultations on visualization projects and giving short workshops or lectures on data visualization. Other services offered include: providing internal data visualization support for analyzing and communicating library data; training on visualization hardware and spaces (e.g., large-scale visualization walls, 3D CAVEs); and managing such spaces and hardware.

Resources needed

These three environmental scans also collectively identify a number of resources that are critical for supporting data visualization in libraries. One key element is training, for new librarians or librarians new to this type of work, both on visualization itself and on teaching and consulting about data visualization. They also note that effectively teaching and supporting visualization software requires resources: access to the software and learning materials, as well as ample time for librarians to learn, create, and experiment themselves so that they can be effective teachers. Finally, they outline the need for communities of practice across institutions and for shared resources to support visualization.

It’s About the People

In all of our work and research so far, one important element seems worth stressing and calling out on its own: it is the people who make data visualization services work. Even visualization services focused on advanced instructional spaces or immersive, large-scale displays require expertise to help patrons learn how to use the space, to maintain and manage technology, to schedule events that create interest, and, especially in the case of advanced spaces, to create and manage content that suggests the possibilities. An example of this is the North Carolina State University Libraries’ Andrew W. Mellon Foundation-funded project “Immersive Scholar” (Vandegrift et al. 2018), which brought visiting artists to produce immersive artistic visualization projects in collaboration with staff for the large-scale displays at the library.

We encourage any institution that is considering developing or expanding data visualization services to start by defining skill sets and services they wish to offer rather than the technology or infrastructure they intend to build. Some of these skills may include programming, data preparation, and designing for accessibility, which can support a broad range of services to meet user needs. Unsupported infrastructure (stale projects, broken technology, etc.) is a continuing problem in providing data visualization services, and starting any conversation around data visualization support by thinking about the people needed is crucial to creating sustainable, ethical, and useful services.

As evidenced by both the information in the environmental scans and the experiences of Visualizing the Future fellows, one of the most consistently important ways that libraries are supporting visualization is through consultations and workshops that span technologies from Excel to the latest virtual reality systems. Moreover, using these techniques and technologies effectively requires more than just technical know-how; it requires in-depth considerations of design aesthetics, sustainability, and the ethical use and re-use of data. Responsible and effective visualization design requires a variety of literacies (discussed below), critical consideration of where data comes from, and how best to represent data—all elements that are difficult to support and instruct without staff who have appropriate time and training.

Services

Data visualization services in libraries exist both internally and externally. Internally, data visualization is used for assessment (Murphy 2015), marketing librarians’ skills and demonstrating the value of libraries (Bouquin and Epstein 2015), collection analysis (Finch 2016), internal capacity building (Bouquin and Epstein 2015), and in other areas of libraries that primarily benefit the institution. 

External services, in contrast, support students, faculty, researchers, non-library staff, and community members. Some examples of services include individual consultations, workshops, creating spaces for data visualization (both physical and virtual), and providing support for tools. Some libraries extend visualization services into additional areas, like the New York University Health Sciences Library’s “Data Visualization Clinic,” which provides a space for attendees to share and receive feedback on their data visualizations from their peers (LaPolla and Rubin 2018), and the North Carolina State University Libraries’ Coffee and Viz Series, “a forum in which NC State researchers share their visualization work and discuss topics of interest” that is also open to the public (North Carolina State University Libraries 2015).

In order to offer these services, libraries need staff who have some interest and/or experience with data visualization. Some models include functional roles, such as data services librarians or data visualization librarians. These functional librarian roles ensure that the focus is on data and data visualization, and that there is dedicated, funded time available to work on data visualization learning and support. It is important to note that if there is a need for research data management support, it may require a position separate from data visualization. Data services are broad and needs can vary, so some assessment on the community’s greatest needs would help focus functional librarian positions. 

Functional librarian roles may lend themselves to external facing support and community building around data visualization outside of internal staff. A needs assessment can help identify user-centered services, outreach, and support that could help create a community around data visualization for students, faculty, researchers, non-library staff, and members of the public. Having a community focused on data visualization will make sure that services, spaces, and tools are utilized and meeting user needs. 

There is also room to develop non-librarian, technical data visualization positions, such as data visualization specialists or tool-specific specialist positions. These positions may not always have an outreach or community building focus and may be best suited for internal library data visualization support and production. Offering data visualization support as a service to users is separate from data visualization support as a part of library operations, and the decision on how to frame the positions can largely be determined by library needs. 

External data visualization services can include workshops, training sessions, consultations, and classroom instruction. These services can be focused on specific tools, such as Tableau, R, Gephi, and so on. They can be focused on particular skills, such as data cleaning and normalizing, dashboard design, and coding. They can also address general concerns, such as data visualization transparency and ethics, which may be folded into all of the services.

There are some challenges in determining which services to offer:

  • Is there an interest in data visualization in the community? This question should be answered before any services are offered to ensure services are utilized. If there are any liaison or outreach librarians at your institution, they may have deeper insight into user needs and connections to the leaders of their user groups.
  • Are there staff members who have dedicated time to effectively offer these services and support your users?
  • Is there funding for tools you want to teach?
  • Do you have a space to offer these services? This does not have to be anything more complicated than a room with a projector, but if these services begin to grow, it is important to consider the effectiveness of these services with a larger population. For example, a cap on the number of attendees for a tool-specific workshop might be needed to ensure the attendees receive enough individual support throughout the session.

If all of these areas are not addressed, there will be challenges in providing data visualization services and support. Successful data visualization services have adequate staffing, access to the required tools and data, space to offer services (not necessarily a data wall or makerspace, but simply a space with sufficient room to teach and collaborate), and a community that is already interested in and in need of data visualization services.

Literacies

The skills that are necessary to provide good data visualization services are largely practical. We derive the following list from our collective experience, both as data visualization practitioners and as part of the Visualizing the Future community of practice. While the following list is not meant to be exhaustive, these are the core competencies that should be developed to offer data visualization services, either from an individual or as part of a team. 

A strong design sense: Without an understanding of how information is effectively conveyed, it is difficult to create or assess visualizations. Thus, data visualization experts need to be versed in the main principles of design (e.g. Gestalt, accessibility) and how to use these techniques to effectively communicate visual information.

Awareness of the ethical implications of data visualizations: Although the finer details are usually assessed on a case by case basis, a data visualization expert should be able to interpret when a visualization is misleading and have the agency to decline to create biased products. This is a critical part of enabling the practitioner to be an active partner in the creation of visualizations. 
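
The arithmetic behind one common form of manipulation, a truncated value axis, can be sketched directly. The figures and the `apparent_ratio` helper below are invented for illustration, not drawn from any of the cited studies:

```python
# How axis truncation distorts a visual comparison of two values.
# The two numbers are hypothetical (say, annual gate counts at two branches).
a, b = 9500, 10000

def apparent_ratio(low, high, axis_min):
    """Ratio of the two bar heights when the value axis starts at axis_min."""
    return (high - axis_min) / (low - axis_min)

honest = apparent_ratio(a, b, 0)        # axis starts at zero
truncated = apparent_ratio(a, b, 9400)  # axis starts just below the data

print(round(honest, 3))     # 1.053 -> the bars look nearly equal
print(round(truncated, 1))  # 6.0   -> the second bar looks six times taller
```

The underlying 5% difference is unchanged; only the visual encoding shifts, which is exactly the kind of distortion a practitioner with this literacy should recognize and be able to decline to produce.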

An understanding of, if not expertise in, a variety of visualization types: network visualizations, maps, glyphs, and Chernoff faces, for example. There are many specialized forms of data visualization, and no individual can be an expert in all of them, but a data visualization practitioner should at least be conversant in many of them. Although universal expertise is impractical, a working knowledge of when particular techniques should be used is a very important literacy.

A similar understanding of a variety of tools: Some examples include Tableau, PowerBI, Shiny, and Gephi. There are many different tools in current use for creating static graphics and interactive dashboards. Again, universal expertise is impractical, but a competent practitioner should be aware of the tools available and capable of making recommendations outside their expertise.

Familiarity with one or more coding languages: Many complex data visualizations happen at the command line (at least partially) so there is a need for an effective practitioner to be at least familiar with the languages most commonly used (likely either R or Python). Not every data visualization expert needs to be a programmer, but familiarity with the potential for these tools is necessary.
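
As a concrete instance of the kind of scripted work this literacy enables, the sketch below builds a bar chart as an SVG string using only the Python standard library. The `svg_bar_chart` function, its data, and its dimensions are all invented for illustration:

```python
# Build a minimal SVG bar chart with no external libraries, to show the
# sort of visualization scripting that comes up in consultations.
# All data values and chart dimensions here are hypothetical.

def svg_bar_chart(data, width=300, height=150, gap=10):
    """Return an SVG string with one rect per (label, value) pair."""
    top = max(value for _, value in data)
    bar_w = (width - gap * (len(data) + 1)) / len(data)
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}" height="{height}">']
    x = gap
    for label, value in data:
        bar_h = value / top * (height - 20)   # leave headroom above the bars
        y = height - bar_h                    # SVG y-axis grows downward
        parts.append(f'<rect x="{x:.1f}" y="{y:.1f}" '
                     f'width="{bar_w:.1f}" height="{bar_h:.1f}">'
                     f'<title>{label}: {value}</title></rect>')
        x += bar_w + gap
    parts.append('</svg>')
    return ''.join(parts)

chart = svg_bar_chart([("workshops", 24), ("consults", 61), ("courses", 9)])
print(chart.count('<rect'))  # 3 -> one bar per category
```

Writing the chart out to a `.svg` file makes it viewable in any browser; the point is not this particular chart but that a practitioner comfortable at this level can reason about, debug, and adapt whatever tool a patron brings in.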

Conclusion

The challenges inherent in building and providing data visualization instruction in academic libraries provide an opportunity to address larger pedagogical issues, especially around emerging technologies, methods, and roles in libraries and beyond. In public library settings, the needs for services may be even greater, with patrons unable to find accessible training sources when they need to analyze, assess, and work with diverse types of data and tools. While the focus of our grant work has been on data visualization, the findings reflect the general difficulties of balancing the need and desire to teach tools and invest in infrastructure with the value of teaching concepts and investing in individuals. It is imperative that work teaching and supporting emerging technologies and methods focus on supporting the people and the development of literacies rather than just teaching the use of specific tools. To do so requires the creation of spaces and networks to share information and discoveries.

Bibliography

Bouquin, Daina and Helen-Ann Brown Epstein. 2015. “Teaching Data Visualization Basics to Market the Value of a Hospital Library: An Infographic as One Example.” Journal of Hospital Librarianship 15, no. 4: 349–364. https://doi.org/10.1080/15323269.2015.1079686.

Datig, Ilka. 2019. Profiles of Academic Library Use of Data Visualization Applications. New York: Primary Research Group Inc.

Finch, Jannette L., and Angela R. Flenner. 2016. “Using Data Visualization to Examine an Academic Library Collection.” College & Research Libraries 77, no. 6: 765–778. https://doi.org/10.5860/crl.77.6.765.

Vandegrift, Micah, Shelby Hallman, Walt Gurley, Mildred Nicaragua, Abigail Mann, Mike Nutt, Markus Wust, Greg Raschke, Erica Hayes, Abigail Feldman, Cynthia Rosenfeld, Jasmine Lang, David Reagan, Eric Johnson, Chris Hoffman, Alexandra Perkins, Patrick Rashleigh, Robert Wallace, William Mischo, and Elisandro Cabada. 2018. Immersive Scholar. Released on GitHub and Open Science Framework. https://osf.io/3z7k5/.

LaPolla, Fred Willie Zametkin, and Denis Rubin. 2018. “The ‘Data Visualization Clinic’: A Library-led Critique Workshop for Data Visualization.” Journal of the Medical Library Association 106, no. 4: 477–482. https://doi.org/10.5195/jmla.2018.333.

Murphy, Sarah Anne. 2015. “How Data Visualization Supports Academic Library Assessment.” College & Research Libraries News 76, no. 9: 482–486. https://doi.org/10.5860/crln.76.9.9379.

North Carolina State University Libraries. 2015. “Coffee & Viz.” Accessed December 4, 2019. https://www.lib.ncsu.edu/news/coffee–viz.

Van Poolen, S.K. 2017. “Data Visualization: Study & Survey.” Practicum study at the University of Illinois. 

Zoss, Angela. 2018. “Visualization Librarian Census.” TRLN Data Blog. Last modified June 16, 2018. https://trln.github.io/data-blog/data%20visualization/survey/visualization-librarian-census/.

About the Authors

Negeen Aghassibake is the Data Visualization Librarian at the University of Washington Libraries. Her goal is to help library users think critically about data visualization and how it might play a role in their work. Negeen holds an MS in Information Studies from the University of Texas at Austin.

Matthew Sisk is a spatial data specialist and Geographic Information Systems Librarian based in Notre Dame’s Navari Family Center for Digital Scholarship. He received his PhD in Paleolithic Archaeology from Stony Brook University in 2011 and has worked extensively in GIS-based archaeology and ecological modeling. His research focuses on human-environment interactions, the spatial scale of environmental toxins, and community-based research.

Justin Joque is the Visualization Librarian at the University of Michigan. He completed his PhD in Communications and Media Studies at the European Graduate School and holds a Master of Science in Information (MIS) from the University of Michigan.

