Issue Eighteen

Introduction

In their introduction to the previous issue of The Journal of Interactive Technology and Pedagogy, the editors wrote of the profound and ongoing loss they felt assembling a collection of essays as the global crisis of COVID-19 unfolded. We have built Issue 18 under these same circumstances. This issue is a testament to the efforts of the JITP editorial community and our authors, who continued to collaborate on pedagogy scholarship amid increased burdens. Its contributors, reviewers, editors, and stagers have endured radical shifts in their work and family lives—especially true for those who are caregivers and members of marginalized communities—and have fielded the emotional, economic, and physical toll of the pandemic. To be mindful of our colleagues’ current realities and our own, and to try to mitigate the pandemic’s often inequitable impacts, we attempted to be flexible with deadlines and offer increased opportunities for feedback where we could. We balanced this commitment with retaining JITP’s editorial workflow and publication schedule, which aims to provide space for the needs of early-career scholars as both editors and authors.

While these articles in some cases represent several years of work, a number of them speak specifically to how our teaching responds to such immediate external pressures. The COVID-19 pandemic has resurfaced some long-standing tensions about the role of educational technologies in teaching and learning. Instructors may be forced to adopt proprietary platforms that come with troubling implications for data privacy. A continually shifting landscape drives us to learn these new technologies while anticipating the changes in data standards, provenance practices, and platform ubiquity and ownership that will require future time and effort. Since exciting new research and teaching methods require extensive training, instructors have set out to extend their networks of collaborators and to build supportive infrastructure for this work. Individual scholars continue to incorporate technological and data literacy into their classes, impelling students to experiment with analytical practices that vary with institutional context and intellectual tradition.

A number of the articles deal explicitly with questions of loss, recovery, and intervention. In “Reading Texts in Digital Environments: Applications of Translation Alignment for Classical Language Learning,” Chiara Palladino argues for the creative use of translation alignment technologies as a means of facilitating Classical language learning. Classical studies requires that scholars attempt to synthesize information themselves without the benefit of consulting native speakers, and so Palladino offers translation pedagogy involving the comparison of multiple sources as a unique way of teaching slow, methodical information processing, a skill set particularly relevant to our present moment of information saturation. Palladino discusses a series of digital tools and assignments from her own course that, together, carry the pedagogical lesson that all reading is a reflective process. The translation process she describes does not establish one-to-one equivalences, but, rather, requires students to consider the “continuous dialogue between cultural and linguistic systems.”

In “Back in a Flash: Critical Making Pedagogies to Counter Technological Obsolescence,” Sarah Whitcomb Laiola seeks a similar remedy in the face of software expiration. As 2020 ends, so too does Adobe’s support of Flash, a medium in which e-lit has thrived. An NEH-funded project, AfterFlash, offers some balm to the loss, preserving access to texts born digitally in Flash and Shockwave, but it fails to preserve a means for generating them, and such generation, Laiola argues, is essential to student understanding of the texts themselves. She shares her experiments in simulating that creative process, specifically investigating Stepworks as a classroom alternative, but also suggesting a path forward as one technology inevitably gives way to another. Preservation, after all, isn’t about rescuing only artifacts, but also the processes and pedagogies those artifacts enable.

Courtney Jacobs, Marcia McIntosh, and Kevin M. O’Sullivan are on a rescue mission of their own to collect and provide access to models of the printmaking tools of the past. In “Make-Ready: Fabricating a Bibliographic Community,” they share their experiences creating 3Dhotbed, a repository of 3D-printable models for investigating book production and printmaking. For scholars of book history, the files themselves can enable the critical hands-on work that has informed the discipline for nearly seventy years. The collection, though, is greater than the sum of its replicated parts. As the authors put it, “The future success of 3Dhotbed is not solely based on the volume, diversity, or rarity of individual items, but also on the ability of the platform to put these items in conversation.” Jacobs, McIntosh, and O’Sullivan’s work is meaningful to those outside their fields as well. In constructing 3Dhotbed, they have identified pitfalls and opportunities in navigating institutional partnerships, striking the balance between academic protocols and broader access, and continuing to expand the field beyond the Global North.

In “Using Wikipedia in the Composition Classroom and Beyond: Encyclopedic ‘Neutrality,’ Social Inequality, and Failure as Subversion,” Cherrie Kwok explores a different kind of loss—the damaging effects that can occur when attempts at neutrality gloss over difficult truths. Kwok invites instructors and students alike to leverage the power of failure to explore the very nature of language. Like others in her field, Kwok notes that Wikipedia can serve as a tool for teaching tight writing and that edit-a-thons can generate deep student investment in intervening in the cultural record. But Kwok sees even greater value in what her students learn as they try to achieve Wikipedia’s second foundational principle of writing articles “from a neutral point of view.” They learn that language is not neutral and that our attempts to make it seem so only cloak systematic biases and issues of positionality.

As much of this issue makes clear, the people engaged in digital-pedagogy teaching and research provide essential infrastructure for this work. In “Interdisciplinarity and Teamwork in Virtual Reality Design,” Ole Molvig and Bobby Bodenheimer describe the evolution in logistics and pedagogy of a course they taught, Virtual Reality Design, at Vanderbilt University. In particular, they note that the interdisciplinary and collaborative requirements of their team-based course gave rise to a community of like-minded researchers over time. Responding to a growing demand for data visualization support, Negeen Aghassibake, Justin Joque, and Matthew L. Sisk offer a different approach to cultivating such interdisciplinary collaboration: leveraging the library. In “Supporting Data Visualization Services in Academic Libraries,” the authors identify a host of factors that can lead to more successful support of responsible data visualization and the fundamental literacies that underpin it. Data visualization, they note, is not just about products, but about the scholarly processes that require well-aimed questions, research, data and data management, ethical practices, and design—in addition to software and hardware decisions.

The articles in our Forum on Data and Computational Pedagogy attend to how these concerns arise in the classroom when using computational methods to teach processes of data collection, transformation, and presentation. In our call for papers, we asked submitters to address the challenges and opportunities that arise when teaching with data and promoting data literacy. We were especially interested in how students and instructors grappled with issues of power and agency when acting as “data users” (Gonzales and DeVoss 2016). The authors of the articles in this Forum span academic job roles and work in a wide variety of institutional contexts that inform their data pedagogy. What unites their contributions is a humanistic approach to data analysis—one that understands working with data as an exploratory and iterative analytical process of regularization, and which foregrounds data’s context-embeddedness and malleability. As Katie Rawson and Trevor Muñoz (2016) remind us, this feature of data is too often obscured when we think about data “cleaning” as its preparation for scholarly work rather than recognizing “messiness” as an integral part of the work itself. By recognizing the analytical agency we have to remediate data, we may develop the commitment to using data-driven methods for justice, resisting the potential of data analysis’s associations with correctness and order to propagate bias and do harm (D’Ignazio and Klein 2020).

In “Ethnographies of Datasets: Teaching Critical Data Analysis through R Notebooks,” Lindsay Poirier writes on how her students confront datasets as cultural objects in an undergraduate course called Data Sense and Exploration at the University of California, Davis. Here, she draws from cultural anthropology’s experimental ethnography to guide students through a series of weekly lab assignments in which students record field notes while performing analysis of dataframes in R. Each of these labs invokes a concept: routines and rituals, semantics, classifications, calculations and narrative, chrono-politics, and geo-politics. She characterizes her students’ work with data as “ethnography” because of “their consistent, hands-on engagement with the data” and the opportunity it provides for “reflections on their own positionality.” This approach encourages students to see themselves as “critical data practitioners” who can account for, as well as critique, the “incompleteness, inconsistencies and biases” of publicly available data.

In “Thinking Through Data in the Humanities and in Engineering,” Elizabeth Alice Honig, Deb Niemeier, Christian F. Cloke, and Quint Gregory assess how students in two disparate fields engage with data’s embedded context. The authors describe an interdisciplinary effort at the University of Maryland, College Park, in which students in art history and engineering worked through the same assignments with the same historical network dataset, each group bringing its entrenched disciplinary assumptions about data analysis and visualization. While the authors’ engineering students tended to value consistent design conventions in an approach framed by a pre-set analytical objective, their art history students tended to want to bring insights from visualization back to the dataset. On the flip side, the engineering students were less likely to incorporate context and “texture” in their visualizations, while the art history students tended to be less adept at properly labeling their graphs or ensuring that their visualizations communicated effectively. From their exploratory study, the authors conclude that an emphasis on digital training within humanities courses and on project-based learning in engineering courses may not alone be enough for students to overcome these tendencies, and that additional formal training may be required.

In “Numbering Ulysses: Digital Humanities, Reductivism, and Undergraduate Research,” Erik Simpson describes the pedagogical implications of humanities data creation for Ashplant, a collaborative digital project developed in conjunction with Grinnell College students. As the students worked to describe James Joyce’s Ulysses in tabular form for presentation online, they were forced to reckon with the frustrations posed by data entry involving complex humanities materials. In the process, students found their digital humanities work placed in dialogue with analog methods of analyzing Ulysses, which already used numerical and hierarchical systems of classification. The piece closes by building on these pedagogical lessons to suggest a series of ways that undergraduate research might engage with the “creativity, resistance, and questioning” of digital work.

In “Data Fail: Teaching Data Literacy with African Diaspora Digital Humanities,” Jennifer Mahoney, Roopika Risam, and Hibba Nassereddine reflect, too, on the frustrations and failures of a data curation and visualization project. Situating their work within scholarship on Black Digital Humanities, they articulate the difficulty of reconciling “fragments of information” when trying to avoid reproducing or amplifying gaps in the archives they used for research. Having set out to plot networks of participation in Pan-Africanist intellectual and social movements, the authors describe the “virtually meaningless” initial results that revealed some flawed assumptions of their project’s methodology. Their writing, however, exceeds mere process narrative by reflecting on this realization’s implications for their own and others’ projects—their non-result, it turns out, provided an opportunity to reappraise their methods and identify the aspects of their dataset the methods didn’t capture. Moreover, as secondary-education teachers and students, the authors argue that high school students might have similar data epiphanies were such digital humanities projects featured in high school English language arts curricula, using students’ development of data literacy to promote inclusivity by way of representation and equity in the cultural record.

“Data Literacy in Media Studies: Strategies for Collaborative Teaching of Critical Data Analysis and Visualization” addresses intra-institutional partnerships between librarians and faculty to support the teaching of critical data literacy. In this article, Andrew Battista, Katherine Boss, and Marybeth McCartin provide a template they use to enable a variety of instructors to teach data visualization instruction sessions each term. The model distributes the labor of teaching across a set of collaborators and supports the professional development of these instructors as they create shared and reproducible pedagogical materials. The program they describe is ultimately more sustainable and “has a broad and demonstrated impact on student learning, strengthens ties between the library and the departments we serve, and allows librarians and data services specialists the opportunity to learn and grow from each other.” While directed toward teaching librarians, the piece also proves useful for faculty considering library partnerships to enrich the data or information literacy offerings of their programs.

Like teaching, this issue results from the work of many hands. The editors would like to thank every member of JITP’s editorial board who contributed energy to its publication under such difficult circumstances. An issue of this size required an especially large number of reviewers, and the editors deeply appreciate their willingness to entertain unexpected requests. And, of course, we are grateful to all the authors who shared their work with us for consideration. Future issues of JITP will, undoubtedly, share work specific to the particular pedagogies of the pandemic itself. Nonetheless, we hope that this collection of essays will encourage reflection on how our teaching has always been called upon to respond to changing circumstances and must continue to do so. What we need, especially now, is more teachers sharing what works and what doesn’t, more authors responding to change as they see it happening in their work, and more voices calling out for change where it is not yet happening.

Bibliography

D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. Cambridge: The MIT Press.

Gonzales, Laura, and Dànielle Nicole DeVoss. 2016. “Digging into Data: Professional Writers as Data Users.” In Writing in an Age of Surveillance, Privacy, and Net Neutrality, edited by Cheryl E. Ball, special issue, Kairos: A Journal of Rhetoric, Technology, and Pedagogy 20, no. 2 (Spring). http://technorhetoric.net/20.2/topoi/beck-et-al/gon_devo.html.

Rawson, Katie, and Trevor Muñoz. 2016. “Against Cleaning.” Curating Menus (blog), July 6. http://curatingmenus.org/articles/against-cleaning/.

About the Editors

Kelly Hammond has focused on the intersection of humanities and technology in the classroom for over twenty years. She is currently the Director of Digital Pedagogy at the Chapin School in New York City. She is also pursuing her master’s degree in Digital Humanities at the City University of New York’s Graduate Center, and she serves on the editorial collective of The Journal of Interactive Technology and Pedagogy. Kelly is particularly interested in building online communities to facilitate dialogue and collaboration within the small but growing number of DH practitioners in K–12 environments. She is developing and testing the efficacy of micro-pd—tiny and targeted professional development to help faculty grow, even in times of crisis. She’s also a budding writer. Her fiction has appeared in online journals such as drafthorse and earned an Editor’s Prize from the Chautauqua Journal.

Gregory J. Palermo is a PhD candidate in English at Northeastern University. His research and teaching focus on the metaphors for disciplinary knowledge that structure digital methods used for plotting academic fields. His dissertation argues that citation analysis can be a tactical means of bringing together work from disparate traditions and promoting equity in scholarly publishing. His pedagogy foregrounds the implications of borrowing methods, rhetorical choices with data, and how algorithmic processes increasingly used for pattern-seeking analysis and surveillance can be useful for remix, intervention, and resistance. He has been a Research Associate and Project Manager for the Digital Scholarship Group in Northeastern University Library, a Graduate Fellow of the NULab for Texts, Maps, and Networks, and a co-instructor at the Digital Humanities Summer Institute (DHSI). His work has appeared in the Journal of Writing Analytics and Digital Humanities Quarterly (DHQ). He has been a managing editor of DHQ and now serves as an editor of The Journal of Interactive Technology and Pedagogy.

Brandon Walsh is Head of Student Programs in the Scholars’ Lab in the University of Virginia Library. Prior to that, he was Visiting Assistant Professor of English and Mellon Digital Humanities Fellow in the Washington and Lee University Library. He received his PhD and MA from the Department of English at the University of Virginia, where he also held fellowships in the Scholars’ Lab and acted as Project Manager of NINES. His dissertation examined modern and contemporary literature and culture through the lenses of sound studies and digital humanities, and these days he works primarily at the intersections of digital pedagogy and digital humanities. He serves on the editorial boards of The Programming Historian and The Journal of Interactive Technology and Pedagogy. He is a regular instructor at HILT, and he has work published or forthcoming with Programming Historian, Insights, the Digital Library Pedagogy Cookbook, Pedagogy, Digital Pedagogy in the Humanities, and Digital Scholarship in the Humanities, among others.

Table of Contents

Introduction
Kelly Hammond, Gregory J. Palermo, and Brandon Walsh

Reading Texts in Digital Environments: Applications of Translation Alignment for Classical Language Learning
Chiara Palladino

Back in a Flash: Critical Making Pedagogies to Counter Technological Obsolescence
Sarah Whitcomb Laiola

Make-Ready: Fabricating a Bibliographic Community
Courtney Jacobs, Marcia McIntosh, and Kevin M. O’Sullivan

Using Wikipedia in the Composition Classroom and Beyond: Encyclopedic “Neutrality,” Social Inequality, and Failure as Subversion
Cherrie Kwok

Interdisciplinarity and Teamwork in Virtual Reality Design
Ole Molvig and Bobby Bodenheimer

Supporting Data Visualization Services in Academic Libraries
Negeen Aghassibake, Justin Joque, and Matthew L. Sisk

Forum on Data and Computational Pedagogy

Ethnographies of Datasets: Teaching Critical Data Analysis through R Notebooks
Lindsay Poirier

Thinking Through Data in the Humanities and in Engineering
Elizabeth Alice Honig, Deb Niemeier, Christian F. Cloke, and Quint Gregory

Numbering Ulysses: Digital Humanities, Reductivism, and Undergraduate Research
Erik Simpson

Data Fail: Teaching Data Literacy with African Diaspora Digital Humanities
Jennifer Mahoney, Roopika Risam, and Hibba Nassereddine

Data Literacy in Media Studies: Strategies for Collaborative Teaching of Critical Data Analysis and Visualization
Andrew Battista, Katherine Boss, and Marybeth McCartin

Issue Eighteen Masthead

Issue Editors
Kelly Hammond
Gregory J. Palermo
Brandon Walsh

Managing Editor
Patrick DeDauw

Copyeditors
Param Ajmera
Elizabeth Alsop
Patrick DeDauw
Jojo Karlin
Benjamin Miller
Angel David Nieves
Nicole Zeftel
Dominique Zino

Staging Editors
Danica Savonick
Patrick DeDauw
Laura Wildemann Kane
Anna Alexis Larsson
Krystyna Michael
Teresa Ober
sava saheli singh
Inés Vañó García
Luke Waltzer

Reading Texts in Digital Environments: Applications of Translation Alignment for Classical Language Learning

Abstract

This paper illustrates the application of translation alignment technologies to empower a new approach to reading in digital environments. Digital platforms for manual translation alignment are designed to facilitate a particularly intensive and philological experience of the text, which is traditionally peculiar to the teaching and study of Classical languages. This paper presents the results of the experimental use of translation alignment in the context of Classical language teaching, and shows how the use of technology can empower a meaningful and systematic approach to information. We argue that translation alignment and similar technologies can open entirely new perspectives on reading practices, going beyond the opposed categories of “skimming” and traditional linear reading, and potentially overcoming the cognitive limitations connected with the consumption of digital content.

Reading and Digital Technologies: A New Challenge

It seems impossible to imagine a world where digital technologies are not a substantial part of our intellectual activities. The use of technology in teaching and learning is becoming increasingly prominent, even more so now, as the massive public health crisis of COVID-19 has created the need to access information without physical proximity. Yet, the way information is processed on digital platforms is substantially different from a cognitive standpoint, and not exempt from concerning consequences: it has recently been emphasized that accessing content digitally stimulates superficial approaches and “skimming”, rather than reading, which may have a long-lasting impact on the ways in which human brains understand, approach, and articulate complex information (Wolf 2018).

Therefore, we must ask ourselves if we are using digital technologies in the right way, and what can be done to address this problem. Instead of eliminating digital methods entirely (which in current times seems especially unrealistic), maybe the solution resides in using them to empower a different way of approaching information. In this paper, I will advocate that the practices embedded in the study of Classical texts can offer a new perspective on reading as a cognitive operation, and that, if appropriately empowered through the use of technology, they can create a new and meaningful approach to reading on digital platforms.

The study of Classical languages involves a very peculiar approach to processing information (Crane 2019). The most relevant aspect of studying Classical texts is that we cannot consult a native speaker to verify our knowledge: instead, “communication” is achieved through written sources and their interaction with other carriers of information, such as material culture and visual representations. At the same time, we must never forget that we are engaging with an alien culture to which we do not have direct access. This necessity of navigating uncertainty requires a much more flexible approach to information, and a very different way of engaging with written sources, where the focus is on mediated cultural understanding through reading, rather than immediate communication.

Engaging with an ancient text is a deeply philological operation: a scholar of an ancient language never simply goes from one word to another with a secure understanding of their meaning. Their reading mode is much more immersive. It is an operation of reconstruction through reflection, pause, and exploration, which requires several skills: from the ability to actively abstract the language and its mechanics, to the recognition of linguistic patterns that coincide with given models, to reflection on what a word or expression “really means” in etymological, stylistic, and cultural terms, to the philological reconstruction of “why” that word is there, as the result of a long process of transmission, translation, and error.[1] Yet, the implications of this intensive reading mode, in the broader context of the cognitive transformations to reading and learning, are often overlooked.

The operations embedded in the reading of Classical languages respond to a different cognitive process, one beyond the opposed categories of “skimming” and traditional linear reading. Because of this peculiarity, some of the technologies designed in the domain of Classical languages are created specifically to empower this approach, bringing it to the center of the reader’s experience.

Translation Alignment: Principles and Technologies

Digital technologies are widely used in Classics for scholarship and teaching, thanks to the widespread use of digital libraries like Perseus (Crane et al. 2018) and the Thesaurus Linguae Graecae (2020), and to the consolidation of various methods for digital text analysis (Berti 2019) and pedagogy (Natoli and Hunt 2019). One of the most interesting recent developments in the field is the proliferation of platforms for manual and semi-automated translation alignment.

Translation alignment is a task derived from one of the most popular applications in Natural Language Processing. It is defined as the comparison of two or more texts in different languages, also called parallel texts or parallel corpora, by means of automated or semi-automated methods. The main purpose is to define which parts of a source text correspond to which parts of a second text. The result is often a list of pairs of items – words, sentences, or larger text chunks like paragraphs or documents. In Natural Language Processing, aligned corpora are used as training data for the implementation of machine translation systems, but also for many other purposes, such as information extraction, the automated creation of bilingual lexica, or even text re-use detection (Dagan, Church, and Gale 1999).
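
To make this output concrete, the following minimal sketch (in Python, and purely illustrative rather than drawn from any of the platforms discussed below) represents a word-level alignment between a Latin verse and an English translation as a list of source–target token pairs. Note that a single source token may legitimately correspond to several target tokens, which is one reason perfect one-to-one overlap across languages is rare.

    # Illustrative sketch: word-level alignment pairs between a Latin source
    # and an English translation. One source token may map to several target
    # tokens; tokens that appear in no pair are simply unaligned.
    from dataclasses import dataclass

    @dataclass
    class AlignedPair:
        source: list  # token(s) from the source text
        target: list  # corresponding token(s) in the translation

    # Vergil, Aeneid 1.1: "Arma virumque cano" / "Arms and the man I sing"
    alignment = [
        AlignedPair(["Arma"], ["Arms"]),
        AlignedPair(["virumque"], ["and", "the", "man"]),  # one-to-many match
        AlignedPair(["cano"], ["I", "sing"]),
    ]

    for pair in alignment:
        print(" ".join(pair.source), "->", " ".join(pair.target))

Exported as tabular data, each pair would occupy one row; collections of such pairs are what the downstream NLP tasks mentioned above consume.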

The alignment of texts in different languages, however, is an exceptionally complex task, because it is often difficult to find perfect overlap across languages, and machine-actionable systems are often inefficient in providing equivalences for more sophisticated rhetorical or literary devices. The creation of manually aligned datasets is especially useful for historical languages, where available indexes and digitized dictionaries often do not provide a sufficient corpus to develop reliable NLP pipelines, and where automated translation remains remarkably inefficient. Therefore, creating aligned translations is also a way to engage with a larger scholarly community and to support important tasks in Computer Science.

In the past few years, three generations of digital tools for the creation and use of aligned corpora have been developed specifically with Classical languages in mind. First, Alpheios provides a system for creating aligned bilingual texts, which are then combined with other resources, such as dictionary entries and morphosyntactic information (Almas and Beaulieu 2016; “The Alpheios Project” 2020). Second, the Ugarit Translation Alignment Editor was inspired by Alpheios in providing a public online platform, where users can perform bilingual and trilingual alignments. Ugarit is currently the most used web-based tool for translation alignment in historical languages: since it went online in March 2017, it has registered an ever-increasing number of visits and new users. It currently hosts more than 370 users, 23,900 texts, 47,600 aligned pairs, and 39 languages, many of them ancient, including Ancient Greek, Latin, Classical Arabic, Classical Chinese, Persian, Coptic, Egyptian, Syriac, Parthian, Akkadian, and Sanskrit. Aligned pairs are collected in a large dynamic lexicon that can be used to extract translations of different words, but also as a training dataset for implementing automated translation (Yousef 2019).

The alignment interface offered by Ugarit is simple and intuitive. Users can upload their own texts and manually align them by matching words or groups of words. Alignments are automatically displayed on the home page (although users can deactivate the option for public visibility). Corresponding aligned tokens are highlighted when the pointer hovers over them. The percentage of aligned tokens is displayed in the color bar below the text: green indicates the rate of matching tokens, red the rate of non-matching tokens. Resulting pairs are automatically stored in the database, and can be exported as XML or tabular data. For languages with non-Latin alphabets, Ugarit offers automatic transliteration, visible when the pointer hovers over the desired word.[2]

Overview of a trilingual alignment on Ugarit (Armenian, Greek, and Latin). The mouse pointer triggers the highlighting of aligned pairs, and activates the transliteration for the Armenian text. A color bar below the text shows the percentage of aligned pairs in green, and of non-aligned tokens in red.
Figure 1. Overview of a trilingual alignment on Ugarit (Armenian, Greek, Latin), with active transliteration for Armenian.
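
The arithmetic behind the color bar is simple to reproduce. The following sketch (again illustrative; it does not reflect Ugarit’s actual internal data model, and assumes that aligned tokens are recorded as a set of token indices) computes the matched and unmatched shares for a text:

    # Illustrative sketch of the color-bar percentages: the share of matched
    # tokens ("green") and the unmatched remainder ("red").
    source_tokens = "Arma virumque cano Troiae qui primus ab oris".split()
    aligned_indices = {0, 1, 2, 5}  # hypothetical: four of eight tokens aligned

    green = len(aligned_indices) / len(source_tokens)
    red = 1.0 - green
    print(f"matched: {green:.0%}, unmatched: {red:.0%}")
    # prints: matched: 50%, unmatched: 50%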

The structure of Ugarit was also used to display a manually aligned version of the Persian Hafez, in a study that tested how German and Persian speakers used translation alignment to study portions of Hafez using English as a bridge language. The results indicated that, with the appropriate scaffolding, users with no knowledge of the source language could generate word alignments with the same output accuracy as those generated by experts in both languages. The study showed that alignment could serve as a pedagogical tool with some effect on long-term retention (Palladino, Foradi, and Yousef forthcoming; Foradi 2019).

The third generation of digital tools is represented by DUCAT – Daughter of Ugarit Citation Alignment Tool, developed by Christopher Blackwell and Neel Smith (Blackwell and Smith 2019), which can be used for local text alignment and can be integrated with the interactive analysis of morphology, syntax, and vocabulary. The project “Furman University Editions” shows the potential of these interactive views, which are currently part of the curriculum of undergraduate Classics teaching at Furman and elsewhere.

This proliferation of tools shows that there is potential in the pedagogical application of this method: translation alignment can provide a new and imaginative way of using translations for the study of Classical texts, overcoming the hindrances normally associated with reading an ancient work through a modern-day version.

Text Alignment in the Classroom

The use of authorial translations to approach Classical texts is normally discouraged in the classroom, being perceived as “cheating” or as unproductive for a true, active engagement with the language. Part of this phenomenon is explained by the fact that, as “passive” readers, we don’t have any agency in assessing the relationship between a translation and the original, and reading them side by side on paper is rarely a systematic or intentional operation (Crane 2019). However, translations are an integral part of ancient cultures.[3] They are a crucial component of textual transmission, as they represent witnesses of the survival and fortune of Classical texts. Translations are also important testimonies of the scholarly problem of transferring an alien culture and its values onto a different one, to ensure effective communication, or to pursue a cultural and political agenda through the reshaping and recrafting of an important text (Lefevere 1992).

Translations are a medium between cultures, not just between languages. Engaging in an analytical comparison between a translation and the original means having a deeper experience of how a text was interpreted in a given time, what meanings were associated with certain words, and, at the same time, how certain expressions can display multifaceted semantics that are often not entirely captured by another language. This is also an exercise in cultural dialogue and reflection, not only upon the language(s), but upon the civilization that used them to reflect its values. In other words, it is a philological exploration that closely resembles the reading mode of a Classicist.

Digital platforms for translation alignment offer an immersive and visually powerful environment to perform this task, where the reader can analytically compare texts token by token, and at the same time observe the results through an interactive visualization. It is the reader who decides what is compared, how, and to what extent: the comparison of parallel texts becomes an analytical, systematic operation, which at the same time encourages reflection and debate regarding the (im)perfect matching of words and expressions. In this way, translation alignment provides a way to navigate between traditional linguistic mastery and complete dependence upon a translation. Not only does this stimulate an active use of modern translations of ancient texts, but the public visibility of the result on a digital platform also provides a way to be part of a broader conversation on the reception and significance of an ancient text over time.

However, it is also important to apply this tool in the right way. For example, translation alignment needs to be coupled with some grammatical input, to encourage reflection on structural linguistic differences. Mechanical approaches, all too easy with the uncontrolled use of a clickable “matching tool”, should be discouraged by emphasizing the importance of focused word-by-word alignment. In practice, translation alignment needs to be used with caution and in meaningful ways, as a function of the goal and level of a course.

The following sections illustrate examples of the application of translation alignment in beginner, intermediate, and upper-level classes in Ancient Greek and Latin. Translation alignment was used throughout the courses to emphasize semantic and syntactic complexities through analytical comparison with English or other languages. Later, students were assigned various alignment tasks and exercises, designed to empower an analytical approach to the text.

Beginner Ancient Greek, first and second semester

The students were given two assignments, performed iteratively in two consecutive semesters, with variations in quantity (more words and sentences were assigned in the second semester):[4]

  1. Identify specific given words in a chosen passage, and align them with the corresponding words in one translation. The goal of this exercise was to set the groundwork for developing a rough understanding of the depth of word meanings, by assessing how the same word in the source text could appear in different ways across the same translation.
  2. Use alignment to evaluate two translations of a shorter text chunk (1–3 sentences, or 10 verse lines). Identify precisely the corresponding sections of text in the source and in the translations. Assess which translation is most effective by using two criteria: (1) the combination of the number and quality of matched tokens; (2) the treatment of particularly problematic words: look them up in a dictionary to assess their meanings, compare the dictionary explanation with the general context of the passage, and assess how the translations relate to the dictionary entries and how closely they render the “original sense” of the word.

The results were two short essays where the students articulated their considerations. Grading was based on the ability to give an insightful analysis of how word choices impacted the tone and meaning of the translations, and to discuss the semantic depth of the words in the original language. Bonus points were provided if the student was able to identify tangential aspects, such as word order, changes in cases, and syntax. Minor weight was given to the overall accuracy of the alignment, in consideration of the level: the design of the exercises deliberately discouraged the creation of longer alignments, which often result in the student doing the work without thinking about their alignment choices. Essay questions focused instead on close, analytical, in-depth investigation into the semantics of the source language.

The Ancient Greek text is located at the center, and the two translations at the sides. The translation on the left displays 75% aligned pairs, the translation on the right 73%.
Figure 2. Two aligned English translations of Odyssey 9.105–9.115.

Intermediate level of Ancient Greek and Latin, third semester

Students used translation alignment in the context of project-based learning. They were responsible for the alignment of a chosen text chunk against translations that they had selected, ranging from early modern to contemporary translations. The assignment was divided into phases:

  1. Alignment of the source text against two chosen translations in English, and systematic evaluation of both translations. The students were asked to focus on selected phenomena of syntax, morphology, grammar, and semantics that were particularly relevant in the text: e.g. word order, participial constructs, adjectival constructs, passive/active constructions, changes in case, transposition of allusion and semantic ambiguity. The students used their knowledge of syntax and grammar to critically assess the performance of different translators, focusing on the different ways in which complex linguistic phenomena can be rendered in another language. This assignment was combined with a side analysis of morphology and syntax: for example, the students of Ancient Greek designed a morphological dataset containing 200 parsed words from the same passage.
  2. Creation of a new, independent translation, with a discussion of where it distanced itself from the original, which aspects of it were retained, and how the problems identified in the authorial translations were approached by the student.

The result was a written report submitted at the midterm or end of the semester, indicating: the salient aspects of the text and its most relevant linguistic features; an analytical comparison of how those linguistic features appeared in the competing aligned translations, and an evaluation of the translator’s strategy; the student’s translation, with a critical assessment of the strategy chosen to approach the same problems. These aspects constituted the backbone of the grading strategy, with additional attention to alignment accuracy.

Section of two aligned translations of Hesiod, Works and Days vv. 42–105, with the original ancient Greek at the center, and the English translations on the sides.
Figure 3. Section of two aligned translations of Hesiod, Works and Days vv. 42–105. The student compared two translations with very different styles (Hugh G. Evelyn-White 1914, and David W. Tandy and Walter C. Neale 1996), and used adjective-noun combinations and participle constructions to systematically evaluate them. The 1996 translation was judged more literal than the other, and more useful for a student.

Upper level Ancient Greek and Latin, fourth to seventh semester (graduate and undergraduate)

The exercises assigned at the upper level were a more elaborate version of the project-based ones given to the intermediate level. The students were assigned a research-based project where alignment would be one component of an in-depth analysis of a chosen source. At an intermediate stage of the semester, the students would submit a research proposal indicating: an extensive passage they chose to investigate, and why they chose it; the topic they decided to investigate, and a short account of previous literature on it; the methodologies applied to develop the project; and the desired outcomes. The final result would be a project report submitted at the end of the semester, indicating: whether the desired outcomes had been reached, what kinds of challenges were not anticipated, and what new results were achieved; the strategies implemented to apply the chosen methodology, e.g. which alignment strategy was applied to ensure that the research questions were answered; and what the student learned about the source, its cultural context, and/or language. The results were graded as full research projects: the students were evaluated according to their ability to clearly delineate motivation and methodology, their use of existing resources, and their critical discussion of the outcomes.

Many students creatively integrated alignment into their projects. For example:

  • Creation of an aligned translation for non-expert readers, alongside a commentary and morphosyntactic annotations. To facilitate reading, the student developed a consistent alignment strategy that only matched words corresponding in meaning and grammatical function. This project was published on GitHub.
  • Trilingual alignment of English-Latin-German to investigate the matching rate between two similarly inflected languages. The student noted that, even though their knowledge of German was weaker than their English and Latin, matching Latin against German proved easier and more streamlined, while the English translation was approached with more criticism for its verbosity (Figure 4).[5]
  • Trilingual alignment to compare different texts. The student conceived a project aimed at gathering systematic evidence of the verbatim correspondences between the so-called Fables of Loqman and the Aesopic fables: according to existing scholarship, the former would be an Arabic translation of the latter. The student used a French translation of the Loqman fables to offset the challenges of the Arabic, and examined the overall matching rate across the texts (see this sample passage).
Sample passage of Tacitus, Germania 1.1, with two aligned translations in English and German, located on the left and at the center respectively. The German translation at the center matches 93% of the Latin text on the right, while the English translation on the left matches only 89%.
Figure 4. Sample passage of Tacitus, Germania 1.1, with two aligned translations in English and German.

Results

The students reported how alignment affected their understanding of the source and its linguistic features, and how approaching the original by comparing it against a modern translation gave them a deeper understanding of, and respect for, the content. While the alignment process often resulted in some criticism of available translations, the students who had to discuss the challenges faced by translators (or who had to translate themselves) gained a stronger understanding of the issues involved in “transferring” not only words and constructs but also underlying cultural implications and multiple meanings. The students who used alignment in the context of research projects also benefited from the publication of their aligned translations, and some presented them as research papers at undergraduate conferences. Many students even reported having used alignment independently afterwards in other courses, often to facilitate the study of new languages, both ancient and modern.

Some overarching tendencies in the evaluation of competing translations emerged, particularly at the Intermediate and Upper levels. This feedback was extremely interesting to observe, because most of it can only be explained as the result of a systematic comparison between target and source language, in a situation where the reader is an active operator and not a passive content consumer.

The students analytically observed the various ways in which translations cannot structurally convey peculiar aspects of the original: for example, dialectal variants, metrical arrangement, wordplay, or syntactic constructs. Most of them were still able to appreciate skillful modern translations, and even to diagnose why translators would distance themselves from the original; they certainly understood the challenge by engaging in translation tasks themselves. For many, however, the discovery that they could not fully rely on translations to understand what is happening in a text was astonishing. Students tend to be taught that authorial translations are necessarily “right” (and therefore “faithful”[6]) renderings of Classical texts, to the point where they often trust them over their own understanding of the language. With this exercise, they learned that “right” and “faithful” may not be the same thing, and that the literature of an ancient civilization preserves a depth and complexity of meaning that cannot be fully encompassed in a translation.

Interestingly, students often had a more positive judgement of translations that rendered difficult syntactic constructs in ways closer to the original, without fundamentally altering the structure or shifting the emphasis (e.g. by changing subject-object relations or by altering verb voices). Students at the Intermediate level, in particular, judged such translations more “literal”, as they found them more helpful in understanding important linguistic structures: Figure 3 shows an alignment of Hesiod’s Works and Days, where the student extrapolated adjective-noun combinations and participle constructs to draw a systematic comparison between two competing translations. The translation that was judged “more literal”, and therefore more useful for a student, was the one that kept these structures closer to the way they appeared in the original. This phenomenon intensified with texts dense with allusion and wordplay, which are often conveyed by means of very specific syntactic constructs: students who dealt with this kind of text were merciless judges of translations that completely altered the original syntax and recrafted the phrasing to adapt it to a modern audience. The students indicated how such alterations regularly failed to convey the depths of sophisticated wordplay, where the syntax itself is not an accessory, but a structural part of the meaning.

Students were particularly unforgiving of the omission of words in the source language: even though some words, like adverbs and conjunctions, are omitted in translations to avoid redundancy, some translations were found to leave out entire concepts or expressions for no apparent reason. The visualization of aligned texts on Ugarit certainly accentuated this aspect, as it tends to emphasize the relation between matched and non-matched tokens through the use of color, and it also provides matching rates to assess the discrepancy between texts. Almost all the students seem to have made intensive use of this feature, emphasizing how translations missed entire expressions that appeared in the original and shaped its message: in other words, even if the omission concerned only one adjective or a particularly intensive adverb, they felt that the translations did not convey the full meaning of the text they were reading.

The implications of such observations are interesting: the translations in question were “bad” translations not because they were not understandable or efficient in conveying the sense of a passage in English, but because they hindered the student’s understanding of the original. Readers, even classically-trained ones, normally enjoy translations that, while taking some liberties, are more efficient in conveying the content and artistic aspects of a text in a way that is more familiar to a modern audience. Students who read a text in translation often struggle with versions that try to be close to the original language (sometimes with rather clumsy results), and they also make limited use of printed aligned translations that used to be very popular in school commentaries of the past. However, when students became active operators of translation alignment, the focus shifted to the understanding of the original through the scaffolding provided by the translation. In other words, the focus was on how the translation served the reader of the source text: this suggests an extremely active engagement with the original, through the critical lenses of systematic linguistic comparison.

With the guidance provided by the exercises, the students used translation alignment to engage with linguistic and stylistic phenomena, and the assessment of the ineffectiveness of translations in conveying such complex nuances often made them more confident in approaching the original language. In their own translations, they became extremely self-aware of their position with respect to the text, and tried to justify every perceived variation from the structure and the style of the original. Some of them opted for very literal, yet clumsy, translations, which they reflected upon and elaborated more thoroughly in a commentary to the text; others, particularly at the Upper level, built upon aspects that they liked or disliked in the translations to create better versions of them, depending on their intended audience.

We can conclude that, if appropriately embedded in reflective exercises, translation alignment did not result in a mechanical operation of word matching, but nurtured an active philological approach to the text, and an exploration of all its different aspects, from linguistic constructs to word meanings, to the role of wordplay in a literary context. Despite growing skepticism about the ability of translations to convey the “full” meaning of a text, the students still believed in the necessity of using them in a thoughtful manner.

In fact, the students advocated for more, and more varied, uses of digital tools to compensate for the deficiencies of aligned corpora. At the Upper level in particular, many students complemented their translation alignments with additional data gathered through other digital resources: for example, while creating translation alignments directed at non-expert readers, they integrated the resource with a complete morphosyntactic analysis performed with treebanking (Celano 2019), with the intention of making up for the limitations of incomplete matching of word functions in specific linguistic constructs.

In this regard, it is important to emphasize that translation alignment is just one of the tools at our disposal. In a future where learning and reading will be predominantly performed through digital technologies, we need to create environments where readers can meaningfully engage in a philological exploration of texts at multiple levels: translation alignment, but also detection of textual variants, geospatial mapping, social network analysis, morphosyntactic reconstruction, up to the incorporation of sound and recording that can compensate for reading and visual disabilities (Crane 2019).

Conclusions

Overall, the experiment showed that a meaningful use of translation alignment can empower a reflective and active approach to Classical sources, by means of the continuous, systematic comparison of the cultural and semantic depths embedded in the language. Of course, translation alignment should not be the only option: digital technologies offer many opportunities for enhancing the reading experience as a philological exploration, through the interaction of many different data types, allowing a sophisticated approach to information from multiple perspectives. Even though these tools have been created to empower the reading processes specific to Classical scholars, their application promises new ways of approaching digital content in a much wider context, going beyond the categories of “close reading” and “skimming.”

Translation alignment is a tool that can empower a thoughtful and meaningful approach to reading on digital platforms. But more than that, it can also stimulate a deeper respect for cultural differences. In an increasingly globalized world, translations as a means of communicating across cultural contexts and languages are increasingly important: automated translations, as well as interpreters and professional translators, represent a response to a generalized need for fast and broad access to information produced in different cultural contexts. However, being able to access translated content so easily can result in oversimplification, and in the overlooking of cultural complexities. Aligned translations offer an alternative. By discouraging the idea that every word has an exact equivalence, aligned translations add value to the original, rather than subtracting from it, through a continuous dialogue between cultural and linguistic systems. Engaging with a translation meaningfully means much more than merely establishing equivalences: by emphasizing the depth of semantic differences, it can promote better attitudes toward cultural diversity and acceptance.

Notes

[1] In this sense, reading an ancient text is much closer to literary criticism than to the study of a foreign language. This is the reason why Classical languages are never fully embedded in current practices of foreign language teaching and assessment. This topic was recently treated, among others, by Nicoulin (2019).
[2] This feature is currently available for Greek, Arabic, Persian, Armenian, and Georgian.
[3] Translations were continuously used to ensure communication between different cultures and communities in the ancient world (Bettini 2012; Nergaard 1993). The practice of multi-lingual aligned texts as means of cultural communication was normal, if not frequent, in antiquity, with famous examples like the inscriptions of Behistun, the edicts of Ashoka the Great, and the Rosetta Stone.
[4] A variant of this assignment was also tested on a group of students with no knowledge of Greek, enrolled in courses of literature in translation (Palladino, Foradi, and Yousef forthcoming).
[5] Interestingly, trilingual alignment was used by some students to improve their mastery of a third language, often a modern one, by leveraging their knowledge of their native tongue and the ancient language (Palladino, Foradi, and Yousef forthcoming).
[6] Incidentally, the “faithfulness” of a translation as a value judgement was introduced by the Christians: since God imprints his image on the text, every version of that text needs to be a faithful reproduction of it. Here resides the miraculous character of the translation of the Septuagint, which, according to tradition, came to be when seventy savants independently wrote an identical translation of the Bible (Nergaard 1993).

Bibliography

Almas, Bridget, and Marie-Claire Beaulieu. 2016. “The Perseids Platform: Scholarship for All!” In Digital Classics Outside the Echo-Chamber, edited by Gabriel Bodard and Matteo Romanello, 171–86. Ubiquity Press. https://doi.org/10.26530/OAPEN_608306.

Berti, Monica, ed. 2019. Digital Classical Philology: Ancient Greek and Latin in the Digital Revolution. Berlin: De Gruyter Saur.

Bettini, Maurizio. 2012. Vertere. Un’antropologia della traduzione nella cultura antica. Torino: Einaudi.

Blackwell, Christopher, and Neel Smith. 2019. “DUCAT – Daughter of Ugarit Citation Alignment Tool.” Accessed October 4, 2020. https://github.com/eumaeus/ducat.

Celano, Giuseppe. 2019. “The Dependency Treebanks for Ancient Greek and Latin.” In Digital Classical Philology: Ancient Greek and Latin in the Digital Revolution, edited by Monica Berti, 279–98. Berlin: De Gruyter Saur.

Cook, Guy. 2009. “Foreign Language Teaching.” In Routledge Encyclopedia of Translation Studies, edited by Monica Baker and Gabriela Saldanha, Second Edition, 112–15. London; New York: Routledge, Taylor & Francis Group.

Crane, Gregory. 2019. “Beyond Translation: Language Hacking and Philology.” Harvard Data Science Review 1, no. 2. https://doi.org/10.1162/99608f92.282ad764.

Crane, Gregory, Alison Babeu, Lisa Cerrato, Bridget Almas, Marie-Claire Beaulieu, and Anna Krohn. 2018. “Perseus Digital Library.” Accessed March 5, 2020. http://www.perseus.tufts.edu/hopper/.

Dagan, Ido, Kenneth Church, and William Gale. 1999. “Robust Bilingual Word Alignment for Machine Aided Translation.” In Natural Language Processing Using Very Large Corpora, edited by Susan Armstrong, Kenneth Church, Pierre Isabelle, Sandra Manzi, Evelyne Tzoukermann, and David Yarowsky, 209–24. Text, Speech and Language Technology. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-017-2390-9_13.

Foradi, Maryam. 2019. “Confronting Complexity of Babel in a Global and Digital Age. What Can You Produce and What Can You Learn When Aligning a Translation to a Language That You Have Not Studied?” In DH2019: Digital Humanities Conference, Utrecht University, July 9–12. Book of Abstracts. Utrecht.

Lefevere, André. 1992. Translation, Rewriting, and the Manipulation of Literary Fame. London; New York: Routledge.

Natoli, Bartolo, and Steven Hunt, eds. 2019. Teaching Classics with Technology. London; New York: Bloomsbury Academic.

Nergaard, Siri, ed. 1993. La teoria della traduzione nella storia. Milano: Bompiani.

Nicoulin, Morgan. 2019. “Methods of Teaching Latin: Theory, Practice, Application.” Arts & Sciences Electronic Theses and Dissertations, May. https://doi.org/10.7936/znvz-zd20.

Palladino, Chiara, Maryam Foradi, and Tariq Yousef. Forthcoming. “Translation Alignment for Historical Language Learning: A Case Study.”

“The Alpheios Project.” 2020. Accessed June 23, 2020. https://alpheios.net/.

“TLG – Thesaurus Linguae Graecae.” 2020. Accessed June 18, 2020. http://stephanus.tlg.uci.edu/.

Wolf, Maryanne. 2018. Reader, Come Home: The Reading Brain in a Digital World. New York: Harper.

Yousef, Tariq. 2019. “Ugarit: Translation Alignment Visualization.” In LEVIA’19: Leipzig Symposium on Visualization in Applications 2019. Leipzig.

About the Author

Chiara Palladino is Assistant Professor of Classics at Furman University. She works on the application of digital technologies to the study of ancient texts. Her current main interests are in the use of semantic annotation and modelling for the analysis of ancient spatial narratives, and in the implementation of translation alignment platforms for reading and investigating historical languages.

A three-by-three grid of fragments of poems, with syllables appearing in different colors.

Back in a Flash: Critical Making Pedagogies to Counter Technological Obsolescence

Abstract

This article centers on the challenge of teaching digital humanities work in the face of technological obsolescence. Because this is a wide-ranging topic, the article focuses in particular on teaching electronic literature in light of the loss of Flash, for which Adobe will end support at the end of 2020. The challenge of preserving access to electronic literature amid technological obsolescence is integral to the field, which has already taken up a number of initiatives to preserve electronic literatures as their technological substrates of hardware and software become obsolete, with the recent NEH-funded AfterFlash project responding directly to the need to preserve Flash-based texts. As useful as these and similar projects of e-literary preservation are, they focus on preservation through one traditional modality of humanities work: saving a record of works so that they may be engaged, played, read, or otherwise consumed by an audience, a focus that loses the works’ poetics effected through materiality, physicality, and (often) interaction. This article argues that obsolescence may be most effectively countered through pedagogies that combine the consumption of the e-textual record with the critical making of texts that share similar poetics, as the process of making allows students to engage with material, physical poetics. The loss of Flash is a dual loss of both a set of consumable e-texts and a tool for making e-texts; the article thus provides a case study of teaching with Stepworks, a complementary program for making e-poetic texts in place of Flash.

Technological obsolescence—the phenomenon by which technologies are rendered outdated as soon as they are updated—is a persistent, familiar problem when teaching with, and through, digital technologies. Whether planned (a model of designed obsolescence in which hardware and software are updated to be incompatible with older systems, thus forcing the consumer to purchase updated models) or perceived (a form of obsolescence in which a given technology becomes culturally, if not functionally, obsolete), obsolescence means that teaching in a 21st-century ubicomp environment necessitates strategies for working with our own and our students’ “obsolete,” outdated, or otherwise incompatible technologies. These strategies may take any number of forms: taking time to help students install, emulate, or otherwise troubleshoot old software; securing lab space with machines that can run certain programs; embracing collaborative models of work; designing flexible assignments to account for technological limitations…the list could go on.

But in Spring 2020, as campuses around the globe went fully remote, online, and often asynchronous in response to the COVID-19 pandemic, many of us met the limits of these strategies. Tactics like situated collaboration, shared machinery, or lab space became impossible to employ in the face of this sudden temporal obsolescence, as our working, updated technologies became functionally “obsolete” within the time-boundedness of that moment. In my own case, the effects of this moment were most acutely felt in my Introduction to the Digital Humanities class, where the move to remote teaching coincided with our electronic literature unit, a unit where the combined effects of technological obsolescence and proprietary platforms already necessitate adopting strategies so that students may equitably engage the primary texts. For me, primary among these strategies are sharing machinery to experience texts together, guiding students through emulator or software installation as appropriate, and adopting collaborative critical making pedagogies to promote material engagement with texts. In Spring 2020, though, as my class moved to a remote, asynchronous model, each of these strategies was directly challenged.

I felt this challenge most keenly in trying to teach Brian Kim Stefans’s The Dreamlife of Letters (2000) and Amaranth Borsuk and Brad Bouse’s Between Page and Screen (2012), two works that, though archived and freely available via the Electronic Literature Collection and therefore ideal for accessing from any device with an internet connection, were built in Flash and so require the Flash Player plug-in to run. Since Adobe’s announcement in 2017 that it would stop supporting Flash Player by the end of 2020, fewer and fewer students enter my class with machines prepared to run Flash-based works, either because they have not installed the plug-in, or because they are on mobile devices, which have never been compatible with Flash—a point that Anastasia Salter and John Murray (2014) argue has largely contributed to Adobe’s decision. The COVID-19 move to remote, online instruction effectively meant that students with compatible hardware had to navigate the plug-in installation on their own (a process requiring a critical level of digital literacy), while those without compatible hardware were unable to access these works except as images or recordings. In the Spring 2020 teaching environment, then, the temporal obsolescence brought on by COVID-19 acted as a harbinger of the eventual inaccessibility of e-literary work that requires Flash to run.

The problem space that I occupy here is this: teaching electronic literature in the undergraduate classroom as its digital material substrates move ever closer to obsolescence, an impossible race against time that speeds up each year as students enter my classroom with machines ever less compatible with older software and platforms. Specifically, I focus on the challenge of teaching e-literature in the undergraduate, digital humanities classroom while facing the loss of Flash and its attendant digital poetics—a focus informed by my own disciplinary expertise, but through which I offer pedagogical responses and strategies that are useful beyond this scope.

Central to my argument here is that teaching e-literary practice and poetics in the face of technological obsolescence is most effective through an approach that interweaves the practice-based critical making of e-literary texts with the more traditional humanities practices of reading, playing, or otherwise consuming texts. Critical making pedagogy, however, is not without its critiques, particularly as it may promote inequitable classroom environments. For this reason, I start with a discussion of critical making as feminist, digital humanities pedagogy. From there, I turn more explicitly to the immediate problem at hand: the impending loss of Flash. I address this problem first by highlighting some of the work already being done to maintain access to e-literary works built in Flash, and second by pointing to some of the limits of these projects, which prioritize maintaining Flash-based works for readerly consumption, even though the loss of Flash is also the loss of an amateur-friendly, low-stakes coding environment for “writing” digital texts. Acknowledging this dual loss brings me to a short case study of teaching with Stepworks, a contemporary, web-based platform that is ideal for creating interactive digital texts with poetics similar to those created in Flash. As such, it is a platform through which we can effectively use critical making pedagogies to combat the technological obsolescence of Flash. I close by briefly expanding from this case study to reflect on some of the ways critical making pedagogies may help counter the loss of e-literary praxes beyond those indicative of and popularized by Flash.

Critical Making Pedagogy as DH Feminist Praxis

Critical making is a mode of humanities work that merges theory with practice by engaging the ways physical, hands-on practices like making, doing, tinkering, experimenting, or creating constitute forms of thinking, learning, analysis, or critique. Strongly associated with the digital humanities, given the field’s own relationship to intellectual practices like coding and fabrication, critical making argues methodologically that intellectual activity can take forms other than writing, as coding, tinkering, building, or fabricating represent more than just “rote” or “automatic” technical work (Endres 2017). As critical making asserts, such physical practices constitute complex forms of intellectual activity. In other words, it takes to task what Bill Endres calls “an accident of available technologies” that has conferred on writing its status as “the gold standard for measuring scholarly production” (2017).

Critical making scholarship can thus take a number of forms, as Jentery Sayers’s edited collection Making Things and Drawing Boundaries (2017) showcases. For example, as a challenge to and reflection on the invasiveness of contemporary personal devices, Allison Burtch and Eric Rosenthal offer Mic Jammer, a device that transmits ultrasonic signals that, when pointed towards a smartphone, “de-sense that phone’s microphone,” to “give people the confidence to know that their smartphones are non-invasively muted” (Burtch and Rosenthal 2017). Meanwhile, Nina Belojevic’s Glitch Console, a hacked or “circuit-bent” Nintendo Entertainment System, “links the consumer culture of video game platforms to issues of labor, exploitation, and the environment” through a play-experience riddled with glitches (Belojevic 2017). Moving away from fabrication, Anne Balsamo, Dale MacDonald, and Jon Winet’s AIDS Quilt Touch (AQT) Virtual Quilt Browser is a kind of preservation-through-remediation project that provides users with opportunities to virtually interact with the AIDS Memorial Quilt (Balsamo, MacDonald, and Winet 2017). Finally, Kim A. Brillante Knight’s Fashioning Circuits disrupts our cultural assumptions about digital media and technology, particularly as they are informed by gendered hierarchy; the project uses “wearable media as a lens to consider the social and cultural valences of bodies and identities in relation to fashion, technology, labor practices, and craft and maker cultures” (Knight 2017).

Each of these examples highlights the ways critical making disrupts the primacy of writing in intellectual activities. However, they are also entangled in one of the most frequently lobbed, and not insignificant, critiques of critical making as a form of Digital Humanities (DH) praxis: that it reinforces the exclusionary position that DH is only for practitioners who code, make, or build. This connection is made explicit by Endres, as he frames his argument that building is a form of literacy through a mild defense of Stephen Ramsay’s 2011 MLA conference presentation, which argued that building was fundamental to the Digital Humanities. Ramsay’s presentation was widely critiqued for being exclusionary (Endres 2017), as it promoted a form of gatekeeping in the digital humanities that, in turn, reinforces traditional academic hierarchies of gender, race, class, and tenure-status. As feminist DH in particular has shown, arguments that “to do DH, you must (also) code or build” imagine a scholar who, in the first place, has had the emotional, intellectual, financial, and temporal means to acquire skillsets that are not part of traditional humanities education, and who, in the second place, has the institutional protection against precarity to produce work in modes outside “the gold standard” of writing (Endres 2017).

Critical making as DH praxis, then, finds itself in a complicated bind: on the one hand, it effectively challenges academic hierarchies and scholarly traditions that equate writing with intellectual work; on the other hand, as it replaces writing with practices (akin to coding) like fabrication, building, or simply “making,”[1] it potentially reinforces the exclusionary logic that makes coding the price of entry into the digital humanities “big tent” (Svensson 2012).[2] Endres, however, raises an important point that this critique largely fails to account for: that “building has been [and continues to be] generally excluded from tenure and promotion guidelines in the humanities” (Endres 2017). That is, while we should perhaps not take DH completely off the hook for the field’s internalized exclusivity and the ways critical making praxis may be commandeered in service of this exclusivity, examining academic institutions more broadly, of which DH is only a part, reveals that writing and its attendant systems of peer review and impact factors remain the exclusionary technology, gatekeeping from within.

To offer a single point of “anec-data”:[3] in my own scholarly upbringing in the digital humanities, I have regularly been advised by more senior scholars in the field (particularly other white women and women of color) to take on only those coding, programming, building, or other making projects that I can also support with a traditional written, peer-reviewed article—advice that responds explicitly to writing’s position of primacy as a metric of academic research output, and implicitly to academia’s general valuation of research above activities like teaching, mentorship, or service. It is also worth noting that this advice was regularly offered with the clear-eyed reminder that, as valuable as critical making work is, ultimately the farther a scholar or practitioner appears from the cisgendered, heterosexual, able-bodied, white, male default, the more likely it is that their critical making work will be challenged when it comes to issues of tenure, promotion, or job competitiveness. While a single point of anec-data hardly indicates a pattern, the wider system of academic value that I sketch here is well known and well documented. It would be a disservice, then, not to acknowledge the space that traditional, peer-reviewed academic writing occupies within this system.

Following Endres, my own experience, and systemic patterns across academia, I would argue that even though critical making can promote exclusionary practices in the digital humanities, the work that this methodological intervention does to disrupt technological hierarchies and exclusionary systems—including, and especially, writing—outweighs the work that it may do to reinforce other hierarchies and systems. Indeed, I would go one step further and add my voice to existing arguments, implicit in the projects cited above and explicit throughout Elizabeth Losh and Jacqueline Wernimont’s edited collection Bodies of Information: Intersectional Feminism and the Digital Humanities (2018), that critical making is feminist praxis, not least for the ways it contributes to feminism’s long-standing project of disrupting, challenging, and even breaking language and writing.[4]

The efficacy of critical making’s feminist intervention becomes even more evident, and I would argue more powerful, when it enters the classroom as undergraduate pedagogy. Following critical making scholarship, critical making pedagogy similarly disrupts the primacy of the written text for intellectual work. Students are invited to demonstrate their learning through means other than a written paper, in a student learning assessment model that aligns with other progressive, student-centered practices like the unessay.[5] Because research in digital and progressive pedagogy already highlights the ways practices like unessays benefit student learning, I will focus here on the particular ways critical making pedagogy in the undergraduate digital humanities classroom operates as feminist praxis for disrupting heteropatriarchal assumptions about technology. For this, I will pull primarily from feminist principles of and about technology, as outlined by FemTechNet and by Catherine D’Ignazio and Lauren Klein’s Data Feminism, and from my experiences teaching undergraduate digital humanities classes through and with critical making pedagogies.

In their White Paper on Transforming Higher Education with Distributed Open Collaborative Courses, FemTechNet unequivocally states that “effective pedagogy reflects feminist principles” (FemTechNet White Paper Committee 2013), and the first and perhaps most consistent place that critical making pedagogies respond to this charge is in the ways that critical making makes labor visible, a value of intersectional feminism broadly, and of data feminism specifically (D’Ignazio and Klein 2020). Issues of invisible labor have long been central to feminist projects, so it is no surprise that this would also be a central concern in response to Silicon Valley’s techno-libertarian ethos, which blackboxes digital technologies as “labor free” for both end-users and tech workers. When students participate in critical making projects that relate to digital technologies—projects that may range from building a website or programming an interactive game, to fabricating through 3D printing or physical circuitry—they are forced to confront the labor behind (and rendered literally invisible in) software and hardware. This confrontation typically manifests as frustration, often felt in and expressed through their bodies, necessitating an engagement with affect, emotion, care, and, often, collaboration in this work—all feminist technologies cited both in FemTechNet’s manifesto (FemTechNet 2012) and among the core principles of data feminism (D’Ignazio and Klein 2020). Similarly, as students learn methods and practices for their critical making projects, they inevitably find themselves facing messiness, chaos, even fragments of broken things, and only occasionally is this “ordered” or “cleaned” by the time they submit their final deliverable. Besides recalling FemTechNet’s argument that “making a mess… [is a] feminist technolog[y]” (FemTechNet 2012), the physical and intellectual messiness of critical making pedagogy also requires a shift in values away from the “deliverable” output and towards the process of making, building, or learning itself. At times, this shift in value toward process can manifest as a celebration of play, as the tinkering, experimentation, chaos, and messiness of critical making transform into a play-space. While I am hesitant to argue too forcefully for the playfulness of critical making in the classroom (not least because of the ways play is inequitably distributed in academic and technological systems), Shira Chess has recently and compellingly argued for the need to recalibrate play as a feminist technology, so I name it here as an additional, potential effect of critical making pedagogy (Chess 2020). Whether it transforms into play or not, however, the shift in value away from a deliverable output is always a powerful disruption of the masculinist, capitalist narratives of “progress” and “value” that undergird technological, academic work.

These are, of course, also the narratives primarily responsible for the profitability of obsolescence, which brings us back to my primary thesis and central focus: the efficacy of adopting critical making pedagogies to counter the effects of technological obsolescence in electronic literature. Because technological obsolescence is a core concern of electronic literature, it is worth spending some time examining this relationship more deeply and addressing some of the ways the field is already at work countering loss-through-obsolescence. Indeed, some of this work already anticipates possibilities for critical making to counter this loss.

Preservation and E-lit

As stated, technological obsolescence is the phenomenon whereby technologies become outdated (obsolete) as they are updated. Historically, technological obsolescence has occurred in concert with developments in computing technologies that have radically altered what computers can do (e.g., moves from primarily textual to graphical processing power) or how they are used (e.g., with the advent of programming languages, or the development of the graphical user interface). However, as digital technologies have developed into the lucrative consumer market of today, this phenomenon has become driven more heavily by the pursuit of profit through consumer behavior. Consider, for instance, iOS updates that no longer work on older iPhone models, or new hardware models that do not fit old plugs, USB ports, or headphones. In each case, updates to the technology force consumers to purchase newer models, even if their old ones are otherwise functioning properly.

In the field of electronic literature, obsolescence requires attention from two perspectives: the readerly and the writerly. From the former, obsolescence threatens future access to e-literary texts, so the field must regularly engage with preservation strategies; from the latter, it requires that the field regularly engage with new technologies for “writing” e-literary texts, as new platforms and programs both result from and in others’ obsolescence. E-lit thus occupies both sides of the obsolescence coin: on the one side, holding onto the outdated to ensure preservation and access, and on the other, embracing the updated, as new platforms, programs, and hardware prompt the field’s innovation. Much of the field’s focus is on combating obsolescence through attention to outdated (or, as in the case of Flash, soon-to-be-outdated) platforms, and on maintaining these works for future audiences. However, this attention only weakly (if at all) accounts for the flipside of the writerly, where obsolescence also threatens the loss of craft or practice in e-literary creation; this is where, I argue, critical making as e-literary pedagogy is especially productive as a counterforce to loss. First, though, it is worth looking more closely at the ways obsolescence and e-literature are integrated with one another.

Maintaining access to obsolete e-literature is, and has been, a central concern of the field, in large part because the field’s own history “is inextricably tied to the history of computing, networking, and their social adoption” (Flores 2019). A brief overview of e-literature’s “generations” effectively illustrates this relationship. The first generation of e-lit consists primarily of pre-web, text-heavy works developed between 1952 and 1995—dates that span from the commercial availability of computing devices and the development of language-based coding to the advent and adoption of the web (Flores 2019). Spanning over forty years, this period also includes within it the period 1980–1995, which is “dominated by stand-alone narrative hypertexts,” many of which “will no longer play on contemporary operating systems” (Hayles and House 2020, my italics). The second generation, beginning in 1995, spans the rise of personal, home computing and the web, and is characterized by web-based, interactive, and multimedia works, many of which were developed in Flash (Flores 2019). In 2005, the web of 1995 shifted into the platform-based, social “web 2.0” that we recognize and use today. Leonardo Flores uses this year to mark the advent of what he argues is the third generation of e-lit, which “accounts for a massive scale of born digital work produced by and for contemporary audiences for whom digital media has become naturalized” (Flores 2019). In the Electronic Literature Organization (ELO)’s Teaching Electronic Literature initiative, N. Katherine Hayles and Ryan House characterize third generation e-lit, specifically, as “works written for cell phones,” in contrast to “web works displayed on tablets and computer screens” (Hayles and House 2020).

Though Flores includes activities like storytelling through gifs and the poetics of memes in his characterization of third generation e-lit, framing this moment through cell phones is helpful for thinking about the centrality of computing development, and its attendant obsolescence, to the field. In the first place, pointing to work developed for cell phones immediately brings to mind challenges of access due to Apple’s and Android’s competitive app markets. Here there is, on the one hand, the challenge of apps developed for only one of these hardware-based platforms and therefore inaccessible on the other; on the other hand, there are the continuous operating system updates that, in turn, require continuous app updates, which regularly result in apps seemingly, and sometimes literally, becoming obsolete overnight. In the second place, the 1995–2015 generation of “web works displayed on tablets and computer screens” is, as noted, a generation characterized by the rising ubiquity of Flash—a platform that was central to both user-generated webtexts and the e-literary practice of this time (Salter and Murray 2014). As noted, Flash-based works are currently facing their own impending obsolescence due to Adobe’s removal of support, a decision that Salter and Murray argue results directly from the rise of cell phones and/as smartphones, which do not support Flash. Thus, cell phones and “works created for cell phones” once again demonstrate the intricate relationship between computing history, technological development, and e-literature.

As this brief history demonstrates, the challenge of ensuring access to e-lit in the face of technological obsolescence is absolutely integral to the field, as meeting it ultimately ensures that new generations of e-lit scholars can access primary texts. Indeed, it is a central concern behind one of the field’s most visible projects: the Electronic Literature Collection (ELC). As of this writing, the ELC comprises three curated volumes of electronic literature, freely available on the web and regularly maintained by the ELO. In addition to expected content like work descriptions and author’s/artist’s statements, each work indexed in the ELC is accompanied by notes for access; these may include links, files for download, required software and hardware, or emulators, as appropriate. The ELC thus operates both as an invaluable resource for preserving access to e-lit in general and as a means of ensuring access to e-lit texts for teaching undergraduates. Most of the work in the ELC is presented either in its original experiential, playable form, or as recorded images or videos when experiencing the work directly is untenable for some reason (as with, for instance, locative texts, which require their user to be in specific locations to access the text). However, some of the work is presented with materials that encourage a critical making approach—a move in direct conversation with my argument that critical making plays an important pedagogical role for experiencing and even preserving e-literary practice, even and especially when the text itself cannot be experienced or practiced directly. Borsuk and Bouse’s Between Page and Screen, an augmented reality work that requires both a physical, artist’s book of 2D barcodes specifically designed for the project and a Flash-based web app that enables the computer’s camera to “read” these barcodes, offers a particularly strong example of this.

Between Page and Screen is indexed in the third volume of the ELC. In addition to the standard information about the work and its authors, the entry’s “Begin” button contains an option to “DIY Physical Book,” which takes the user to a page on the work’s website offering access to 1) a web-based tool called “Epistles” for writing one’s own text and linking it to a particular barcode; and 2) a guide for printing and binding a small chapbook of barcodes, which can then be held up and read through the project’s camera app. In this way, users who may be unable to access the physical book of barcodes that powers Between Page and Screen are still offered an experiential, material engagement with the text through their own critical making practices. Engaging the text in this way allows users not only to physically experience the kinetics and aesthetics of the augmented reality text, but also to engage the materiality and interaction poetics at the heart of the piece—precisely those poetics that are lost when the only available access to a text is a recording to be consumed. At the same time, engaging the text through the critically made chapbook prompts a material confrontation between the analog and the digital as complementary, even intimate, information technologies. Of course, in this case it is (perhaps) ironically not the anticipated analog technology that is least accessible; rather, it is the digital complement, the Flash-based program that allows the camera to read the analog barcodes, that is soon to be inaccessible.

Complementing the Electronic Literature Collections are the preservation efforts underway at media archeology labs around the country, most notably the Electronic Literature Lab (ELL) directed by Dene Grigar at Washington State University, Vancouver. Currently, the ELL is working on preserving Flash-based works of electronic literature through the NEH-sponsored AfterFlash project. AfterFlash will preserve 447 works through a combination process where researchers:

1) preserve the works with Webrecorder, developed by Rhizome, that emulates the browser for which the works were published, 2) make the works accessible with newly generated URLs with six points of access, and 3) document the metadata of these works in various scholarly databases so that information about them is available to scholars (Slocum et al. 2019).

Without a doubt, this is an exceptional project of preservation for ensuring some kind of access to Flash-based works of electronic literature, even after Adobe ends their support and maintenance of the software. In particular, the use of Webrecorder to capture the works means that the preservation will not just be a recorded “walk-through” of the text, but will capture the interactivity—an important poetic of this moment in e-literary practice.

As exceptional as this preservation project is, however, it is focused entirely (and not incorrectly) on preserving works so that they may be experienced by future readers who do not have access to Flash. But what of preserving Flash as a program particularly suited to making interactive, multimedia webtexts? As Salter and Murray argue, a major part of the platform’s success and influence on turn-of-the-century web aesthetics, web arts, and electronic literature has to do with its low barrier to entry for creating interactive, multimedia works, even for users who were not coders, or who were new to programming (Salter and Murray 2014). Pedagogically speaking, the amateur focus of Flash also meant that it was particularly well suited to teaching digital poetics through critical making. In the first place, it upheld the work of feminist digital humanities to disrupt and resist the primacy of original, complex codework in the digital humanities and (more specifically) electronic literature. In this way, it could operate as an ideal tool for feminist critical making pedagogies by both promoting alternatives to writing for intellectual work and resisting the exclusivity behind prioritizing original, complex codework. In the second place, it allowed students to tinker with the particularities and subtleties of digital poetics—things like interaction, animation, kinetics, and visual/spatial design—without getting overwhelmed by the complexities and specifications of code. As the end of 2020, and with it the end of Adobe’s support for Flash, looms large, the question becomes all the more urgent: if critical making offers an effective, feminist pedagogical model for teaching electronic literatures in the face of technological obsolescence, how can we maintain these practices in our undergraduate teaching in a post-Flash world?

Case Study: Stepworks

In response to this question, I propose: Stepworks. Created by Erik Loyer in 2017, Stepworks is a web-based platform for creating single-touch interactive texts that centers on the primary metaphor of staged, musical performance, an appropriate metaphor that resonates with traditions of e-literature and e-poetry that similarly conceptualize these texts in terms of performance. Stepworks performances are powered by Stepwise XML, also developed by Loyer, but the platform does not require creators to work directly in the XML to make interactive texts. Instead, the program in its current iteration interfaces directly with Google Sheets—a point that, while positive for things like ease of use and support for collaboration (features I discuss more fully in what follows), does introduce a problematic reliance on Google’s corporate decisions for maintaining access to and the workability of Stepworks and its texts. In the Stepworks spreadsheet, each column is a “character,” named across the first row, while the cells in each subsequent row contain what the character performs—the text, image, code, sound, or other content associated with that character. Though characters are often named entities that speak in text, they can also be things like instructions for use, metadata about the piece, musical instruments that will perform sound and pitch, or a “pulse,” a special character that defines a rhythm for the textual performance. Each cell contains the content that will be performed with a single-click interaction, and cells are performed in the order they appear down the rows of the spreadsheet. Students can therefore easily and quickly experiment with different effects of single-click interactive texts, which perform at the syllable, word, or even phrase level, simply by putting that much content into a cell.
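To make this spreadsheet-as-score structure concrete, the sketch below models in Python the conventions just described: character names across the first row, performed content in the cells beneath, and one row performed per single-click step. It is a minimal illustration of the data structure only (not Stepworks or Stepwise code), with dialogue drawn from the “Who’s On First” example shown in Figure 1 below.

```python
# A minimal model of the Stepworks spreadsheet conventions described above:
# columns are "characters" (named in the first row), each later row holds
# the content a character performs, and one row is performed per click.
# This illustrates the structure only; it is not Stepworks code.

HEADER = ["Abbott", "Costello"]  # row 1: one column per character
ROWS = [
    ["Strange as it may seem, they give ball players nowadays very peculiar names", ""],
    ["", "Funny names?"],
    ["Nicknames, nicknames.", ""],
]

def perform(header, rows):
    """Simulate the single-click performance: each click advances one row,
    and every character with content in that row 'speaks' its cell."""
    for click, row in enumerate(rows, start=1):
        print(f"-- click {click} --")
        for character, cell in zip(header, row):
            if cell:
                print(f"{character}: {cell}")

perform(HEADER, ROWS)
```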

Figures 1, 2, and 3 illustrate some of these effects. Figure 1, a gif of a Stepworks text where each line of Abbott and Costello’s “Who’s on First” occupies each cell, showcases the effects of performing full lines of text with each interaction.

The gif includes text that appears in alternating lines, one above the other, as each character speaks. This text reads: “Strange as it may seem, they give ball players nowadays very peculiar names / Funny names? / Nicknames, nicknames. / Now on the St. Louis team we have Who’s on first, What’s on second, I Don’t Know is on third-- / That’s what I want to find out / I want you to tell me the names of the fellows on the St. Louis team. / I’m telling you / Who’s on first, What’s on second, I Don’t Know is on third / You know the fellows’ names? / Yes.”
Figure 1. An animated gif showing part of a Stepworks performance created by Erik Loyer that remediates the script of Abbott and Costello’s comedy sketch, “Who’s On First.” The gif was created by the author through a screen recording of the Stepworks performance.

Figure 2, a gif from a Stepworks text based on Lin-Manuel Miranda’s Hamilton, showcases the effects of breaking up each cell by syllable and allowing the player to perform the song at their own pace.

The text appears on a three-by-three grid, and in each space of the grid the text gets a different color, indicating a different character. The gif begins with text in the center position that reads “but just you wait, just you wait…” and in the bottom right that reads “A voice saying.” Then text appears in each position on the grid around the center saying “Alex, you gotta fend for yourself,” and the bottom right space reads “He started retreatin’ and readin’ every treatise on the shelf.” The top left then presents text reading “There would have been nothin’ left to do for someone less astute.”
Figure 2. An animated gif showing part of a Stepworks performance created by Erik Loyer that remediates lyrics from Lin-Manuel Miranda’s Broadway musical Hamilton. The gif was created by the author through a screen recording of the Stepworks performance.

Finally, Figure 3 is taken from Unlimited Greatness, a text that remediates, one word at a time, Nike’s ad featuring the story of Serena Williams. In this text, the effects of filling each cell with a single word are on display.

Text appears one word at a time in the center of the screen, reading “Compton / sister / outsider / pro / #304 / winner / top 10 / Paris / London / New York / Melbourne / #1 / injured / struggling.”
Figure 3. An animated gif showing part of a Stepworks performance called Unlimited Greatness, created by Erik Loyer, that remediates a Nike ad about Serena Williams’s tennis career originally created by the design team Wieden+Kennedy. The gif was created by the author through a screen recording of the Stepworks performance.

Pedagogically speaking, this range of illustrative content allows students to quickly grasp the different effects of manipulating textual performance in an interactive piece. At the same time, the ease of working in the spreadsheet offers them an effective place from which to experiment in their own work. For example, reflecting on the process of creating a Stepworks performance based on a favorite song, one student describes placing words purposefully into the spreadsheet, breaking up the entries by word and eventually by syllable, rather than by line:

I made them [the words] pop up one by one instead of the words popping up all together using the [&] symbol. I also had words that had multiple syllables in the song, so I broke those up to how they fit in the song. So, if the song had a 2-syllable word, then I broke it up into 2 beats.

Although the spreadsheet is not named explicitly, the student describes a creative process of purposefully engaging with the different effects of syllables, words, and lines, opting for a word- and syllable-based division in the spreadsheet to affect how the song’s remediation is performed (see Figure 4, the middle column of the top row).

Besides offering the spreadsheet as an accessible space in which to build interactive texts, Stepworks comes equipped with pre-set “stages” on which to perform those texts. Currently there are seven stages available for a Stepworks performance, and each of them offers a different look and feel for the text. For instance, the Hamilton piece discussed above (Figure 2) is performed on Vocal Grid, a stage which assigns each character a specific color and position on a grid that will grow to accommodate the addition of more characters; Who’s On First, by contrast, is performed on Layer Cake, a stage that displays character dialogue in stacked rows. As the stages offer a default set design for font, color, textual behavior, and background, they (like the spreadsheets) reduce the barrier-to-entry for creating and experimenting with kinetic, e-poetic works. Once a text is built in the spreadsheet and the spreadsheet published to the web, it may be directly loaded into any stage and performed; a simple page reload in the browser will update the performance with any changes to the spreadsheet, thereby supporting an iterative design process. The user-facing stage and design-facing spreadsheet are integrated with one another so that students may tinker, fiddle, experiment, and learn, moving between the perspective of the audience member and that of the designer.

In my own classes, it is at this stage of the process that additional, less tangible benefits of critical making pedagogy have come to light, especially in my most common Stepworks assignment: challenging students to remediate a favorite song or poem into a Stepworks performance. When students reach the stage of loading and reloading their spreadsheets into different stages in Stepworks, they also often begin exploring different performance effects for their remediated texts, regularly going beyond any requirements of the assignment to learn more about working with and in Stepworks to achieve the effects they want. Of this creative experience, one student writes:

In order to accurately portray the message of the song, I had to take time and sing the song to myself multiple times to figure out the best way to group the words together to appear on screen. Once this was completed, I had an issue where words would appear out of order because I didn’t fully understand the +1 (or any number for that matter) after a word. I thought that this meant to just add extra time to the duration of the word or syllable. I later figured out that it acted as a timeline of when to appear rather than the duration of the word’s appearance. Once I figured this out, it then came down to figuring out the best timing of the words appearing on each screen to match the original song.

Here the student’s reflection indicates a few important things. First, the reflection articulates the student’s own iterative design process, as they move between the designer’s and the audience’s experiential positions to create the “best timing of the words” performed on screen. This same movement guides the student towards figuring out some more advanced Stepworks syntax: working with a “Pulse” character to give a tempo or rhythm to the piece by delaying certain words’ appearances on screen (the reference to “+1”). As the assignment required an attention to rhythm but did not specify the necessity of using the Pulse character to create one, this student’s choice to work with the Pulse character points to both a self-guided movement beyond the requirements of the assignment and a developing poetic sensitivity to words and texts as rhythmic bodies that effect meaning through performance. In Stepworks, rhythm can be created by working down the spreadsheet’s rows (as in the Hamilton or Unlimited Greatness pieces in Figures 2 and 3); however, this rhythm is reliant on the player’s interactive choices and personal sensitivity to the poetics. Using a pulse to effect rhythm overrides the player’s choice and assigns a rhythm to the contents within a single cell, performing them on a set delay following the user’s single click.[6] Working with the pulse, then, the student is letting their creative and critical ownership of their poetic design lead them to a direct confrontation with the culturally conditioned paradox of user control and freedom in interactive environments. This point is explicitly evoked later in the reflection, as the student expands on the effects of the pulse, writing:

The timing of the words appearing on the screen … also enhance the impact and significance of certain words. My favorite example is how one word … “Hallelujah,” can be stretched out to take up 8 units of time, signifying the importance of just that one word
(see Figure 4, the leftmost text in the second row).

Indeed, across my courses students have demonstrated a willingness to go beyond the specifications of this assignment in order to fully realize their poetic vision. Besides the Pulse, students often explore syntax for “sampling” the spreadsheet’s contents to perform a kind of remix, customizing fonts or colors in the display, or adding multimedia like images, gifs, or sound. Thus, as Stepworks supports this kind of work, it simultaneously supports students’ hands-on learning of digital poetics, especially those popularized by works created with Flash.
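The cell-level conventions the students describe can be made concrete with a short sketch. Based solely on the reflections quoted above (the “&” symbol making a cell’s contents appear piece by piece, and “+N” after a piece marking when it appears on the pulse’s timeline rather than for how long), the hypothetical parser below shows one way such a cell might be read; the syntax is my interpretation for illustration, not verified Stepwise markup.

```python
import re

# Hypothetical illustration of the cell syntax the students describe:
# "&" splits a cell's contents into pieces performed one by one, and a
# trailing "+N" schedules a piece N pulse beats after the click (a point
# on a timeline, not a duration). This interprets the quoted student
# reflections above; it is not verified Stepwise markup.

def parse_cell(cell):
    """Return (text, beat) pairs for one spreadsheet cell."""
    events = []
    for piece in cell.split("&"):
        match = re.fullmatch(r"\s*(.+?)(?:\+(\d+))?\s*", piece)
        text, beat = match.group(1), match.group(2)
        events.append((text, int(beat) if beat else 0))
    return events

# A word like "Hallelujah" stretched across eight beats, as in the
# student's example:
print(parse_cell("Hal & le+2 & lu+5 & jah+8"))
# -> [('Hal', 0), ('le', 2), ('lu', 5), ('jah', 8)]
```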

Finally, I’d like to point to one more aspect of Stepworks that makes it an ideal platform for teaching e-literature through feminist critical making pedagogies. Because of its integration with Google Sheets, Stepworks supports collaboration unfettered by distance or synchronicity. Speaking from my own classes and experiences teaching with Stepworks—particularly the Spring 2020 class—this is where the program really excels pedagogically, as it opens a space for teaching e-poetry through sharing ideas and poetic decisions, creating and experimenting “together,” and supporting students to learn from and with one another, even when the shared situatedness of the classroom is inaccessible. A powerful pedagogical practice in its own right, collaboration is also a core feminist technology, central to feminist praxis across disciplines, and a cornerstone of digital humanities work, broadly speaking. Indeed, it is one of the tenets of digital humanities that has contributed to the field’s disruption of and challenge to expectations of humanities scholarship.

As an illustration of what can be done in the (virtual, asynchronous) classroom through a collaborative Stepworks piece, I will end this case study with a piece my students created to close out the Spring 2020 semester—a moment that, to echo the opening of this article, threw us into a space of technological inaccess, of temporal obsolescence. Unable to access many works of Flash-based e-lit that had been on the original syllabus, critical making through Stepworks was our primary—in some cases, only—mode through which to engage with e-poetics, particularly those of interaction and kinetics. Working from a single Google spreadsheet, each student took a character-column and added a poem, song, or lyric that was, in some way, meaningful to them in this moment. The resulting performance (Figure 4) appears as a multi-voiced, found text; watching or playing it, it is almost as if you are in a room, surrounded by others, sharing a moment of speaking, hearing, watching, and performing poetry.

Figure 4. A recording of a collaborative Stepworks piece of found poetry built by students in Sarah Laiola’s Spring 2020 Introduction to the Digital Humanities class. The piece presents text on a three-by-three grid, and each space of the grid performs the text of a different poem or song chosen by the students.

Conclusion

Technological obsolescence will undoubtedly continue to present a challenge for teaching digital texts and electronic literatures. As our systems update into outdatedness, we face the loss of both readerly access to existing texts and writerly access to creative potentialities. Flash, which, as of this writing, is facing its final days, is just one example of this cycle and its effects on electronic and digital literatures. As I have shown here, however, even as platforms (like Flash) become obsolete for contemporary machines, critical making through newer platforms with low barriers to entry, like Stepworks, can offer productive counters to this loss, particularly from the writerly perspective of, in this case, kinetic poetics. Indeed, this approach can enhance the teaching of e-literature and digital textuality more broadly, for other inaccessible platforms. For instance, Twine, a free platform for creating text-based games that runs on contemporary browsers, is a productive space for teaching hypertextual storytelling in place of Storyspace, which, though still functional on contemporary systems, is cost-prohibitive at nearly $150; similarly, Kate Compton’s Tracery offers an opportunity for first-time coders to create simple text generators, in contrast to comparatively more advanced languages like JavaScript, which require such focus on the code that the lesson on the poetics of textual generation may be lost on overwhelmed students. Whatever the case may be, critical making through low-cost, easy-to-use contemporary platforms offers a robust space for teaching e-literary and digital media creation that maintains a feminist, inclusive, and equitable classroom ethos to counter technological obsolescence.
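To give a sense of the low barrier to entry at stake in a tool like Tracery, consider grammar-driven text generation in miniature. The sketch below is a self-contained Python approximation of the idea (rules map symbol names to lists of expansions, and “#symbol#” references are replaced recursively); the grammar is invented for illustration, and Tracery’s actual syntax and API differ in their details.

```python
import random
import re

# A self-contained approximation of the grammar-driven text generation
# that tools like Tracery make accessible to first-time coders: rules map
# symbol names to lists of possible expansions, and "#symbol#" references
# are replaced recursively. The grammar below is invented for
# illustration; Tracery's actual syntax and API differ in detail.

GRAMMAR = {
    "origin": ["the #adj# #noun# #verb#", "#noun#, #adj# and #adj#"],
    "adj": ["obsolete", "kinetic", "borrowed"],
    "noun": ["letter", "screen", "archive"],
    "verb": ["flickers", "dissolves", "loops"],
}

def expand(symbol, rules):
    """Choose a random rule for `symbol` and expand nested #references#."""
    rule = random.choice(rules[symbol])
    return re.sub(r"#(\w+)#", lambda m: expand(m.group(1), rules), rule)

# Each run generates a new small text from the same grammar:
for _ in range(3):
    print(expand("origin", GRAMMAR))
```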

Notes

[1] Throughout this article, I am purposely using different capitalizations for referencing the digital humanities. Capital DH, as used here, indicates the formalized field as such, while lowercase digital humanities refers to broad practices of digital humanities work that are not always recognizable as part of the field.

[2] Although I cite Svensson here, the term is taken from the 2011 MLA conference theme. Svensson, however, theorizes the big tent in this piece.

[3] Anecdotal data.

[4] For example, see Hélène Cixous’s écriture féminine, or feminist language-oriented poetry such as that by Susan Howe.

[5] The unessay is a form of student assessment that replaces the traditional essay with something else, often a creative work in any medium or material. For examples, see Cate Denial’s blog reflection on the assignment, Hayley Brazier and Heidi Kaufman’s blog post “Defining the Unessay,” or Ryan Cordell’s assignment in his Technologies of Text class.

[6] As a reminder, Stepworks texts are single-click interactions, where the entire contents of a cell in the spreadsheet are performed on a click.

Bibliography

Balsamo, Anne, Dale MacDonald, and Jon Winet. 2017. “AIDS Quilt Touch: Virtual Quilt Browser.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/acee4e20-acf5-4eb3-bb98-0ebaa5c10aaa#ch32.

Belojevic, Nina. 2017. “Glitch Console.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/b1dbaeab-f4ad-4738-9064-8ea740289503#ch20.

Borsuk, Amaranth, and Brad Bouse. 2012. Between Page and Screen. New York: Siglio Press. https://www.betweenpageandscreen.com.

Burtch, Allison, and Eric Rosenthal. 2017. “Mic Jammer.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/f8dcc03c-2bc3-499a-8c1b-1d17f4098400#ch11.

Chess, Shira. 2020. Play Like a Feminist. Playful Thinking. Cambridge: MIT Press.

D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. Ideas. Cambridge: MIT Press.

Grigar, Dene, and the ELL Team. 2020. “Electronic Literature Lab.” Accessed June 30, 2020. http://dtc-wsuv.org/wp/ell/.

Endres, Bill. 2017. “A Literacy of Building: Making in the Digital Humanities.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/2acf33b9-ac0f-4411-8e8f-552bb711e87c#ch04.

FemTechNet. 2012. “Manifesto.” FemTechNet. https://femtechnet.org/publications/manifesto/.

FemTechNet White Paper Committee. 2013. “Transforming Higher Education with Distributed Open Collaborative Courses (DOCCs): Feminist Pedagogies and Networked Learning.” FemTechNet. https://femtechnet.org/about/white-paper/.

Flores, Leonardo. 2019. “Third Generation Electronic Literature.” Electronic Book Review, April. https://doi.org/10.7273/axyj-3574.

Hayles, N. Katherine. 2006. “The Time of Digital Poetry: From Object to Event.” In New Media Poetics: Contexts, Technotexts, and Theories, edited by Adalaide Morris and Thomas Swiss, 181–210. Cambridge, Massachusetts: MIT Press.

Hayles, N. Katherine, and Ryan House. 2020. “How to Teach Electronic Literature.” Teaching Electronic Literature.

Knight, Kim A. Brillante. 2017. “Fashioning Circuits, 2011–Present.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/05613b02-ec93-4588-98e0-36cd4336e7a0#ch26.

Losh, Elizabeth, and Jacqueline Wernimont, eds. 2018. Bodies of Information: Intersectional Feminism and Digital Humanities. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/projects/bodies-of-information.

Salter, Anastasia, and John Murray. 2014. Flash: Building the Interactive Web. Cambridge: MIT Press.

Sayers, Jentery, ed. 2017. Making Things and Drawing Boundaries: Experiments in the Digital Humanities. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/projects/making-things-and-drawing-boundaries.

Slocum, Holly, Nicholas Schiller, Dragan Espenschied, and Greg Philbrook. 2019. “After Flash: Proposal for the 2019 National Endowment for the Humanities, Humanities Collections and Reference Resources, Implementation Grant.” Electronic Literature Lab. http://dtc-wsuv.org/wp/ell/afterflash/.

Stefans, Brian Kim. 2000. The Dreamlife of Letters. Electronic Literature Collection: Volume 1. https://collection.eliterature.org/1/works/stefans__the_dreamlife_of_letters.html.

Svensson, Patrik. 2012. “Beyond the Big Tent.” In Debates in the Digital Humanities, edited by Matthew K. Gold. Minneapolis: University of Minnesota Press.

Wardrip-Fruin, Noah. 2003. “From Instrumental Texts to Textual Instruments.” In Streaming Worlds. Melbourne, Australia. http://www.hyperfiction.org/texts/textualInstrumentsShort.pdf.

About the Author

Sarah Whitcomb Laiola is an assistant professor of Digital Culture and Design at Coastal Carolina University, where she also serves as coordinator for the Digital Culture and Design program. She holds a PhD in English from the University of California, Riverside, and specializes in teaching new media poetics, visual culture, and contemporary digital technoculture. Her most recent peer-reviewed publications appear in Electronic Book Review (Nov 2020), Syllabus (May 2020), Hyperrhiz (Sept 2019), Criticism (Jan 2019), and American Quarterly (Sept 2018). She will be joining the JITP Editorial Collective in 2021.

Side angle view of a 3D-scanned leafy plant woodblock.

Make-Ready: Fabricating a Bibliographic Community

Abstract

The 3Dhotbed project was established in 2016 with the goal of leveraging maker culture to enhance the study of material culture. An acronym for 3D-Printed History of the Book Education, the project extends the tradition of experiential learning set forth by the bibliographical press movement through the development of open-access teaching tools. After successfully implementing its initial curricular offerings, the project developed an engaged community of practice across the United States, Canada, and the United Kingdom, whose members have since called for further development. This paper reports on recent efforts to answer these demands through the design of a community-populated repository of 3D-printable teaching tools for those engaging in bibliographical instruction and research. Among the key findings are the demonstration that the same pedagogical methods that made the bibliographical press movement successful are now applicable more broadly throughout the expanding discipline of book history, and the demonstration that 3D technologies, distributed via a digital library platform, are ideal for providing open access to the tools that can promote this pedagogy on a global scale. By outlining the theoretical grounding of the project’s work and charting the hurdles and successes the group has encountered in furthering these efforts within the burgeoning bibliographical maker movement, this paper positions the 3Dhotbed project as a model for others seeking to utilize 3D technologies, digital library infrastructure, and/or multiple institutional partnerships in the design of interactive pedagogy.

Introduction

In the mid-twentieth century, a new approach to research and instruction surrounding the history of the book emerged, which advocated for a hands-on mode of inquiry complementing the traditional work performed in a reading room.[1] The bibliographical press movement, as it came to be known, demonstrated that first-hand experience with the tools and methods of, say, an Elizabethan pressroom allowed researchers to put into practice the book history they theorized. In recent decades, the movement’s continued success has hinged largely upon access to appropriate equipment with which students and researchers can conduct these experiential experiments. Today, far more instructors would like to offer an immersive approach to book history pedagogy than have the proper equipment to do so. However, with the proliferation of affordable 3D technologies and advancements in digital library distribution, there is a new opportunity to democratize and significantly enhance access to this and other forms of instruction. This paper reports on the efforts of the 3Dhotbed project (3D-Printed History of the Book Education) to build a community-populated repository of open-access, 3D-printable teaching tools for those engaging in bibliographical instruction and research. Following a brief analysis of how the project evolved from decades of experiential book history instruction, the paper outlines 3Dhotbed’s work toward establishing a community 3D data repository and fostering a community of practice. Beyond the realm of book history, these findings will be relevant to those developing projects using 3D technologies, or to anyone working on a digital project that relies upon workflows distributed among partners at many institutions.

An Overview of the Bibliographical Press Movement

A crucial element in the expansion of bibliography during the last century was the advent of the bibliographical press movement. Defined by one of its key proponents, Philip Gaskell, as “a workshop or laboratory which is carried on chiefly for the purpose of demonstrating and investigating the printing techniques of the past by means of setting type by hand, and of printing from it on a simple press,” these pedagogical experiments have provided an ideal environment for scholars seeking to understand the printed texts they study (Gaskell 1965, 1). The movement appeared in the wake of a call-to-action made in 1913 by R. B. McKerrow, which has served as a refrain for book historians over the last century: “It would, I think, be an excellent thing if all who propose to edit an Elizabethan work from contemporary printed texts could be set to compose a sheet or two in as exact facsimile as possible of some Elizabethan octavo or quarto, and to print it on a press constructed on the Elizabethan model” (McKerrow 1913, 220). As he postulated, with even a rudimentary experiential lesson in historic printing, students “would have constantly and clearly before their minds all the processes through which the matter of the work before them has passed, from its first being written down by the pen of its author to its appearance in the finished volume, and would know when and how mistakes are likely to arise” (McKerrow 1913, 220). It would be some twenty years before McKerrow’s course of study was brought to fruition with the establishment of a bibliographical press by Hugh Smith at University College, London in 1934. By 1964, at least twenty-five such presses had been established across three continents (Gaskell 1965, 7–13).

In practice, the work McKerrow’s successors have embarked upon with these bibliographical presses has served to further both pedagogy and research. The hands-on approach encourages students to internalize the processes, forging an intimate knowledge of them that better informs subsequent analysis of texts. In the past several decades, experiential learning theorists have confirmed Gaskell’s adage that “there is no better way of recognizing and understanding [printers’] mistakes than that of making them oneself” (Gaskell 1965, 3). As Sydney Shep has written:

By applying technical knowledge to the creation of works which challenge the relationship between structure and meaning, form and function, sequentiality and process, text and reader, students achieve not only a profound understanding of nature and behaviour of the materials they are handling, but an awareness of the interrelationship of book production methods and their interpretive consequences. (Shep 2006, 39–40)

The continued proliferation of these bibliographical presses stands as a testament to the efficacy of an experiential approach that focuses on the process to better understand the product of printing. This presswork has formalized a methodology through which book historians have established and tested hypotheses regarding how books were printed in the hand-press period. Indeed, much of the foundational work done during the twentieth century to investigate the history of the book relied upon the reverse engineering of historical methods in bibliographical press environments (Shep 2006, 39). However, such advances have not come without significant logistical obstacles. Notably, the acquisition of proper equipment has been a perennial challenge for those wishing to take up this work.

Writing in the mid-twentieth century, Philip Gaskell described how he relied on the charity of his colleagues at the Cambridge University Press, who lent him their castoff equipment to start his operations. However, the two nineteenth-century iron hand-presses he received differed significantly from those used in the Elizabethan period. Gaskell acknowledges, “Ideally we should try to imitate the ways of the hand-press period as closely as we can, and the establishments which have built replicas of old common presses are certainly nearest to that ideal” (Gaskell 1965, 3). Modern iron hand-presses such as those given to him mimicked the operations of an Elizabethan common press well enough to foster an understanding of early printing, so long as one understood the historical concessions being made.

Concerns regarding how to procure the right equipment pervaded the early decades of the bibliographical press movement. In his article advocating for an experiential approach, Gaskell included a census of current bibliographical presses and took pains to list the equipment each held, paying particular attention to the make and model of the press and how it was acquired (Gaskell 1965, 7–13). Notably, only one of these was fashioned before the nineteenth century, with most dating to the late nineteenth or early twentieth centuries. Today, even locating latter-day printing equipment proves to be a struggle. Steven Escar Smith has commented upon the increasing difficulty of doing so as the printing industry continues to evolve, adding that “the scarcity and age of surviving tools and equipment, especially in regard to the hand press period, makes them too precious for student use” (Smith 2006, 35). This being the case, one must resort to custom fabrication in order to facilitate the kind of hands-on experiential learning that Gaskell and his colleagues advocated. In today’s market, where securing even a few cases of type can be prohibitively expensive for many instructors, such a prospect is a nonstarter. In order for the bibliographical press movement to grow and expand in the twenty-first century, an updated approach must be adopted.

New Solutions for Old Problems: 3D Technologies in Book History Instruction

The 3Dhotbed project was devised to alleviate the difficulties that would-be instructors in book history encounter when trying to develop experiential learning curricula. Using a complement of 3D scanning and modeling techniques, this multi-institutional collaboration has created highly accurate, 3D-printable replicas that are freely available for download from an online repository. The project is positioned at the confluence of book history, maker culture, and the digital humanities, and is grounded by core values such as openness, transparency, and collaboration (Spiro 2012). Succinctly put, the project proposes that (a) the same methods that made the bibliographical press movement successful in the instruction of early modern western printing can and should be applied more broadly throughout the expanding discipline of book history; and (b) 3D technologies, distributed via a digital library platform, are ideal for providing open access to the tools that can promote this pedagogy on a global scale.

Extending the practice of relying upon custom fabrication to build multi-faceted collections of book history teaching tools, we envision a future for bibliographical instruction that embraces additive manufacturing to create hands-on teaching opportunities. The project aspires to continue the spirit of the bibliographical press movement, which was marked by experimentation as inquiry and a commitment to experiential learning, within burgeoning digital spaces. By leveraging the sustainable, open-access digital library at the University of North Texas (UNT), the project democratizes access to a growing body of 3D-printable teaching tools and the knowledge they engender beyond often isolated and institutionally bound physical classrooms. Ultimately, 3Dhotbed envisions an international bibliographic community whose members not only benefit from the availability and replicability of these tools, but also contribute their own datasets and teaching aids to the project.

3Dhotbed: From Product to Project

The 3Dhotbed team’s initial goal was to develop a typecasting toolkit to teach the punch-matrix system for producing moveable metal type. The team captured 3D scans of working replicas from the collection of the Book History Workshop at Texas A&M University as the basis for the toolkit’s design. The workshop’s own wood-and-metal replicas were painstakingly fabricated by experts in analytical bibliography and early modern material culture, and have been used in hands-on typographical instruction for almost two decades.

Rotating gif of a 3D-modeled hand mould.
Figure 1. Rotating gif of Moveable Hand Mould Side A. One piece of a two-part mould used for casting hand-made type. Slid together, the two parts of the mould create a cavity that adjusts for varying letter widths. A typecaster inserts a matrix for a specific letter into the hand mould and fills the resulting cavity with molten type metal, creating a piece of type (Jacobs, McIntosh, and O’Sullivan 2017).

Successfully developing the typecasting toolkit was an iterative process that included scanning, processing, and modeling the exemplar wood-and-metal replicas into functional 3D-printable tools.[2] The resulting toolkit served as a proof-of-concept, not only for the creation and use of 3D tools in hands-on book history instruction, but also for the formation of an active community of practice invested in their use and further development (Jacobs, McIntosh, and O’Sullivan 2018). Since being launched in the summer of 2017, the 3Dhotbed collection has been used more than 3,100 times at institutions across the United States, Canada, and the United Kingdom.

In developing the first toolkit, the project illustrated the potential pedagogical impact of harnessing maker culture to advance the study of material culture. As Dale Dougherty, founder of Make magazine, framed the term, a “maker” is someone deeply invested in do-it-yourself (DIY) activities, who engages creatively with the physicality of an object and thinks critically about the ideas that have informed it. It is a definition that positions the maker as more akin to a tinkerer than an inventor (Dougherty 2012, 11–14). As a tool designed for classroom use, the first 3Dhotbed toolkit includes several distinct pieces, which the end-user must interpret, physically assemble, and enact in order to fully understand. Inspired by the maker movement, this personal engagement with the design and mechanical operations of fifteenth-century tools, as mediated through those of the twenty-first century, encourages a student to actively construct their knowledge of the punch-matrix system (Sheridan et al. 2014, 507).

The team members immediately implemented the typecasting toolkit in curricula across their respective institutions. At the University of North Texas, Jacobs partnered with Dr. Dahlia Porter to bring hands-on book history instruction into a graduate-level course focusing on bibliography and historical research methods. The required readings introduced students to the theoretical processes of type design, typecasting, typesetting, imposition, and hand-press printing, but many struggled to connect those processes to the resulting physical texts they were assigned to analyze prior to their visit. In a post-instruction survey, Dr. Porter stated, “It is only when students model the processes themselves that they fully understand the relationship between the technical aspects of book making and the book in its final, printed and bound form.” At Texas A&M University, where an experiential book history program was already established, O’Sullivan incorporated the typecasting toolkit in support of more ad-hoc educational outreach, such as informal instruction and community events. Doing so allowed detailed bibliographic instruction to be brought into spaces where it might not otherwise appear, illustrating the immediate benefits of portability and adaptability provided by using 3D models outside a pressroom environment.

From the outset, 3Dhotbed was envisioned as a community resource, and the team has repeatedly solicited feedback and guidance from end-users. For example, the team hosted a Book History Maker Fair in 2016, attracting a varied group of students, faculty, and community members. Attendees widely reported that handling these tools was helpful in crafting their understanding of historical printing practices, and expressed an eagerness to attend similar events in the future. Following the project’s official debut at the RBMS Conference in summer 2017, instructors contacted the team to discuss how they were successfully incorporating the typecasting toolkits in their book history classrooms and offered ideas for additional tools to be developed.[3] Simultaneously, scholars involved in similar work reached out to the team with datasets they had generated, hoping to make them publicly available for teaching and research (see Figure 2). This international group of scholars, students, and special collections librarians offered innovative use cases, additional content, and suggestions regarding areas in which to grow, which encouraged the team to develop the 3Dhotbed project from a stand-alone toolkit into a multi-faceted digital collection.

A key feature of this second phase is the involvement of this community—not only in the reception and use of 3Dhotbed tools, but in their creation and description as well. These continuing endeavors have developed into networked and peer-led learning experiences. In moving to this community-acquisition model, the project draws from a wider selection of available artifacts to digitize and include, as well as a broader array of expertise in the history of the book. Moving toward such a model also created an opportunity for the project to benefit from the perspective of independent designers, book history enthusiasts, and makers who work outside the academy.

The success of this distributed model depends on three things: the utilization of a trusted digital library for the data, the development of logical workflows with clear directives for ingesting and describing community-generated data, and considerations for responsibly stewarding datasets of collections owned by partner institutions. Rather than create an unsustainable, purpose-built repository to host the datasets, 3Dhotbed leverages the existing supported system at UNT Libraries. This digital library encourages community partnerships and has a successful history of facilitating distributed workflows for thorough content description (McIntosh, Mangum, and Phillips 2017). The team built upon this existing structure and developed additional tools to facilitate varying levels of collaboration. This model enables the 3Dhotbed project to explore new partnerships within the international bibliographic community while promising sustainability and open access.

Community-Generated Data in Theory

The twenty-first century has witnessed a boom in the development and use of 3D technologies across both public and private sectors. In 2014, Erica Rosenfeld Halverson and Kimberly M. Sheridan pointed to the makerspace.com directory as listing more than 100 makerspaces worldwide, citing this as an indicator of the maker movement’s expansion (Halverson and Sheridan 2014, 503). As of this writing, only six years later, the same directory now lists nearly ten times that figure.[4] With this massive growth in available resources has also come a great expansion in the types of projects and products incorporating 3D technologies and similar maker tools. In some cases, the focus of activities taking place in a makerspace is upon the making itself, with the end-product being only incidental—the relic of a constructive process and physical expression of more abstract ideas (Ratto 2011; Hertz 2016). In others, however, the process is driven toward a specific goal, an end-product that might possess future research, instruction, or commercial value. Such activities include custom fabrication, prototyping, and 3D digitization. In these instances, where the deliverable could be considered a work of enduring value, particular attention must be given not only to the design and manufacturing of the product, but also to its metadata and sustainable long-term storage (Boyer et al. 2017).

In the humanities, where 3D projects have begun to proliferate, we must develop methodologies to guide work of enduring value to ensure its rigor and validity. As in any digitization project, it is important to understand what information has been lost between the original artifact and the resulting 3D model when incorporating it into research and instruction. The field of book history is well prepared to utilize 3D models, as scholars in the discipline have regularly turned to functional replicas to inform their inquiry where precise historical artifacts were not available (Gaskell 1965, 1–2; Samuelson and Morrow 2015, 90–95).

The 3Dhotbed team strives for full transparency in this regard, communicating where concessions to historical accuracy have been made, and attending in particular to the anticipated learning outcomes of each dataset. For instance, the 3Dhotbed typecasting teaching toolkit cannot be used to cast molten metal type when 3D printed from plastic polymers, but is nevertheless effective in communicating the mechanics of how a punch, matrix, and hand-mould may be used to produce multiple identical pieces of movable type.

These decisions are an integral part of producing a detailed 3D model. Most are inconspicuous, but cumulatively they have a real impact on the finished product. Understanding where and how these decisions are made has been one significant outcome of this project. With the progression toward a collaborative model, there is a communal responsibility to make informed recommendations as to how the scanning or modeling ought to be carried out, ensuring a consistent level of quality across the digital assets ingested into the repository. Likewise, it is essential that those who download, print, and use future replicas are able to evaluate historical concessions for themselves. An analogous concern is encountered by anyone conducting research with a microfilmed or digitized copy of a book or newspaper (Correa 2017, 178). These surrogates provide much of a book’s content (e.g. the printed text), but they cannot convey all of the information contained therein (e.g. bibliographical features such as watermarks or fore-edge paintings)—a fact that has limited their utility to book historians.

By contrast, 3D models can enter a repository as complex objects accompanied by supplemental files and other forms of metadata. Through these we have the ability—and the ethical responsibility—to contextualize the digital surrogate with full catalog descriptions, details regarding the scanning and post-processing, and photographic documentation of the original artifact (Manžuch 2017, 9–10). To facilitate the fabrication and use of 3Dhotbed models, information on how best to 3D print these tools is included within the metadata. For example, woodblock facsimiles require a granularity in printing not afforded through fused deposition modeling, or FDM, printers (the most common 3D printer type among hobbyists and publicly available makerspaces). Rather, these tools are better served by resin printing, a process that is suited to the production of finely detailed models. By identifying the appropriate 3D printing process, these supplemental data help mitigate frustration and wasted resources on the part of the end-user.
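As a concrete illustration, the following minimal Python sketch shows one way such printing guidance might be encoded alongside a dataset’s descriptive metadata. All field names and values here are hypothetical assumptions introduced for illustration; they do not reflect the UNT Digital Library’s actual schema.

# A minimal sketch of fabrication guidance embedded in descriptive
# metadata. All field names and values are illustrative assumptions,
# not the UNT Digital Library's actual schema.
woodblock_record = {
    "title": "Mattioli woodblock, 457. Malva maior altera",
    "capture": {
        "method": "3D scan",
        "post_processing": "mesh cleanup and hole filling",
        "historical_concessions": "facsimile is plastic resin, not wood",
    },
    "fabrication": {
        # Fine relief detail exceeds the resolution of hobbyist FDM
        # printers, so the record steers end-users toward resin printing.
        "recommended_process": "resin (SLA/DLP)",
        "unsuitable_processes": ["FDM"],
        "notes": "Surface detail must survive printing for inking demonstrations.",
    },
}

def printing_advice(record):
    """Summarize fabrication guidance for an end-user browsing the record."""
    fab = record["fabrication"]
    avoid = ", ".join(fab["unsuitable_processes"])
    return (f"Recommended: {fab['recommended_process']}; "
            f"avoid: {avoid}. {fab['notes']}")

print(printing_advice(woodblock_record))

The point of such a structure is simply that fabrication guidance can be machine-readable and travel with the data itself, rather than living in a separate document the end-user may never find.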

Side angle view of a 3D-scanned leafy plant woodblock.
Figure 2. Due to its fine detail, the Mattioli woodblock requires resin-printing (Sweitzer-Lamme 2016).

Community-Generated Data in Production

Expanding the collection to include data contributed from a broader community of practice is the ideal way to continue the original mission of the 3Dhotbed project. By including objects from a variety of international printing traditions, the collection encourages the study of the book over the course of millennia beyond its history in western Europe. Doing so also affords the opportunity to include a more diverse range of expertise for the contribution of metadata, scholarship, and teaching materials in concert with the data itself. However, in expanding the repository, it was imperative the team institute workflows to ensure this phase of the project adhered to the same principles of transparency and accuracy for both data and metadata. Thus, this phase of the project signals a shift from 3Dhotbed acting solely as content generator to content curator as well.

This shift introduced unanticipated hurdles not encountered in previous additions to the 3Dhotbed collection. While some were specific to the objects themselves, other broader issues were byproducts of putting a community model into production, including describing uncataloged and/or foreign-language materials and managing the institutional relationships that come with hosting others’ data. The first community-generated teaching tool added to the 3Dhotbed project illustrates some of these challenges. Like the typecasting toolkit, it was fashioned in response to a distinct pedagogical demand. While developing the curriculum for a class covering Qing history through Manchu sources at UCLA, Devin Fitzgerald, an instructor in the English Department, contacted UCLA Library Special Collections staff to arrange a class visit. During the visit, University Archivist Heather Briston introduced the group to a complete set of thirty-six xylographic woodblocks carved during the eighteenth century to print miracle stories, or texts based on the ancient Buddhist Diamond Sutra. Due to their age and fragility, the Diamond Sutra blocks are no longer suitable for inking and printing. In response to the instructor’s desire to facilitate a hands-on learning experience for his students, Coordinator of Scholarly Innovation Technologies Doug Daniels led a project in the UCLA Lux Lab to digitize all thirty-six blocks and create high-quality, resin-printed facsimiles that could be inked and printed to recreate the original text. Working with Fitzgerald and Daniels, the 3Dhotbed team ingested one of these digitized blocks into the UNT Digital Library as a tool for those in the broader community interested in studying and teaching xylography and eastern printing practices (Daniels 2017).

Rotating gif of a carved xylographic woodblock illustrating on one half a standing teacher and three seated students, and, on the other, Chinese characters.
Figure 3. 3D dataset model of a xylographic woodblock containing pages 31–32 of “An Illustrated Explanation of Miraculous Response Appended to the Heart Sutra and Diamond Sutra,” an ancient Buddhist text published by Zhu Xuzeng 朱續曾 in 1798 (Daniels 2017).

Like many items from complex artifactual collections, the Diamond Sutra woodblock had not yet been cataloged by its home institution, due in part to a language barrier. This limited the 3Dhotbed team’s ability to mine existing metadata to describe the derivative 3D model. However, in collaboration with the instructor, a specialist in both European and East Asian bibliography, the woodblock data was described in both English and Chinese. The addition of the Diamond Sutra woodblock model to the growing corpus of the 3Dhotbed project is an exciting development, as it enhances access to educational tools that help decenter the often anglophone focus of book history instruction. Developing a system for community-generated description in which scholars are encouraged to describe foreign-language materials is central to building an international bibliographic community of practice. Additionally, the project’s website supplements the digital collection and creates a space where scholars can host valuable pedagogical materials to help contextualize the teaching tools within multiple fields of study.[5]

Integrating the Diamond Sutra woodblock into the collection also helped refine workflows that will be necessary for taking on larger partnerships in the future. For example, the team realized that community-generated data necessitates community-generated description. Facilitated by the infrastructure already in place in the UNT Digital Library, 3Dhotbed provides two methods for description via distributed workflows, allowing content generators to provide detailed metadata for objects within their area of expertise. These two methods are the direct-access description approach used for larger, more complex projects and a mediated description approach for smaller projects.

The direct-access description approach leverages the partnership model already implemented by the UNT Digital Library. UNT Digital Projects staff create metadata editor accounts for individuals and institutions submitting data. The describers then have access, either pre- or post-ingestion, to edit the individual records for items uploaded to the collections under their purview through the UNT Digital Library metadata editor platform. The editing interface was built in-house and includes embedded metadata input guidelines as well as various examples to guide standards-compliant data formatting. The editing platform’s low barrier to entry eases the training required for new metadata creators and makes it flexible enough for partners with varying backgrounds in description. Additionally, users are able to update descriptions continuously, meaning partners can add further description based on future cataloging projects or other developing knowledge about an item.

Despite this low barrier to entry, partners still require some base-level training in order to provide standards-compliant description in the metadata editing platform. The 3Dhotbed project therefore required a workaround for ad-hoc partnerships, which led to the development of a mediated description approach for smaller-scale, discrete datasets. The project team replicates the fields required for standards-compliant description in a Google document, along with guidelines and examples that mimic the metadata editing platform. Project partners populate the document with descriptive information for their 3D data, often with guidance from the project team. The project team then formats the metadata as needed for compliance and ingests it, together with the data for the object itself, into the digital library.
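A minimal Python sketch of this mediated workflow follows; the required fields, values, and helper functions are hypothetical stand-ins for illustration, not the UNT Digital Library’s actual metadata requirements.

# A minimal sketch of the mediated description workflow: a partner
# fills in a template of required fields, and the project team checks
# it for completeness and normalizes it before ingest. Field names and
# values are hypothetical, not UNT's actual requirements.
REQUIRED_FIELDS = ["title", "creator", "date", "language",
                   "description", "rights", "partner_institution"]

def missing_fields(submission):
    """Return any required fields that are absent or left blank."""
    return [f for f in REQUIRED_FIELDS
            if not str(submission.get(f, "")).strip()]

def prepare_for_ingest(submission):
    """Normalize a completed template into an ingest-ready record."""
    missing = missing_fields(submission)
    if missing:
        raise ValueError("Template incomplete; missing: " + ", ".join(missing))
    # Trim the stray whitespace that free-text entry tends to introduce.
    return {key: value.strip() if isinstance(value, str) else value
            for key, value in submission.items()}

submission = {
    "title": "Diamond Sutra woodblock, pp. 31-32",
    "creator": "Zhu Xuzeng",
    "date": "1798",
    "language": "Chinese; English",
    "description": "Xylographic woodblock used to print miracle stories.",
    "rights": "Open access",
    "partner_institution": "UCLA Library Special Collections",
}
print(prepare_for_ingest(submission)["title"])

Automating the completeness check in this way would let the project team focus its effort on the substance of a partner’s description rather than on its formatting.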

These foundations offer numerous possibilities for small- and large-scale partnerships with individual content generators as well as institutions. Flexible workflows provide multiple modes of entry for potential partners, scaffolding the iterative steps from acquisition to description, and enable various levels of investment depending on the nature of the project. Additionally, both approaches ensure item description adheres to the metadata standards required for all items ingested into the UNT Digital Library, regardless of format.

With the possibility of varying levels of collaboration comes the added challenge of managing relationships with varied partners. The team anticipated the potential sensitivity of hosting data generated by other makers, which may duplicate digital collections held in other repositories. Fortunately, this is another area in which the 3Dhotbed project benefits from the existing infrastructure in place within the UNT Digital Library and its partner site, the Portal to Texas History. The Portal models itself as a “gateway … to primary source materials” and provides access to collections and data from hundreds of content partners to provide “a vibrant, growing collection of resources” (Portal, n.d.). It mitigates conflict over attribution by establishing clear ownership and credit for each hosted item. The growth of the 3Dhotbed project into a community-generated resource hinges upon carefully navigating these relationships with our core principles of openness, transparency, and collaboration.

The Bibliographical Maker Movement

With the aid of twenty-first-century tools and digital library infrastructure, the 3Dhotbed project creates broad and open access to the previously rarefied opportunity to work with models of the tools and materials used in historical book production. Moreover, the future success of 3Dhotbed rests not solely on the volume, diversity, or rarity of individual items, but also on the ability of the platform to put these items in conversation. Distributed workflows facilitate the creation of an innovative scholarly portal, which has inspired a community around book history instruction in digital spaces while also making possible new facets of original research with these aggregated materials. As this collection of teaching toolkits continues to grow, the analytical practices set forth in the mid-twentieth century are expanded and refined. With the inclusion of a wider array of objects and tools, this bibliographical maker movement reflects the global reach of book history as a discipline. The project strives not only to make examples of these global book traditions more readily available, but also to enable diversity in experiential book history instruction. The Diamond Sutra woodblock now hosted in the digital repository provides users the opportunity to view a full 3D scan of the original woodblock and download the data necessary to 3D print a facsimile. By virtue of its inclusion in the 3Dhotbed project, the Diamond Sutra is accompanied by a video demonstrating the processes of carving, inking, and printing woodblocks in various time periods, further enhancing the data’s potential as a teaching tool.[6]

In moving toward a community-led project in the second phase, 3Dhotbed has sought mutual investment from the international bibliographic community in developing and expanding a growing corpus of datasets to facilitate their diverse research. In addition to affordably fabricated teaching tools, the 3Dhotbed project will now include high-resolution 3D models of various typographical and technical collections items related to book history, contributed by these new members of our community. As the repository continues to grow, it will digitally collect and preserve 3D models of tools and collections items that are physically distributed across the globe. Thus, the 3Dhotbed project will become a valuable research portal in its own right—a total greater than the sum of its parts that will facilitate research and instruction across institutions and between disciplines to further our understanding of global book history.

Notes

[1] Authorship of this article was shared in equal collaboration; attribution is thus listed alphabetically.
[2] For more detail on the team’s toolkit development process, see Jacobs, McIntosh, and O’Sullivan 2018.
[3] Audio of this presentation is available at https://alair.ala.org/handle/11213/8581.
[4] The current directory is located at https://makerspaces.make.co/.
[5] For examples, see https://www.3dhotbed.info/instructionalresources.
[6] See https://www.3dhotbed.info/blog/2020/10/5/printing-with-woodblocks.

Bibliography

Boyer, Doug M., Gregg F. Gunnell, Seth Kaufman, and Timothy M. McGeary. 2017. “MorphoSource: Archiving and Sharing 3-D Digital Specimen Data.” The Paleontological Society Papers 22: 157–181.

Correa, Dale J. 2017. “Digitization: Does It Always Improve Access to Rare Books and Special Collections?” Preservation, Digital Technology & Culture 45, no. 4: 177–179. https://doi.org/10.1515/pdtc-2016-0026.

Daniels, Doug. 2017. “Diamond Sutra [Dataset: Jin gang jing xin jing gan ying tu shuo 金剛經心經感應圖說 pp. 31–32].” University of North Texas Libraries, Digital Library. https://digital.library.unt.edu/ark:/67531/metadc1259405/.

Dougherty, Dale. 2012. “The Maker Movement.” Innovations 7, no. 3: 11–14.

Gaskell, Philip. 1965. “The Bibliographical Press Movement.” Journal of the Printing Historical Society 1: 1–13.

Halverson, Erica Rosenfeld, and Kimberly M. Sheridan. 2014. “The Maker Movement in Education.” Harvard Educational Review 84, no. 4: 495–504.

Hertz, Garnet. 2016. “What is Critical Making?” Current 7. https://current.ecuad.ca/what-is-critical-making.

Jacobs, Courtney, Marcia McIntosh, and Kevin M. O’Sullivan. 2017. “Dataset: Moveable Type Hand Mould Side A.” University of North Texas Libraries, Digital Library. https://digital.library.unt.edu/ark:/67531/metadc967385/.

———. 2018. “Making Book History: Engaging Maker Culture and 3D Technologies to Extend Bibliographical Pedagogy.” RBM 19, no. 1: 1–11.

Manžuch, Zinaida. 2017. “Ethical Issues in Digitization of Cultural Heritage.” Journal of Contemporary Archival Studies 4, no. 2: Article 4. http://elischolar.library.yale.edu/jcas/vol4/iss2/4.

McIntosh, Marcia, Jacob Mangum, and Mark E. Phillips. 2017. “A Model for Surfacing Hidden Collections: The Rescuing Texas History Mini-Grant Program at The University of North Texas Libraries.” The Reading Room 2, no. 2: 39–59. https://digital.library.unt.edu/ark:/67531/metadc987474/.

McKerrow, Ronald B. 1913. “Notes on Bibliographical Evidence for Literary Students and Editors of English Works of the Sixteenth and Seventeenth Centuries.” Transactions of the Bibliographical Society 12: 213–318.

Portal to Texas History. n.d. “Overview.” The Portal to Texas History. Accessed October 14, 2019. https://texashistory.unt.edu/about/portal/.

Ratto, Matt. 2011. “Critical Making: Conceptual and Material Studies in Technology and Social Life.” The Information Society 27, no. 4: 252–260.

Samuelson, Todd, and Christopher L. Morrow. 2015. “Empirical Bibliography: A Decade of Book History at Texas A&M.” The Papers of the Bibliographical Society of America 109, no. 1: 83–109.

Shep, Sydney J. 2006. “Bookends: Towards a Poetics of Material Form.” In Teaching Bibliography, Textual Criticism and Book History, edited by Ann R. Hawkins, 38–43. London: Pickering & Chatto.

Sheridan, Kimberly M., Erica Rosenfeld Halverson, Breanne K. Litts, Lisa Brahms, Lynette Jacobs-Priebe, and Trevor Owens. 2014. “Learning in the Making: A Comparative Case Study of Three Makerspaces.” Harvard Educational Review 84, no. 4: 505–531.

Smith, Steven Escar. 2006. “‘A Clear and Lively Comprehension’: The History and Influence of the Bibliographical Laboratory.” In Teaching Bibliography, Textual Criticism and Book History, edited by Ann R. Hawkins, 32–37. London: Pickering & Chatto.

Spiro, Lisa. 2012. “This is Why We Fight: Defining the Values of the Digital Humanities.” In Debates in the Digital Humanities, edited by Matthew K. Gold. Minneapolis: University of Minnesota Press. http://dhdebates.gc.cuny.edu/debates/text/13.

Sweitzer-Lamme, Jon. 2016. “Dataset: Mattioli Woodblock, [art original] 457. Malva maior altera.” University of North Texas Libraries, Digital Library. Accessed October 14, 2019. https://digital.library.unt.edu/ark:/67531/metadc1477779/.

About the Authors

Courtney “Jet” Jacobs is the Head of Public Services, Outreach, and Community Engagement for UCLA Library Special Collections where she oversees the department’s research and reference services, instruction, exhibits, programming and community events, as well as the Center for Primary Research and Training. She holds a BA in English from Ohio State University and an MSLIS from Syracuse University. In partnership with teaching faculty, she has developed and delivered multiple lectures and workshops on book history and printing history utilizing primary source materials in support of course curriculum.

Marcia McIntosh first began working in digital collection development as an undergraduate student at Washington University in St. Louis. After earning a BA in English and African & African American Studies, she went on to earn an MS in Information Studies from the University of Texas at Austin’s School of Information. She is currently the Digital Production Librarian at the University of North Texas where she assists in the coordination, management, and training necessary to create digital projects.

Kevin M. O’Sullivan is the Curator of Rare Books & Manuscripts at Texas A&M University, where he also serves as Director of the Book History Workshop. Prior to this, he was awarded an IMLS Preservation Administration Fellowship to work at Yale University Libraries as part of the Laura Bush 21st Century Librarian Program. He holds a BA in English from the University of Notre Dame, and received his MS from the School of Information with a concurrent Certificate of Advanced Study from the Kilgarlin Center for the Preservation of the Historical Record, both at the University of Texas at Austin. Currently, he is working toward his PhD in English at Texas A&M University.
