
Reading Texts in Digital Environments: Applications of Translation Alignment for Classical Language Learning


This paper illustrates how translation alignment technologies can empower a new approach to reading in digital environments. Digital platforms for manual translation alignment are designed to facilitate a particularly intensive and philological experience of the text, which is traditionally peculiar to the teaching and study of Classical languages. This paper presents the results of the experimental use of translation alignment in the context of Classical language teaching, and shows how the use of technology can empower a meaningful and systematic approach to information. We argue that translation alignment and similar technologies can open entirely new perspectives on reading practices, going beyond the opposed categories of “skimming” and traditional linear reading, and potentially overcoming the cognitive limitations associated with the consumption of digital content.

Reading and Digital Technologies: A New Challenge

It seems impossible to imagine a world where digital technologies are not a substantial part of our intellectual activities. The use of technology in teaching and learning is becoming increasingly prominent, all the more so now that the massive public health crisis of COVID-19 has created the need to access information without physical proximity. Yet, the way information is processed on digital platforms is substantially different from a cognitive standpoint, and not free of concerning consequences: it has recently been emphasized that accessing content digitally stimulates superficial approaches and “skimming”, rather than reading, which may have a long-lasting impact on the ways in which human brains understand, approach, and articulate complex information (Wolf 2018).

Therefore, we must ask ourselves if we are using digital technologies in the right way, and what can be done to address this problem. Instead of eliminating digital methods entirely (which in current times seems especially unrealistic), perhaps the solution lies in using them to empower a different way of approaching information. In this paper, I will argue that the practices embedded in the study of Classical texts can offer a new perspective on reading as a cognitive operation, and that, if appropriately empowered through the use of technology, they can create a new and meaningful approach to reading on digital platforms.

The study of Classical languages entails a very particular approach to processing information (Crane 2019). The most relevant aspect of studying Classical texts is that we cannot consult a native speaker to verify our knowledge: instead, “communication” is achieved through written sources and their interaction with other carriers of information, such as material culture and visual representations. At the same time, we must never forget that we are engaging with an alien culture to which we do not have direct access. This necessity of navigating uncertainty requires a much more flexible approach to information, and a very different way of engaging with written sources, where the focus is on mediated cultural understanding through reading, rather than immediate communication.

Engaging with an ancient text is a deeply philological operation: a scholar of an ancient language never simply goes from one word to another with a secure understanding of their meaning. Their reading mode is much more immersive. It is an operation of reconstruction through reflection, pause, and exploration, which requires several skills: from the ability to actively abstract the language and its mechanics, to the recognition of linguistic patterns that coincide with given models, to the reflection on what a word or expression “really means” in etymological, stylistic, and cultural terms, to the philological reconstruction of “why” that word is there, as the result of a long process of transmission, translation, and error.[1] Yet, the implications of this intensive reading mode, in the broader context of the cognitive transformations of reading and learning, are often overlooked.

The operations embedded in the reading of Classical languages reflect a different cognitive process, one that lies beyond the opposed categories of “skimming” and traditional linear reading. Because of this peculiarity, some of the technologies designed in the domain of Classical languages are created specifically to empower this approach, bringing it to the center of the reader’s experience.

Translation Alignment: Principles and Technologies

Digital technologies are widely used in Classics for scholarship and teaching, thanks to the widespread use of digital libraries like Perseus (Crane et al. 2018) and the Thesaurus Linguae Graecae (2020), and to the consolidation of various methods for digital text analysis (Berti 2019) and pedagogy (Natoli and Hunt 2019). One of the most interesting recent developments in the field is the proliferation of platforms for manual and semi-automated translation alignment.

Translation alignment is a task derived from one of the most popular applications of Natural Language Processing. It is defined as the comparison of two or more texts in different languages, also called parallel texts or parallel corpora, by means of automated or semi-automated methods. The main purpose is to define which parts of a source text correspond to which parts of a second text. The result is often a list of pairs of items – words, sentences, or larger text chunks like paragraphs or documents. In Natural Language Processing, aligned corpora are used as training data for the implementation of machine translation systems, but also for many other purposes, such as information extraction, the automated creation of bilingual lexica, or even the detection of text re-use (Dagan, Church, and Gale 1999).
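As a concrete illustration, the pair structure described above can be sketched in a few lines of Python. The texts, pairings, and the `to_lexicon` helper below are hypothetical examples for exposition, not drawn from any of the platforms or corpora discussed here:

```python
# A word-level alignment is a list of pairs: (source tokens, target tokens).
# Hypothetical example: the opening of the Aeneid ("arma virumque cano").
source = ["arma", "virumque", "cano"]
target = ["I", "sing", "of", "arms", "and", "the", "man"]

# One-to-many pairs capture constructs with no single-word equivalent,
# such as the enclitic "-que" ("and") bundled into "virumque".
alignment = [
    (["arma"], ["arms"]),
    (["virumque"], ["and", "the", "man"]),
    (["cano"], ["I", "sing"]),
]

def to_lexicon(pairs):
    """Collect aligned pairs into a simple bilingual lexicon:
    each source expression maps to the list of its attested translations."""
    lexicon = {}
    for src, tgt in pairs:
        lexicon.setdefault(" ".join(src), []).append(" ".join(tgt))
    return lexicon

lexicon = to_lexicon(alignment)
# e.g. lexicon["cano"] == ["I sing"]
```

Accumulating such pairs across many texts is, in essence, how a dynamic lexicon like Ugarit's can be built from user contributions.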

The alignment of texts in different languages, however, is an exceptionally complex task, because it is often difficult to find perfect overlap across languages, and automated systems are often inefficient in providing equivalences for more sophisticated rhetorical or literary devices. The creation of manually aligned datasets is especially useful for historical languages, where available indexes and digitized dictionaries often do not provide a sufficient corpus to develop reliable NLP pipelines, and are remarkably inefficient for automated translation. Creating aligned translations is therefore also a way to engage with a larger scholarly community and to support important tasks in Computer Science.

In the past few years, three generations of digital tools for the creation and use of aligned corpora have been developed specifically with Classical languages in mind. First, Alpheios provides a system for creating aligned bilingual texts, which are then combined with other resources, such as dictionary entries and morphosyntactic information (Almas and Beaulieu 2016; “The Alpheios Project” 2020). Second, the Ugarit Translation Alignment Editor was inspired by Alpheios in providing a public online platform, where users could perform bilingual and trilingual alignments. Ugarit is currently the most used web-based tool for translation alignment in historical languages: since it went online in March 2017, it has registered an ever-increasing number of visits and new users. It currently hosts more than 370 users, 23,900 texts, 47,600 aligned pairs, and 39 languages, many of them ancient, including Ancient Greek, Latin, Classical Arabic, Classical Chinese, Persian, Coptic, Egyptian, Syriac, Parthian, Akkadian, and Sanskrit. Aligned pairs are collected in a large dynamic lexicon that can be used to extract translations of different words, but also as a training dataset for implementing automated translation (Yousef 2019).

The alignment interface offered by Ugarit is simple and intuitive. Users can upload their own texts and manually align them by matching words or groups of words. Alignments are automatically displayed on the home page (although users can deactivate the option for public visibility). Corresponding aligned tokens are highlighted when the pointer hovers over them. The percentage of aligned tokens is displayed in the color bar below the text: green indicates the proportion of matched tokens, red the proportion of unmatched tokens. The resulting pairs are automatically stored in the database, and can be exported as XML or tabular data. For languages with non-Latin alphabets, Ugarit offers automatic transliteration, visible when the pointer hovers over the desired word.[2]
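The matching rate shown in the color bar can be illustrated with a minimal sketch. How Ugarit computes this figure internally is an assumption here; the sketch simply takes it as the share of tokens in a text that participate in at least one aligned pair:

```python
def matching_rate(n_tokens: int, aligned_indices: set) -> float:
    """Share of tokens that participate in at least one aligned pair.

    A hypothetical sketch of the statistic behind the green/red color bar:
    green shows this rate, red its complement. This is an illustrative
    assumption, not documented Ugarit behavior.
    """
    if n_tokens == 0:
        return 0.0
    return len(aligned_indices) / n_tokens

# Hypothetical example: a text of 8 tokens, 6 of which were aligned.
rate = matching_rate(8, {0, 1, 2, 4, 5, 7})
green, red = rate, 1 - rate  # 75% green (matched), 25% red (unmatched)
```

Defined this way, each text in a parallel pair gets its own rate, which is why the two bars in a bilingual alignment need not show identical percentages.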

Overview of a trilingual alignment on Ugarit (Armenian, Greek, and Latin). The mouse pointer triggers the highlighting of aligned pairs, and activates the transliteration for the Armenian text. A color bar below the text shows the percentage of aligned pairs in green, and of non-aligned tokens in red.
Figure 1. Overview of a trilingual alignment on Ugarit (Armenian, Greek, Latin), with active transliteration for Armenian.

The structure of Ugarit was also used to display a manually aligned version of the Persian Hafez, in a study that tested how German and Persian speakers used translation alignment to study portions of Hafez using English as a bridge language. The results indicated that, with the appropriate scaffolding, users with no knowledge of the source language could generate word alignments with the same output accuracy generated by experts in both languages. The study showed that alignment could serve as a pedagogical tool with some effect on long-term retention (Palladino, Foradi, and Yousef forthcoming; Foradi 2019).

The third generation of digital tools is represented by DUCAT – Daughter of Ugarit Citation Alignment Tool, developed by Christopher Blackwell and Neel Smith (Blackwell and Smith 2019), which can be used for local text alignment and can be integrated with the interactive analysis of morphology, syntax, and vocabulary. The project “Furman University Editions” shows the potential of these interactive views, which are currently part of the curriculum of undergraduate Classics teaching at Furman and elsewhere.

This proliferation of tools shows that there is potential in the pedagogical application of this method: translation alignment can provide a new and imaginative way of using translations for the study of Classical texts, overcoming the hindrances normally associated with reading an ancient work through a modern-day version.

Text Alignment in the Classroom

The use of authorial translations to approach Classical texts is normally discouraged in the classroom, being perceived as “cheating” or as unproductive for a true, active engagement with the language. Part of this phenomenon is explained by the fact that, as “passive” readers, we do not have any agency in assessing the relationship between a translation and the original, and reading them side by side on paper is rarely a systematic or intentional operation (Crane 2019). However, translations are an integral part of ancient cultures.[3] They are a crucial component of textual transmission, as they represent witnesses of the survival and fortune of Classical texts. Translations are also important testimonies of the scholarly problem of transferring an alien culture and its values onto a different one, to ensure effective communication, or to pursue a cultural and political agenda through the reshaping and recrafting of an important text (Lefevere 1992).

Translations are a medium between cultures, not just between languages. Engaging in an analytical comparison between a translation and the original means to have a deeper experience of how a text was interpreted in a given time, what meanings were associated to certain words, and, at the same time, how certain expressions can display multifaceted semantics that are often not entirely captured by another language. This is also an exercise in cultural dialogue and reflection, not only upon the language(s), but upon the civilization that used it to reflect its values. In other words, it is a philological exploration that resembles much of the reading mode of a Classicist.

Digital platforms for translation alignment offer an immersive and visually powerful environment to perform this task, where the reader can analytically compare texts token by token, and at the same time observe the results through an interactive visualization. It is the reader who decides what is compared, how, and to what extent: the comparison of parallel texts becomes an analytical, systematic operation, which at the same time encourages reflection and debate regarding the (im)perfect matching of words and expressions. In this way, translation alignment provides a way to navigate between traditional linguistic mastery and complete dependence upon a translation. Not only does this stimulate an active use of modern translations of ancient texts, but the public visibility of the result on a digital platform also provides a way to be part of a broader conversation on the reception and significance of an ancient text over time.

However, it is also important to apply this tool in the right way. For example, translation alignment needs to be coupled with some grammatical input, to encourage reflection on structural linguistic differences. Mechanical approaches, all too easy with the uncontrolled use of a clickable “matching tool”, should be discouraged by emphasizing the importance of focused word-by-word alignment. In practice, translation alignment needs to be used with caution and in meaningful ways, as a function of the goal and level of a course.

The following sections illustrate examples of the application of translation alignment in beginner, intermediate, and upper level classes in Ancient Greek and Latin. Translation alignment was used systematically during the courses to emphasize semantic and syntactic complexities through analytical comparison with English or other languages. Later, students were assigned various alignment tasks and exercises, designed to empower an analytical approach to the text.

Beginner Ancient Greek, first and second semester

The students were given two assignments, performed iteratively in two consecutive semesters, with variations in quantity (more words and sentences were assigned in the second semester):[4]

  1. Identify specific given words in a chosen passage, and align them with the corresponding words in one translation. The goal of this exercise was to set the groundwork for a rough understanding of the depth of word meanings, by assessing how the same word in the source text could be rendered in different ways across the same translation.
  2. Use alignment to evaluate two translations of a shorter text chunk (1–3 sentences, or 10 verse lines). Identify precisely the corresponding sections of text in the source and in the translations. Assess which translation is most effective according to two criteria: first, the combination of the number and quality of matched tokens; second, pick particularly problematic words and look them up in a dictionary to assess their meanings, compare the dictionary explanation with the general context of the passage, and assess how the translations relate to the dictionary entries and how closely they render the “original sense” of the word.

The results were two short essays in which the students articulated their considerations. Grading was based on the ability to give an insightful analysis of how word choices impacted the tone and meaning of the translations, and to discuss the semantic depth of the words in the original language. Bonus points were awarded if the student was able to identify tangential aspects, such as word order, changes in case, and syntax. Minor weight was given to the overall accuracy of the alignment, given the level: the exercises were deliberately designed to discourage the creation of longer alignments, which often result in the student doing the work without reflecting on their alignment choices. Essay questions focused instead on close-reading, analytical, in-depth investigation into the semantics of the source language.

The Ancient Greek text is located at the center, and the two translations at the sides. The translation on the left displays 75% aligned pairs, the translation on the right 73%.
Figure 2. Two aligned English translations of Odyssey 9.105–9.115.

Intermediate level of Ancient Greek and Latin, third semester

Students used translation alignment in the context of project-based learning. They were responsible for the alignment of a chosen text chunk against translations that they had selected, ranging from early modern to contemporary versions. The assignment was divided into phases:

  1. Alignment of the source text against two chosen translations in English, and systematic evaluation of both translations. The students were asked to focus on chosen phenomena of syntax, morphology, grammar, and semantics that were particularly relevant in the text: e.g. word order, participial constructs, adjectival constructs, passive/active constructions, changes in case, transposition of allusion and semantic ambiguity. The students used their knowledge of syntax and grammar to critically assess the performance of different translators, focusing on the different ways in which complex linguistic phenomena can be rendered in another language. This assignment was combined with side analysis of morphology and syntax: for example, the students of Ancient Greek designed a morphological dataset containing 200 parsed words from the same passage.
  2. Creation of a new, independent translation, with a discussion of where it distanced itself from the original, which aspects of it were retained, and how the problems identified in the authorial translations were approached by the student.

The result was a written report submitted at the midterm or end of the semester, indicating: the salient aspects of the text and its most relevant linguistic features; an analytical comparison of how those linguistic features appeared in the competing aligned translations, and an evaluation of the translator’s strategy; the student’s own translation, with a critical assessment of the strategy chosen to approach the same problems. These aspects constituted the backbone of the grading strategy, with additional attention to alignment accuracy.

Section of two aligned translations of Hesiod, Works and Days vv. 42–105, with the original ancient Greek at the center, and the English translations on the sides.
Figure 3. Section of two aligned translations of Hesiod, Works and Days vv. 42–105. The student compared two translations of very different styles (Hugh G. Evelyn-White, 1914; David W. Tandy and Walter C. Neale, 1996), using adjective-noun combinations and participle constructions to evaluate them systematically. The 1996 translation was judged more literal than the other, and more useful for a student.

Upper level Ancient Greek and Latin, fourth to seventh semester (graduate and undergraduate)

The exercises assigned for the upper level were a more articulated version of the project-based ones given to the intermediate level. The students were assigned a research-based project where alignment would be one component of an in-depth analysis of a chosen source. At an intermediate stage of the semester, the students would submit a research proposal indicating: an extensive passage they chose to investigate, and why they chose it; the topic they decided to investigate, and a short account of previous literature on it; methodologies applied to develop the project; desired outcomes. The final result would be a project report submitted at the end of the semester, indicating: if the desired outcomes had been reached, what kinds of challenges were not anticipated, and what new results were achieved; strategies implemented to apply the chosen methodology, e.g. which alignment strategy was applied to ensure that the research questions were answered; what the student learned about the source, its cultural context and/or language. The results were graded as proper projects: the students were evaluated according to their ability to clearly delineate motivation and methodology, use of existing resources, and critical discussion of the outcomes.

Many students creatively integrated alignment in their projects. For example:

  • Creation of an aligned translation for non-expert readers, alongside a commentary and morphosyntactic annotations. To facilitate reading, the student developed a consistent alignment strategy that only matched words corresponding in meaning and grammatical function. This project was published on GitHub.
  • Trilingual alignment of English-Latin-German to investigate the matching rate between two similarly inflected languages. The student noted that, even though their knowledge of German was weaker than their English and Latin, matching Latin against German proved easier and more streamlined, while the English translation was approached with more criticism for its verbosity (Figure 4).[5]
  • Trilingual alignment to compare different texts. The student conceived a project aimed at gathering systematic evidence of the verbatim correspondences between the so-called Fables of Loqman and the Aesopic fables: according to existing scholarship, the former is an Arabic translation of the latter. The student used a French translation of the Loqman fables to mitigate the challenges of the Arabic, and examined the overall matching rate across the texts.
Sample passage of Tacitus, Germania 1.1, with two aligned translations in English and German, located on the left and at the center respectively. The German translation at the center displays the same matching rate as the Latin text on the right (93%), while the English translation on the left has only an 89% matching rate.
Figure 4. Sample passage of Tacitus, Germania 1.1, with two aligned translations in English and German.


The students reported how alignment affected their understanding of the source and its linguistic features, and how approaching the original by comparing it against a modern translation gave them a deeper understanding of, and respect for, the content. While the alignment process often resulted in some criticism of available translations, the students who had to discuss the challenges faced by translators (or who had to translate themselves) gained a stronger understanding of the issues involved in “transferring” not only words and constructs but also underlying cultural implications and multiple meanings. The students who used alignment in the context of research projects also benefited from the publication of their aligned translations, and some presented them as research papers at undergraduate conferences. Many students even reported having used alignment independently afterwards in other courses, often to facilitate the study of new languages, both ancient and modern.

Some overarching tendencies in the evaluation of concurrent translations emerged, particularly at the Intermediate and Upper Level. This feedback was extremely interesting to observe, because most of it can only be explained as the result of a systematic comparison between target and source language, in a situation where the reader is an active operator and not a passive content consumer.

The students observed analytically the various ways in which translations cannot structurally convey peculiar aspects of the original: for example, dialectal variants, metrical arrangement, wordplay, or syntactic constructs. Most of them were still able to appreciate skillful modern translations, and even to diagnose why translators would distance themselves from the original. They came to understand the challenge first-hand by engaging in translation tasks themselves. For many, however, the discovery that they could not fully rely on translations to understand what is happening in a text was astonishing. Students tend to be taught that authorial translations are necessarily “right” (and therefore “faithful”[6]) renderings of Classical texts, to the point where they often trust them over their own understanding of the language. With this exercise, they learned that “right” and “faithful” may not be the same thing, and that the literature of an ancient civilization preserves a depth and complexity of meaning that cannot be fully encompassed in a translation.

Interestingly, students often had a more positive judgement of translations that rendered difficult syntactic constructs closely to the original, without fundamentally altering the structure or shifting the emphasis (e.g. by changing subject-object relations or by altering verb voices). Students at the Intermediate level, in particular, judged such translations more “literal”, as they found them more helpful in understanding important linguistic structures: Figure 3 shows an alignment of Hesiod’s Works and Days, where the student extrapolated adjective-noun combinations and participle constructs to draw a systematic comparison between two concurring translations. The translation that was judged “more literal”, and therefore more useful for a student, was the one that kept these structures closer to the way they appeared in the original. This phenomenon intensified with texts that had a high density of allusions and wordplay, which are often conveyed by means of very specific syntactic constructs: students who dealt with these kinds of texts were merciless judges of translations that completely altered the original syntax and recrafted the phrasing to adapt it to a modern audience. The students indicated how such alterations regularly failed to convey the depths of sophisticated wordplay, where the syntax itself is not an accessory, but a structural part of the meaning.

The omission of words in the source language was considered particularly unforgivable: even though some words like adverbs and conjunctions are omitted in translations to avoid redundancies, some translations were found to leave out entire concepts or expressions for no apparent reason. The visualization of aligned texts on Ugarit certainly accentuated this aspect, as it emphasizes the relation between matched and non-matched tokens through the use of color, and it also provides matching rates to assess the discrepancy between texts. Almost all the students made intensive use of this feature, emphasizing how translations missed entire expressions that appeared in the original and shaped its message: in other words, even if the omission only regarded one adjective or a particularly emphatic adverb, they felt that the translations did not convey the full meaning of the text they were reading.

The implications of such observations are interesting: the translations in question were “bad” translations not because they were not understandable or efficient in conveying the sense of a passage in English, but because they hindered the student’s understanding of the original. Readers, even classically-trained ones, normally enjoy translations that, while taking some liberties, are more efficient in conveying the content and artistic aspects of a text in a way that is more familiar to a modern audience. Students who read a text in translation often struggle with versions that try to be close to the original language (sometimes with rather clumsy results), and they also make limited use of printed aligned translations that used to be very popular in school commentaries of the past. However, when students became active operators of translation alignment, the focus shifted to the understanding of the original through the scaffolding provided by the translation. In other words, the focus was on how the translation served the reader of the source text: this suggests an extremely active engagement with the original, through the critical lenses of systematic linguistic comparison.

With the guidance provided by the exercises, the students used translation alignment to engage with linguistic and stylistic phenomena, and the assessment of the ineffectiveness of translations in conveying such complex nuances often made them more confident in approaching the original language. In their own translations, they became extremely self-aware of their position with respect to the text, and tried to justify every perceived variation from the structure and the style of the original. Some of them opted for very literal, yet clumsy, translations, which they reflected upon and elaborated more thoroughly in a commentary to the text; others, particularly at the Upper level, built upon aspects that they liked or disliked in the translations to create better versions of them, depending on their intended audience.

We can conclude that, if appropriately embedded in reflective exercises, translation alignment did not result in a mechanical operation of word matching, but nurtured an active philological approach to the text, and an exploration of all its different aspects, from linguistic constructs to word meanings, to the role of wordplay in a literary context. Despite growing skepticism about the ability of translations to convey the “full” meaning of a text, the students still believed in the necessity of using them in a thoughtful manner.

In fact, the students advocated for broader and more varied use of digital tools, to compensate for the deficiencies of aligned corpora. At the Upper level in particular, many students complemented their translation alignments with additional data gathered through other digital resources: for example, while creating translation alignments directed at non-expert readers, they integrated the resource with a complete morphosyntactic analysis performed with treebanking (Celano 2019), with the intention of making up for the limitations of incomplete matching of word functions in specific linguistic constructs.

In this regard, it is important to emphasize that translation alignment is just one of the tools at our disposal. In a future where learning and reading are going to be prevalently performed through digital technologies, we need to create environments where readers can meaningfully engage in a philological exploration of texts at multiple levels: translation alignment, but also detection of textual variants, geospatial mapping, social network analysis, morphosyntactic reconstruction, up to the incorporation of sound and recording that can compensate for reading and visual disabilities (Crane 2019).


Overall, the experiment showed that a meaningful use of translation alignment can empower a reflective and active approach to Classical sources, by means of the continuous, systematic comparison of the cultural and semantic depths embedded in the language. Of course, translation alignment should not be the only option: digital technologies offer many opportunities to enhance the reading experience as a philological exploration, through the interaction of many different data types, allowing a sophisticated approach to information from multiple perspectives. Even though these tools were created to empower the reading processes specific to Classical scholars, their application promises new ways of approaching digital content in a much wider context, going beyond the categories of “close reading” and “skimming.”

Translation alignment is a tool that can empower a thoughtful and meaningful approach to reading on digital platforms. But more than that, it can also stimulate a deeper respect for cultural differences. In an increasingly globalized world, translations as a means of communicating across cultural contexts and languages are increasingly important: automated translations, as well as interpreters and professional translators, respond to a general need for fast and broad access to information produced in different cultural contexts. However, being able to access translated content so easily can result in oversimplification, and in the overlooking of cultural complexities. Aligned translations offer an alternative. By discouraging the idea that every word has an exact equivalent, aligned translations add value to the original, rather than subtracting from it, through a continuous dialogue between cultural and linguistic systems. Engaging with a translation meaningfully means much more than merely establishing equivalences: by emphasizing the depth of semantic differences, it can promote better attitudes to cultural diversity and acceptance.


[1] In this sense, reading an ancient text is much closer to literary criticism than to the study of a foreign language. This is the reason why Classical languages are never fully embedded in current practices of foreign language teaching and assessment. This topic was recently treated by, among others, Nicoulin (2019).
[2] This feature is currently available for Greek, Arabic, Persian, Armenian, and Georgian.
[3] Translations were continuously used to ensure communication between different cultures and communities in the ancient world (Bettini 2012; Nergaard 1993). Multilingual aligned texts were a normal, if not frequent, means of cultural communication in antiquity, with famous examples such as the Behistun inscriptions, the edicts of Ashoka the Great, and the Rosetta Stone.
[4] A variant of this assignment was also tested on a group of students with no knowledge of Greek, enrolled in courses of literature in translation (Palladino, Foradi, and Yousef forthcoming).
[5] Interestingly, trilingual alignment was used by some students to improve their mastery of a third language, often a modern one, by leveraging on their knowledge of their native tongue and the ancient language (Palladino, Foradi, and Yousef forthcoming).
[6] Incidentally, the “faithfulness” of a translation as a value judgement was introduced by the Christians: since God imprints his image on the text, every version of that text needs to be a faithful reproduction of it. Here resides the miraculous character of the translation of the Septuagint, which, according to tradition, came to be when seventy savants independently wrote an identical translation of the Bible (Nergaard 1993).


Almas, Bridget, and Marie-Claire Beaulieu. 2016. “The Perseids Platform: Scholarship for All!” In Digital Classics Outside the Echo-Chamber, edited by Gabriel Bodard and Matteo Romanello, 171–86. Ubiquity Press. https://doi.org/10.26530/OAPEN_608306.

Berti, Monica, ed. 2019. Digital Classical Philology: Ancient Greek and Latin in the Digital Revolution. Berlin: De Gruyter Saur.

Bettini, Maurizio. 2012. Vertere. Un’antropologia della traduzione nella cultura antica. Torino: Einaudi.

Blackwell, Christopher, and Neel Smith. 2019. “DUCAT – Daughter of Ugarit Citation Alignment Tool.” Accessed October 4, 2020. https://github.com/eumaeus/ducat.

Celano, Giuseppe. 2019. “The Dependency Treebanks for Ancient Greek and Latin.” In Digital Classical Philology: Ancient Greek and Latin in the Digital Revolution, edited by Monica Berti, 279–98. Berlin: De Gruyter Saur.

Cook, Guy. 2009. “Foreign Language Teaching.” In Routledge Encyclopedia of Translation Studies, edited by Mona Baker and Gabriela Saldanha, Second Edition, 112–15. London; New York: Routledge, Taylor & Francis Group.

Crane, Gregory. 2019. “Beyond Translation: Language Hacking and Philology.” Harvard Data Science Review 1, no. 2. https://doi.org/10.1162/99608f92.282ad764.

Crane, Gregory, Alison Babeu, Lisa Cerrato, Bridget Almas, Marie-Claire Beaulieu, and Anna Krohn. 2018. “Perseus Digital Library.” Accessed March 5, 2020. http://www.perseus.tufts.edu/hopper/.

Dagan, Ido, Kenneth Church, and William Gale. 1999. “Robust Bilingual Word Alignment for Machine Aided Translation.” In Natural Language Processing Using Very Large Corpora, edited by Susan Armstrong, Kenneth Church, Pierre Isabelle, Sandra Manzi, Evelyne Tzoukermann, and David Yarowsky, 209–24. Text, Speech and Language Technology. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-017-2390-9_13.

Foradi, Maryam. 2019. “Confronting Complexity of Babel in a Global and Digital Age. What Can You Produce and What Can You Learn When Aligning a Translation to a Language That You Have Not Studied?” In DH2019: Digital Humanities Conference, Utrecht University, July 9–12. Book of Abstracts. Utrecht.

Lefevere, André. 1992. Translation, Rewriting, and the Manipulation of Literary Fame. London; New York: Routledge.

Natoli, Bartolo, and Steven Hunt, eds. 2019. Teaching Classics with Technology. London; New York: Bloomsbury Academic.

Nergaard, Siri, ed. 1993. La teoria della traduzione nella storia. Milano: Bompiani.

Nicoulin, Morgan. 2019. “Methods of Teaching Latin: Theory, Practice, Application.” Arts & Sciences Electronic Theses and Dissertations, May. https://doi.org/10.7936/znvz-zd20.

Palladino, Chiara, Maryam Foradi, and Tariq Yousef. Forthcoming. “Translation Alignment for Historical Language Learning: A Case Study.”

The Alpheios Project. 2020. Accessed June 23, 2020. https://alpheios.net/.

TLG – Thesaurus Linguae Graecae. 2020. Accessed June 18, 2020. http://stephanus.tlg.uci.edu/.

Wolf, Maryanne. 2018. Reader, Come Home: The Reading Brain in a Digital World. New York: Harper.

Yousef, Tariq. 2019. “Ugarit: Translation Alignment Visualization.” In LEVIA’19: Leipzig Symposium on Visualization in Applications 2019. Leipzig.

About the Author

Chiara Palladino is Assistant Professor of Classics at Furman University. She works on the application of digital technologies to the study of ancient texts. Her current main interests are in the use of semantic annotation and modelling for the analysis of ancient spatial narratives, and in the implementation of translation alignment platforms for reading and investigating historical languages.

A three-by-three grid of fragments of poems, with syllables appearing in different colors.

Back in a Flash: Critical Making Pedagogies to Counter Technological Obsolescence


This article centers on the challenge of teaching digital humanities work in the face of technological obsolescence. Because this is a wide-ranging topic, the article focuses in particular on teaching electronic literature in light of the loss of Flash, for which Adobe will end support at the end of 2020. The challenge of preserving access to electronic literature amid technological obsolescence is integral to the field, which has already undertaken a number of initiatives to preserve electronic literatures as their technological substrates of hardware and software become obsolete, with the recent NEH-funded AfterFlash project responding directly to the need to preserve Flash-based texts. As useful as these and similar projects of e-literary preservation are, they focus on preservation through one traditional modality of humanities work: saving a record of works so that they may be engaged, played, read, or otherwise consumed by an audience. That focus loses the poetics these works effect through materiality, physicality, and (often) interaction. This article argues that obsolescence may be most effectively countered through pedagogies that combine the consumption of the e-textual record with the critical making of texts that share similar poetics, as the process of making allows students to engage with material, physical poetics. The loss of Flash is a dual loss of both a set of consumable e-texts and a tool for making e-texts; the article thus provides a case study of teaching with Stepworks, a complementary program for making e-poetic texts in place of Flash.

Technological obsolescence, the phenomenon by which technologies are rendered outdated as soon as they are updated, is a persistent, familiar problem when teaching with, and through, digital technologies. Whether planned (a model of designed obsolescence in which hardware and software are updated to be incompatible with older systems, thus forcing the consumer to purchase updated models) or perceived (a form of obsolescence in which a given technology becomes culturally, if not functionally, obsolete), obsolescence means that teaching in a 21st-century ubicomp environment necessitates strategies for working with our own and our students’ “obsolete,” outdated, or otherwise incompatible technologies. These strategies may take any number of forms: taking time to help students install, emulate, or otherwise troubleshoot old software; securing lab space with machines that can run certain programs; embracing collaborative models of work; designing flexible assignments to account for technological limitations. The list could go on.

But in Spring 2020, as campuses around the globe went fully remote, online, and often asynchronous in response to the COVID-19 pandemic, many of us met the limits of these strategies. Tactics like situated collaboration, shared machinery, or lab space became impossible to employ in the face of this sudden temporal obsolescence, as our working, updated technologies became functionally “obsolete” within the time-boundedness of that moment. In my own case, the effects of this moment were most acutely felt in my Introduction to the Digital Humanities class, where the move to remote teaching coincided with our electronic literature unit, a unit where the combined effects of technological obsolescence and proprietary platforms already necessitate adopting strategies so that students may equitably engage the primary texts. For me, primary among these strategies are sharing machinery to experience texts together, guiding students through emulator or software installation as appropriate, and adopting collaborative critical making pedagogies to promote material engagement with texts. In Spring 2020, though, as my class moved to a remote, asynchronous model, each of these strategies was directly challenged.

I felt this challenge most keenly in trying to teach Brian Kim Stefans’s The Dreamlife of Letters (2000) and Amaranth Borsuk and Brad Bouse’s Between Page and Screen (2012), two works that, though archived and freely available via the Electronic Literature Collection and therefore ideal for accessing from any device with an internet connection, were built in Flash and so require the Flash Player plug-in to run. Since Adobe’s announcement in 2017 that it would stop supporting Flash Player by the end of 2020, fewer and fewer students enter my class with machines prepared to run Flash-based works, either because they have not installed the plug-in, or because they are on mobile devices, which have never been compatible with Flash, a point that Anastasia Salter and John Murray (2014) argue has largely contributed to Adobe’s decision. The COVID-19 move to remote, online instruction effectively meant that students with compatible hardware had to navigate the plug-in installation on their own (a process requiring a critical level of digital literacy), while those without compatible hardware were unable to access these works except as images or recordings. In the Spring 2020 teaching environment, then, the temporal obsolescence brought on by COVID-19 acted as a harbinger for the eventual inaccessibility of e-literary work that requires Flash to run.

The problem space that I occupy here is this: teaching electronic literature in the undergraduate classroom, as its digital material substrates move ever closer to obsolescence, an impossible race against time that speeds up each year as students enter my classroom with machines increasingly less compatible with older software and platforms. Specifically, I focus on the challenge of teaching e-literature in the undergraduate, digital humanities classroom, while facing the loss of Flash and its attendant digital poetics—a focus informed by my own disciplinary expertise, but through which I offer pedagogical responses and strategies that are useful beyond this scope.

Central to my argument here is that teaching e-literary practice and poetics in the face of technological obsolescence is most effective through an approach that interweaves the practice-based critical making of e-literary texts with the more traditional humanities practices of reading, playing, or otherwise consuming them. Critical making pedagogy, however, is not without its critiques, particularly as it may promote inequitable classroom environments. For this reason, I start with a discussion of critical making as feminist, digital humanities pedagogy. From there, I turn more explicitly to the immediate problem at hand: the impending loss of Flash. I address this problem first by highlighting some of the work already being done to maintain access to e-literary works built in Flash, and second by pointing to some of the limits of these projects, as they prioritize maintaining Flash-based works for readerly consumption, even though the loss of Flash is also the loss of an amateur-friendly, low-stakes coding environment for “writing” digital texts. Acknowledging this dual loss brings me to a short case study of teaching with Stepworks, a contemporary, web-based platform that is ideal for creating interactive, digital texts with poetics similar to those created in Flash. As such, it is a platform through which we can effectively use critical making pedagogies to combat the technological obsolescence of Flash. I close by briefly expanding from this case study to reflect on some of the ways critical making pedagogies may help counter the loss of e-literary praxes beyond those indicative of and popularized by Flash.

Critical Making Pedagogy As DH Feminist Praxis

Critical making is a mode of humanities work that merges theory with practice by engaging the ways physical, hands-on practices like making, doing, tinkering, experimenting, or creating constitute forms of thinking, learning, analysis, or critique. Strongly associated with the digital humanities given the field’s own relationship to intellectual practices like coding and fabrication, critical making argues, methodologically, that intellectual activity can take forms other than writing, as coding, tinkering, building, or fabricating represent more than just “rote” or “automatic” technical work (Endres 2017). As critical making asserts, such physical practices constitute complex forms of intellectual activity. In other words, it takes to task what Bill Endres calls “an accident of available technologies” that has granted writing its status as “the gold standard for measuring scholarly production” (2017).

Critical making scholarship can thus take a number of forms, as Jentery Sayers’s edited collection Making Things and Drawing Boundaries (2017) showcases. For example, as a challenge to and reflection on the invasiveness of contemporary personal devices, Allison Burtch and Eric Rosenthal offer Mic Jammer, a device that transmits ultrasonic signals that, when pointed towards a smartphone, “de-sense that phone’s microphone,” to “give people the confidence to know that their smartphones are non-invasively muted” (Burtch and Rosenthal 2017). Meanwhile, Nina Belojevic’s Glitch Console, a hacked or “circuit-bent” Nintendo Entertainment System, “links the consumer culture of video game platforms to issues of labor, exploitation, and the environment” through a play-experience riddled with glitches (Belojevic 2017). Moving away from fabrication, Anne Balsamo, Dale MacDonald, and Jon Winet’s AIDS Quilt Touch (AQT) Virtual Quilt Browser is a kind of preservation-through-remediation project that provides users with opportunities to virtually interact with the AIDS Memorial Quilt (Balsamo, MacDonald, and Winet 2017). Finally, Kim A. Brillante Knight’s Fashioning Circuits disrupts our cultural assumptions about digital media and technology, particularly as they are informed by gendered hierarchy; this project uses “wearable media as a lens to consider the social and cultural valences of bodies and identities in relation to fashion, technology, labor practices, and craft and maker cultures” (Knight 2017).

Each of these examples of critical making projects highlights the ways critical making disrupts the primacy of writing in intellectual activities. However, they are also entangled in one of the most frequently lobbed, and not insignificant, critiques of critical making as a form of Digital Humanities (DH) praxis: that it reinforces the exclusionary position that DH is only for practitioners who code, make, or build. This connection is made explicit by Endres, as he frames his argument that building is a form of literacy through a mild defense of Stephen Ramsay’s 2011 MLA conference presentation, which argued that building was fundamental to the Digital Humanities. Ramsay’s presentation was widely critiqued for being exclusionary (Endres 2017), as it promoted a form of gatekeeping in the digital humanities that, in turn, reinforces traditional academic hierarchies of gender, race, class, and tenure-status. As feminist DH in particular has shown, arguments that “to do DH, you must (also) code or build” imagine a scholar who, in the first place, has had the emotional, intellectual, financial, and temporal means to acquire skillsets that are not part of traditional humanities education, and who, in the second place, has the institutional protection against precarity to produce work in modes outside “the gold standard” of writing (Endres 2017).

Critical making as DH praxis, then, finds itself in a complicated bind: on the one hand, it effectively challenges academic hierarchies and scholarly traditions that equate writing with intellectual work; on the other hand, as it replaces writing with practices (akin to coding) like fabrication, building, or simply “making,”[1] it potentially reinforces the exclusionary logic that makes coding the price of entry into the digital humanities “big tent” (Svensson 2012).[2] Endres, however, raises an important point that this critique largely fails to account for: that “building has been [and continues to be] generally excluded from tenure and promotion guidelines in the humanities” (Endres 2017). That is, while we should perhaps not take DH completely off the hook for the field’s internalized exclusivity and the ways critical making praxis may be commandeered in service of this exclusivity, examining academic institutions more broadly, of which DH is only a part, reveals that writing, with its attendant systems of peer review and impact factors, remains the exclusionary technology, gate-keeping from within.

To offer a single point of “anec-data”:[3] in my own scholarly upbringing in the digital humanities, I have regularly been advised by more senior scholars in the field (particularly other white women and women of color) to only take on coding, programming, building, or other making projects that I can also support with a traditional written, peer-reviewed article, advice that responds explicitly to writing’s position of primacy as a metric of academic research output, and implicitly to academia’s general valuation of research above activities like teaching, mentorship, or service. It is also worth noting that this advice was regularly offered with the clear-eyed reminder that, as valuable as critical making work is, ultimately the farther the scholar or practitioner appears from the cisgendered, heterosexual, able-bodied, white, male default, the more likely it is that their critical making work will be challenged when it comes to issues of tenure, promotion, or job competitiveness. While a single point of anec-data hardly indicates a pattern, the wider system of academic value that I sketch here is well-known and well-documented. It would be a disservice, then, not to acknowledge the space that traditional, peer-reviewed, academic writing occupies within this system.

Following Endres, my own experience, and systemic patterns across academia, I would argue that even though critical making can promote exclusionary practices in the digital humanities, the work that this methodological intervention does to disrupt technological hierarchies and exclusionary systems—including and especially, writing—outweighs the work that it may do to reinforce other hierarchies and systems. Indeed, I would go one step further to add my voice to existing arguments, implicit in the projects cited above and explicit throughout Jacqueline Wernimont and Elizabeth Losh’s edited collection, Bodies of Information: Intersectional Feminism and the Digital Humanities (2018), that critical making is feminist praxis, not least for the ways it contributes to feminism’s long-standing project of disrupting, challenging, and even breaking language and writing.[4]

The efficacy of critical making’s feminist intervention becomes even more evident, and I would argue, powerful, when it enters the classroom as undergraduate pedagogy. Following critical making scholarship, critical making pedagogy similarly disrupts the primacy of written text for intellectual work. Students are invited to demonstrate their learning through means other than a written paper, in a student learning assessment model that aligns with other progressive, student-centered practices like the un-essay.[5] Because research in digital and progressive pedagogy highlights the ways things like un-essays are beneficial to student learning, I will focus here on pointing to particular ways critical making pedagogy in the undergraduate digital humanities classroom operates as feminist praxis for disrupting heteropatriarchal assumptions about technology. For this, I will pull primarily from feminist principles of and about technology, as outlined by FemTechNet and by Catherine D’Ignazio and Lauren Klein’s Data Feminism, and from my experiences teaching undergraduate digital humanities classes through and with critical making pedagogies.

In their White Paper on Transforming Higher Education with Distributed Open Collaborative Courses, FemTechNet unequivocally states that “effective pedagogy reflects feminist principles” (FemTechNet White Paper Committee 2013), and the first and perhaps most consistent place that critical making pedagogies respond to this charge is in the ways that critical making makes labor visible, a value of intersectional feminism broadly, and of data feminism specifically (D’Ignazio and Klein 2020). Issues of invisible labor have long been central to feminist projects, so it is no surprise that this would also be a central concern in response to Silicon Valley’s techno-libertarian ethos that blackboxes digital technologies as “labor free” for both end-users and tech workers. When students participate in critical making projects that relate to digital technologies—projects that may range from building a website or programming an interactive game, to fabricating through 3D printing or physical circuitry—they are forced to confront the labor behind (and rendered literally invisible in) software and hardware. This confrontation typically manifests as frustration, often felt in and expressed through their bodies, necessitating an engagement with affect, emotion, care, and often, collaboration in this work—all feminist technologies directly cited in both FemTechNet’s manifesto (FemTechNet 2012) and as core principles of data feminism (D’Ignazio and Klein 2020). Similarly, as students learn methods and practices for their critical making projects, they inevitably find themselves facing messiness, chaos, even fragments of broken things, and only occasionally is this “ordered” or “cleaned” by the time they submit their final deliverable.
Besides recalling FemTechNet’s argument that “making a mess… [is a] feminist technolog[y]” (FemTechNet 2012), the physical and intellectual messiness of critical making pedagogy also requires a shift in values away from the “deliverable” output, and towards the process of making, building, or learning itself. At times, this shift in value toward process can manifest as a celebration of play, as the tinkering, experimentation, chaos, and messiness of critical making transform into a play-space. While I am hesitant to argue too forcefully for the playfulness of critical making in the classroom (not least for the ways play is inequitably distributed in academic and technological systems), Shira Chess has recently and compellingly argued for the need to recalibrate play as a feminist technology, so I name it here as an additional, potential effect of critical making pedagogy (Chess 2020). Whether it transforms to play or not, however, the shift in value away from a deliverable output is always a powerful disruption of the masculinist, capitalist narratives of “progress” and “value” that undergird technological, academic work.

These are, of course, also the narratives primarily responsible for the profitability of obsolescence, which brings us back to my primary thesis and central focus: the efficacy of adopting critical making pedagogies to counter the effects of technological obsolescence in electronic literature. Because technological obsolescence is a core concern of electronic literature, it will be worth spending some time and space to examine this relationship more deeply, and address some of the ways the field is already at work countering loss-through-obsolescence. Indeed, some of this work already anticipates possibilities for critical making to counter this loss.

Preservation and E-lit

As stated, technological obsolescence is the phenomenon whereby technologies become outdated (obsolete) as they are updated. Historically, technological obsolescence has occurred in concert with developments in computing technologies that have radically altered what computers can do (e.g., moves from primarily text to graphics processing power) or how they are used (e.g., with the advent of programming languages, or the development of the graphical user interface). However, as digital technologies have developed into the lucrative consumer market of today, this phenomenon has become driven more heavily by capitalistic gains through consumer behaviors. Consider, for instance, iOS updates that no longer work on older iPhone models, or new hardware models that do not fit old plugs, USB ports, or headphones. In each case, updates to the technology force consumers to purchase newer models, even if their old ones otherwise function properly.

In the field of electronic literature, obsolescence requires attention from two perspectives: the readerly and the writerly. From the former, obsolescence threatens future access to e-literary texts, so the field must regularly engage with preservation strategies; from the latter, it requires that the field regularly engage with new technologies for “writing” e-literary texts, as new platforms and programs both result from and in others’ obsolescence. E-lit thus occupies both sides of the obsolescence coin: on the one side, holding onto the outdated to ensure preservation and access, and on the other, embracing the updated, as new platforms, programs, and hardware prompt the field’s innovation. Much of the field’s focus is on combating obsolescence through attention to outdated (or, as in the case of Flash, soon-to-be-outdated) platforms, and on maintaining these works for future audiences. However, this attention only weakly (if at all) accounts for the flipside of the writerly, where obsolescence also threatens the loss of craft or practice in e-literary creation; this is where, I argue, critical making as e-literary pedagogy is especially productive as a counter-force to loss. First, though, it will be worth looking more closely at the ways obsolescence and e-literature are integrated with one another.

Maintaining access to obsolete e-literature is, and has been, a central concern of the field, in large part because the field’s own history “is inextricably tied to the history of computing, networking, and their social adoption” (Flores 2019). A brief overview of e-literature’s “generations” effectively illustrates this relationship. The first generation of e-lit consists primarily of pre-web, text-heavy works developed between 1952 and 1995, dates that span the commercial availability of computing devices and the development of language-based coding, to the advent and adoption of the web (Flores 2019). Spanning over 40 years, this period also includes within it the period 1980–1995, which is “dominated by stand-alone narrative hypertexts,” many of which “will no longer play on contemporary operating systems” (Hayles and House 2020, my italics). The second generation, beginning in 1995, spans the rise of personal, home computing and the web, and is characterized by web-based, interactive, and multimedia works, many of which were developed in Flash (Flores 2019). In 2005, the web of 1995 shifted into the platform-based, social “web 2.0” that we recognize and use today. Leonardo Flores uses this year to mark the advent of what he argues is the third generation of e-lit, which “accounts for a massive scale of born digital work produced by and for contemporary audiences for whom digital media has become naturalized” (Flores 2019). In the Electronic Literature Organization (ELO)’s Teaching Electronic Literature initiative, N. Katherine Hayles and Ryan House characterize third generation e-lit, specifically, as “works written for cell phones” in contrast to “web works displayed on tablets and computer screens” (Hayles and House 2020).

Though Flores includes activities like storytelling through gifs and the poetics of memes in his characterization of third generation e-lit, framing this moment through cell phones is helpful for thinking about the centrality of computing development, and its attendant obsolescence, to the field. In the first place, pointing to work developed for cell phones immediately brings to mind challenges of access due to Apple’s and Android’s competitive app markets. Here there is, on the one hand, the challenge of apps developed for only one of these hardware-based platforms, and so inaccessible from the other; on the other hand, there are the continuous operating system updates that, in turn, require continuous app updates, which regularly result in apps seemingly, and sometimes literally, becoming obsolete overnight. In the second place, the 1995–2015 generation of “web works displayed on tablets and computer screens” is, as noted, a generation characterized by the rising ubiquity of Flash, a platform that was central to both user-generated webtexts and the e-literary practice of this time (Salter and Murray 2014). As noted, Flash-based works currently face their own impending obsolescence due to Adobe’s removal of support, a decision that Salter and Murray argue results directly from the rise of cell phones and/as smartphones, which do not support Flash. Thus, cell phones and “works created for cell phones” once again demonstrate the intricate relationship between computing history, technological development, and e-literature.

As this brief history demonstrates, the challenge of ensuring access to e-lit in the face of technological obsolescence is absolutely integral to the field, as it ultimately ensures that new generations of e-lit scholars can access primary texts. Indeed, it is a central concern behind one of the field’s most visible projects: the Electronic Literature Collection (ELC). As of this writing, the ELC is made up of three curated volumes of electronic literature, freely available on the web and regularly maintained by the ELO. In addition to expected content like work descriptions and author’s/artist’s statements, each work indexed in the ELC is accompanied by notes for access; these may include links, files for download, required software and hardware, or emulators as appropriate. The ELC, thus, operates as both an invaluable resource for preserving access to e-lit in general, and for ensuring access to e-lit texts for teaching undergraduates. Most of the work in the ELC is presented either in its original experiential, playable form, or as recorded images or videos when experiencing the work directly is untenable for some reason (as in, for instance, locative texts which require their user to be in specific locations to access the text). However, some of the work is presented with materials that encourage a critical making approach—a move in direct conversation with my argument that critical making plays an important pedagogical role for experiencing and even preserving e-literary practice, even and especially if the text itself cannot be experienced or practiced directly. Borsuk and Bouse’s Between Page and Screen, an augmented reality work that requires both a physical, artist’s book of 2D barcodes specifically designed for the project, and a Flash-based web app that enables the computer’s camera to “read” these barcodes, offers a particularly strong example of this.

Between Page and Screen is indexed in the third volume of the ELC. In addition to the standard information about the work and its authors, the entry’s “Begin” button contains an option to “DIY Physical Book,” which will take the user to a page on the work’s website that offers users access to 1) a web-based tool called “Epistles” for writing their own text and linking it to a particular barcode; and 2) a guide for printing and binding their own small chapbook of barcodes, which can then be held up and read through the project’s camera app. In this way, users who may be unable to access the physical book of barcodes that powers Between Page and Screen are still offered an experiential, material engagement with the text through their own critical making practices. Engaging the text in this way allows users not only to physically experience the kinetics and aesthetics of the augmented reality text, but also to engage the materiality and interaction poetics at the heart of the piece—precisely those poetics that are lost when the only available access to a text is a recording to be consumed. At the same time, engaging the text through the critically made chapbook prompts a material confrontation between the analog and the digital as complementary, even intimate, information technologies. Of course, in this case it is (perhaps) ironically not the anticipated analog technology that is least accessible; rather, it is the digital complement, the Flash-based program that allows the camera to read the analog barcodes, that is soon to be inaccessible.

Complementing the Electronic Literature Collections are the preservation efforts underway at media archeology labs around the country, most notably the Electronic Literature Lab (ELL) directed by Dene Grigar at Washington State University, Vancouver. Currently, the ELL is working on preserving Flash-based works of electronic literature through the NEH-sponsored Afterflash project. Afterflash will preserve 447 works through a combination process where researchers:

1) preserve the works with Webrecorder, developed by Rhizome, that emulates the browser for which the works were published, 2) make the works accessible with newly generated URLs with six points of access, and 3) document the metadata of these works in various scholarly databases so that information about them is available to scholars (Slocum et al. 2019).

Without a doubt, this is an exceptional project of preservation for ensuring some kind of access to Flash-based works of electronic literature, even after Adobe ends their support and maintenance of the software. In particular, the use of Webrecorder to capture the works means that the preservation will not just be a recorded “walk-through” of the text, but will capture the interactivity—an important poetic of this moment in e-literary practice.

As exceptional as this preservation project is, however, it is focused entirely (and not incorrectly) on preserving works so that they may be experienced by future readers, who do not have access to Flash. But what of preserving Flash as a program particularly suited for making interactive, multimedia webtexts? As Salter and Murray argue, a major part of the platform’s success and influence on turn-of-the-century web aesthetics, web arts, and electronic literature has to do with its low barrier-to-entry for creating interactive, multimedia works, even for users who were not coders, or who were new to programming (Salter and Murray 2014). Pedagogically speaking, the amateur focus of Flash also meant that it was particularly well-suited for teaching digital poetics through critical making. In the first place, it upheld the work of feminist digital humanities to disrupt and resist the primacy of original, complex codework in the digital humanities and (more specifically) electronic literatures. In this way, it could operate as an ideal tool for feminist critical making pedagogies, both by promoting alternatives to writing for intellectual work and by resisting the exclusivity behind prioritizing original, complex codework. In the second place, it allowed students to tinker with the particularities and subtleties of digital poetics—things like interaction, animation, kinetics, and visual/spatial design—without getting overwhelmed by the complexities and specifications of code. As the end of 2020, and with it the end of Adobe’s support for Flash, looms large, the question becomes all the more urgent: if critical making offers an effective, feminist pedagogical model for teaching electronic literatures in the face of technological obsolescence, how can we maintain these practices in our undergraduate teaching in a post-Flash world?

Case Study: Stepworks

In response to this question, I propose: Stepworks. Created by Erik Loyer in 2017, Stepworks is a web-based platform for creating single-touch interactive texts that centers on the primary metaphor of staged, musical performance, an appropriate metaphor that resonates with traditions of e-literature and e-poetry that similarly conceptualize these texts in terms of performance. Stepworks performances are powered by Stepwise XML, also developed by Loyer, but the platform does not require creators to work directly in the XML to make interactive texts. Instead, the program in its current iteration interfaces directly with Google Sheets—a point that, while positive for things like ease of use and support for collaboration (features I discuss more fully in what follows), does introduce a problematic reliance on Google’s corporate decisions to maintain access to and workability of Stepworks and its texts. In the Stepworks spreadsheet, each column is a “character,” named across the first row, while the cells in each subsequent row contain what the character performs—the text, image, code, sound, or other content associated with that character. Though characters are often named entities that speak in text, they can also be things like instructions for use, metadata about the piece, musical instruments that will perform sound and pitch, or a “pulse,” a special character that defines a rhythm for the textual performance. Finally, each cell contains the content that will be performed with each single-click interaction, and this content is performed in the order in which it appears down the rows of the spreadsheet. Students can therefore easily and quickly experiment with the different effects of single-click interactive texts, performing at the syllable, word, or even phrase level, simply by putting that much content into a cell.
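The spreadsheet-as-score structure described above can be modeled in a few lines of code. This is a minimal sketch under stated assumptions: the dictionary layout, the `perform` function, and the sample dialogue (drawn from the “Who’s on First” example discussed below) are my own illustrations, not Stepworks’ actual implementation.

```python
# A toy model of the Stepworks spreadsheet: columns are "characters,"
# each subsequent row holds the content a character performs, and one
# row is performed per single-click interaction. The names and logic
# here are illustrative assumptions, not Stepworks' implementation.

rows = [
    # each entry models one spreadsheet row: {character: content}
    {"Abbott": "Strange as it may seem, they give ball players "
               "nowadays very peculiar names"},
    {"Costello": "Funny names?"},
    {"Abbott": "Nicknames, nicknames."},
]

def perform(rows):
    """Yield one 'character: content' performance per simulated click."""
    for row in rows:
        for character, content in row.items():
            yield f"{character}: {content}"

clicks = list(perform(rows))
```

Because each click performs exactly one cell, changing how much text sits in a cell (a line, a word, a syllable) directly changes the granularity of the interaction, which is the effect the figures below illustrate.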

Figures 1, 2, and 3 illustrate some of these effects. Figure 1, a gif of a Stepworks text where each line of Abbott and Costello’s “Who’s on First” occupies its own cell, showcases the effects of performing full lines of text with each interaction.

The gif includes text that appears in alternating lines above and below one another, as each character speaks. This text reads: “Strange as it may seem, they give ball players nowadays very peculiar names / Funny names? / Nicknames, nicknames. / Now on the St. Louis team we have Who’s on first, What’s on second, I Don’t Know is on third-- / That’s what I want to find out / I want you to tell me the names of the fellows on the St. Louis team. / I’m telling you / Who’s on first, What’s on second, I Don’t Know is on third / You know the fellows’ names? / Yes.”
Figure 1. An animated gif showing part of a Stepworks performance created by Erik Loyer that remediates the script of Abbott and Costello’s comedy sketch, “Who’s On First.” The gif was created by the author, through a screen recording of the Stepworks performance.

Figure 2, a gif from a Stepworks text based on Lin-Manuel Miranda’s Hamilton, showcases the effects of breaking up each cell by syllable and allowing the player to perform the song at their own pace.

The text appears on a three-by-three grid, and in each space of the grid the text gets a different color, indicating a different character. The gif begins with text in the center position that reads “but just you wait, just you wait…” and in the bottom right that reads “A voice saying.” Then text appears in each position on the grid around the center saying “Alex, you gotta fend for yourself,” and the bottom right space reads “He started retreatin’ and readin’ every treatise on the shelf.” The top left then presents text reading “There would have been nothin’ left to do for someone less astute.”
Figure 2. An animated gif showing part of a Stepworks performance created by Erik Loyer that remediates lyrics from Lin-Manuel Miranda’s Broadway musical Hamilton. The gif was created by the author, through a screen recording of the Stepworks performance.

Finally, Figure 3 is taken from Unlimited Greatness, a text that remediates Nike’s ad featuring the story of Serena Williams as single words. In this text, the effects of filling each cell with a single word are on display.

There is text that appears one word at a time in the center of the screen, which reads “Compton / sister / outsider / pro / #304 / winner / top 10 / Paris / London / New York / Melbourne / #1 / injured / struggling.”
Figure 3. An animated gif showing part of a Stepworks performance called Unlimited Greatness created by Erik Loyer that remediates a Nike ad about Serena Williams’s tennis career, originally created by the design team Wieden+Kennedy. The gif was created by the author, through a screen recording of the Stepworks performance.

Pedagogically speaking, this range of illustrative content allows students to quickly grasp the different effects of manipulating textual performance in an interactive piece. At the same time, the ease of working in the spreadsheet offers them an effective place from which to experiment in their own work. For example, reflecting on the process of creating a Stepworks performance based on a favorite song, one student describes placing words purposefully into the spreadsheet, breaking up the entries by word and eventually by syllable, rather than by line:

I made them [the words] pop up one by one instead of the words popping up all together using the [&] symbol. I also had words that had multiple syllables in the song, so I broke those up to how they fit in the song. So, if the song had a 2-syllable word, then I broke it up into 2 beats.

Although the spreadsheet is not named explicitly, the student describes a creative process of purposefully engaging with the different effects of syllables, words, and lines, opting for a word- and syllable-based division in the spreadsheet to affect how the song’s remediation is performed (see Figure 4, the middle column of the top row).

Besides offering the spreadsheet as an accessible space in which to build interactive texts, Stepworks comes equipped with pre-set “stages” on which to perform those texts. Currently there are seven stages available for a Stepworks performance, and each of them offers a different look and feel for the text. For instance, the Hamilton piece discussed above (Figure 2) is performed on Vocal Grid, a stage which assigns each character a specific color and position on a grid that will grow to accommodate the addition of more characters; Who’s On First, by contrast, is performed on Layer Cake, a stage that displays character dialogue in stacked rows. As the stages offer a default set design for font, color, textual behavior, and background, they (like the spreadsheets) reduce the barrier-to-entry for creating and experimenting with kinetic, e-poetic works. Once a text is built in the spreadsheet and the spreadsheet published to the web, it may be directly loaded into any stage and performed; a simple page reload in the browser will update the performance with any changes to the spreadsheet, thereby supporting an iterative design process. The user-facing stage and design-facing spreadsheet are integrated with one another so that students may tinker, fiddle, experiment, and learn, moving between the perspective of the audience member and that of the designer.

In my own classes, it is at this stage of the process that additional, less tangible benefits of critical making pedagogy have come to light, especially in my most common Stepworks assignment: challenging students to remediate a favorite song or poem into a Stepworks performance. When students reach the stage of loading and reloading their spreadsheets into different stages in Stepworks, they also often begin exploring different performance effects for their remediated texts, regularly going beyond any requirements of the assignment to learn more about working with and in Stepworks to achieve the effects they want. Of this creative experience, one student writes:

In order to accurately portray the message of the song, I had to take time and sing the song to myself multiple times to figure out the best way to group the words together to appear on screen. Once this was completed, I had an issue where words would appear out of order because I didn’t fully understand the +1 (or any number for that matter) after a word. I thought that this meant to just add extra time to the duration of the word or syllable. I later figured out that it acted as a timeline of when to appear rather than the duration of the word’s appearance. Once I figured this out, it then came down to figuring out the best timing of the words appearing on each screen to match the original song.

Here the student’s reflection indicates a few important things: first, the reflection articulates the student’s own iterative design process, as they move between the designer’s and audience’s experiential positions to create the “best timing of the words” performed on screen. This same movement guides the student towards figuring out some more advanced Stepworks syntax: working with a “Pulse” character to give the piece a tempo or rhythm by delaying certain words’ appearances on screen (the reference to “+1”). As the assignment required an attention to rhythm, but did not specify the necessity of using the Pulse character to create a rhythm, this student’s choice to work with the Pulse character points to both a self-guided movement beyond the requirements of the assignment, and a developing poetic sensitivity to words and texts as rhythmic bodies that effect meaning through performance. In Stepworks, rhythm can be created by working down the spreadsheet’s rows (as in the Hamilton or Unlimited Greatness pieces in Figures 2 and 3); however, this rhythm is reliant on the player’s interactive choices and personal sensitivity to the poetics. Using a pulse to effect rhythm overrides the player’s choice and assigns a rhythm to the contents within a single cell, performing them on a set delay following the user’s single-click.[6] Working with the pulse, then, the student is letting their creative and critical ownership of their poetic design lead them to a direct confrontation with the culturally-conditioned paradox of user-control and freedom in interactive environments. This point is explicitly evoked later in the reflection, as the student expands on the effects of the pulse, writing:

The timing of the words appearing on the screen … also enhance the impact and significance of certain words. My favorite example is how one word … “Hallelujah,” can be stretched out to take up 8 units of time, signifying the importance of just that one word
(see Figure 4, the leftmost text in the second row).
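The scheduling behavior the student describes can be sketched as a small timing model. This is a hypothetical illustration: the `schedule` function, the fragment/beat pairs, and the millisecond values are my assumptions for demonstration, not Stepworks’ actual pulse syntax or parser.

```python
# An illustrative model of pulse-based timing: within a single cell,
# each fragment is assigned a beat on the pulse's timeline (the "+1",
# "+2" offsets the student mentions) rather than appearing all at once
# on the click. Names and values are assumptions, not Stepworks syntax.

def schedule(fragments, pulse_ms):
    """Map (text, beat) pairs to onset times in milliseconds."""
    return [(text, beat * pulse_ms) for text, beat in fragments]

# one word stretched across several beats, in the spirit of the
# student's "Hallelujah" example
events = schedule([("Hal", 0), ("le", 2), ("lu", 4), ("jah", 6)],
                  pulse_ms=250)
```

The key design point the student discovered is captured here: the number after a fragment marks *when* it appears on the timeline, not *how long* it lasts.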

Indeed, across my courses students have demonstrated a willingness to go beyond the specifications of this assignment in order to fully realize their poetic vision. Besides the Pulse, students often explore syntax for “sampling” the spreadsheet’s contents to perform a kind of remix, customizing fonts or colors in the display, or adding multimedia like images, gifs, or sound. Thus, as Stepworks supports this kind of work, it simultaneously supports students’ hands-on learning of digital poetics, especially those popularized by works created with Flash.

Finally, I’d like to point to one more aspect of Stepworks that makes it an ideal platform for teaching e-literature through feminist critical making pedagogies. Because of its integration with Google Sheets, Stepworks supports collaboration, unfettered by distance or synchronicity. Speaking for my own classes and experiences teaching with Stepworks—particularly the Spring 2020 class—this is where the program really excels pedagogically, as it opens a space for teaching e-poetry through sharing ideas and poetic decisions, creating and experimenting “together,” and supporting students to learn from and with one another, even when the shared situatedness of the classroom is inaccessible. A powerful pedagogical practice in its own right, collaboration is also a core feminist technology, central to feminist praxis across disciplines, and a cornerstone of digital humanities work, broadly speaking. Indeed, it is one of the tenets of digital humanities that has contributed to the field’s disruption and challenge to expectations of humanities scholarship.

As an illustration of what can be done in the (virtual, asynchronous) classroom through a collaborative Stepworks piece, I will end this case study with a piece my students created to close out the Spring 2020 semester—a moment that, to echo the opening of this article, threw us into a space of technological inaccess, of temporal obsolescence. With many works of Flash-based e-lit from the original syllabus inaccessible, critical making through Stepworks became our primary—in some cases, our only—mode through which to engage with e-poetics, particularly those of interaction and kinetics. Working from a single Google spreadsheet, each student took a character-column and added a poem or song or lyric that was, in some way, meaningful to them in this moment. The resulting performance (Figure 4) appears as a multi-voiced, found text; watching it or playing it, it is almost as if you are in a room, surrounded by others, sharing a moment of speaking, hearing, watching, performing poetry.

Figure 4. A recording of a collaborative Stepworks piece of Found Poetry built by students in Sarah Laiola’s Spring 2020 Introduction to the Digital Humanities class. The piece presents text on a 3 by 3 grid, and each space on the grid performs the text of a different poem or song, chosen by the students.


Technological obsolescence will undoubtedly continue to present a challenge for teaching digital texts and electronic literatures. As our systems update into outdatedness, we face the loss of both readerly access to existing texts and writerly access to creative potentialities. Flash, which, as of this writing, is facing its final days, is just one example of this cycle and its effects on electronic and digital literatures. As I have shown here, however, even as platforms (like Flash) become obsolete for contemporary machines, critical making through newer platforms with low barriers to entry, like Stepworks, can offer productive counters to this loss, particularly from the writerly perspective of, in this case, kinetic poetics. Indeed, this approach can enhance the teaching of e-literature and digital textuality more broadly, in place of other inaccessible platforms. For instance, Twine, a free platform for creating text-based games that runs on contemporary browsers, is a productive space for teaching hypertextual storytelling in place of Storyspace, which, though still functional on contemporary systems, is cost-prohibitive at nearly $150; similarly, Kate Compton’s Tracery offers first-time coders an opportunity to create simple text generators, in contrast to comparatively more advanced languages like JavaScript, which demand such focus on the code that the lesson on the poetics of textual generation may be lost on overwhelmed students. Whatever the case may be, critical making through low-cost, easy-to-use contemporary platforms offers a robust space for teaching e-literary and digital media creation that maintains a feminist, inclusive, and equitable classroom ethos to counter technological obsolescence.
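To make the low-barrier-to-entry point concrete, the kind of grammar-driven generation that Tracery makes accessible can be sketched in a few lines. The toy expander below only mimics Tracery’s `#symbol#` substitution idea; the grammar, function, and vocabulary are my own illustrative assumptions, not Kate Compton’s library.

```python
import random

# A toy grammar expander in the spirit of Tracery: rules map symbols to
# lists of possible expansions, and "#symbol#" markers in a rule are
# replaced recursively. This mimics the idea only; it is not Tracery.

grammar = {
    "origin": ["the #noun# #verb#"],
    "noun": ["letter", "screen", "page"],
    "verb": ["flickers", "dissolves"],
}

def expand(symbol, grammar, rng):
    """Pick an expansion for symbol and resolve any nested #symbols#."""
    text = rng.choice(grammar[symbol])
    while "#" in text:
        before, inner, after = text.split("#", 2)
        text = before + expand(inner, grammar, rng) + after
    return text

line = expand("origin", grammar, random.Random(7))
```

A student editing only the word lists, not the expansion logic, can focus on the poetics of textual generation rather than on the mechanics of the code, which is precisely the pedagogical advantage described above.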


[1] Throughout this article, I am purposely using different capitalizations for referencing the digital humanities. Capital DH, as used here indicates the formalized field as such, while lowercase digital humanities refers to broad practices of digital humanities work that is not always recognizable as part of the field.

[2] Although I cite Svensson here, the term is taken from the 2011 MLA conference theme. Svensson, however, theorizes the big tent in this piece.

[3] Anecdotal data.

[4] For example, see Hélène Cixous’s écriture feminine, or feminist language-oriented poetry such as that by Susan Howe.

[5] The unessay is a form of student assessment that replaces the traditional essay with something else, often a creative work in any media or material. For examples, see Cate Denial’s blog reflecting on the assignment, Hayley Brazier and Heidi Kaufman’s blog post “Defining the Unessay,” or Ryan Cordell’s assignment in his Technologies of Text class.

[6] As a reminder, Stepworks texts are single-click interactions, where the entire contents of a cell in the spreadsheet are performed on a click.


Balsamo, Anne, Dale MacDonald, and John Winet. 2017. “AIDS Quilt Touch: Virtual Quilt Browser.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/acee4e20-acf5-4eb3-bb98-0ebaa5c10aaa#ch32.

Belojevic, Nina. 2017. “Glitch Console.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/b1dbaeab-f4ad-4738-9064-8ea740289503#ch20.

Borsuk, Amaranth, and Brad Bouse. 2012. Between Page and Screen. New York: Siglio Press. https://www.betweenpageandscreen.com.

Burtch, Allison, and Eric Rosenthal. 2017. “Mic Jammer.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/f8dcc03c-2bc3-499a-8c1b-1d17f4098400#ch11.

Chess, Shira. 2020. Play Like a Feminist. Playful Thinking. Cambridge: MIT Press.

D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. Ideas. Cambridge: MIT Press.

Grigar, Dene, and the ELL Team. 2020. “Electronic Literature Lab.” Electronic Literature Lab. Accessed June 30. http://dtc-wsuv.org/wp/ell/.

Endres, Bill. 2017. “A Literacy of Building: Making in the Digital Humanities.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/2acf33b9-ac0f-4411-8e8f-552bb711e87c#ch04.

FemTechNet. 2012. “Manifesto.” FemTechNet. https://femtechnet.org/publications/manifesto/.

FemtechNet White Paper Committee. 2013. “Transforming Higher Education with Distributed Open Collaborative Courses (DOOCS): Feminist Pedagogies and Networked Learning.” FemTechNet. https://femtechnet.org/about/white-paper/.

Flores, Leonardo. 2019. “Third Generation Electronic Literature.” Electronic Book Review, April. doi:https://doi.org/10.7273/axyj-3574.

Hayles, N. Katherine. 2006. “The Time of Digital Poetry: From Object to Event.” In New Media Poetics: Contexts, Technotexts, and Theories, edited by Adalaide Morris and Thomas Swiss, 181–210. Cambridge, Massachusetts: MIT Press.

Hayles, N. Katherine, and Ryan House. 2020. “How to Teach Electronic Literature.” Teaching Electronic Literature.

Knight, Kim A. Brillante. 2017. “Fashioning Circuits, 2011–Present.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/read/untitled-aa1769f2-6c55-485a-81af-ea82cce86966/section/05613b02-ec93-4588-98e0-36cd4336e7a0#ch26.

Losh, Elizabeth, and Jacqueline Wernimont, eds. 2018. Bodies of Information: Intersectional Feminism and Digital Humanities. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/projects/bodies-of-information.

Salter, Anastasia, and John Murray. 2014. Flash: Building the Interactive Web. Cambridge: MIT Press.

Sayers, Jentery, ed. 2017. Making Things and Drawing Boundaries: Experiments in the Digital Humanities. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press. https://dhdebates.gc.cuny.edu/projects/making-things-and-drawing-boundaries.

Slocum, Holly, Nicholas Schiller, Dragan Espenschied, and Greg Philbrook. 2019. “After Flash: Proposal for the 2019 National Endowment for the Humanities, Humanities Collections and Reference Resources, Implementation Grant.” Electronic Literature Lab. http://dtc-wsuv.org/wp/ell/afterflash/.

Stefans, Brian Kim. 2000. The Dreamlife of Letters. Electronic Literature Collection: Volume 1. https://collection.eliterature.org/1/works/stefans__the_dreamlife_of_letters.html.

Svensson, Patrik. 2012. “Beyond the Big Tent.” In Debates in the Digital Humanities, edited by Matthew K. Gold. Minneapolis: University of Minnesota Press.

Wardrip-Fruin, Noah. 2003. “From Instrumental Texts to Textual Instruments.” In Streaming Worlds. Melbourne, Australia. http://www.hyperfiction.org/texts/textualInstrumentsShort.pdf.

About the Author

Sarah Whitcomb Laiola is an assistant professor of Digital Culture and Design at Coastal Carolina University, where she also serves as coordinator for the Digital Culture and Design program. She holds a PhD in English from the University of California, Riverside, and specializes in teaching new media poetics, visual culture, and contemporary digital technoculture. Her most recent peer-reviewed publications appear in Electronic Book Review (Nov 2020), Syllabus (May 2020), Hyperrhiz (Sept 2019), Criticism (Jan 2019), and American Quarterly (Sept 2018). She will be joining the JITP Editorial Collective in 2021.


Numbering Ulysses: Digital Humanities, Reductivism, and Undergraduate Research


Ashplant: Reading, Smashing, and Playing Ulysses is a digital resource created by Grinnell College students working with Erik Simpson and staff and faculty collaborators.[1] In the process of creating Ashplant, the students encountered problems of data entry, some of which reflect the general difficulties of placing humanistic materials in tabular form, and some of which revealed problems arising specifically from James Joyce’s experimental techniques in Ulysses. By presenting concrete problems of classification, the process of creating Ashplant led the students to confront questions about the tendency of Digital Humanities methods to treat humanistic materials with regrettably “mechanistic, reductive and literal” techniques, as Johanna Drucker puts it. It also placed the students’ work in a century-long history of readers’ efforts to tame the difficulty of Ulysses by imposing numbering systems and quasi-tabular tools—analog anticipations of the tools of digital humanities. By grappling with the challenges of creating the digital project, the students grappled with the complexity of the literary material but found it difficult to convey that complexity to the readers of the site. The article closes with some concrete suggestions for a self-conscious and reflexive digital pedagogy that maintains humanistic complexity and subtlety for student creators and readers alike.

In calculating the addenda of bills she frequently had recourse to digital aid.
—James Joyce, Ulysses (17.681–82), describing Molly Bloom[2]

The growth of the Digital Humanities has been fertilized by widespread training institutes, workshops, and certifications of technical skills. As we have developed those skills and methods, we DHers have also cultivated a corresponding skepticism about their value. Katie Rawson and Trevor Muñoz, for example, write that humanists’ suspicions of data cleaning “are suspicions that researchers are not recognizing or reckoning with the framing orders to which they are subscribing as they make and manipulate their data” (Rawson and Muñoz 2019, 280–81). Similarly and more broadly, Catherine D’Ignazio and Lauren Klein describe in Data Feminism the need for high-level critique when our data does not fit predetermined categories—to move beyond questioning the categories to question “the system of classification itself” (D’Ignazio and Klein 2020, 105). When we have “recourse to digital aid,” to reappropriate Joyce’s phrase from the epigraph above, we break fluid, continuous (analog) information into discrete units. In the process, we gain computational tractability. What do we lose? What do we lose, especially, when we use digital tools in humanistic teaching, where we value the cultivation of fluidity and complexity? These recent critiques build on foundational work such as Johanna Drucker’s earlier indictment of the methods of quantitative DH:

Positivistic, strictly quantitative, mechanistic, reductive and literal, these visualization and processing techniques preclude humanistic methods from their operations because of the very assumptions on which they are designed: that objects of knowledge can be understood as self-identical, self-evident, ahistorical, and autonomous. (Drucker 2012, 86)

The problems that Drucker identifies echo the distinction that constitutes the category of the digital itself: whereas analog information exists on a continuous spectrum, digital data becomes computationally tractable by creating discrete units that are ultimately binary. Tractability can require a loss of complexity. One of Drucker’s examples involves James Joyce’s Ulysses: she evokes the history of mapping the novel’s Dublin as a signature instance of the “grotesque distortion” that can occur when we use non-humanistic methods to transform the materials of imaginative works (Drucker 2012, 94). At their worst, such methods can operate like the budget of Bloom’s day in Ulysses, which is an accounting of a complex set of interactions that obscures at least as much as it reveals, mainly by sweeping messy expenditures into a catch-all category called “balance” (17.1476). Making the numbers add up can render complexity invisible.[3]

I agree with Drucker’s point, albeit with some discomfort, as I am also the faculty lead of a digital project that involves mapping Ulysses, though in a different way. In this essay, I take up the pre-digital and digital history of transforming information about Joyce’s novel into structured data. Then I consider the concrete application of those methods in Ashplant: Reading, Smashing, and Playing Ulysses, a website that shares the scholarship of my Grinnell College students. Working on the site has brought us into the history of numbering Ulysses, for better and for worse, and shown us how Ulysses specifically—more than most texts—resists and undermines the very processes that give digital projects their analytical power. In creating Ashplant, we have found that undergraduate research provides an especially generative environment for breaking down the “unproductive binary relation,” in Tara McPherson’s words, between theory and practice in the digital humanities (McPherson 2018, 22).

Numbering Ulysses from The Little Review to the Database

Although created with twenty-first–century digital technologies, Ashplant takes part in a tradition that has built over the full century since the publication of Ulysses. Readers of Joyce’s text have long sought to discipline its complexity by creating reading aids structured like tabular data. That process begins with numbering: facing a novel that sometimes runs for many pages without a paragraph break, we readers have given ourselves a unique, identifying value for each line of the text. In database architecture, such a value is called—with unintentionally Joycean overtones—a primary key.[4] The primary key for Ulysses assigns the lines a value based on episode and line numbers. In its most conventional form, the line numbering is based on the Gabler edition of the novel. The episode-line key lets us point, for instance, to “Ineluctable modality of the visible”—the first line of the third episode—as 3.1.
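The database analogy can be made concrete with a short sketch. The mapping, function name, and quoted line below are illustrative only, assuming the Gabler line numbering the article describes; this is not Ashplant’s actual data model.

```python
# A sketch of the episode.line convention as a compound primary key:
# the pair (episode, line) uniquely identifies a line of the Gabler
# text, just as a primary key identifies a database row. The structure
# here is illustrative, not an actual Ulysses database.

lines = {
    (3, 1): "Ineluctable modality of the visible",
}

def cite(episode, line):
    """Render the conventional episode.line citation form, e.g. '3.1'."""
    return f"{episode}.{line}"

citation = cite(3, 1)
```

The point of the sketch is that once such a key exists, every annotation, map point, or cross-reference can be attached to it, which is what gives the convention its organizing power over the text.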

Ulysses is unusual in having such a reliably fixed convention for a work of prose. Prose normally resists stable line numbering because its line breaks change in response to variations of typesetting that we do not normally read as meaningful; thus arises the variation among editions of Shakespeare’s plays in the line numbering of prose passages. The difficulty of reading Ulysses, however, creates a desire for reliably numbered reference points, for stable ground upon which communities of readers can gather. Such numbering imposes orderly hierarchy upon a text that implicitly and explicitly resists the concept and practices of orderly hierarchy. The early history of numbering the chapters and pages of Ulysses reveals our modern standardization—and by extension the structured data in Ashplant—as the product of a century-old conflict between printed versions of Ulysses and efforts of readers to retrofit the text into tractable data.

The serialization of Ulysses in The Little Review gave readers their first opportunity to grasp Joyce’s text with names and numbers. In the first issue containing part of Ulysses, the numbers begin: issue V.11, for March 1918. The Table of Contents reads, “Ulysses, 1.” The heading of the piece itself, on three lines, is “ULYSSES / JAMES JOYCE / Episode 1.”

The opening header of the first episode of Ulysses in The Little Review, volume 5, number 11, March 1918.
Figure 1. The opening header of the first episode of Ulysses in The Little Review (photograph by the author from the copy in the Special Collections and Archives of Grinnell College).

The 1922 Shakespeare and Company edition removes the episode numbers of the Little Review installments and, indeed, removes most signposting numbers altogether. The volume has no table of contents. In the front matter, the sole indication of a section or chapter number is a page containing only the roman numeral I, placed a bit above and to the left of the center of the page.

Figure 2. The Shakespeare and Company page with only the roman numeral “I” (Joyce, 1922).

This page is preceded by one blank page and followed by another, after which the main text of the novel begins, with no chapter title or episode number.

The beginning of the first episode in the Shakespeare and Company edition, showing no numbering or other header above the text.
Figure 3. The beginning of the first episode in the Shakespeare and Company edition (Joyce, 1922).

The page has no number, either. The following page is numbered 4, so the reader can infer that this is page three, and that the page with the roman numeral I was page one of the book. Episodes two and three are also unnumbered, so the reader can infer retrospectively that the roman numeral I indicated a section rather than a chapter or episode number. At this point, that is to say, the only stable numbering from The Little Review—the episode numbers—has disappeared entirely and been replaced by section numbers (just three for the whole book) that reveal their signification only gradually.

To fill the void of stable numbering, early readers of Ulysses relied on supplemental texts that have structured the naming and numbering conventions of the text ever since: the two schemata that Joyce hand-wrote for Carlo Linati and Stuart Gilbert in 1920 and 1921, respectively. The schemata have retained their power in part because of the tantalizing (apparent) simplicity of their form: they organize information about the novel in a structure closely resembling that of a spreadsheet or relational database table. Both schemata use the novel’s eighteen episodes as their records, or tuples—the rows that act essentially as entries in the database—and both populate each record with data corresponding to a series of columns largely but not entirely shared between the two schemata. The relational structure of the Linati schema, for example, has a record for the first episode that identifies it with the number “1,” the title “Telemachus,” and the hour of “8–9” (Ellmann 1972).

Today, any number of websites re-create the schemata by making the quasi-tabular structure fully tabular, as does the Wikipedia page for the Linati schema (“Linati Schema for Ulysses”).

A screenshot of the Linati schema in Wikipedia, showing the tabular structure of the data in the web page.
Figure 4. A screenshot of the Linati schema in Wikipedia. (https://en.wikipedia.org/wiki/Linati_schema_for_Ulysses).


Joyce’s sketches mimic tabular data so neatly that many later versions of the schemata put them in tabular structure without comment. The episode names and numbers again prove their utility: though the two schemata contain different columns, a digital version can join them by creating a single, larger table organized by episode.[5]
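The join itself is trivial once both schemata share the episode number as a key. A minimal sketch in Python follows; the entries are abbreviated from the published schemata, and the data structure is mine:

```python
# Illustrative fragments of the two schemata, keyed by episode number.
# The columns and values are abbreviated from the published schemata.
linati = {
    1: {"title": "Telemachus", "hour": "8-9"},
    2: {"title": "Nestor", "hour": "9-10"},
}
gilbert = {
    1: {"scene": "The Tower", "art": "Theology"},
    2: {"scene": "The School", "art": "History"},
}

# Because both tables use the episode number as key, a single pass
# merges them into one larger table organized by episode.
joined = {
    ep: {**linati.get(ep, {}), **gilbert.get(ep, {})}
    for ep in sorted(set(linati) | set(gilbert))
}
print(joined[1])
# → {'title': 'Telemachus', 'hour': '8-9', 'scene': 'The Tower', 'art': 'Theology'}
```

The shared key does all the work: any number of supplementary tables can be joined this way, so long as each adopts the conventional episode numbering.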

Organizing information by episode number has become standard practice in analog and digital supplements to Ulysses. In the analog tradition, readers have long used supplemental materials, organized episode by episode, to assist in the reading of the novel. An offline reader might prepare to read episode three, for instance, by reading a brief summary of the episode on Wikipedia, then consulting the schemata entries for the episode in Richard Ellmann’s Ulysses on the Liffey, then reading the longer summary in Harry Blamires’s New Bloomsday Book. In each case, that reader would look for materials associated with episode number three or the associated name “Proteus.” As much as any other work of literature, Ulysses invites that kind of hand-crafted algorithm for the reading process, all organized by the supplements’ adoption of conventional episode numbers and, often, the Gabler edition’s line numbers as well.

Digital projects, including Ashplant, rely on these episode and line numbers even more fundamentally. One section of Ashplant—the most conventional section—is called “Ulysses by episode.” It organizes and links to resources created by other writers and scholars, from classic nodes of reading Ulysses such as the Linati and Gilbert schemata to contemporary digital projects such as Boston College’s “Walking Ulysses” maps and the textual reproductions of the Modernist Versions Project.[6] This part of Ashplant, like those digital supplements to Ulysses and a long list of others, relies on the standard numerical organization: it has eighteen sections, corresponding to the eighteen episodes of the novel.

The underlying structure of these pages illustrates the importance of the episode number as a primary key: at the level of HTML markup, all eighteen pages point to the same file (“episode.php”) and therefore contain the same code.[7] The only change that happens when the user moves from the page about episode one to that for episode two, for example, is that the value of a single variable, “episode_number,” changes from “01” to “02.” Based on that variable, the page changes the information it displays, mainly by altering database calls that say, essentially, “Look at this database table and give me the information in the row where episode_number equals” the value of that variable. The database call looks like this:

/* Performing SQL query */
$query = "SELECT a.episode_number, a.episode_name, b.episode_number, b.ls_time, b.ls_color, b.ls_people, b.ls_scienceart, b.ls_meaning, b.ls_technic, b.ls_organ, b.ls_symbols, b.gs_scene, b.gs_hour, b.gs_organ, b.gs_color, b.gs_symbol, b.gs_art, b.gs_technic
FROM episode_names a, schemata b
WHERE (a.episode_number='$episode_number') and (a.episode_number=b.episode_number)";
$result = mysql_query($query) or die("Query failed : " . mysql_error());

The user’s input provides the value of episode_number (from 01 to 18) when the page loads.[8] Then, the page with this code queries the database to gather the information—the episode’s name, the schemata entries, and much more—from two database tables joined by the column containing that two-digit number in each table. The process gains its effectiveness from the conventionality of the episode numbers. The price of that efficacy is the loss of a good deal of information—from quirks of typesetting and handwriting, to alternative approaches to numbering the episodes, to the Italianate episode names from the schemata—that have at least as much textual authority as do our later simplifications.
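For comparison, the same lookup can be sketched in a modern, parameterized form, which avoids interpolating user input directly into the SQL string. The sketch below uses Python and SQLite rather than the site’s actual PHP and MySQL, and includes only a few of the schemata columns:

```python
import sqlite3

# Toy versions of the two tables described above; only a few columns
# are reproduced, and the sample data covers a single episode.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE episode_names (episode_number TEXT PRIMARY KEY, episode_name TEXT);
    CREATE TABLE schemata (episode_number TEXT PRIMARY KEY, gs_scene TEXT, gs_art TEXT);
    INSERT INTO episode_names VALUES ('01', 'Telemachus');
    INSERT INTO schemata VALUES ('01', 'The Tower', 'Theology');
""")

episode_number = "01"  # in Ashplant this value comes from the user's navigation
# The "?" placeholder passes the value separately from the SQL text,
# rather than building the query by string interpolation.
row = conn.execute(
    """SELECT a.episode_name, b.gs_scene, b.gs_art
       FROM episode_names a JOIN schemata b
         ON a.episode_number = b.episode_number
       WHERE a.episode_number = ?""",
    (episode_number,),
).fetchone()
print(row)  # → ('Telemachus', 'The Tower', 'Theology')
```

The logic is identical to the site’s query: two tables joined on the shared episode number, with one variable selecting the row.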

Hierarchy and Classification

If there could be that which is contained in that which is felt there would be a chair where there are chairs and there would be no more denial about a clatter. A clatter is not a smell. All this is good.
—Gertrude Stein, Tender Buttons (Stein 2018, 53)

Line numbers, mainly those of the Gabler edition, impose further numerical discipline on Ulysses. The line numbers produce a hierarchy that allows humans and machines alike to arrive at a shared understanding of textual location:

Line (beginning at 1 and incrementing by 1 within each episode)

Episode (1–18, or “Telemachus” to “Penelope”)

Ulysses (the whole)

This rationalization functions so powerfully, not only in digital projects but also in conventional academic citation, because it assigns to each location in the text—with some exceptions, such as images—a line, and every line belongs to an episode, and every episode belongs to Ulysses. The hierarchy of information allows shared understanding of reference points.

Though created before the age of contemporary digital humanities, the line/episode/book hierarchy produces a kind of standardization—simple, technical, and reductive—that is enormously useful for digital methods. For example, I embedded in Ashplant a script that combs parts of the site for references to Ulysses, based on a standard citation format of “(U [episode].[line(s)]),” and then automatically generates a list of references to an episode, with a link to each source page.
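The scan itself can be sketched as a simple pattern match. The regular expression, function, and sample pages below are my illustration, not Ashplant’s actual code:

```python
import re

# Matches citations of the form "(U 5.123)" or "(U 5.123-24)".
# The pattern is illustrative; Ashplant's own script may differ.
CITATION = re.compile(r"\(U\s+(\d{1,2})\.(\d+(?:[-–]\d+)?)\)")

def references_to(episode, pages):
    """Return (page_name, line_reference) pairs that cite the given episode."""
    hits = []
    for name, text in pages.items():
        for ep, lines in CITATION.findall(text):
            if int(ep) == episode:
                hits.append((name, lines))
    return hits

# Hypothetical site pages containing citations in the standard format.
pages = {
    "lexicon/parallax": "See the discussion at (U 8.110).",
    "lexicon/flower": "Compare (U 5.62) with (U 8.1).",
}
print(references_to(8, pages))  # → [('lexicon/parallax', '110'), ('lexicon/flower', '1')]
```

Because the citation format is conventional, a few lines of pattern matching suffice to build the index; the hierarchy does the analytical work.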

A screenshot of an index of line references from Ashplant, showing a machine-generated list of references from Episode Five of Ulysses.
Figure 5. A screenshot of an index of line references from Ashplant.

These references point to entries in our collective lexicon of key terms in Ulysses. The students’ lexicon entries together constitute a playful, inventive exploration of the book’s language. The automated construction of the list itself, however, relies on an episode-line hierarchy that has none of that playfulness or invention.

For playful inventiveness in a hierarchy of location, we can turn instead to Stephen Dedalus in Joyce’s Portrait of the Artist as a Young Man, who reads the hierarchical self-location he has written on the flyleaf of his geography book:

Stephen Dedalus
Class of Elements
Clongowes Wood College
Sallins
County Kildare
Ireland
Europe
The World
The Universe (Joyce 2003, 12)

Stephen’s hierarchy is personal and resistant, not only containing the humorously self-centered details of his hyper-local situation but also excluding, for instance, any layer between “Ireland” and “Europe” that would acknowledge Ireland’s containment within the United Kingdom. It also confuses categories: although the list seems geographical—appropriately, given its location on a geography book—it contains some elements that would require additional information to become geographical (“Stephen Dedalus,” “Class of Elements”), and others whose mapping would be contentious (“Ireland,” “Europe”). It even contains an element, “Class of Elements,” that constitutes a self-reflexive joke about the impulse to classification that the list satirizes.

Such knowing irony does not infuse the hierarchies that drive much of our work in the digital humanities. That work requires schemes of classification that rely on one element’s containment within another. Consider what “Words API” claims to be “knowing” about words:

A screenshot from the 'About' section of Words API describing the hierarchical relations of certain terms, such as 'a hatchback is a type of car'.
Figure 6. A screenshot from the “About” section of Words API describing the hierarchical relations of certain terms (https://www.wordsapi.com).

An API, or Application Programming Interface, provides methods for different pieces of software to communicate with one another in a predictable way. Words API, “An API for the English Language,” performs this function by adding hierarchical metadata to every word. That metadata, in turn, allows other software to draw on that information by searching for all the words that refer to parts of the human body, for instance, or for singular nouns. In other kinds of DH applications, such as textual editions that are part of the XML-based Text Encoding Initiative, the hierarchical relationships are often hand-encoded: feminine rhyme is a type of rhyme, and rhyming words are sections of lines, which are elements of stanzas, and so forth.
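Such a hand-encoded hierarchy can be sketched in miniature. The markup below is a simplified, TEI-like illustration, not actual TEI encoding:

```python
import xml.etree.ElementTree as ET

# A simplified, TEI-like nesting: a stanza contains lines,
# and lines contain rhyming words tagged with a rhyme type.
doc = ET.fromstring("""
<stanza>
  <l>A line that ends with a <rhyme type="feminine">token</rhyme></l>
  <l>Another that ends with <rhyme type="feminine">spoken</rhyme></l>
</stanza>
""")

# The containment hierarchy makes queries like this one trivial.
feminine = [r.text for r in doc.iter("rhyme") if r.get("type") == "feminine"]
print(feminine)  # → ['token', 'spoken']
```

The power and the limitation are the same: the query works because every rhyme is inside a line, and every line inside a stanza, a containment the encoder has already decided upon.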

The utility of these techniques is not surprising. Stanzas do generally consist of lines, transitive is a type of verb. The reliance on these encoded hierarchies echoes the methods of New Criticism, such as Wellek and Warren’s hierarchical sequence of image, metaphor, symbol, and myth—for them, the “central poetic structure” of a work (Wellek and Warren 1946, 190). For contemporary scholars more invested in decentering and poststructuralism, however, the echo of New Critical hierarchies in DH is unwelcome. Centrality implies exclusion; structure implies oversimplification; formal hierarchy implies social hierarchy. Or, as John Bradley writes, “XML containment often represents a certain kind of relationship between elements that, for want of a better term, can be thought of as ‘ownership’” (Bradley 2005, 145). Even for scholars working specifically to counteract the hierarchical containments of XML, the attempt can lead—as in Bradley’s work—to the linking of multiple hierarchical structures rather than the disruption of hierarchical organization itself.

Problems of “Subtle Things”

Addressing the challenges of encoding historical materials in XML, Bradley describes the limitations of hierarchical classification. “Humanities material,” he writes, “sometimes does not suit the relational model,” and he cites the Orlando Project’s opposition to placing its data in a relational database because it wanted to say more “subtle things” than the relational model could express (Bradley 2005, 141).[9] Bradley responds to that challenge with an ingenious method of integrating the capabilities of SQL databases into XML, solving the problem of expressing how a name in a historical document might refer to one of three people with discrete identifiers in the database. Even this problem, however, involves a relatively simple kind of uncertainty, representable on a line between “certain” and “unlikely.” The data being encoded is used for humanistic purposes, but the problem itself is not especially humanistic: it assumes an objective historical reality that can be mapped, with varying levels of confidence, onto stable personal identifiers.

The data of Ulysses presents additional difficulties, many of which are specifically literary, as the students working on Ashplant have repeatedly found. In one case, a group of them sought to document every appearance of every character in Ulysses. That data has clear utility for readers: when made searchable, it could assist a reader by identifying, for a given episode and line number, the active characters, perhaps adding a brief annotation to each name. Many earlier aids to reading the novel offer descriptions of the main characters, but the students set out to develop a resource that was more comprehensive and more responsive to a reader’s needs at a given place in the text. The students quickly discovered that identifying and describing a handful of major characters is easy; identifying all of them and their textual locations is not just a bigger problem but a fundamentally different one.

Take, for example, the novel’s dogs. We had established early on that non-human entities could be characters in our classification, given Joyce’s attribution of speech and intention to, say, hats and bars of soap. At least one dog seemed clearly to reach the level of “character”: Garryowen, the dog accompanying the Citizen in the “Cyclops” episode. According to one of the satirical interpolations of the episode, Garryowen has attained, “among other achievements, the recitation of verse” (12.718–19), a sample of which is included in the text. For the purposes of our data and accompanying visualization, we therefore needed basic information about the character, such as its name and when it appears in Ulysses.

The name creates the first problem. The description of the dog in “Cyclops” identifies it as “the famous old Irish red setter wolfdog formerly known by the sobriquet of Garryowen and recently rechristened by his large circle of friends and acquaintances Owen Garry” (12.715–17). In itself, the attribution of two names to one being is not a problem. For instance, Leopold Bloom can be “Bloom” or “Poldy,” but switching between them does not rename him. In Ulysses, such an entity could take the names “Garryowen” and “Owen Garry.” As long as the underlying identity is stable, this kind of multiplicity (two names at two different times) can fit easily into a relational database.
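That simple case can be sketched with an alias table that maps many names onto one stable identity. The schema is hypothetical, not Ashplant’s:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One stable character row; many names may point to it.
    CREATE TABLE characters (id INTEGER PRIMARY KEY, description TEXT);
    CREATE TABLE aliases (name TEXT, character_id INTEGER REFERENCES characters(id));
    INSERT INTO characters VALUES (1, 'the Citizen''s dog');
    INSERT INTO aliases VALUES ('Garryowen', 1), ('Owen Garry', 1);
""")

# Either name resolves to the same underlying identity.
ids = conn.execute(
    "SELECT DISTINCT character_id FROM aliases WHERE name IN ('Garryowen', 'Owen Garry')"
).fetchall()
print(ids)  # → [(1,)]
```

The sketch also makes visible the assumption that is about to break down: every alias must resolve to a single, stable character_id.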

But Ulysses does not work so simply. Within the fiction, the renaming of the dog has questionable reality-status.[10] The “rechristening” has no durability within the narrative; it exists only in the context of one satirical interpolation, and subsequent references to the dog revert to “Garryowen.” Arguably, if our task is to describe the characters that are real within the world of the novel, the name “Owen Garry” has no status at all, as it attaches more to the voice of the temporary narrator than to what we could imagine as the real (fictional) dog.

However, we could just as plausibly say that, in the fiction, “Owen Garry” must not only exist but also be recorded as a separate character representing the temporary re-creation of Garryowen by this narrative voice. All of this messiness anticipates the further complications of the hallucinatory “Circe” episode, in which Bloom is followed by a dog that metamorphoses among species—spaniel, retriever, terrier, bulldog—until Bloom addresses it as “Garryowen,” and it transforms into a wolfdog. The dog might say, as Stephen does, “I am other I now” (9.205). Joyce’s method relies on the unresolvability of these ambiguities.

Emily Mester, the student who took the lead on the Ashplant character project, brought the transforming dog to our working group as a problem of data entry. We discussed how the problem stemmed from a breakdown of classification: rather than allowing the reader to rely on conventional relationships between sets and their elements (living things include humans and other animals, which include dogs, which include species, which include individual dogs), Joyce’s transforming dog implies a relation in which the individual dog contains multiple species. Our conversation led us to consider the dog as a device through which Joyce upends hierarchies of containment by attaching the name “Garryowen” to a dog, or an assortment of dogs, whose characteristics arise from the surrounding narration.

We realized together that the exercise of entering data into our spreadsheet led us to new questions about earlier scholarship on Ulysses. We found, for example, that Vivien Igoe assumed that Joyce’s Garryowen represented a historical dog of the same name, as in her statement that “Garryowen, who appears in three of the episodes in Ulysses (‘Cyclops’, ‘Nausicaa’, and ‘Circe’), was born in 1876” (Igoe 2009, 89). Although Igoe subsequently notes that Joyce distorts Garryowen for the purposes of fiction, this sentence still relies on several related presumptions for the purposes of historicist explanation: that the historical and fictional Garryowens are the same, that the Garryowen of Ulysses has the species identification of the historical dog (“famous red setter” [Igoe 2009, 89] rather than the “Irish red setter wolfdog” of “Cyclops”), and that within Ulysses, the fictional dog maintains a constant identity across episodes.[11] Reading phrasing such as Igoe’s in light of our questions about Garryowen led the Ashplant group to consider the confrontation between certain kinds of historicist methods and poststructural skepticism.

As the students continued to develop Ashplant, they discovered more and more examples of data entry problems that gave rise to probing discussions of Ulysses and, often, of how fictions work and how readers receive them. We sought, for instance, to map the global imagination of Ulysses, resisting the tendency Drucker had criticized of producing simplistic, naïve Dublin-centric visualizations of the “action” of the book. Instead, our map included only places outside of Dublin. For that subproject, guided by the student Christopher Gallo, we asked, How do we map an imaginary place? One that a character remembers by the wrong name? One that seems to refer to a historical event but puts it in the wrong place? For another part of the project, led by Magdalena Parkhurst, we created a visualization of the Blooms’ bookshelf that has been disrupted in “Ithaca,” and we needed to represent books with missing, incorrect, and imaginary information according to our research into historical sources.

Again and again, we found that the parts of Ashplant that appeared to involve the simplest kinds of data entry prompted us to have some of our deepest conversations about Ulysses, often leading us to further reading in contemporary criticism and theory. We found that, as Rachel Buurma and Anna Tione Levine have argued,


Building an archive for the use of other researchers with different goals, assumptions, and expectations requires sustained attention to constant tiny yet consequential choices: “Should I choose to ignore this unusual marking in my transcription, or should I include it?” “Does this item require a new tag, or should it be categorized using an existing one?” “Is the name of the creator of this document data or metadata?” (Buurma and Levine 2016, 275–76)


Though our project is not archival, our experience has aligned with Buurma and Levine’s argument. Undergraduate research, which “has long emphasized process over product, methodology over skills, and multiple interpretations over single readings,” is well situated to foster the “sympathetic research imagination” necessary for creating useful digital projects. As our process became product, we felt more powerfully the constraints of using the “reductive and literal” tools that concern Drucker. No matter how nuanced and far-reaching our conversation about Garryowen had been, for instance, the needs of our spreadsheet compelled us to choose: is/are the transforming dog(s) of the “Circe” appearances Garryowen or not?[12]

We found that the machinery of data entry and visualization performed what Donna Haraway calls the “god trick,” producing the illusion of objectivity, even when our conversations and methods aspired to privilege, in Haraway’s words, “contestation, deconstruction, passionate construction, webbed connections, and hope for transformation of systems of knowledge and ways of seeing” (Haraway 1988, 585).[13] Our timeline-based visualization of character appearances, for example, could not resist the binary choice of yes or no; even a tool that could represent probability would not be capable of representing non-probabilistic indeterminacy in the way that our conversation had. We needed to find other ways to make Ashplant into a site that produces a humanistic experience for its readers as well as its creators.

Ways Forward for Humanism in Undergraduate Digital Studies

This essay will not fully solve the problem it addresses: that digital methods gain some of their power by selecting from and simplifying complex information, sometimes in ways that run contrary to humanistic practices. Like Buurma and Levine, however, I find that the scale and established practices of undergraduate research create opportunities to do digital work that minimizes the problem and may, in fact, point to approaches that can inform humanistic digital work in general. With that goal in mind, I offer a few propositions based on our Ashplant team’s experience to date.

  1. Narrate the problems. Undergraduate research often operates at a scale that allows for hand-crafted digital humanities, in which the consequences of data manipulation can become the explicit subject of a project. The structure of Ashplant allows us to explain the problems of documenting character and location in Ulysses, and it also provides space for a wider range of student research: an analysis of Bloom’s scientific thinking and mis-thinking in “Ithaca,” a piece about Ulysses and the film Inside Llewyn Davis that uses hyperlinks to take a circular rather than linear form, and students’ artistic responses to the novel. The scale of undergraduate research allows it to become an arena for confrontation with and immersion in the problems created by the intersection of data science and the humanities.
  2. Connect conventional research to digital outcomes. Ashplant has at its heart an annotated bibliography, for which students read, cite, and summarize existing scholarship. Creating such a bibliography in digital form—specifically, with the bibliographical information in a database accessed through our web interface—enables searchability and linking. The bibliography thus becomes the scholarly backbone of the site, linked from and linking to every other section. Contributing to this part of the project grounds the students in the kind of reading and writing they have done for their other humanistic work, while also illustrating the affordances of the digital environment.
  3. Use the genre of the hypertext essay. Writing essays that combine traditional scholarly citation with other means of linking—bringing a project’s data to bear on a problem, connecting the project to other digital collections and resources—allows students to experience and demonstrate the impact of their digital projects on scholarly argumentation. Ashplant therefore includes a section of topical essays and theoretical explorations, addressing subjects from dismemberment to music. These essayistic materials link to and, importantly, are linked from the parts of the site that are based more explicitly on structured data. Our visualization of the global locations of Ulysses can lay the foundations for discussions of the Belgian King Leopold and the postcolonial Ulysses, for example, and a tool we developed for finding phonemic patterns in the text became the prompt for Emily Sue Tomac, a student specializing in linguistics as well as English, to undertake a project on Joyce’s use of vowel alternation in word sets such as tap/tip/top/tup. Hypertext essays can reanimate the complexities and contestations hidden by the god trick.
  4. Make creative expression a pathway to DH. My initial design of Ashplant involved an unconventional division of labor. For the most part, students wrote the content of the site, while I took the roles of faculty mentor, general editor, and web developer. As the site evolved, so did those roles, and I perceived an important limitation of our model: students were rarely responsible for the visual elements of our user interface, and their interest in that part of the project was growing. The students saw the creative arts as a means of resisting the constraints of digital methods, and some of them created art projects that now counterbalance the lexical content of the site. When I designed a new course on digital methods for literary studies, therefore, I put artistic creativity first.[14] In that class, the students learned frameworks for discussing the affordances and effects of electronic literature, and we applied those frameworks to texts such as “AH,” by Young-hae Chang Heavy Industries; Illya Szilak and Cyril Tsiboulski’s Queerskins: A Novel; and Ana María Uribe’s Tipoemas y Anipoemas.[15] These works model a range of approaches to interactivity and digital interfaces. Therefore, all of our subsequent work in the semester—from the creation of the students’ own works of electronic literature, to the collection and presentation of geographical data, to writing Python scripts for textual analysis—takes place after this initial framing of digital work as a set of creative practices.

The complexity of humanistic inquiry does not involve solving well-defined problems with clear endpoints and signs of success. Our wholes have holes. As I have worked with my students on Ulysses, we have come to embrace a practice of digital humanities that puts creativity, resistance, and questioning at its heart, even (or especially) when we use the tabular and relational structures that appear at first to build walls within the imaginative works we study. Asking questions as simple as “What do we call this chapter of Ulysses?” and “When does this character appear?” has led my students to think and play and draw, representing contours of absurdity and art that help draw new maps of undergraduate study in the humanities.

In some ways, that new mapping takes part in the tradition I have described here: the translation and even reduction of textual complexity into reference materials that help students grasp Ulysses and begin the process of making meaning of and around it. In other ways, however, Ashplant has led us to a practice of digital humanities more aligned with Tara McPherson’s emphasis on “the relations between the digital, the arts, and more theoretically inflected humanities traditions” (McPherson 2018, 13). The scale of undergraduate pedagogy allows spreadsheets, essays, maps, and paintings to grow from the same intellectual soil, maintaining the value that structured data has long provided while preserving the complex energies of humanistic inquiry.


[1] “Ashplant” is the word Joyce uses to describe the walking stick of Stephen Dedalus. As the site explains, Stephen’s ashplant “is not a simple support but his ‘casque and sword’ (9.296) that he uses for everything from dancing and drumming to smashing a chandelier.” We likewise sought to take the conventional idea of a digital site supporting the reading of Ulysses and create varied and surprising possibilities for its use.

[2] I cite Ulysses (Joyce 1986) by episode and line number throughout, following the numbering convention that is about to become the subject of this essay.

[3] On the other hand, when Bloom later relates the events of his day to his wife in words, his account includes similar evasions and omissions. Joyce’s larger point seems less about the deceptions of quantification than about the many modes of deception humans can use when sufficiently motivated to hide something.

[4] More technically, in a database, a primary key is a column or combination of columns that have a unique value for every row. For example, in Ulysses, line number alone cannot be a primary key because every episode has, for instance, a line numbered 33. Therefore, the primary key requires the combination of episode and number: 1.33, 4.33, and 17.33 are all unique values. The other main characteristic of a primary key is that it cannot contain a null value in any row.

[5] As we have seen, the numbering convention of labeling the episodes from one to eighteen follows the lead of the Little Review episodes but not of the Shakespeare & Company edition, which uses only section numbers. The schemata employ yet another system, primarily restarting the episode numbering at the beginning of each section (so the fourth episode, which begins the second section, becomes a second episode “1”).

[6] These projects’ addresses are http://ulysses.bc.edu/ and http://web.uvic.ca/~mvp1922/, respectively.

[7] HTML (Hypertext Markup Language) is the standard markup language for web pages. HTML describes the structure of the data in a page, along with some information about formatting, so that it can be rendered by a browser. HTML does not contain executable scripts (or “code”). To create executable scripts and access information in databases, Ashplant embeds the scripting language PHP within its HTML code and connects to databases created with MySQL. Using this combination of PHP and MySQL is a common approach to creating dynamic web pages.

[8] This numbering involves another small translation: using two digits—01 and 02 rather than 1 and 2—allows the numbers to sort correctly when compared as text, where “10” would otherwise precede “2”.
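
The difference is easy to demonstrate in a few lines of Python: text sorts character by character, so unpadded numbers fall out of numeric order, while zero-padding restores it.

```python
# As text, "10" < "2" because sorting compares character by character.
unpadded = ["1", "2", "10", "11"]
print(sorted(unpadded))   # ['1', '10', '11', '2']

# Zero-padding makes textual order match numeric order.
padded = [s.zfill(2) for s in unpadded]
print(sorted(padded))     # ['01', '02', '10', '11']
```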

[9] The ability of XML-based schemes to contain non-hierarchical information remains a point of lively contention. The subject prompted a lengthy conversation on the HUMANIST listserv in early 2019 under the heading “the McGann-Renear debate.” That conversation is archived at https://dhhumanist.org/volume/32/.

[10] “Cyclops” implies yet another variant of the name: the Citizen familiarly calls the dog “Garry,” and the mock-formal narrator calls the poet-dog “Owen,” reversing the usual functions of first and last names and implying another identity called “Garry Owen.”

[11] Igoe’s phrasing also conflates the birth years of the historical and fictional Garryowens, although the historical dog would have had an implausible age of around twenty-eight years at the time Ulysses takes place.

[12] For whatever it’s worth, the unsatisfying decision we made was to classify the “Garryowen” (and “Owen Garry”) of “Cyclops” and “Nausicaa” as the character “Garryowen,” then create a separate character called “Circe Dog” to capture the transforming species of the dog(s) of that episode.

[13] Haraway’s skepticism echoes the sentiments of the “foundational crisis” of mathematics about a century ago, when Joyce was conceiving Ulysses and when, in 1911, Oskar Perron wrote, “This complete reliability of mathematics is an illusion, it does not exist, at least not unconditionally” (Engelhardt 2018, 14).

[14] That course, “Lighting the Page: Digital Methods for Literary Study,” was designed in partnership with my student collaborator Christina Brewer, who made especially valuable contributions to the unit on electronic literature.

[15] “AH” is online at http://www.yhchang.com/AH.html, Queerskins at http://online.queerskins.com/, and Uribe’s poetry at http://collection.eliterature.org/3/works/tipoemas-y-anipoemas/typoems.html.


Blamires, Harry. 1996. The New Bloomsday Book. London: Routledge.

Bradley, John. 2005. “Documents and Data: Modelling Materials for Humanities Research in XML and Relational Databases.” Literary and Linguistic Computing 20, no. 1: 133–51.

Buurma, Rachel Sagner and Anna Tione Levine. 2016. “The Sympathetic Research Imagination: Digital Humanities and the Liberal Arts.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein, 274–279. Minneapolis: University of Minnesota Press.

D’Ignazio, Catherine and Lauren F. Klein. 2020. Data Feminism. Cambridge: MIT Press.

Drucker, Johanna. 2012. “Humanistic Theory and Digital Scholarship.” In Debates in the Digital Humanities, edited by Matthew K. Gold, 85–95. Minneapolis: University of Minnesota Press.

Ellmann, Richard. 1972. Ulysses on the Liffey. Oxford: Oxford University Press.

Engelhardt, Nina. 2018. Modernism, Fiction, and Mathematics. Edinburgh: Edinburgh University Press.

Haraway, Donna. 1988. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies 13, no. 3: 575–599.

Igoe, Vivien. 2009. “Garryowen and the Giltraps.” Dublin James Joyce Journal 2, no. 2: 89–94.

Joyce, James. 1922. Ulysses. Paris: Shakespeare and Company. http://web.uvic.ca/~mvp1922/ulysses1922/

Joyce, James. 1986. Ulysses, edited by Hans Walter Gabler with Wolfhard Steppe and Claus Melchior. New York: Vintage.

Joyce, James. 2003. A Portrait of the Artist as a Young Man, edited by Seamus Deane. New York: Penguin.

McPherson, Tara. 2018. Feminist in a Software Lab: Difference + Design. Cambridge: Harvard University Press.

Rawson, Katie and Trevor Muñoz. 2019. “Against Cleaning.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein, 279–92. Minneapolis: University of Minnesota Press.

Stein, Gertrude. 2018. Tender Buttons: Objects, Food, Rooms, edited by Leonard Diepeveen. Peterborough: Broadview.

Wellek, René and Austin Warren. 1946. Theory of Literature. New York: Harcourt.


This essay names some of the contributors to Ashplant, but dozens of student, faculty, and staff collaborators have made important contributions to the project, and the full accounting of gratitude for their work is on the site’s “About” page at http://www.math.grinnell.edu/~simpsone/Ulysses/About/index.php. I also thank Amanda Golden, Elyse Graham, and Brandon Walsh for their insightful comments on earlier versions of this piece.

About the Author

Erik Simpson is Professor of English and Samuel R. and Marie-Louise Rosenthal Professor of Humanities at Grinnell College. He is the author of two books: Literary Minstrelsy, 1770–1830 and Mercenaries in British and American Literature, 1790–1830: Writing, Fighting, and Marrying for Money. His current research concerns digital pedagogy and, in collaboration with Carolyn Jacobson, the representation of spoken dialects in nineteenth-century literature.

A greyscale map with circles over countries, the size and darkness of which indicate density. The US is the darkest, followed by India, Indonesia, Viet Nam, West Africa, Europe, and the Caribbean.

Data Fail: Teaching Data Literacy with African Diaspora Digital Humanities


This essay examines the authors’ experiences working collaboratively on Power Players of Pan-Africanism, a data curation and data visualization project undertaken as a directed study with undergraduate students at Salem State University. It argues that data-driven approaches to African diaspora digital humanities, while beset by challenges, promote both data literacy and an equity lens for evaluating data. Addressing the difficulties of undertaking African diaspora digital humanities scholarship, the authors discuss their research process, which focused on using archival and secondary sources to create a data set and designing data visualizations. They emphasize challenges of doing this work: from gaps and omissions in the archives of the Pan-Africanism social movement to the importance of situated data to the realization that the original premises of the project were flawed and required pivoting to ask new questions of the data. From the trials and tribulations—or data fails—they encountered, the authors assess the value of the project for promoting data literacy and equity in the cultural record in the context of high school curricula. As such, they propose that projects in African diaspora digital humanities that focus on data offer teachers the possibility of engaging reluctant students in data literacy while simultaneously encouraging students to develop an ethical lens for interpreting data beyond the classroom.

What can data visualization tell us about the scope and spread of Pan-Africanism during the first half of the 20th century, and what insights does undertaking this research offer for teaching data literacy? These questions were at the heart of a directed study during the 2019–2020 academic year, in which we, a professor (Roopika Risam) and two students (initially Jennifer Mahoney in Fall 2019, with Hibba Nassereddine joining in Spring 2020), examined the utility of data visualization for African diaspora digital humanities and its possibilities for cultivating students’ interest in and knowledge of data-driven research. Part of Mahoney’s participation in Salem State University’s Digital Scholars Program, which introduces students to humanities research using computational research methods, the directed study offered her the experience of undertaking interdisciplinary independent research (a rare opportunity in the humanities at Salem State University), an introduction to working with data and data visualization, and the opportunity to broaden her knowledge of African diaspora literature and history. While the process of undertaking this research included many twists and turns and, ultimately, did not yield the insights we had anticipated, it opened up new areas of inquiry for computational approaches to the African diaspora, critical insights about the value of introducing students to African diaspora digital humanities, and the pedagogical imperatives of data literacy. As we propose, data projects on the African diaspora offer the possibility of introducing students both to important stories and voices that are often underrepresented in curricula and to the ethics of working with data in the context of communities that have been dehumanized and oppressed by unethical uses of data.

The State of Data in African Diaspora Digital Humanities

In recent years, Black Digital Humanities has grown tremendously in scope. The African American Digital Humanities (AADHum) Initiative at the University of Maryland, College Park, led initially by Catherine Knight Steele and now by Marisa Parham, and the Center for Black Digital Research at Penn State, led by P. Gabrielle Foreman, Shirley Moody-Turner, and Jim Casey, attest to increased institutional investment in digital approaches to Black culture. An extensive list of projects, created by the Colored Conventions Project, demonstrates the variety of methodologies, histories, and voices being explored through Black Digital Humanities scholarship. Since Kim Gallon outlined the case for Black Digital Humanities in her essay in the 2016 volume of the Debates in the Digital Humanities series, she has, indeed, “set in motion a discussion of the black digital humanities by drawing attention to the ‘technology of recovery’ that undergirds black digital scholarship, showing how it fills the apertures between Black studies and digital humanities” (Gallon 2016, 42–43). Black Digital Humanities is, as scholars like Gallon (2016), Parham (2019), Safiya Umoja Noble (2019), and others propose, fundamentally transnational. An emphasis on the African diaspora has, thus, become an essential dimension of Black Digital Humanities. The Digital Black Atlantic (University of Minnesota Press, 2021), which Risam co-edited with Kelly Baker Josephs for the Debates in the Digital Humanities series, is the first volume to articulate the scope of African diaspora digital humanities as a multidisciplinary, transnational assemblage of scholarly practices spanning a range of disciplines (e.g., literary studies, history, library and information science, musicology) and methodologies (e.g., community archives, library collection development, textual analysis, network analysis).

African diaspora digital humanities, we contend, offers students opportunities to engage in active learning through participation in civically engaged scholarship. Such forms of authentic learning are “participatory, experimental, and carefully contextualized via real-world applications, situations, or problems” (Hancock et al. 2010, 38). They draw on scholarship that supports deep learning through the experiences of actively constructing knowledge (Downing et al. 2009; Ramsden 2003; Vanhorn et al. 2019). In the context of digital humanities, as Tanya Clement (2012) suggests, “Project-based learning in digital humanities demonstrates that when students learn how to study digital media, they are learning how to study knowledge production as it is represented in symbolic constructs that circulate within information systems that are themselves a form of knowledge production” (366). As Risam (2018) has argued, undertaking this work in the context of postcolonial and diaspora studies “empowers students to not only understand but also intervene in the gaps and silences that persist in the digital cultural record” (89–90). As projects like Amy E. Earhart and Toniesha L. Taylor’s White Violence, Black Resistance demonstrate, authentic learning through research-based projects in African diaspora studies “teach recovery, research, and digitization skills while expanding the digital canon” (Earhart and Taylor 2016, 252). Such projects allow undergraduate students to develop both digital and data literacy skills, which are often only implicitly taught in undergraduate courses, particularly in the humanities (Carlson et al. 2015; Battershill and Ross 2017; Anthonysamy 2020).

Approaches to the African diaspora that foreground working with data have shown particular promise as the technologies of recovery for which Gallon advocates. The Transatlantic Slave Trade Database, which aggregates data from slave ship records, was first conceived in the early 1990s by David Eltis, David Richardson, and Stephen Behrendt, researchers who were compiling data on enslavement and decided to join forces. Over the decades, the team and database expanded to include 36,000 voyages. The Transatlantic Slave Trade Database is now partnering with other projects on enslavement through Michigan State University’s Enslaved project, which is working to develop interoperable linked open data between these various databases. Projects like In the Same Boats, directed by Kaiama L. Glover and Alex Gil, with contributions from a team of scholars of the African diaspora (including Risam), demonstrate the value of a transnational, data-driven approach to more recent facets of African diasporic culture. The directors compiled data sets from their partners identifying the locations where Black writers and artists found themselves throughout the twentieth century and created data visualizations that show their intersections. While co-location of these figures at a given time does not necessarily mean they met, the project opens up new research questions about relationships and collaborations between them. The possibility of creating new avenues of transnational research is, perhaps, the most critical contribution of African diaspora digital humanities projects that focus on data.

But working with data in the context of the African diaspora is not an unambiguous proposition. Writing about the Transatlantic Slave Trade Database in her essay “Markup Bodies,” Jessica Marie Johnson argues, “Metrics in minutiae neither lanced historical trauma nor bridged the gap between the past itself and the search for redress” (2018, 62). In Dark Matters, Simone Browne notes that data has played a role in racialized surveillance from transatlantic slavery to the present and has been complicit with social control (2015, 16). COVID Black, a task force on Black health and data, directed by Kim Gallon, Faithe Day, and Nishani Frazier, along with a team, addresses racial disparities from the COVID-19 pandemic through data. Recognizing and addressing these issues is critical for African diaspora digital humanities projects that focus on data, particularly when working with undergraduate humanities students because of the twin challenges of students’ general lack of exposure to African diaspora studies and to data literacy in curricula.

Understanding Data through the Lens of Pan-Africanism

All of these issues came together in our project, Power Players of Pan-Africanism, which collects data on and develops data visualizations of attendees of Pan-Africanist gatherings from 1900 to 1959. Pan-Africanism, a social movement of great significance during the 20th century, fostered a sense of solidarity and political organization between people in Africa and African-descended people around the world. The timeframe encompasses the First Pan-African Conference in 1900, Pan-African Congresses held between 1919 and 1945, the Bandung Conference held in 1955, the Congresses of Black Writers and Artists in 1956 and 1959, the Afro-Asian Writers’ Conference in 1958, and assorted events during this time period that created space for people of Africa and its diaspora to meet and discuss their common political, social, and economic concerns. We chose to include events fostering Afro-Asian connections as well because they offered opportunities for Pan-Africanist exchange in the broader context of Afro-Asian solidarity. Additionally, we ended in 1959 because 1960—widely known as the “Year of Africa”—saw the successes of decolonization movements in Africa and significantly changed the stakes of the conversation among Pan-Africanists.

The idea for Power Players of Pan-Africanism emerged as a side project from Risam’s work on The Global Du Bois, a data visualization project that explores how computational, data-driven research challenges, complicates, and assists with how we understand W.E.B. Du Bois’s role as a global actor in anticolonial struggles, and from her contribution of the Du Bois data set to Glover and Gil’s In the Same Boats. The project itself was undertaken as a collaboration between Risam and Mahoney, who together designed a plan for research, data collection and curation, and data modeling. We were joined in the Spring 2020 semester by Hibba Nassereddine, another student in the Digital Scholars Program, who collaborated with us on research for the data set, the iterative process of designing research questions based on the data, and the prototyping of data visualizations.

The first challenge we encountered is that Pan-Africanism is largely unexamined within both high school and college curricula in the US. Despite its significance for understanding anti-colonial and anti-racist movements in the US and abroad, Pan-Africanism is a topic that goes largely unexplored in the classroom. However, its emphasis on global cooperation between Africa and its diaspora is poised to open up significant insights on the African diaspora, global history, political science, and literary studies, among others. The thriving network of intellectuals, artists, writers, and politicians who participated in Pan-Africanist movements reveals rich global connections and world travel that brought Black people of the US, Caribbean, Europe, and Africa into communication and collaboration during the first half of the 20th century. Thus, Mahoney, and later Nassereddine, first had to learn about an entirely new area of study in preparation for their participation in this project.

Data literacy is also a sorely missing part of curricula in high schools and colleges in the US. Therefore, both Mahoney and Nassereddine had to learn about working with data as well. We focused on the concept that data is situated, an idea that Jill Walker Rettberg has articulated (Rettberg 2020; Risam 2020). Data is not, as many assume, objective and neutral; it is shaped by how it is collected (who is collecting it, what terms they are using, what their biases are) and by how it is represented (what choices are made in data visualization and how those choices affect the ways audiences interpret and receive the data). We examined principles of data visualization, influenced by the work of Edward Tufte, Alberto Cairo, and Isabel Meirelles, to consider how data visualization risks misrepresenting or skewing data. Thus, to be prepared to undertake the project, Mahoney, and later Nassereddine, needed a firm grounding in data literacy and data ethics, which they had not received elsewhere in their education.

Recognizing the challenges of working with data in the context of the African diaspora, Risam and Mahoney set out to identify connections between attendees at Pan-Africanist events. By identifying conferences and other events that created space for Pan-Africanists to meet, we believed we could bring to life a data set that would reveal connections between figures in Pan-Africanist networks. Would network analysis reveal new key figures beyond names like Du Bois, George Padmore, Kwame Nkrumah, Marcus Garvey, Jomo Kenyatta, and Léopold Sédar Senghor?

Right away, we encountered another issue: the lack of readily available data sets for this work. The absence was not particularly surprising, as it reflects historical and ongoing marginalization of scholarship on the African diaspora more generally and Pan-Africanism specifically within academic knowledge production and archives. As Risam (2018) argues, the lack of preservation and digitization of material related to communities within the African diaspora and in the Global South is a major deterrent to undertaking digital humanities projects. Therefore, research to create a data set was a necessary precursor to data visualization.

This process turned out to be far more difficult than expected. We spent months digging into the history of Pan-Africanism, using monographs, journal articles, digital archives, theses and dissertations, historic Black newspapers, organization newsletters, and primary source documents from the events (such as published pamphlets listing attendees and captioned photographs) to identify events where Pan-Africanism was an important focus and to uncover names of delegates and other participants. Explicitly named “Pan-African” events (First Pan-African Conference, First Pan-African Congress, Second Pan-African Congress, etc.) were the easiest to identify. However, Pan-Africanist conferences went by many other names: writers’ conferences, peace conferences, and anti-colonial conferences. Furthermore, a single event often appears under multiple names, a consequence of the relative lack of attention Pan-Africanism has received in academic discourse. In these cases, we labeled events by the names with which they most commonly appear in academic and archival sources. For example, we identify one event as the “All-African People’s Conference,” held in Accra, Ghana in December 1958, based on corroboration of sources, but this event is also referred to as the “Congress of African Peoples” (Adi and Sherwood 2003). Even more confusingly, Immanuel Geiss’s The Pan-African Movement (1974), arguably the first scholarly treatment of Pan-Africanism, refers to the All-African People’s Conference as the “Sixth Pan-African Congress,” while the Sixth Pan-African Congress typically refers to an event held in Dar es Salaam, Tanzania in 1974, in the lineage of earlier Pan-African Congresses but in a different mode given the acceleration of decolonization from 1960 on. Some events were also unnamed.
In one such case, we learned that the West African activist, editor, and teacher Garan Kouyaté held an event in Paris in 1934, which we internally referred to as “Kouyaté’s Event.” While we kept running into Kouyaté’s name in other sources, we were unable to find substantially more information about that particular event. This became a common theme in our research: individuals clearly played important roles in the Pan-African movement but do not commonly appear among the most cited figures in scholarship on Pan-Africanist thought. These omissions suggest that there is still much more research on Pan-Africanism to be done, but the inclusion of such individuals in our data set offers researchers new names of figures whose influence on Pan-Africanism should be pursued.

Despite this challenge, the research process often delivered moments of validation, when the simple act of locating multiple obscure sources confirming an event made us grateful that we could prove it happened. Therefore, the work of creating the data set was itself a scholarly activity, using both primary and secondary sources to validate the existence of lesser-known Pan-Africanist gatherings that deserve better recognition. For example, in The Pan-African Movement (1974), Geiss introduces an event called “The Negro in the World Today.” Harold Moody, a Jamaican-born physician residing in London, hosted the event in July 1934 to coincide with a visit from a Gold Coast delegation, including the prince and politician Nana Ofori Atta. Geiss explains, “One of the motives given for convening was the racial discrimination which faced coloured workers and students in Britain” (1974, 357). This event, among others, led to the Fifth Pan-African Congress in October 1945 in Manchester, England. However, finding any details of who attended “The Negro in the World Today” proved fruitless, and we almost began to question whether this event was significant enough to be included in the data set. A bright moment in our research occurred when we found the event named in a newspaper article titled “Africans Hold Important Three-Day Conference in London” in the July 21, 1934 issue of The Pittsburgh Courier (ANP 1934, 2). Confirming the existence of this event was a cause for celebration, and such exuberant moments made worthwhile the many excruciating hours of research that turned up nothing. All told, we identified close to seventy events within our timeframe that fit our criteria of explicitly creating space for Pan-African connections among Black participants from around the world.

More obstacles appeared as we worked to identify the names of delegates and other participants in these events. In some cases, sources identified only the names of the organizations being represented, not the names of the people from those organizations who were in attendance. Often, we had much more success identifying the numbers of delegates and attendees at events than locating their names. Knowing the numbers, however, gave us a sense of the percentage of attendee names that we had confirmed. For example, we know that there were over 200 delegates and 5,000 participants at the Fourth Pan-African Congress, held in New York in 1927, but we have successfully identified only twenty-six of those names. In our most successful case, the Conference on Africa, held in New York in 1944, we identified the names of all 112 delegates, as well as additional participants and observers.

Among the many names that we added to our data set, we encountered further discrepancies we had to address. Some of the same participants were listed under different names in multiple sources, requiring additional research to verify. In some cases, this was a matter of typos within the sources. For example, a participant named “William Fonaine” attended the First International Conference of Negro Writers and Artists, and a participant named “W. F. Fontaine” attended the Second International Conference of Negro Writers and Artists. We were able to confirm that William F. Fontaine attended both events. In other cases, delegates had changed their names, which was not unusual at the time. In some instances, people changed their names to embrace their African roots and resist the imposition of colonial languages on their identities. T. Ras Makonnen was born George Thomas N. Griffiths in 1900 but changed his name in 1935. Kwame Nkrumah, born Francis Nwia Kofi Nkrumah in 1909, changed his name to Kwame Nkrumah in 1945 (and later became the first Prime Minister and then first President of Ghana). In other cases, differences in non-Anglophone names reflected divergent transliteration practices. We chose to include delegates’ country or colony of origin as well, which introduced a further level of inconsistency. Of course, we encountered changes in names reflecting transitions from colony to independent nation, such as Gold Coast to Ghana. But there were more puzzling inconsistencies as well. In many cases this reflected the mobility of participants in Pan-Africanism, their shifting national allegiances, and/or their affiliation with multiple locales. For others, however, it reflects inconsistencies in archival materials. In perhaps the oddest case, we found “Miguel Francis Delanang” from Ethiopia attending the Bandung Conference and a “Miguel Francis Delanang” from Ghana at the same conference. Based on our research, this is the same person. 
While we have done our best to identify as many discrepancies as we could, we fully expect that others exist that we have not caught because they are less obvious, such as aliases or pseudonyms that we have not yet connected to another name. Therefore, we view our data set not as a static and finished object but a living, collaborative document for other researchers who want to contribute to it.

Although we could easily spend years continuing our research, we decided that we had enough data on a subset of twenty-one events to begin prototyping our data visualizations. When we began the project, we were curious about the networks among the participants. Would a network show significant connections among participants? How dense would these networks be? Which figures would be the hubs in the network? Would they be the usual suspects, or might new voices emerge? To explore these questions, we created a force-directed graph—and the results were virtually meaningless. There was little density in the network and few connections among attendees. Light clustering appeared around W.E.B. Du Bois, widely known as the father of Pan-Africanism, which was hardly surprising.

These disappointing results prompted several teachable moments about data and research design. We looked closely at our data set to understand why the network visualization seemed little more than noise. While we had expected to find participants attending more than one event, our twenty-one events gave us over one thousand names, the majority attending only one event. Logically, it was unsurprising that better-known figures like Du Bois attended more events because they had access to the means to do so. Also, since our events spanned six decades punctuated by major upheavals like World Wars I and II, the rise of the Soviet Union, and the beginning of decolonization, the power players in the movement changed as their investment in Pan-Africanism waxed and waned over time. We also knew, based on the information we had found about the total numbers of participants, that some of our data sets were incomplete—and may always be incomplete. Without accounting for the situatedness of the data we had curated, the results simply did not make sense.
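
The structural problem can be made concrete with a small sketch: in a co-attendance network, edges recur only when people appear together at more than one event, so a data set in which most names occur once yields a sparse, nearly meaningless graph. The attendee lists below are invented toy data, not our actual data set, though they borrow two figures named above:

```python
from itertools import combinations
from collections import Counter

# Hypothetical toy data: most attendees appear at only one event,
# mirroring the shape of our actual data set.
events = {
    "Event A": ["Du Bois", "Padmore", "Attendee 1", "Attendee 2", "Attendee 3"],
    "Event B": ["Du Bois", "Attendee 4", "Attendee 5", "Attendee 6"],
    "Event C": ["Du Bois", "Padmore", "Attendee 7", "Attendee 8"],
}

# Count how many events each pair of people attended together.
edges = Counter()
for attendees in events.values():
    for pair in combinations(sorted(attendees), 2):
        edges[pair] += 1

# Ties recurring across events come only from repeat attendees.
recurring = {pair: n for pair, n in edges.items() if n > 1}
print(recurring)   # only the Du Bois–Padmore tie recurs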

We also recognized that our initial hypothesis about the existence of a network with well-defined connections was an erroneous assumption. Engagement of delegates with an event did not necessarily imply extended participation in the global dimensions of a movement. This realization led us to reconsider how we imagine what “participating” in a social movement means. In a conversation about these challenges, digital humanist Quinn Dombrowski suggested that perhaps what is most meaningful lies not in the network but in the brokenness of the network—in what a network visualization cannot represent. There may, for example, be forms of participation that cannot be captured within the bounds of face-to-face gatherings. These might be captured, instead, through correspondence between those engaged in Pan-Africanism. There might also be local effects of an individual’s attendance at an event that similarly would not manifest in a network visualization of participants. Rather than offer a clear picture of Pan-Africanism, our data set and meaningless network visualization opened up a new set of questions about the role of digital humanities in understanding Pan-Africanism.

This misstep was also an opportunity to explore the iterative nature of project design with students. Digital humanists, after all, are not unaccustomed to encountering failure and pivoting to new research questions and methods to see what those methods make possible (Dombrowski 2019; Graham 2019). Engaging with iterative project design and negotiating the inevitable errors offers undergraduate students the opportunity to develop both creativity and problem-solving skills (Pierrakos et al. 2010; Shernoff et al. 2011; Wood and Bilsborow 2013). We began to ask new questions about our data set and continued developing prototypes to see if they offered more meaningful insight on the data.

One question that emerged was how to visualize the data in a way that would make the events and delegate information more easily navigable than reading a spreadsheet. We experimented with a sunburst data visualization, which shows hierarchical relationships within data. The top level of the hierarchy focused on decades, then years, then events, and finally participants. The sunburst visualization allowed us to organize the data and provide easy access to a complex data set, while also representing the data proportionally (showing which decades and years included the most events and which events included the largest numbers of delegates). Another question we considered was how our data might speak to the reach of Pan-Africanism both geographically and temporally. We created two maps to examine this question. The first, a static map, simply dropped pins at the locations of the nearly seventy events we had identified, revealing a broad geographical scope for Pan-Africanist gatherings—in the US, the Caribbean, Europe, Africa, and Asia. A second map, focusing on the twenty-one events for which we had identified a significant number of participants, mapped the attendees’ colonies and countries of origin.
This dynamic heat map, animated to aggregate participant data over time, demonstrated the significant geographic scope of Pan-Africanism and its growth and spread over the first sixty years of the 20th century. Critically, we understood these visualizations as representations of particular elements of our data set, each shedding light on different details within the data but none showing the entire picture. While this is a feature of digital humanities scholarship that engages with data more generally—data visualizations are representations that slice and sample data sets, showing particular aspects of the data—it is a critical way of understanding data-driven approaches to African diaspora digital humanities.
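The decade → year → event → participant hierarchy that drives a sunburst chart can be prepared as a simple nested structure before it is handed to any charting tool. The sketch below is illustrative only: it uses plain Python with hypothetical field names (`year`, `event`, `delegate`) and placeholder rows, not our actual data set.

```python
def build_hierarchy(rows):
    """Nest flat spreadsheet-style records into the decade -> year ->
    event -> delegates hierarchy used by a sunburst chart."""
    tree = {}
    for row in rows:
        decade = f"{(row['year'] // 10) * 10}s"   # e.g. 1927 -> "1920s"
        (tree.setdefault(decade, {})
             .setdefault(row["year"], {})
             .setdefault(row["event"], [])
             .append(row["delegate"]))
    return tree

# Placeholder rows standing in for the curated data set.
rows = [
    {"year": 1919, "event": "Pan-African Congress (Paris)", "delegate": "Delegate A"},
    {"year": 1927, "event": "Pan-African Congress (New York)", "delegate": "Delegate B"},
    {"year": 1927, "event": "Pan-African Congress (New York)", "delegate": "Delegate C"},
]
tree = build_hierarchy(rows)
# Top-level wedges are decades ("1910s", "1920s"); at each ring, wedge
# sizes can then be drawn proportionally to the delegate counts beneath.
```

Counting the leaves at each level of such a tree yields exactly the proportional view described above: which decades and years included the most events, and which events drew the largest numbers of delegates.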

Teaching (and Learning) Data Literacy with African Diaspora Digital Humanities

Despite the challenges of this work, we came away from the experience with key insights for both scholarship of the African diaspora and pedagogy. Risam was reminded that when working in the context of a subject that has been marginalized in the broader landscape of scholarly knowledge production, we are inherently limited by what archives have preserved and what scholarship has covered. Our research is encumbered by what Risam (2018) has described as the omissions of the cultural record, and as much as we can undertake the important work—like curating data sets—to avoid reproducing and amplifying these gaps, we inevitably must contend with fragments of information and the larger question of what data can and cannot reveal about the African diaspora. Although this knowledge ultimately proved frustrating, it was profound for Mahoney and Nassereddine in their first foray into working with data. Risam also found the experience an instructive lesson in how to teach humanities students to engage with data when we miss the mark—e.g. when our presumptions about the network failed to pan out. While scientific methods in STEM prompt students to contemplate and negotiate failure, this is not typically foregrounded in humanities methodologies (Henry et al. 2019; Melo et al. 2019; Croxall and Warnick 2020). However, this project offered Risam the opportunity to encourage students to move away from assumptions and be open to the new insights that emerge from a challenge. As Mahoney and Nassereddine are both students pursuing their teaching licenses in English, Risam used this experience as an opportunity to model reflective practice for the heartbreaks we encounter in both digital humanities research and in teaching—sometimes one’s brilliant idea does not prove to be so in execution, and the appropriate response is not to shut down and yield to failure but to pivot—ask questions, reassess, and re-plan.

From this experience, Mahoney had the opportunity to delve deeply into archival research and scholarship on the African diaspora for the first time. She was also surprised to learn that many high school teachers and professors with whom she discussed her work had not heard of Pan-Africanism, reflecting the lack of coverage of this powerful movement within high school and college curricula. At the same time, projects like ours are examples of how we can engage students in addressing these gaps in both curriculum and the cultural record (Risam 2018; Hill and Dorsey 2020; Thompson and McIlnay 2019; Dallacqua and Sheahan 2020; Davila and Epstein 2020). This project also led Mahoney to realize that often we are left with more questions than answers. For example, what breakthroughs or achievements for the African diaspora did Pan-Africanist gatherings create? How were these participants, who faced travel or visa restrictions, funding their travels for these events? Mahoney also discovered the moments of serendipity, joy, and surprise that are part of the research experience, in the way it opens up a virtually limitless garden of forking paths to explore. She was particularly excited to uncover the significance of women to Pan-Africanism. The Fourth Pan-African Congress in New York in 1927, for example, was organized primarily by women. Although women’s names are not counted among the key figures of Pan-Africanism, through the curation of our data set, Mahoney identified that Amy Ashwood Garvey, the first wife of well-known Pan-Africanist Marcus Garvey, arguably played a more significant role in Pan-Africanism than her husband. Aside from one out-of-print biography, Lionel M. Yard’s Biography of Amy Ashwood Garvey, 1897–1969, there is little research focused on Ashwood Garvey, but Mahoney was able to reconstruct her role.
Ashwood Garvey used her father’s credit to help Garvey found the Universal Negro Improvement Association in Jamaica, and she worked with Garvey in the US, where they were married and divorced within two years. After their separation, Ashwood Garvey committed herself to Pan-Africanism, co-founding the Nigerian Progress Union and the International African Friends of Abyssinia (later the International African Service Bureau). Additionally, she was a respected speaker at Pan-Africanist and other political events throughout Europe, the Caribbean, the United States, and Africa. After organizing the Fifth Pan-African Congress in Manchester, England in 1945, Ashwood Garvey spent several years in Africa speaking to women and children and raising money for schools, lecturing in Nigeria, residing for two years as a guest of the Asantehene in Kumasi, Ghana, and adopting two daughters in Monrovia, Liberia. Later in her life, she opened the Afro-Woman Service Bureau in London. Mahoney began to recognize that these questions emerged as a consequence of the relative lack of scholarly attention that Pan-Africanism has received in spite of its significance, which is a reflection of the biases within the cultural record—and in curriculum—that favor knowledge production on canonical histories, figures, and movements of the Global North over the stories and voices of the Global South (Akua 2019; Lehner and Ziegler 2019; Span and Sanya 2019; Caldwell and Chávez 2020). This experience also led Mahoney to recognize the importance of incorporating the voices of Black writers and artists engaged in Pan-Africanism into her classroom as a high school teacher.

From her crash course in data literacy while working on the project, Mahoney also realized that digital humanities must be included in the high school English Language Arts classroom. Contextualizing her experiences in her prior coursework on English teaching methods and technology teaching methods, Mahoney came to understand digital humanities as a way of teaching data literacy to her own students. In Massachusetts, where Mahoney will be teaching, high school teachers are beholden to the Massachusetts Curriculum Frameworks, which are based on Common Core Standards. In 2016, Massachusetts released Digital Literacy Standards, but there has been no incentive, accountability, or professional development provided to support their implementation. African diaspora digital humanities, in particular, Mahoney recognized, facilitates students’ digital literacy while furthering the essential goal of expanding the canon in the classroom to ensure inclusive representation for all students. Focusing on the two together allows teachers to move past perceived barriers—such as the cost of adding new books to curriculum or lack of interest from colleagues—to work towards justice and equity through students’ engagement with data. In the context of working with informational texts in the Common Core Standards, data literacy encourages students to understand the ethics of data and data visualizations—How was data collected? Who collected the data? What questions were asked? What terminology was used to ask the questions and how might that have informed the response? What is the difference between quantitative and qualitative data? What implicit messages appear in data visualizations? What stories can they tell and what are their limits?

We, therefore, propose that African diaspora digital humanities has an essential role to play in pedagogy, particularly at the high school level. Reading and analyzing data sets and data visualizations is a cross-disciplinary skill that needs to be incorporated across the curriculum, and English Language Arts teachers have a responsibility to ensure that students are prepared to understand data, as a cornerstone of literacy. Teaching data literacy holds the possibility of appealing to students who might struggle with or be less interested in literature, allowing teachers to leverage their engagement with data sets and data visualization into deeper connections to the practices of reading and analyzing texts, while building their knowledge of the social value of data literacy (Kjelvik and Schultheis 2019; Špiranec et al. 2019; Bergdahl et al. 2020). Furthermore, it acquaints students with the iterative nature of research and interpretation, while building their capacity to recognize failure and to redirect their efforts towards new avenues of inquiry that may be more fruitful. This is not a matter of “grit”—the troubling emphasis on underserved students’ attitudes towards perseverance rather than on the structural oppressions that impede learning (Barile 2014; Duckworth 2016; Stitzlein 2018)—but of strengthening critical thinking skills, particularly when working with English language learners (Parris and Estrada 2019; Smith 2019; Yang et al. 2020). Working with data of the African diaspora also contributes to greater diversity within curricula, while encouraging students to recognize the power dynamics at play in whose voices and experiences are preserved in the artifacts that form our cultural record.
Ensuring that students have the opportunity to learn about the Black writers and artists who were the power players of Pan-Africanism in the context of data literacy offers teachers the possibility of promoting equity in the classroom and developing students’ ability to use their knowledge to interpret data through an ethical lens beyond the classroom.


Akua, Chike. 2019. “Standards of Afrocentric Education for School Leaders and Teachers.” Journal of Black Studies 51, no. 2 (December): 107–27. https://doi.org/10.1177/0021934719893572.

Adi, Hakim, and Marika Sherwood. 2003. Pan-African History: Political Figures From Africa and the Diaspora Since 1787. London: Routledge.

Associated Negro Press. 1934. “Africans Hold Important Three-Day Conference in London.” The Pittsburgh Courier, July 21, 1934.

Anthonysamy, Lilian. 2020. “Digital Literacy Deficiencies in Digital Learning Among Undergraduates.” In Understanding Digital Industry, edited by Siska Noviaristanti, Hasni Mohd Hanafi, and Donny Trihanondo, 133–36. London: Routledge.

Barile, Nancy. 2014. “Is ‘Getting Gritty’ the Answer?: Can Grit Solve All Your Students’ Problems? This Urban High School Teacher Shares Her Experiences.” Educational Horizons 93, no. 2 (December): 8–9. https://doi.org/10.1177/0013175X14561418.

Battershill, Claire and Shawna Ross. 2017. Using Digital Humanities in the Classroom: A Practical Introduction for Teachers, Lecturers, and Students. London: Bloomsbury Academic.

Bergdahl, Nina, Jalal Nouri, and Uno Fors. 2020. “Disengagement, Engagement and Digital Skills in Technology-enhanced Learning.” Education and Information Technologies 25: 957–983. https://doi.org/10.1007/s10639-019-09998-w.

Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke University Press.

Cairo, Alberto. 2019. How Charts Lie: Getting Smarter about Visual Information. New York: Norton.

Caldwell, Kia Lilly, and Emily Susanna Chávez. 2020. Engaging the African Diaspora in K–12 Education. New York: Peter Lang Publishing Group.

Carlson, Jake, Megan Sapp Nelson, Lisa R. Johnston, and Amy Koshoffer. 2015. “Developing Data Literacy Programs: Working with Faculty, Graduate Students and Undergraduates.” Bulletin of the Association for Information Science and Technology 41, no. 6 (August/September): 14–17.

Clement, Tanya. 2012. “Multiliteracies in the Undergraduate Digital Humanities Curriculum: Skills, Principles, and Habits of Mind.” In Digital Humanities Pedagogy: Practices, Principles, and Politics, edited by Brett D. Hirsch, 365–88. Cambridge: Open Book Publishers.

Croxall, Brian, and Quinn Warnick. 2020. “Failure.” In Digital Pedagogy in the Humanities: Concepts, Models, and Experiments, edited by Rebecca Frost Davis, Matthew K. Gold, Katherine D. Harris, and Jentery Sayers. https://digitalpedagogy.hcommons.org/keyword/Failure.

Dallacqua, Ashley K., and Annmarie Sheahan. 2020. “Making Space: Complicating a Canonical Text Through Critical, Multimodal Work in a Secondary Language Arts Classroom.” Journal of Adolescent & Adult Literacy 64, no. 1 (July/August): 67–77. https://doi.org/10.1002/jaal.1063.

Davila, Denise, and Elouise Epstein. 2020. “Contemporary and Pre–World War II Queer Communities: An Interdisciplinary Inquiry Via Multimodal Texts.” English Journal 110, no. 1 (September): 72–79.

Dombrowski, Quinn. 2019. “Towards a Taxonomy of Failure.” http://quinndombrowski.com/?q=blog/2019/01/30/towards-taxonomy-failure.

Downing, Kevin, Theresa Kwong, Sui-Wah Chan, Tsz-Fung Lam, and Woo-Kyung Downing. 2009. “Problem-based Learning and the Development of Metacognition.” Higher Education 57: 609–621.

Duckworth, Angela. 2016. Grit: The Power of Passion and Perseverance. New York: Scribner.

Earhart, Amy E. and Toniesha L. Taylor. 2016. “Pedagogies of Race: Digital Humanities in the Age of Ferguson.” In Debates in the Digital Humanities 2016, edited by Matthew K. Gold and Lauren F. Klein, 251–264. Minneapolis: University of Minnesota Press.

Eltis, David, et al. 2020. The Transatlantic Slave Trade Database. https://www.slavevoyages.org.

Gallon, Kim. 2016. “Making the Case for Black Digital Humanities.” In Debates in the Digital Humanities 2016, edited by Matthew K. Gold and Lauren F. Klein, 43–49. Minneapolis: University of Minnesota Press.

Gallon, Kim et al. 2020. COVID Black. https://www.cla.purdue.edu/academic/sis/p/african-american/covid-black/team.html.

Geiss, Imanuel. 1974. The Pan-African Movement. New York: Africana Publishing Company.

Glover, Kaiama L. and Alex Gil. 2020. In the Same Boats. https://sameboats.org.

Graham, Shawn. 2019. Failing Gloriously and Other Essays. Grand Forks, ND: The Digital Press.

Hancock, Thomas, Stella Smith, Candace Timpte, and Jennifer Wunder. 2010. “PALs: Fostering Student Engagement and Interactive Learning.” Journal of Higher Education Outreach and Engagement 14, no. 4. https://openjournals.libs.uga.edu/jheoe/article/view/798/798.

Henry, Meredith A., Shayla Shorter, Louise Charkoudian, Jennifer M. Heemstra, and Lisa A. Corwin. 2019. “FAIL Is Not a Four-Letter Word: A Theoretical Framework for Exploring Undergraduate Students’ Approaches to Academic Challenge and Responses to Failure in STEM Learning Environments.” CBE—Life Sciences Education 18, no. 1 (Spring): 1–17. https://doi.org/10.1187/cbe.18-06-0108.

Hill, Craig, and Jennifer Dorsey. 2020. “Expanding the Map of the Literary Canon Through Multimodal Texts.” In Handbook of the Changing World Language Map, edited by Stanley D. Brunn and Roland Kehrein, 77–89. Cham, Switzerland: Springer.

Johnson, Jessica Marie. 2018. “Markup Bodies: Black [Life] Studies and Slavery [Death] Studies at the Digital Crossroads.” Social Text 36, no. 4 (2018): 57–79. https://doi.org/10.1215/01642472-7145658.

Johnston, Brenda, Peter Ford, Rosamond Mitchell, and Florence Myles. 2011. Developing Student Criticality in Higher Education: Undergraduate Learning in the Arts and Social Sciences. London: Bloomsbury Publishing.

Kjelvik, Melissa K., and Elizabeth H. Schultheis. 2019. “Getting Messy with Authentic Data: Exploring the Potential of Using Data from Scientific Research to Support Student Data Literacy.” CBE—Life Sciences Education 18, no. 2 (Summer): 1–18. https://doi.org/10.1187/cbe.18-02-0023.

Lehner, Edward and John R. Ziegler. 2019. “Re-Conceptualizing Race in New York City’s High School Social Studies Classrooms.” In Handbook of Research on Social Inequality and Education, edited by Sherrie Wisdom, Lynda Leavitt, and Cynthia Bice, 24–45. Hershey, Pennsylvania: IGI Global.

Meirelles, Isabel. 2013. Design for Information. Beverly, Massachusetts: Rockport Press.

Melo, Marijel, Elizabeth Bentley, Ken S. McAllister, and José Cortez. 2019. “Pedagogy of Productive Failure: Navigating the Challenges of Integrating VR into the Classroom.” Journal of Virtual Worlds Research 12, no. 1 (January): 1–20. https://doi.org/10.4101/jvwr.v12i1.7318.

Noble, Safiya Umoja. 2019. “Toward a Critical Black Digital Humanities.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein, 25–35. Minneapolis: University of Minnesota Press.

Pangrazio, Luci, and Julian Sefton-Green. 2020. “The Social Utility of ‘Data Literacy.’” Learning, Media, and Technology 45, no. 2 (June): 208–20. https://doi.org/10.1080/17439884.2020.1707223.

Parham, Marissa. 2019. “Sample | Signal | Strobe: Haunting, Social Media, and Black Digitality.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein, 101–122. Minneapolis: University of Minnesota Press.

Parris, Heather, and Lisa M. Estrada. 2019. “Digital Age Teaching for English Learners.” In The Handbook of TESOL in K‐12, edited by Luciana C. de Oliveria, 149–62. Hoboken, New Jersey: Wiley-Blackwell.

Pierrakos, Olga, Anna Zilberberg, and Robin Anderson. 2010. “Understanding Undergraduate Research Experiences through the Lens of Problem-based Learning: Implications for Curriculum Translation.” Interdisciplinary Journal of Problem-Based Learning 4, no. 2 (September): 35–62. https://doi.org/10.7771/1541-5015.1103.

Ramsden, Paul. 2003. Learning to Teach in Higher Education. New York: Routledge.

Rettberg, Jill Walker. 2020. “Situated Data Analysis: A New Method for Analysing Encoded Power Relationships in Social Media Platforms and Apps.” Humanities and Social Sciences Communications 7, no. 5 (2020). https://doi.org/10.1057/s41599-020-0495-3.

Risam, Roopika. 2020. “‘It’s Data, Not Reality’: On Situated Data with Jill Walker Rettberg.” Nightingale, June 29, 2020. https://medium.com/nightingale/its-data-not-reality-on-situated-data-with-jill-walker-rettberg-d27c71b0b451.

Risam, Roopika. 2018. New Digital Worlds: Postcolonial Digital Humanities in Theory, Praxis, and Pedagogy. Evanston, Illinois: Northwestern University Press.

Shernoff, Elisa S., Ane M. Maríñez-Lora, Stacy L. Frazier, Lara J. Jakobsons, Marc S. Atkins, and Deborah Bonner. 2011. “Teachers Supporting Teachers in Urban Schools: What Iterative Research Designs Can Teach Us.” School Psychology Review 40, no. 4 (December): 465–85. https://doi.org/10.1080/02796015.2011.12087525.

Smith, Blaine E. 2019. “Mediational Modalities: Adolescents Collaboratively Interpreting Literature through Digital Multimodal Composing.” Research in the Teaching of English 53, no. 3 (February): 197–222. https://search.proquest.com/docview/2196370157?pq-origsite=gscholar&fromopenview=true.

Span, Christopher M., and Brenda N. Sanya. 2019. “Education and the African Diaspora.” In The Oxford Handbook of History Education, edited by John L. Rury and Eileen H. Tamura, 399–412. New York: Oxford University Press.

Špiranec, Sonja, Denis Kos, and Michael George. 2019. “Searching for Critical Dimensions in Data Literacy.” In Proceedings of CoLIS, the Tenth International Conference on Conceptions of Library and Information Science, Ljubljana, Slovenia, June 16–19, 2019. Information Research 24, no. 4 (December). http://informationr.net/ir/24-4/colis/colis1922.html.

Stitzlein, Sarah M. 2018. “Teaching for Hope in the Era of Grit.” Teachers College Record 120, no. 3 (March): 1–28. http://www.tcrecord.org/Content.asp?ContentId=22085.

Thompson, Riki, and Matthew McIlnay. 2019. “Nobody Wants to Read Anymore! Using a Multimodal Approach to Make Literature Engaging.” Journal of English Language and Literature 7, no. 1 (January): 21–40.

Tufte, Edward. 2001. The Visual Display of Quantitative Information, 2nd edition. Cheshire, Connecticut: Graphics Press.

Vanhorn, Shannon, Susan M. Ward, Kimberly M. Weismann, Heather Crandall, Jonna Reule, et al. 2019. “Exploring Active Learning Theories, Practices, and Contexts.” Communication Research Trends 38, no. 3 (January): 5–25.

Wood, Denise, and Carolyn Bilsborow. 2015. “‘I am not a Person with a Creative Mind’: Facilitating Creativity in the Undergraduate Curriculum Through a Design-Based Research Approach.” In Leading Issues in e-Learning Research MOOCs and Flip: What’s Really Changing?, edited by Mélanie Ciussi, 79–107. United Kingdom: Academic Conferences and Publishing Limited.

Yang, Ya-Ting Carolyn, Yi-Chien Chen, and Hsiu-Ting Hung. 2020. “Digital Storytelling as an Interdisciplinary Project to Improve Students’ English Speaking and Creative Thinking.” Computer Assisted Language Learning. https://doi.org/10.1080/09588221.2020.1750431.


The authors gratefully acknowledge Krista White for thoughtful feedback on this essay; Gail Gasparich, Regina Flynn, Elizabeth McKeigue, and J.D. Scrimgeour at Salem State University for supporting the Digital Scholars Program; and Haley Mallett for her support preparing the manuscript.

About the Authors

Jennifer Mahoney is an MEd student at Salem State University. She received her Bachelor of Arts in English from Salem State and is currently completing her Master’s in Secondary Education. Mahoney is currently a teaching fellow at Revere High School, an urban public school just outside of Boston, MA. She was the inaugural recipient of the Richard Elia Scholarship and her research interests include contemporary pedagogical approaches, underrepresented historical events, and digital humanities.

Roopika Risam is Chair of Secondary and Higher Education and Associate Professor of Secondary and Higher Education and English at Salem State University. Her research interests lie at the intersections of postcolonial and African diaspora studies, humanities knowledge infrastructures, and digital humanities. Risam’s monograph, New Digital Worlds: Postcolonial Digital Humanities in Theory, Praxis, and Pedagogy was published by Northwestern University Press in 2018. She is co-editor of Intersectionality in Digital Humanities (Arc Humanities/Amsterdam University Press, 2019). Risam’s co-edited collection The Digital Black Atlantic for the Debates in the Digital Humanities series (University of Minnesota Press) is forthcoming in 2021.

Hibba Nassereddine is an MEd student at Salem State University. She received her Bachelor of Arts in English from Salem State and is currently completing her Master’s in Secondary Education. Nassereddine is currently a teaching fellow at Holten Richmond Middle School in Danvers, Massachusetts.
