Digital Close Reading: TEI for Teaching Poetic Vocabularies

Kate Singer, Mount Holyoke College


This essay discusses digital encoding both as a method of teaching close reading to undergraduates and as a technology that might help to reassess the terminology used in literary reading. Such praxis structured a textual encoding project designed for upper-level undergraduates who constructed an edition of “Laura’s Dream; or, the Moonlanders” by nineteenth-century poet Melesina Trench. The painstaking process of selecting bits of text and wrapping them with tags reframed reading as slow, iterative, and filled with formal choices. By using the Text Encoding Initiative’s broad-based humanities tag set, students reflected upon classical literary terms as well as TEI’s extensible vocabulary, considering how best to formalize an interpretive reading. Since most women poets of the period taught themselves poetics, Trench’s poem became a test case for whether alternate terminology might best describe her experimentation. Important discoveries included differences between the term “stanza” and TEI’s “line group” tag (<lg>), and between labeling figurative language with classical terms such as “metaphor” or with the “segment” tag, marking self-identified tropes threaded throughout the poem (<seg ana=“visuality”>). As they tagged and then color-coded their readings, students gained the editorial prowess and creativity to develop interpretive language beyond rote or prescriptive terminology.


Introduction: Undergraduate Encoding for Close Reading

Perhaps no ability is as valued in undergraduate English classrooms as close reading. Students are taught to identify critical features of a short poem so that they can describe a work of literature and, hopefully, tangle with a myriad of cultural artifacts in sophisticated ways. For many scholars trained in more recent methodologies sensitive to history, context, and the market factors that shape textual transmission, close reading still retains a modicum of ideological baggage from New Criticism. Approaches that focus exclusively on the ambiguities of a poem’s form, syntax, and the minutiae of language may end up treating works of literature as art objects hermetically sealed from social context. Nonetheless, students who are taught to identify tone, diction, genre, word order, meter, or metaphorical patterns can conceivably apply critical vocabulary to forge an argument about a work’s significance. They acquire a method of procuring a text’s worthwhile meanings even as they gain purchase on reflective reasoning, analysis, and argumentation—hopefully with clarity, precision, and sound evidence.

So the argument goes, and many of us are still attempting to tout the relevance of such critical thinking skills for the job market and the life of the mind beyond the classroom. The digital encoding of texts may be of help in such efforts, offering one means of bridging academic and public acts of interpretation, even as it helps us think through the ways in which digital media are steadily altering our ways and means of reading. The now-pervasive practice of marking up texts with code is widely understood to bring to light the historical and “textual situation” of a piece of writing by way of contrast with our own digital milieu. Even more acutely, however, digital encoding can become a method of close reading that reimagines literary analysis as a wide-reaching group of skills including searching, sorting, and identifying different sorts of elements—those information-management skills so necessary in light of the ubiquity of reading in the digital age.

As a result, one growing trend in teaching with technology entails student projects that execute digital editions of scholarly works.1 Students are asked to transpose, encode, and present literature to their peers and to the world at large, learning literary editing while producing scholarly work. The premier technology for many humanists has become the Text Encoding Initiative (TEI), a delimited set of XML tags aimed at describing literary, historical, and other humanities documents, markup that is then transformed by stylesheets for presentation online. While HTML allows users to encode mainly presentation details (how a browser should display paragraphs, fonts, or images), TEI allows the editor to layer descriptive or analytical meaning into a text’s digital documentation—paragraph and stanza structure, meter, bibliographic details, people and place names, manuscript alterations, and so forth—while leaving web presentation for later. Figure 1 demonstrates a simple TEI encoding of the first stanza of John Donne’s “The Progress of the Soul.”

Figure 1. A very simple encoding of a poem stanza from Lou Burnard and C. M. Sperberg-McQueen’s “TEI Lite: Encoding for Interchange: An Introduction to the TEI.”
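The shape of the markup shown in Figure 1 can be sketched as follows. This is a minimal illustration assuming TEI Lite conventions, not a transcription of the figure itself, and the Donne stanza is abbreviated:

```xml
<lg type="stanza">
 <l>I sing the progresse of a deathlesse soule,</l>
 <l>Whom Fate, which God made, but doth not controule,</l>
 <!-- remaining lines of the stanza encoded the same way -->
</lg>
```

Each verse line is wrapped in an `<l>` element, and the lines are grouped into a stanza by `<lg>`; the encoding says nothing yet about how a browser should display the poem.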

For additional examples of more complex poetry encoding, see TEI By Example’s use of rhyme tags in Algernon Charles Swinburne’s “Sestina” and that site’s example of manuscript encoding in Emily Dickinson’s “Faith is a Fine Invention.” This site also provides numerous other examples of TEI prose and drama markup in action. Romantic Circles’ recent scholarly editions provide their TEI XML files for viewing, such as this Robert Southey letter, which serves as an instance of encoding for people and places, with such references identified via <ref> tags that allow multiple documents in the large archive to cross-reference people and place names.
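The TEI’s verse module does supply a `<rhyme>` element for sound patterning of the kind TEI By Example demonstrates. A minimal sketch, with invented lines (the label attribute records a position in the rhyme scheme):

```xml
<lg type="couplet">
 <l>The west wind wandered down the <rhyme label="a">shore</rhyme>,</l>
 <l>and brought the tide to land once <rhyme label="a">more</rhyme>.</l>
</lg>
```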

Because the encoding process is a painstaking one of selecting and marking discrete pieces of text, then wrapping them with tags in pointy brackets, it offers—especially for undergraduates—a dynamic, hands-on method for self-conscious, unhurried reading. While TEI was created for the purposes of digital encoding and, by and large, for storing, editing, and curating textual documents, the practice of encoding effectively engages the encoder in a determinative act of reading. Students accomplish a sort of digital highlighting as they label, categorize, and sort various areas of the text, using a set of tags or terms that are both familiar and unfamiliar. Their decisions about which structures and information to tag very much determine what they value in a document—or at least what should be preserved and codified for other readers. This highly structured and rigorous process encourages readers to slow down and engage in iterative, self-reflexive interpretation (Buzzetti 2002). For this reason, encoding—and teaching encoding—might be a valuable pedagogical tool, to enhance “close reading” and, additionally, to refocus reading as a method of evaluative labeling, categorization, and selection of discrete bits of text.

Even more provocatively, encoding can help us rethink the very labels and terms we use to read. Digital markup languages for the humanities such as TEI build a comprehensive language for description and analysis, furnishing a revised, wide-reaching vocabulary for describing texts. In doing so, such a language can help us reflect—and facilitate reflection—on the literary vocabularies we have inherited from New Critics and Classical rhetoricians, including the types of assumptions and ideologies such terminology may carry with it. One prominent example comes from TEI’s struggle with finding tags to mark figurative language. While the Guidelines provide standard tags for stanza structures, people and places, and meter, there is a paucity in the TEI for literary forms and figures of speech and thought—a lack that presents an important and interesting problem through which to reconsider the project of close reading. The TEI P5 Guidelines for verse end with this well-known and important caveat:

Elements such as <metaphor tenor=“…” vehicle=“…”> … </metaphor> might well suggest themselves; but given the problems of definition involved, and the great richness of modern metaphor theory, it is clear that any such format, if predefined by these Guidelines, would have seemed objectionable to some and excessively restrictive to many. (6.6)

In shying away from such restriction, this aspect of TEI might seem to limit the descriptive or analytical possibilities of XML, particularly for those literary scholars schooled in poetic and rhetorical terminology. What, then, should editors who want a more extensive analytical language for marking up verse do? Could—and should—critics use TEI’s robust categorizing and descriptive capabilities to help catalogue a text’s literary features? Given the richness of classical terminology and poetic labeling, it would seem as though the paucity of tags for poetry might be limiting in ways that other types of tag sets—such as those for historical places and names or for manuscript editing—are not. Of course, TEI’s opportunity for customizing or creating one’s own tags could easily provide an individual solution, though one that would not be as readily available to other users. To take another view, however, the TEI’s intentional swerve from traditional, specialized, poetic vocabulary presents a powerful challenge to received literary terminology. Since TEI is, at heart, an analytical and descriptive language for the humanities, it might encourage us to rethink which labels, categories, and values are essential in contemporary literary criticism and which terms may be unhelpfully ideological in our efforts to analyze literary texts.
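The contrast can be made concrete. The element imagined in the Guidelines’ caveat, set beside the generic mechanism the TEI actually provides, might look like this (the <metaphor> element is hypothetical, and the phrase and labels are invented for illustration):

```xml
<!-- Hypothetical element, rejected by the Guidelines as too restrictive -->
<metaphor tenor="mortality" vehicle="evening">the day's slow ember</metaphor>

<!-- The TEI's deliberately generic alternative: a segment with an analytic label -->
<seg ana="mortality">the day's slow ember</seg>
```

The first formalizes a particular theory of metaphor (tenor and vehicle) into the markup itself; the second records only that an editor finds the passage analytically significant, leaving the theory of the figure to the encoder.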

Terms such as personification, synecdoche, or alliteration certainly can help readers to find observable, meaningful poetic patterns, but they do so by dictating specific figurative relations or sounds—designs often used by Roman rhetoricians as blueprints for invention and composition. Students who understand alliteration will look for repetitive consonant sounds at word beginnings and may even infer that such patterns occur elsewhere in the line. Alliteration and similar terms, however, do not necessarily encourage students to think more globally about all the different sorts of sound patterns that are more idiosyncratic—and which may not have a term to identify their structure. Similarly, locating specific stanza forms, such as couplets, can blind the reader to more flexible or global formal experiments that combine couplets, quatrains, and other nebulous units into less recognizable structures. Finally, many figurative terms such as prosopopoeia (personification) presuppose a specific abstract relation, such as the representation of human characteristics in an animal or inanimate object. These connections likewise authorize very specific ways of reading the links between the literal and abstract (“being endowed with human characteristics”) rather than allowing the reader to identify the abstraction at hand (Shakespeare’s “teeming earth”), to understand the various concepts evoked (movement, excess, fecundity), and only then to build an understanding of how such a figuration might rethink ideas about what it means to be human. To put this another way, students looking for personification in a text may look for human characteristics narrowly defined—e.g., body parts or occasions when the inanimate becomes a person—rather than describing a figure’s human vitality on its own terms.
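In markup terms, this is the difference between a prescriptive and a descriptive label. For Shakespeare’s phrase, the two approaches might be sketched as follows (the <personification> element is hypothetical, not part of TEI, and the ana values are an editor’s invented categories):

```xml
<!-- Prescriptive: asserts one fixed figurative relation in advance -->
<personification>the teeming earth</personification>

<!-- Descriptive: names the concepts a reader actually finds in the figure -->
<seg ana="movement excess fecundity">the teeming earth</seg>
```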

TEI’s broader language for poetic encoding, particularly when it comes to figurative language, may actually allow for greater creativity and ambiguity in marking poetic features, a tactic that can be quite liberating and helpful to students learning poetic vocabularies, concepts, and interpretive strategies. Students can learn to interrogate poetics terms and interpret texts in descriptive rather than prescriptive ways precisely because they do not automatically resort to classical literary terminology. In doing so, we can begin to see the potential for TEI to be used as an analytical tool in developing alternative poetic vocabularies and methodologies of reading.

What follows is a discussion of the experiments and debates about TEI as a descriptive, analytical language for poetic texts that occurred mainly during an undergraduate senior-level seminar on eighteenth- and nineteenth-century English women’s poetry. In order to plan the teaching of TEI to undergraduate literature students effectively, however, these queries also extended to discussions with colleagues in TEI seminars at Brown’s Women Writers Project as well as conversations with upper-level students, colleagues, librarians, and technologists at Mount Holyoke College and with humanists and digital humanists at other institutions.2 The basic digital venture was to have students build a TEI-encoded electronic edition of a poem by an obscure woman writer. I selected Melesina Trench’s “Laura’s Dream; or The Moonlanders” in part because, as a lunar voyage poem, its science fiction-like imagination of inhabitants on the moon prompted discussion about women and technology—helping the class to theorize both the imagination and language as early technologies. Moreover, the plot of the poem allegorizes pertinent issues of pedagogy. Laura awakens from a fevered sleep to tell her mother about her vision of the Moonlanders. While Laura’s mother dismisses these dreams as products of an over-heated brain, Laura’s allegories about the lunar beings reach toward truths culled by the young (who play with technology) and taught to an older generation. This view of technology and youthful experimentation was aimed at giving students a way to think about the process of using technology—both the ways they were being taught about it and the ways that they might instruct their teachers. Above all, students gained a new type of editorial agency that gave them permission to explore, alter, and retranslate the text for themselves and their peers.

Additionally, the time period studied in the course—the Romantic century—specifically helped to promote questions about textuality and terminology. This period is now well known among scholars as one that included a print explosion, as the French Revolution and its aftermath spawned a prolific public debate in which women took a very vocal part. Women’s writing of this era, moreover, has had an intimate relationship with digital textuality in our own recent past. Over the past thirty years, new historicist scholars have boldly recovered much women’s writing from the archive, which has slowly, in bits and pieces, become part of syllabi, readers, and editions, if not the high Romantic canon of Wordsworth, Coleridge, Blake, Keats, Shelley, and Byron. Much of this large body of work was initially published electronically, and electronic texts often still supply important historical or standard editions of Romantic-era writing for scholars and students alike. To trace this institutional history, one need only mention a few well-known resources—Brown’s Women Writers Online, UC Davis’ British Women Romantic Poets, or Romantic Circles’ editions of Anna Barbauld, Maria Jane Jewsbury, Felicia Hemans, and Mary Shelley. Much of this editorial work has aimed to reflect on both the historical contexts in which women were writing and the various print venues available to women—particularly those media and genres (such as gift books or weekly newspapers) that diverged from the standard, expensive volume published or printed by well-known establishments. The pervasive contextual approach is part and parcel of the new historical methodology that scholars used to rediscover women writers.

The use of TEI with such scholarly editions, however, has the potential to help scholars of women’s writing intimately understand women writers’ styles, figurative propensities, and other technical aspects of their work that have been much less attended to—in part because of our dependence on classical rhetoric and literary terms. Many scholars have begun to look to “new formalism,” a methodology that melds New Critical concerns about form and style with New Historicism’s attention to history, politics, and market culture, in order to understand the historical and political implications inherent in forms and genres, especially the ideologies they carry with them.3 Scholarship on women’s poetry has just begun to pay attention to such concerns, and digital markup may be a tool to examine conscientiously both the structures and cultural contexts that make up women’s poetics.

For these reasons, scholarly questions about women’s poetics during this period seemed to intertwine intrinsically with questions about the encoding of poetry using TEI as an analytic language. When teaching canonical poets such as Shelley or Wordsworth, one would not think twice about using classical rhetorical or poetic terminology—such as personification, catachresis, or apostrophe—terminology that these authors were taught in English public schools through intimate knowledge of Greek and Latin. Certainly some women learned classical languages, yet many were autodidacts more versed in modern languages, and it stands to reason that they may have invented or played with poetic categories during such self-teaching. Thus when one begins to teach women’s poetry not merely as a piece of cultural history or cultural studies but as poetry in its own right, one wonders how such terminology intersected with women’s own understanding of their poetics. Much of our current thinking and use of poetic terminology developed through decades of reading that did not include much women’s writing, as Jerome McGann discusses in The Poetics of Sensibility. TEI’s own resistance to such vocabulary may, therefore, induce us to think about how we label different sorts of poetic moves or how we might redescribe women’s writing in ways more endemic to and descriptive of their actual practices.

In discussing the Women Writers Project’s editorial policies, Julia Flanders (2006) writes, “Given that our expectations about meaning and the conventions for [a text’s] expression are based overwhelmingly on the textual tradition with which we are familiar, it seems theoretically important not to allow these expectations to ‘correct’ a dissenting text into conformity” (142). Though Flanders discusses the Project’s tendency toward very diplomatic transcriptions, something similar might be said about the desire to exempt women’s texts from conforming to descriptive vocabularies that literary scholars developed by and large without reference to those texts. While some women’s writing is clearly written within or in resistance to canonical writing—for example eighteenth-century Bluestockings (Curran 2010, 172-173)—other verse may be much further outside our poetic and terminological toolbox. While our first impulse might be to add TEI tags culled from language already available and adjudicated in classical models, the vacancy in the guidelines opens up important questions about poetic terminology, allowing individual student encoders to make meaningful, independent editorial choices. Self-reflexive use of TEI might help open a space of new historical reflection on the ideological import of categorizing and descriptive practices then and now.

It might be important to remember that literary scholars have used TEI to mark up many important electronic literary editions, and the tags they employ certainly shape those versions of literary texts, whether or not we can see such markup by browsing through editions on the web. The World of Dante website has thoroughly encoded people, places, creatures, deities, and structures, and a clickable reading interface allows readers to select such features to highlight within the reading text. This kind of encoding (and its dynamic interface) visibly highlights important facets of the verse, and it arguably focuses readers’ attention on The Divine Comedy as a collection of locations that house people and supernatural beings. Editions of letters, like The Collected Letters of Robert Southey mentioned above, may choose to focus descriptive tagging on people and consequently privilege, perhaps rightly, the letters as a source of social and literary networks. Even more subtly, editors may select a means to highlight the intricate structures of texts beyond the level of the paragraph or the stanza. The Whitman Archive’s Encoding Guidelines for marking structure include a discussion of how editors designate poetry, prose, and “mixed genre” within the archive’s wide variety of manuscript materials—generic distinctions that may offer theoretical insight into the generic complexity and interpretive possibilities of Whitman’s work. It would behoove us, then, to wield what Perry Willett (2004) identifies as a productive skepticism toward markup in two ways: first toward our own literary taxonomic tendencies and second toward those ideologies hidden in TEI’s vocabulary.

Since many encoding decisions lead to some degree of interpretation (as does much editing), some sectors of the digital humanities and tech-savvy literary communities have been reluctant to use TEI to mark up even more blatantly interpretive attributes of text such as figurative language. James Cummings (2007), in his introduction to TEI in A Companion to Digital Literary Studies, canvasses this under-explored topic:

It is more unusual for someone to encode the interpretive understandings of passages and use the electronic version to retrieve an aggregation of these (Hockey 2000: 66-84). It is unclear whether this is a problem with TEI’s methods for indicating such aspects, the utility of encoding such interpretations, or the acceptance of such methodology by those conducting research with literature in a digital form (Hayles 2005: 94-6). (455-56)

Undergraduates with less codified attitudes towards poetic structures and methods of interpretation may provide a wonderful test group to explore the utility of interpretive coding. At the same time, a digital experience early in students’ careers might lead the way to greater acceptance of such digital-analytical ventures. It may be very helpful to teach the next generation of digital humanists to think through the boons and banes of formalizing interpretation at the very moment they are learning to define notions of text, interpretation, or markup. As McGann and Dino Buzzetti have argued,

Markup should not be thought of as introducing—as being able to introduce—a fixed and stable layer to the text. To approach textuality in this way is to approach it in illusion. Markup should be conceived, instead, as the expression of a highly reflexive act, a mapping of text back onto itself; as soon as a (marked) text is (re)marked, the metamarkings open themselves to indeterminacy. (Buzzetti and McGann 2006, par. 49)

Both markup and interface are needed to create an iterative experience for a specific set of users—users who may have trouble viewing the text as volatile. Whether or not one accepts the degree of textual volatility McGann and Buzzetti propose, students sometimes have trouble conceiving of textual indeterminacy that can still be effectively analyzed and interpreted.4 Not only do we need to upend notions of textual stability, but we also need to help students learn how to identify and understand the productive ambiguity involved with such a textual condition.

Dreaming Up a Workable Project

Because such a view of markup was taught within an undergraduate literature seminar without other digital humanities components, this section describes how the project and such ideas about terminology were framed. Specific conversations early in the semester aimed to contextualize and anticipate encoding as a self-reflexive editing and interpretive practice connected to other “analog” reading practices. First, we repeatedly used literary terminology in our close readings of poems but did so as a matter of speculation or argument by definition rather than merely regurgitating terms with a priori strict rules. For example, when discussing late eighteenth-century poet Charlotte Smith’s address to abstractions like sleep and solitude or to locations such as the South Downs, we would examine several terms at once, such as address, apostrophe, and exclamatory speech. I would remind students that not only were some of these terms created and used first to describe much older texts but that each poem might be redefining its terms in ways that would not necessarily adhere to normative models—historical or contemporary.

Second, we examined each poem’s contemporaneous textual situation as well as its contemporary iterations through editorial or digital culture. Many of Mary Robinson’s poems first appeared in newspapers and only later in volume form, a textual situation only sometimes made clear in digital or print editions. Similarly, Charlotte Smith’s sonnets went through multiple editions in her lifetime as she added poems to each edition, and we discussed how reading those poems both with and without that context altered our understanding of her seminal work. Most online editions did not gesture towards such a complicated and shifting environment for those poems, and students wondered what a digital edition that housed and cross-referenced many of the important iterations might look like. When we began to discuss Trench, we studied two electronic representations—PDFs of an 1816 volume and a basic digital transcription of “Laura’s Dream” available at English Poetry 1579-1830: Spenser and The Tradition Archive. That transcription is framed by an editorial column to the left of the text with several paragraphs of synopsis and information about the poem’s two contemporaneous reviews. Before these comments, however, there is a long blurb placing the poem into a very particular Spenserian context: “In the two-canto allegory of Laura’s Dream Melesina Trench presents a feminine revision of Paradise Lost drawing upon the imagery of Spenser’s Garden of Adonis and Milton’s ‘Il Penseroso’.” This representation of the text gave students the impression—before reading—that Trench was a minor Spenserian acolyte, without a reputation in her own right as the aristocratic diarist and member of the bon ton that she was. This editorial paratext, moreover, seems to substitute for other types of annotations, hyperlinks, or search capabilities that might enable other types of linkages—or encourage students to discover information for themselves. 
This frame helped to provoke a discussion about what assumptions student readers might bring to such a work and how even seemingly slight contextual materials had quite a powerful sway over the reader.

About six weeks into the semester, we entered into the back-end of the digital edition. I introduced TEI in two installments: first with a half-hour presentation that discussed digital editions and archives more generally, followed by a two-hour hands-on training to teach the encoding basics of TEI with the help of a student Tech Mentor and two tech-savvy librarians. In a computer classroom, we walked each student through launching oXygen, a commercial XML editor, and opening a pre-formatted XML document replete with an already populated header and two untagged stanzas of the poem. With the document open, students were guided through the document declarations and the header but not asked to replicate them. (These portions of the document contain instructions for browsers on the document’s status as an XML file and processing instructions as well as bibliographic information about the file and the literary text itself.) This method of introducing but then bracketing off such information for students resembles what Alex Gil and Chris Foster in their experimentation with teaching TEI at the University of Virginia have called the “bottom-up model,” and it contrasts with many “top-down” TEI pedagogies that begin with encoding a document’s header and its larger structures before focusing on the text itself (Project Tango). It seemed important in this case to get students interacting with discrete pieces of the text as soon as possible, replicating the literature pedagogy of the seminar where students worked from the minutiae of the text outward. When they finally reckoned their markup with the larger structures of the entire text, they realized just how much encoding and its attendant interpretive problems require an iterative process (McGann and Buzzetti 2006; Piez 2010).
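A pre-formatted shell of the kind described, with declarations and a populated header that students could inspect without replicating, might be sketched as follows. The header contents here are illustrative, not those of the actual project file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>Laura's Dream; or, the Moonlanders</title>
        <author>Melesina Trench</author>
      </titleStmt>
      <publicationStmt><p>Student edition (illustrative)</p></publicationStmt>
      <sourceDesc><p>Transcribed from the 1816 printed volume</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <!-- untagged stanzas for students to mark up -->
    </body>
  </text>
</TEI>
```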

To tackle the intricacies of tagging, I mirrored on a large screen what students had on their own computers. We moved through the following elements, each of which they would use in their group projects: <lg> (“line group”, the TEI term for stanza), <l> (“line”), <placeName> (places with proper names), <rs type=“place”> (places without proper names), <persName> (people with proper names), <rs type=“person”> (people without proper names), ending with <seg ana=“…”> (the segment tag <seg> selects a specific piece of text, here associated with an editor’s choice of analysis, specified by the attribute, ana=“…”). We used this last tag for tropes, or figurative language that threaded or repeated throughout the poem, for example <seg ana=“vision”> or <seg ana=“affect”>. This tag became the most important of the project, since it allowed students the most freedom to select words or phrases and then label tropological categories without resorting to classical terms such as metaphor.
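Applied to a few lines of verse (the lines here are invented for illustration, not quoted from Trench), these elements combine like so:

```xml
<lg>
 <l><persName>Laura</persName> dreamt of dwellers on <placeName>the Moon</placeName>,</l>
 <l>where <rs type="person">pale wanderers</rs> crossed <rs type="place">a silver plain</rs>,</l>
 <l>and <seg ana="vision">saw beyond her waking sight</seg>.</l>
</lg>
```

Proper names take <persName> and <placeName>, unnamed people and places take <rs> with a type attribute, and the editor’s chosen tropological strand is carried by the ana attribute on <seg>.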

Figure 2. Sample encoding by group 3, using <lg>, <l>, <placeName>, <rs type=”place”>, <rs type=”person”>, and <seg> tags.

Following this session, students were given the assignment sheet for their digital project along with directions to the necessary documents, which had been uploaded to folders for each group on our class website. The assignment largely replicated the in-class TEI tutorial but with a new and longer passage of about sixty lines. Students were placed in groups of three and asked to encode the following: 1) lines and line groups; 2) people and places (real, fictional, and mythological) both with and without proper names; and 3) one to three analytical strings (<seg@ana>) that seemed especially significant to their group of lines. At the suggestion of members of Brown’s seminar on contextual encoding, I assigned the same set of lines to two groups each, so that we could compare and contrast how these line groups were encoded. Students were also given a CSS stylesheet that would allow them to color-code their tags, particularly the figurative tropes, so that we could visualize their encoding at the project’s end. Finally, they were to write up a short project report, analyzing what they had learned about the poem from encoding that they did not know before from simply reading it. The grading of the project would be three-fold, based on accuracy of the encoding, creativity, and their written reflections about TEI and poetics.
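The color-coding works by matching CSS selectors against the encoded elements and attributes, which is possible when a stylesheet is applied directly to an XML document. A minimal sketch of such a stylesheet, assuming the attribute values used in class (the colors and selectors are illustrative, not the stylesheet actually distributed):

```css
/* Highlight each analytic strand so tropes are visible at a glance */
seg[ana="vision"] { background-color: #cce6ff; }
seg[ana="affect"] { background-color: #ffd9cc; }

/* Distinguish named people and places from generic references */
persName { color: darkgreen; font-weight: bold; }
placeName { color: darkblue; font-weight: bold; }
rs { font-style: italic; }
```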

In hindsight, there were several adjustments that would have helped to teach TEI more efficaciously. Students often began coding with errors, forgetting, for example, the difference between <persName>, used for proper names, and <rs type=“person”> for other references to people, despite the cheat sheets given for reference. I was able to catch some of these coding errors because two groups voluntarily came to my office hours to ask other questions about their tagging. In response, I set aside about twenty minutes of class time the week before the projects were due to review some of these errors, but I was not able to catch everything. The next time I teach this project, I will make sure to have a more substantial review session, perhaps where groups workshop their coding during the seminar, allowing students to teach each other valid coding and to raise questions about particular, tricky examples that would benefit the entire class. Alternately, I might have a review lesson or two where we code another piece of a different poem, to give them another application for the coding concepts. For example, students might complete a short encoding exercise on one day focusing on tagging people, personifications, apostrophes, and so forth and spend another day marking up metaphors, similes, and related figures.

Part of the trouble, I realized, was that most of the groups did not often meet with our Tech Mentor, a senior English major who was not part of the class but who was helping with the TEI portion of the project. Students either felt they already understood the material or felt their expertise was equal to the mentor’s. If I used a Tech Mentor again, she would need to be a more visible and integral part of the class project. Ideally, I would have her help to introduce the TEI materials and perhaps serve as a project manager, not simply as a student with office hours in the computer lab.

Finally, some students, at the end of the project, wanted more power to manipulate and change their own CSS documents. Originally, I had scheduled time for us to do this together in class, once students had finished their encoding, but many students asked to be able to change the color coding as they developed and experimented with different markup schemes—requesting a truly iterative process. As a consequence, I made time before the project was due to show them how to change various aspects of their CSS and how to upload their documents into the proper directory structure so they could view changes to the text’s color and layout as they made them.

Formalizing Figurative Ambiguity & Conceptual Mapping

Figure 3. Sample stanzas of encoding by groups 1 and 2, using tags. The website flashes small, mouse-over labels for the first instance of the descriptive tag.

Above are two short samples of a stanza encoded by two different groups. Encoding done by all four groups is available on The Melesina Trench Student Edition site; visualizations discussed later in the paper can be viewed separately: stanzas by group 1, group 2, group 3 and group 4.

The project results spawned two interlocking questions for student encoders. How much literary data should editors formalize by tagging it for a readership of their (student) peers, and what is the best vocabulary for doing so? Rather than move toward a more taxonomic approach to interpretive tagging than currently exists in the TEI Guidelines, the expansiveness of tags such as <seg@ana> and <lg> fueled student creativity and introspection about poetic categories and vocabularies. As Cummings (2007) writes, “What might jar slightly is the notion of a text’s ambiguity that informs most of New Critical thought; it is inconceivable from such a theoretical position that a text might have a single incontrovertible meaning or indeed structure” (458). If TEI users have been less interested in interpretive than editorial markup, perhaps it is because TEI appears to read both structure and meaning too definitively into a text that should remain a resource, an open set of data for any scholar to interpret at will. Yet this may be more the fault of our blindness to the ways in which markup may act not as a formalizing structure but may instead mark moments of textual instability, suggestiveness, or ambiguity. Such a method would pinpoint moments of figuration that educe both semantic and non-semantic abstraction. Certain TEI tags, precisely because of their ambiguity, became generative in just this way.

While contemplating the poem’s form, for example, we examined the difference between labeling pieces of poems as “line groups” rather than more specifically as stanzas, couplets, quatrains, or tercets. Noting that we could, if we wanted, designate an <lg type=“stanza”> or even use the “type” attribute to describe the stanza more precisely, as in <lg type=“couplet”>, we discussed what we gained by thinking of stanzas as line groups. “Laura’s Dream” uses stanzas of varied lengths, as does much other women’s poetry of the period. While Trench seems to be working from an eighteenth-century model of heroic couplets, she nonetheless groups stanzas of irregular lengths into chunks that seem amorphous, as they are built from couplets, quatrains, and other clusters of lines with alternating rhyme. The more general TEI term “line group” supplied us with a better, more flexibly descriptive language for these chunks of verse, an interstitial form that Trench uses. In this way, the TEI vocabulary might enable a helpful reconsideration of codified stanzaic units in other, similar poems, helping students to find moments of experimentation as well as codification. This kind of encoding may open the way toward discovering in women’s writing other types of micro-forms or more nebulous poetic structures previously invisible to eyes looking for quatrains, sonnets, or Spenserian stanzas.
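The alternatives we weighed can be written out directly. The generic element commits only to the claim that these lines belong together, while the type attribute layers a classical label on top (line content elided here):

```xml
<!-- flexible: a "chunk" of verse without a codified stanzaic name -->
<lg>
  <l>...</l>
  <l>...</l>
  <l>...</l>
</lg>

<!-- prescriptive: the same mechanism used to formalize a couplet -->
<lg type="couplet">
  <l>...</l>
  <l>...</l>
</lg>
```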

Similarly, rather than gravitating to figures of personification and debating how and why something like “Genius” was personified in “Laura’s Dream,” students became more interested in discussing the difference between proper names (and places) such as “Arno,” “Greece,” “Laura,” or “Apollo,” and those more general entities without specific names, such as “valley” or “Lady Fame.” In the project reflection one group commented, “We especially noticed that there was only one ‘proper’ name and one ‘proper’ place throughout our entire fifty lines. This was surprising in light of the fact Aurelio and Laura’s adventure introduced them to many new creatures and locations. ‘Improper’ people and places were also sparsely dispersed throughout the poem.” This distinction helped students navigate the dreamscape of the poem in ways that emphasized its figural dimension. Rather than viewing the poem as an allegory that simultaneously and consistently operated in two places—Laura’s home, where she lay ill, and the moon, where she dreamed—the varying levels of referentiality mapped by the TEI elements across the poem revealed the “valleys” and “peaks” of the text where figural qualities of space and place, vision and imagination were heightened or intermixed. In a sense, the use of <rs> (used for people and places that did not have proper names) began to infect the more literal tags with hints of figuration. At the very least, these tags suggested how previously ignored phrases or words, such as “hidden recesses” or “flowing rivers,” might be potent with abstract meaning.
While students considered the idea of the poem as a database of names and identifiers or women’s roles (what some text encoders have called an “ography”), the less visible people and places within the poem heightened their awareness of what sorts of literal (biographical, topical) information often gets attention in women’s writing (and perhaps encoding) as opposed to the more “improper,” abstract, and visionary people, places, and objects.
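The distinction the students worked with can be sketched as follows, pairing TEI’s dedicated name elements for proper nouns with <rs> for “improper” references. The examples are drawn from the entities named above; <placeName> is the standard TEI element for proper place names, though the class cheat sheet may have framed it differently:

```xml
<!-- proper names take the dedicated name elements -->
<placeName>Arno</placeName>
<persName>Apollo</persName>

<!-- unnamed or generic entities take <rs>, typed by attribute -->
<rs type="place">valley</rs>
<rs type="person">Lady Fame</rs>
```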

We held a comparable discussion regarding TEI’s more ambiguously descriptive language for figures of speech and thought. The <seg> tag, i.e., “segment,” can tag any length of text and, in doing so, resists the privileging of word, line, or lemma. As Susan Hockey once wrote, “There is no obvious unit of language” (Hockey 2000; McGann 2004). Rather than focusing on individual words or syntactical units, students instead roved the text for moments or nodes of figurative thickness where the poem seemed to move beyond the literal, without having to worry about how to match such moments to prescriptive classical structures.

By way of a counter-example, we looked at the extended tag sets for sound, meter, tropology, and syntax that were at the time attached to the Poetry Visualization Tool and are now associated with Manish Chaturvedi, Gerald Gannod, Laura Mandell, Helen Armstrong, and Eric Hodgson’s tool called Myopia: A Visualization Tool in Support of Close Reading. This interface brilliantly employs four different customized tag sets that then enable users to visualize patterns in the poems, cross-referencing these different dimensions of reading. What follows are the first two lines of the tropological encoding of John Keats’ “Ode on a Grecian Urn”:

Fig. 4. TEI of “Ode on a Grecian Urn.”

There is much to say about this customized encoding, the most obvious point being the creation of <litFigure> tags with associated types. One might notice that the <litFigure> for metaphor takes a tenor attribute yet leaves the vehicle a bit less formalized, opening up the metaphorical transport for interpretation or readerly description. Even more alluring are the <ambiguity> and <connotation> tags with associated attributes. While <ambiguity> signals a word that might have several meanings or ideas, in terms suggested by William Empson’s seminal New Critical work, Seven Types of Ambiguity, <connotation> encourages readers to label figurative or tonal meaning beyond a word’s literal denotation. Both tags allow the editor to formalize interpretations of nebulous and often subjective meanings.

In some ways, as suggested earlier, replicating standard terminology seems a missed opportunity to reconceptualize our approaches to poetic structures—particularly in an age when representations of structures may shift as they become computerized data. Rather than coding for specific literary figures, <seg@ana> encouraged students to find concepts and abstractions that circulate throughout a given text. While student groups did label each trope once they had identified it (e.g., <seg ana=“ascension”>), this labeling offered them the creativity to describe and define the type of abstraction—and how it operated in the text—on their own terms. Here was one group’s response to the procedure of figurative tagging, as they narrated it in their project reflection:

Our initial debate came when deciding how to word our tags. The first theme we chose was “decay,” but we had difficulty deciding if that also meant “dirt-related” imagery. We were unable to include as many words as we had initially wanted, so we revised our choices for tags. “Decay” became “morbidity” in order to more accurately describe the themes observed. “Dirty” became its own tag, and evolved first to “earthy” and then to “earth” so we could include all organic imagery. “Dirty” fit under “earth,” so we felt as if we were not losing anything in altering the tag. Rather, we were able to extend the theme. “Movement” as a tag did not ever change in wording, but we did debate over specific cases in which we wanted to use the tag. We had trouble deciding whether “movement” meant only physical motion, or if it included past actions or descriptive terms that implied motion.
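In the markup itself, each of these debates resolved into renaming an ana value on a chosen span. The spans below are schematic illustrations, not the group’s actual selections:

```xml
<!-- first pass -->
<seg ana="decay">mouldering walls</seg>

<!-- revised: "decay" broadened to "morbidity"; "dirty" absorbed into "earth" -->
<seg ana="morbidity">mouldering walls</seg>
<seg ana="earth">the clay beneath</seg>

<!-- "movement" kept its name; the debate was over which spans qualified -->
<seg ana="movement">had wandered far</seg>
```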

The <seg> element often entailed ambiguating—rather than disambiguating—pieces of text, and it frequently provoked debate about how to do both at the same time. More acutely, coding in this manner helped students to work more readily on the abstract plane of analysis, moving beyond “theme” to discover language’s play with various levels of abstraction. Encoding enabled them to grapple with larger semantic issues without leaving the text proper and falling into universal generalizations. Most importantly, it did not freeze meaning so much as encouraged students to toy with tying the literal to the abstract in exploratory ways. The more generalized segment analysis challenged students to pinpoint where abstraction happens in a poem’s figurative terrain and then to map how attendant concepts mutate through a given passage.

There was something to segment tagging that was pleasurable and generative, even more than “analog” close reading. Students very much understood that they were reproducing their version of the text, and their readings took on an independence and intellectual verve due to such a sense of editorial responsibility. As one student commented, “It is really thought-provoking to contemplate how an encoding exercise such as this one is an act of appropriation, on the part of the encoder, in attempting to determine both what the poet wanted to emphasize and what the reader will, ultimately, extract from the encoded rendition of the poem; we are reshaping the reading experience in the author’s absence.”

Students took this self-reflexive method of iterative, renditional reading and mapped it onto their “analog” reading of texts during the remainder of the course. While several students in their course evaluations mentioned their heightened comfort and skill with reading poetry, it was students’ final papers that revealed how the analytical habits learned in TEI encoding transferred to their reading practices with print texts. These final papers were written only a few weeks after students finished the TEI assignment, and we spent half of the final seminar discussing the results of these projects, while the other half was devoted to students presenting the initial findings of their final papers. Students were given free rein to write any sort of fifteen-page paper they liked that would build a substantial argument about one of the poets or a group of poems studied during the semester. The most palpable internal evidence that students were engaged in digital-style close reading was the complete lack of block quotes in all except one of the papers. Students were truly selecting bits of the text, then proceeding to define their poetic value in distinctive ways, even when making global claims. For example, one paper discussing Madame de Staël’s representations of various nationalities in her novel Corinne focused exclusively on descriptions of characters’ physical and emotional characteristics. This reader, however, did not dwell on more obvious features but spent time with illustrations of characters’ hair, particularly the difference between Italian Corinne’s “beautiful black hair” and her half-sister’s English blond locks, “light as air.” The reader went on to make a particularly astute analysis of the simile—as well as the reason for the English hair being tagged with a simile while the Italian hair garnered only straight description.
Perhaps many good English students might perform an interpretation responsive to the figuration of physical features, especially in a course on gender and poetics. At the same time, the student was especially attentive to the comparative instances of figuration and its lack—the different levels of figurative play within a single passage.

This particular consideration for figurative language and its trends throughout a poem presented itself even more strikingly in the scope and tendency of the students’ main arguments and interpretive techniques. Almost all of them chose to make an assertion about the prevalence of one or more tropes in a particular author’s body of work. Some students used the word “trope” specifically in their thesis statements, while others employed such figurative concepts to sculpt and organize their analyses. (A Voyant word frequency analysis of the student papers as a corpus revealed that “trope” was the top technical term throughout the papers, used nearly as frequently as the word “poem.”) For example, one student working on the myth of the desperate poet in Mary Robinson and Letitia Landon’s poems argued, “Robinson and Landon create women of modern myth in sentimental poetry through tropes of voice, beauty and the performative.” Two students wrote about Elizabeth Barrett Browning with varying approaches to the language of maternity.
One constructed the following thesis: “By using the trope of maternity, Barrett Browning gives audiences something perhaps more accessible with which to identify than slavery in order to nail home her belief that slavery is a crime.” The second came to an alternate interpretive conclusion because she identified the trope in a different way—as “motherhood”—emphasizing the figurative crux to be about feminine subjectivity rather than an existential or moral issue: “By exploring the trope of motherhood through varying lenses of naturalness, Browning rejects the uniformity of maternal conduct and allows for the construction of a feminine self outside the scope of a traditional, patriarchal role.” A student who wrote on Melesina Trench considered a shift in tropes throughout “The Moonlanders,” though quite separate from those discussed in class or through the TEI projects: “By replacing the traditional Romantic trope of the woman’s song with the woman’s dream vision, Trench offers a speculative fiction of a society void of institutionalized patriarchies in which nature can sing for itself and woman, though she lacks wings.” In one final example, another student used her paper to take on the Herculean task of questioning how we might define masculine versus feminine tropes during the Romantic period, cycling through multiple examples from multiple authors.

These papers employed the looser term “trope” learned during the TEI project without any prompting by me. They did so to cull figurative patterns for interpretation, often cross-referencing one trope with another to find correlations, associations, differences, and ambiguities between and among them. For example, the first paper on Barrett Browning sought to reveal how slavery and maternity worked together to create a certain moral affect, while the second looked at the multiple iterations of motherhood in differing amounts of tension with patriarchal roles for women. The paper on “The Moonlanders” similarly claimed that Trench rejected a traditional trope of song—which we had discussed in other women writers during the semester—and substituted the trope of dream vision to distance the poem from notions of institutional and patriarchal dominance. These students were invested in identifying figurative trends, their relations to each other, and the interpretive reasons why they trended together. Dueling notions of a single trope or the changing emphasis among multiple tropes helped students to articulate fluidity and ambiguity toward a particular concept within the span of a poem or an author’s body of work.

To be sure, some of these tropes might appear to be standard topics or representations often found in women’s writing, e.g., maternity, childhood, notions of femininity and masculinity, nature, death, and affect. The resort to classical or standard tropes in these papers might be read as student resistance to the more ambiguous or fluid tagging. The use of terms like “apostrophe” or even concepts such as “the gaze” reveals that some students certainly need or prefer more rigid, defined terminology. Moreover, not every poem may call for the qualities that “The Moonlanders” evoked, and students may have been sensitive to the need for more formalized structures or vocabulary in some poems while others required more ambiguous, fluid vocabularies.

Even these supposedly standard tropes, nonetheless, were atomized throughout the span of the papers into more complex or original designations. In this way, many papers exhibited the sort of iterative thinking that TEI encoding encouraged when students were asked to both identify tropes and then name (and rename) them in individual, unique ways. This sort of reading became apparent during moments in the papers where students would scrutinize, reassess, and redefine a term in multiple and unconventional ways. One student who used classical terminology, for instance, began with several claims about the relationship between affect and apostrophes to nature, using apostrophe as her major culling point. The argument developed, however, in increasingly interesting and individual ways such that, by the paper’s end, the reader had established for herself distinct terms of analysis including physical and emotional distance, emotional chaos, and emotional void. These last two terms organized the major part of the paper in a language that was the reader’s own and that allowed her to draw her own conclusions about the nature of address and how it worked to shape affect in the poem.

Another such case was a paper on Charlotte Smith’s “Beachy Head” that chose to take Lyn Hejinian’s term “open text” and modify it to “open voice” to document the specific figurative relationship between the eponymous coastal cliff’s open landscape and its ability to produce androgynous language with multiple meanings. Here gendered tonality was cross-referenced with the tropic tendencies of language describing both nature and history. Again, what was unique about the paper was that it not only fashioned a new term from an older one, but that it did so to elucidate comparative figurative trends and flexible concepts running through a poem.

The result of such tendencies in analysis was a particular attention to language, figuration, and especially the iterative creation of self-defined vocabularies to describe tropic trends. These papers were exceedingly strong on the analysis of significant linguistic tendencies. The attention to particularities and correlations that TEI encourages—correlative reading—did not, however, necessarily provide a ready mechanism for synthesis or cumulative interpretation. While final papers dwelt on the particulars of poems, some were less strong on overall, synthetic arguments, struggling to link multiple tropes or trends into a singular argument about the work. For my purposes, I was comfortable with this unevenness because the observations themselves were fiercely interpretive, if not always global. Even though we did discuss the organization of longer papers in class, for the most part, these papers reflected the course’s distinct focus on generative analysis and strategies of reading that encouraged the creative selection and identification of dense poetic moments.

The Final Read: Generative Visualizations of Interpretive Coding

Figure 5. Sample visualizations of encoding by groups 1 and 2. For encoding & trope tags, see Figures 2 and 3.

The final piece of our attention focused on an undergraduate-friendly visualization to help reproduce and interpret students’ analytical markup for both this seminar and others in the future. During the TEI hands-on demonstration in class, we used a CSS stylesheet along with oXygen’s “Open in browser” button to demonstrate how the CSS would transform the tagged XML document. Since cascading stylesheets assign fonts, colors, and spacing to individual tags, we chose CSS as a method of display that students would be able to learn and manipulate without too much difficulty. As mentioned above, students were especially gunning for the point at which they could transform their underlying encoding into “the real edition.” The disjunction between the encoding “signified” or genotype and the transformed document “signifier” or phenotype presented some important practical and theoretical issues (Buzzetti 2002). Students began to ask why TEI-encoded websites didn’t always have a button or function to display encoding to savvy readers, as another type of annotation or editorial self-awareness.
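Because browsers can style raw XML once a stylesheet is attached with an <?xml-stylesheet?> processing instruction, a few CSS rules suffice to turn the encoding into a readable, color-coded display. The selectors below match TEI element and attribute names; the specific colors and ana values are hypothetical stand-ins for the class’s actual choices:

```css
/* lay out the verse: each line and line group as a block */
l  { display: block; }
lg { display: block; margin-bottom: 1em; }

/* contextual name and place tags in contrasting colors */
persName          { color: #a03030; }
rs[type="person"] { color: #c06030; }
rs[type="place"]  { color: #3060a0; }

/* each analytical string keyed to its own highlight */
seg[ana="visuality"] { background-color: #fde9a9; }
seg[ana="ascension"] { background-color: #d9c7ec; }
```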

Our stylesheet had been written not only to display the verse in a clean format, but to highlight with distinct colors the contextual place and name tags as well as each of the <seg@ana> strings, enabling students to visualize figurative patterns across lines or verses and to trace analytical structures that cut across traditional formal units (McGann 2004). After doing so, they most frequently noticed the conjunction of certain tropes. For example, one group argued that the trope “religion is always within a line that is also tagged for emotion or male dominance.” A second group encoding the same passage, however, tagged similar phrases and lines with the tropes “spirituality,” “spheres,” and “ascension.” This comparative approach suggested a diametrically opposite reading based on their triangulation of three different tropes—a type of spirituality that enabled female transcendence rather than an institutional religion that codified masculine dominance and feminine affect. If nothing else, the clear encoding and representation of such markup revealed to students how two different readings could obtain equal validity.
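A conjunction like the one the first group noticed, “religion” appearing only in lines also tagged for “emotion,” can also be checked mechanically once the markup exists. The sketch below uses Python’s standard-library ElementTree; the two-line fragment and its tag vocabulary are invented for illustration, not taken from the students’ files:

```python
import xml.etree.ElementTree as ET

# A toy fragment standing in for the students' encoding of the poem.
xml_fragment = """<lg>
  <l>one <seg ana="religion">prayer</seg> in <seg ana="emotion">tears</seg></l>
  <l>a lone <seg ana="religion">hymn</seg></l>
</lg>"""

root = ET.fromstring(xml_fragment)

def cooccurs(root, trope, other):
    """Count lines containing `trope`, and how many of those also contain `other`."""
    total = both = 0
    for line in root.iter("l"):
        anas = {seg.get("ana") for seg in line.iter("seg")}
        if trope in anas:
            total += 1
            if other in anas:
                both += 1
    return total, both

total, both = cooccurs(root, "religion", "emotion")
print(f"'religion' lines: {total}; also tagged 'emotion': {both}")
```

Scaled up to a full encoded poem, a script along these lines would let two groups compare how often their tropes actually travel together, rather than relying on the eye alone.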

At this point we compared our visualizations to other attempts to highlight themes and contextual encoding done by the Virtual Humanities Lab and Vika Zafrin’s Roland hypertext. These sites employ either colored sticky notes appearing on the margins with thematic and encoding information or, in the case of Roland, a clickable list of themes that highlights encoded lines in the poem. Unlike these verbal references, the purely sensory color-coding of the “Laura’s Dream” CSS (without overt reference to the trope name) established a non-verbal interpretive layer rather than another linguistic boundary. Even though students knew the colors were keyed to semantic categories, the visual rendering often momentarily lost its verbal reference and, in doing so, encouraged redefinitions and reinterpretations of a given trope. When we looked at visualizations projected on the classroom screen, we began to question how to encode something as slippery as affect—whether affect was, after all, a stable category or whether it was better to isolate various types of feelings, sensations, and desires into various categories. Different groups had keyed different shades of purple to melancholy and morbidity, and visualizing their similarity raised important questions about how to categorize, differentiate, and interrelate such close nineteenth-century tropes. This procedure seemed especially important for literature students who can be quick to assume they understand the definitions of historical trends such as sentiment, sensibility, or the Victorian cult of death.

In our discussions of visualizations, my students began to dream of an interface that allowed them to bring our class’s particular collections of text and commentary to bear on a primary text, one with the ability to then permit future classes to render their own versions in the space of the same edition. While tools like NINES offer the functionality of a digital collection, students envisioned a tool that could digitally “highlight” pieces of text, save these alterations, yet still leave the original open and blank for others’ readings. Because XML encoding often hides itself from view, TEI editions can give students a type of double-blind reading, where they can see a supposedly transparent document and then examine the editing and marking underneath. Moreover, since collections of texts and especially important passages change from class to class even with the same course, teaching editions might be built over and over again, by each class, compiling a reading database. This type of textual situation, using a relatively small text in a specific setting, might create one instance of a text with a computable repository of multiple, successive, yet interrelated student readings of a poem.

The students’ interest in interface and design, by the end of the project, revealed that they were thinking carefully about both authorial and editorial choices, including the coding decisions that compose the production of digital editions and other texts for the web. Their imaginary wish list of possibilities largely revolved around what kinds of functionality they might dream up for the next iteration of the digital edition. At the same time, students appeared to have a greater appreciation for the complexities of even the simplest-looking web page. Rather than scoffing at the static nature of some web pages, as they did when we first began to look at various digital editions of Romantic-period women writers, they were more judicious and discerning as to the various intents and audiences of specific sites. Moreover, where many of them initially voiced a strong dislike of editions with too much editorial information, in particular rebelling against texts with hyperlinks that were too directive in providing them with a particular interpretation, they came to see such sites as one type of edition among many. While at the end of the course we did not have time to discuss the editorial prerogatives of particular sites or digital editions, students did come away with the notion that the web and its texts can be added to and altered by its users—given the right skills. Thus, their most powerful response was to feel as though they could now help to create their own reading experiences—based on their own contrasting needs and choices. Several students went on to learn other digital tools (such as Dreamweaver, Photoshop, and additional XML), and others mentioned feeling more prepared to engage with the internet on their own terms and with an understanding of how sites might be constructed under the hood.

It might be easy to ask whether using highlighters and a hard copy of a poem would give the same results as digital close reading—and without the energy expenditure of setting up a TEI project. There are, however, a number of arguments in favor of such entry-level digitization. The Trench project taught students a very basic sort of encoding, introducing them to the idea of texts (or poems) as data but with familiar coordinates, based as it was upon close reading. It empowered upper-level students working in groups on questions of interpretation and meaning (Blackwell and Martin 2009), rescripting the scene of reading as a collaborative, social, and descriptive one rather than something more hermetic, fixed, and prescriptive. The physical, hands-on nature of a project where students literally rebuilt pieces of a poem and painstakingly marked it by hand resonated with them in ways that “analog” reading did not. More dramatically, the broad-based “humanities language” of TEI enabled students to question, historicize, and reconsider the poetic terminology we use to describe poems. The expansive nature of its very language—in terms like “line group” or “segment analysis”—may pave the way for students and scholars to locate within women’s writing less formalized structures and figurative pathways previously unnoticed. What these forms and figures amount to, I would argue, are pieces of women’s conceptual thinking, which TEI can now equip us to trace and map. This kind of interpretive markup may, finally, give us some inkling of how TEI might be used as an analytical tool for smaller-scale, case-based projects perfect for undergraduates as they learn to parse and categorize their own textual situations.


Blackwell, Christopher, and Thomas R. Martin. 2009. “Technology, Collaboration, and Undergraduate Research.” Digital Humanities Quarterly 3.1. Accessed February 27, 2011.

Burnard, Lou and C. M. Sperberg-McQueen. “TEI Lite: Encoding for Interchange: An Introduction to the TEI.” Final Revised Edition for TEI P5. Text Encoding Initiative. Accessed March 24, 2013.

Buzzetti, Dino and Jerome McGann. 2006. “Critical Editing in a Digital Horizon.” In Electronic Textual Editing, edited by Lou Burnard, Katherine O’Brien O’Keeffe, and John Unsworth, 53-73. New York: The Modern Language Association of America. Accessed April 24, 2013, OCLC 62134738.

Buzzetti, Dino. 2002. “Digital Representation and the Text Model.” New Literary History 33.1: 61-88. OCLC 4637615635.

Cummings, James. 2007. “The Text Encoding Initiative and the Study of Literature.” In A Companion to Digital Literary Studies, edited by Ray Siemens and Susan Schreibman, 451-476. Malden, MA: Blackwell Publishing. OCLC 81150483.

Curran, Stuart. 2010. “Women Readers, Women Writers.” In The Cambridge Companion to British Romanticism, edited by Stuart Curran. 2nd edition. New York: Cambridge University Press. OCLC 25410010.

Digital Humanities Questions and Answers. “Can you do TEI with students, for close reading?” Association for Computing in the Humanities. Accessed February 27, 2011.

Fairer, David. 2009. Organising Poetry: The Coleridge Circle, 1790-1798. New York: Oxford University Press. OCLC 294886679.

Flanders, Julia. 2006. “The Women Writers Project: A Digital Anthology.” In Electronic Textual Editing, edited by Lou Burnard, Katherine O’Brien O’Keeffe, and John Unsworth, 138-149. New York: The Modern Language Association of America. OCLC 62134738.

Hockey, Susan. 2000. Electronic Texts in the Humanities: Principles and Practice. New York: Oxford University Press. OCLC 45485006.

“Laura’s Dream; or, The Moonlanders.” Spenser and the Tradition: English Poetry 1579-1830, edited by David Hill Radcliffe. Accessed September 19, 2012.

McGann, Jerome. 2004. “Marking Texts of Many Dimensions.” In A Companion to Digital Humanities, edited by Ray Siemens, Susan Schreibman, and John Unsworth. Malden, MA: Blackwell Publishing. Accessed February 28, 2011. OCLC 54500326.

—. 1998. The Poetics of Sensibility. New York: Oxford University Press. OCLC 830989291.

Melesina Trench Student Edition, edited by Kate Singer. Accessed July 15, 2012.

Myopia: A Visualization Tool in Support of Close Reading, edited by Manish Chaturvedi, Gerald Gannod, Laura Mandell, Helen Armstrong, and Eric Hodgson. Accessed July 15, 2012.

Piez, Wendell. 2010. “Towards Hermeneutic Markup.” Digital Humanities 2010 Conference, King’s College London. Accessed February 28, 2011.

Poetry Visualization Tool, edited by Laura Mandell. Accessed February 28, 2011.

Project Tango. University of Virginia. Accessed February 26, 2011.

Robinson, Daniel. 2011. The Poetry of Mary Robinson: Form and Fame. New York: Palgrave Macmillan. OCLC 658811982.

Rudy, Jason. 2009. Electric Meters: Victorian Physiological Poetics. Athens, OH: Ohio University Press. OCLC 286479053.

Schwartz, Barry and Kenneth Sharpe. 2010. Practical Wisdom: The Right Way to Do the Right Thing. New York: Riverhead Books. OCLC 646111798.

Southey, Robert. The Collected Letters of Robert Southey. Part I: 1791-1797. Letter 2, edited by Lynda Pratt. Romantic Circles. Accessed March 24, 2013.

TEI Consortium. 2012. TEI P5: Guidelines for Electronic Text Encoding and Interchange, edited by Lou Burnard and Syd Bauman. Version 2.1.0. Last updated June 17. N.p.: TEI Consortium.

Van den Branden, Ron, Melissa Terras, and Edward Vanhoutte. TEI by Example. Accessed March 24, 2013.

Virtual Humanities Lab. Brown University. Accessed February 26, 2011.

The Walt Whitman Archive, edited by Ed Folsom and Kenneth M. Price. Center for Digital Research in the Humanities, University of Nebraska-Lincoln. Accessed March 24, 2013.

Willett, Perry. 2004. “Electronic Texts: Audiences and Purposes.” In A Companion to Digital Humanities, edited by Ray Siemens, Susan Schreibman, and John Unsworth. Malden, MA: Blackwell Publishing. Accessed February 28, 2011. OCLC 54500326.

World of Dante, edited by Deborah Parker. University of Virginia. Accessed February 27, 2011.

Zafrin, Vika. RolandHT. Dissertation. Accessed February 26, 2011.

About the Author

Kate Singer is an Assistant Professor in the English Department at Mount Holyoke College. She is the Pedagogies Editor at Romantic Circles and the editor of the student edition of Melesina Trench’s The Moonlanders. She has published articles on Percy Shelley and Maria Jane Jewsbury and is currently at work on a book entitled Against Sensibility: British Women Poets, Romantic Vacancy, and Skepticism.

  1. While well-known scholarly sites such as Romantic Circles, The Whitman Archive, and The Rossetti Archive have long used graduate students to encode their digital editions and archives, more recently undergraduates have been offered such opportunities. See, for example, Wheaton College’s Digital History Project and Digital Thoreau, in addition to a number of undergraduate digital humanities courses that now introduce TEI encoding.
  2. I am particularly indebted to Shaoping Moss, Instructional Technologist, and Nicole Vaget, Professor of French, who spearheaded the TEI initiative at Mount Holyoke College in 2005. At the College, Vaget’s projects pioneered the TEI encoding of rare French manuscripts with students and the use of a multimedia approach in foreign language teaching at the College. My goal, very much aided by conversations with them, was to think about how teaching basic TEI encoding might help literature students gain purchase on a method of digital close reading, aware of both older reading methodologies and new ventures in the digital humanities. I likewise owe most of my TEI knowledge and curiosity to Julia Flanders, Syd Bauman, and the participants of several WWP Textual Encoding Seminars.
  3. A few good examples of such new formalism include Jason Rudy’s Electric Meters, David Fairer’s Organising Poetry, and Daniel Robinson’s The Poetry of Mary Robinson.
  4. One discussion of this phenomenon occurs in Barry Schwartz and Kenneth Sharpe’s Practical Wisdom: The Right Way to Do the Right Thing, particularly Chapter 9, “Right by Rote: Overstandardization and the Rise of the Canny Outlaw.”
  5. Part of the inspiration for such a visualization of close-but-distant reading was John Walsh’s Thematic Networks, a tool in the Swinburne Project.

Establishing a New Paradigm: the Call to Reform the Tenure and Promotion Standards for Digital Media Faculty

James Richardson, LaGuardia Community College


The challenges facing tenure-track faculty in the areas of digital technology are unique. The relative infancy of web and multimedia technology has created an unexpected quandary for digital scholars teaching within academia. In many cases, these teachers are the vanguard of the movement to educate students and faculty across disciplines in how best to utilize new technology in the academic, artistic, and economic sectors of society. Until now, professors teaching in the area of digital technology have traditionally been judged by the liberal arts definition of scholarship. However, in the case of new and evolving fields of study, there are alternative criteria that would be better suited to the digital disciplines and would serve as a more accurate assessment of the quality of faculty scholarship as faculty march toward tenure, promotion, and reappointment. Under the current system there are numerous institutional biases and obstructions that unnecessarily complicate the pathway to tenure and promotion for faculty working with technology. If digital scholars are going to advance within the academy, the existing tenure and promotion system must be redefined and expanded to include a more modern definition of intellectual excellence.


There is a growing risk that the academy will begin to seem irrelevant if it continues to underestimate the cultural and technological shifts taking place all around us. The sharp divide in academia over the nature of what constitutes tenure-worthy digital scholarship cannot be universally defined without updating the current peer-review system. In “Tenure in a Time of Confusion,” historian Paula Petrik states that the most pressing questions involving digital scholarship are “who will review digital projects, what criteria should be used to evaluate multimodal scholarship, and what skills and qualifications should the reviewers of digital research possess?” (Cheverie, Boettcher, and Buschman 2009, 224).

The key dilemma in assessing digital scholarship is that many academics, who have direct responsibility for setting standards under which digital practitioners are judged, are not technically conversant with and remain largely unaware of the distinctive training and discipline-specific research that is required to effectively excel in these fields (Cross 2008, 2). In fact, most committees responsible for evaluating candidates for tenure and promotion have historically been populated with senior faculty members from traditional non-technical disciplines. As a result, it can be difficult for some conventional scholars to appraise the academic merit of work from disciplines that did not exist twenty years ago (Jaschik 2009). More to the point, in many cases we are asking those tasked with setting standards for multimedia-based research to create fair and impartial rubrics to assess the quality of non-traditional faculty scholarship when they do not adequately understand the technologies and the industries from which these digital professionals have originated. Even in cases where committee members may have a background in digital fields, the predominant attitude in the academy is that digital projects are inferior to publications in peer-reviewed scholarly journals, and should be viewed with some skepticism as to their merit as scholarship (Cheverie et al. 2009, 220).

In the most recent attempt to address the need to establish standards for digital scholarship, the Modern Language Association (MLA) takes the explicit position in its 2012 Guidelines for Evaluating Work in the Digital Humanities and Digital Media that institutions of higher education that “recruit or review scholars working in digital media or digital humanities must give full regard to their work when evaluating them for reappointment, tenure, and promotion” (MLA 2012, 3). In addition to this basic request for broadening the definition of what constitutes digital scholarship, the new MLA guidelines highlight several areas in which institutions and digital practitioners can more effectively set the stage for fair and equitable assessment of multimodal scholarship. Or, to state it more succinctly, institutions of higher education need to create more open standards for evaluating scholarship that blend diverse forms of media as well as evolving technological methods to deliver that scholarship.

Many of the MLA’s proposed solutions call for institutions to improve communications between governing bodies and digital practitioners by clearly documenting scholarship expectations at the beginning of the hiring process and crafting discipline-specific guidelines for faculty producing digital scholarship so that they “can be adequately and fairly evaluated and rewarded” (MLA 2012, 1). The recommendations also call for engaging other digital experts, internal and external to the institution, at key review intervals to define what constitutes exceptional digital work, and to respect that work by viewing and assessing it within the medium for which it was created. Essentially the MLA is advocating that tenure and promotion committees should be discouraged from evaluating multimedia and web-based work by asking the faculty member to reproduce it in printed format. They are requesting that digital artifacts be reviewed only within their original digital media.

In the case of individual digital practitioners entering the tenure system, the MLA advocates that scholars be mindful of the emerging nature of their fields and negotiate at the start of their service the methods by which they will be assessed (MLA 2012). In short, the MLA is asking digital scholars to become fully engaged in their careers by taking a proactive stance in negotiating their responsibilities and methods of assessment, documenting their successes in regard to the impact that their work has on the furtherance of multimodal studies, and making use of all available institutional supports to maximize the opportunities for fair scholarly evaluation.

While the MLA should be commended for being one of the few professional organizations bold enough to buck the traditional academic evaluation system and to take on the task of laying the groundwork for the future assessment of digital scholarship, the guidelines it proposes stop short of offering specific criteria that can immediately be applied to improve the tenure and promotion process for digital scholars currently in the system. Many of the suggestions put forth by the MLA speak to the future while ignoring the present challenges facing digital practitioners. While it is understandable that the fast-moving and fluid landscape of digital disciplines makes it difficult for any organization to craft guidelines to cover all contingencies, one underlying problem facing the academy is that, unless immediate changes are made, there is a strong possibility that many of the current generation of digital educators could leave traditional institutions of higher education and not return.

The New Paradigm: Immediate Solutions

Until now, faculty members teaching in digital disciplines such as web development and interactive design have traditionally been judged by liberal-arts definitions of scholarship. In most cases, this definition has been limited to whether or not the candidates for tenure or promotion have published articles in double-blind peer-reviewed journals. There is a significant limitation in applying the academy’s existing reliance on peer-reviewed journals to digital media in that, compared to other long-established fields, there are far fewer refereed publications dedicated to the field (Ippolito, Blais, Smith, Evans, and Stormer 2009). The limited number of digital media–specific journals can also present a professional stumbling block for digital faculty looking to advance through the ranks of the professoriate. The lack of appropriate publishing venues for their work can compel digital academics to seek opportunities to publish in journals external to their fields of expertise and place them in direct competition, and at a great disadvantage, with authors from non-digital disciplines.

To address this need for change in the current tenure and promotion process, there are several benchmarks that can be immediately applied to provide a more balanced approach toward evaluating the multimodal scholarship of digital practitioners. The suggestions offered in this article to improve the system of tenure, reappointment, and promotion could easily be implemented for digital practitioners currently in the tenure pipeline.

Step 1: acknowledge the distinctions between production and research degrees

Part of the reason that digital practitioners find themselves in difficulty during the tenure and promotion process is that they have not successfully advocated for greater latitude in what constitutes scholarly activities. This includes being clear about the various and often subtle subcategories within multimodal studies, most notably the distinction between digital humanities and digital media. The academic delineation between these two new fields is usually lost on many contemporaries from non-technical academic fields. To further complicate matters, the cross-disciplinary nature of these new academic fields has made the lines of demarcation between them significantly less precise. Since both fields are at their cores driven by, or at least defined by, developing technologies, it can be easy for traditional academic colleagues to confuse the two. Nevertheless, the differences between the areas of expertise are real and can require different methods of assessment in regard to scholarly production by faculty in each field.

Faculty members teaching in the digital humanities are generally from the liberal arts and social sciences, where they study the theoretical effects of technology from a cultural and pedagogical standpoint. These instructors are less involved with the technical inner workings of systems, software, and hardware and are more focused on how the technology can be, and is, used in society and in the classroom. As an example, consider a history professor within the digital humanities, who can, without being able to program an interactive application on the steam-powered train engine, utilize existing software to create a multimedia presentation to illustrate how the development of the railroad helped to transform the early American economy. While there are a growing number of academics, like University of Nebraska scholar Stephen Ramsay, who believe that digital humanists (“DHers”) must be able to code and build multimodal artifacts (Gold 2012, 3), there are many DHers who are not required to have the system design skills necessary to educate students on the impact and educational uses of new technologies. Over time, as digital convergence continues to blur the lines between theory and production, this will become less and less true. However, at the present time, the academic responsibilities of digital humanists and media technologists can be somewhat different.

Educators within the field of digital media frequently originate in the visual arts or computer information systems disciplines. They are called upon to teach students how to create and implement systems, software, or hardware from the design phase all the way through the physical conception of new technology. Consequently, it is absolutely essential that these educators are experienced in building digital artifacts and systems in the course of their duties. As opposed to digital humanists, digital media technologists are usually less concerned with the cultural and pedagogical impact of their discipline and mainly concentrate their efforts in assessing which technologies and creative processes offer the greatest opportunities for long-term high-tech innovation. These practitioners are generally more focused on educating students in the specific creative and technical skills necessary to plan and develop the next set of digital tools.

As we begin to discuss methods of evaluation, in the case of the digital humanities—where many professors have research-centric PhDs in traditional fields such as English, the social sciences, and even economics—the peer-reviewed article may be an appropriate base from which to begin to assess their academic scholarship. The work of organizations like the MLA has had a profound impact in prompting traditional publishing venues to recognize the increasing value of digital technologies in influencing humanistic inquiry. This work has helped to redefine the methods of scholarship for future digital humanists by fostering a number of new journals, such as Digital Humanities Now and the Journal of Digital Humanities, that recognize scholarly work beyond the traditional research article. These online and open-access peer-reviewed publications have aided educators who study the effects of technology from a cultural, economic, and pedagogical standpoint in presenting their research in true multimodal fashion. The web-based nature of these new journals has created an environment where the work of digital humanists can move beyond the purely textual to a more visual and technologically dynamic presentation.

Conversely, educators teaching in the field of digital media commonly have terminal degrees with a more production-centric focus, such as the Master of Fine Arts (MFA). The degrees earned by these academics generally concentrate less on the traditional ability to research and write and place greater emphasis on the capacity to design or build new creations. For these practitioners, instead of using the peer-reviewed article or monograph as the evaluation standard, academic institutions could adopt measures more closely resembling the criteria utilized in evaluating professors in the visual and performing arts, in which educators are required to develop and maintain professional portfolios of their work. That approach would allow a digital media scholar’s academic excellence and scholarly achievement to be determined by peers within their field through exhibition and critical portfolio evaluation. Since the concept of the professional portfolio has long been a foundation in creative fields, embracing this process to demonstrate digital practitioners’ command of their production discipline would be a natural extension and a more effective base from which to begin to evaluate their scholarship.

At LaGuardia Community College, where I am a faculty member in the humanities department, there are differing standards applied to the creative and the more traditional academic disciplines in regard to tenure, promotion, and reappointment. Faculty members from the creative and performing disciplines, such as theater and fine arts, have far more latitude in what constitutes scholarly achievement within their areas. They are not required to publish in refereed journals, but must instead provide scholarly evidence of their work through recognized gallery exhibitions and artistic reviews of their portfolio creations in appropriate publications. On the other hand, faculty members in more traditional academic disciplines within the humanities, such as philosophy, are strongly encouraged to follow the customary path of academic publishing in order to successfully move through the ranks of the professoriate. Another institution within the City University of New York system, Hostos Community College, has taken this process a step further by adopting clearly defined written guidelines that are specific to the departments and to the disciplines in which faculty members are being assessed. In its 2010 Guidelines for Faculty Evaluation, Reappointment and Tenure, posted on its website, the college not only outlines rubrics for judging faculty but also establishes the use of a portfolio in the overall process.

While the specific methods of scholarly valuation outlined above may be appropriate for digital humanists and digital media technologists, these approaches should merely be a starting point for assessment and not the only, or even primary, system for measuring academic achievement. The digital convergence that is taking place in information technology has affected institutions of higher education. Depending upon the university in question, faculty members in digital programs can be drawn together from multiple disciplines, both technical and non-technical, to constitute new digital media departments or programs. It is not uncommon for instructors with backgrounds in fields such as fine arts, film, theater, graphic design, information technology, photography, computer science, English literature, business, and law to comprise the core teaching staff of a digital media program (Ippolito 2009, 72). As a result, any department consisting of faculty members drawn from such diverse fields can pose difficulties for tenure and promotion committees accustomed to determining scholarly quality within a single academic discipline. In many instances, the qualifications for success within the various subfields of digital media can be so varied that applying a single assessment standard to digital scholarship becomes impractical. The predicament then facing academics engaging in digital media is that the cross-disciplinary nature of their work necessitates that they advocate for themselves to develop and frame the context of their creative work in a manner acceptable to tenure and promotion committees (Jaschik 2009).

Step 2: give greater recognition of professional development via industry certifications

In order to stay current with the technical advances that are affecting so many contemporary social and economic changes in the academy, educators in the digital disciplines are required to spend a great deal of their time updating and mastering new technologies. Ongoing training is necessary to enable the digital media faculty to bring evolving information into the classroom and to incorporate it into their traditional research and production work. This can be especially true for educators who have the responsibility for teaching production-centric courses that require a firm grasp of current versions of software, hardware, and technical procedures. However, under the current tenure and promotion system much of this ongoing research and training work to maintain competence with digital technologies is unfairly regarded as merely supplemental activity. While some may argue that evolution and changing standards occur in nearly every discipline, the rapid progression of technical innovation is especially concentrated within the digital disciplines as entirely new software, languages, and methods of development are adopted rapidly and repeatedly. These constant technical changes present a significant challenge for practitioners in the digital fields.

An effective way to address this disparity and credit digital practitioners for the constant technical preparation that is essential to their professional success would be to more fully credit the attainment of well-established industry certifications in the tenure and promotion process. Obtaining qualified certifications from established and valued organizations is a rigorous process in which educators must demonstrate both practical and theoretical expertise. Industry certifications from leading companies such as Microsoft, Apple, and Adobe can also provide external professional validation of faculty expertise in both the technical and creative disciplines. In addition, these companies offer specialized teacher certifications such as the Microsoft Certified Trainer (MCT), the Apple Certified Trainer (ACT), and the Adobe Certified Instructor (ACI) distinctions that not only evaluate a candidate’s mastery of the material but also appraise an educator’s ability to teach technical and creative subjects.

Companies offering accreditations have established their own set of rubrics for software and system development that identify the critical information candidates must master before they can be granted the designation of Certified Trainer or Instructor. In the case of the Adobe Certified Instructor, candidates must demonstrate expertise not only with various creative software packages but also distance learning and presentation software such as Adobe Connect and Presenter, both of which facilitate the development of e-learning content for digital distribution. Attaining these certifications can greatly benefit faculty members teaching production courses as they are introduced to vendor specific workflows that can be passed on to students to facilitate greater efficiency in producing digital content.

The Mozilla Foundation has started a recent trend in online certification and skill representation that has begun to gain traction in many educational and professional circles. The Open Badges initiative is an open-source standard that organizations can adopt to issue digital badges as a means of verifying educational achievement or competency. The badges would be issued and backed by an organization or school to serve as a graphical certification of accomplishment that could be displayed on websites, social media networks, or traditional offline venues such as resumes. If the badges are supported by well-developed rubrics to substantiate effective instruction in the subject matter, these symbols could function as a powerful endorsement of technical or educational proficiency. Institutions such as Codecademy, Peer to Peer University (P2PU), and the Carnegie Mellon Robotics Academy have already developed, or are currently developing, open badges as a way to acknowledge technical achievement.

For many years, tenure and promotion committees have struggled to evaluate intellectual work from disciplines outside of their areas of expertise. It is for this very reason that publishing in peer-reviewed journals has been the default method for determining the academic worth of a candidate for tenure or promotion. Laura Mandell, a Professor of English literature and chair of the MLA Information Technology Committee, has suggested that “a big part of the problem is that for the past 50 years, what people have done on promotion and tenure committees is to say ‘OK, this was accepted by Cambridge University Press. I don’t need to read it because I know it’s quality’” (Jaschik 2009, 1).

Committees have typically been able to “outsource” tenure and promotion decisions to peer-reviewed journals and rely on that process to vet the competence of fellow academics (Harley and Acord 2011). Unfortunately, this practice of evaluating by proxy can only be successful if there are established peer-reviewed journals within the field in question, or failing that, qualified authorities on tenure and promotion committees who can assess the work. What happens to this process when the scholarship that needs to be evaluated originates from a field like digital media where there are few peer-reviewed journals? Or in the case of the digital humanities where standards for publications are only just beginning to evolve to include multimodal artifacts? How can tenure and promotion committees be expected to serve the best interests of their institutions, as well as fairly evaluate faculty in digital disciplines, without the benefit of this specialized expertise? The answer is simply that they cannot. By expanding the number of external sources for evaluating technological excellence to include select industry certifications, tenure and promotion committees would be presented with additional and appropriate measures through which to vet candidates for advancement.

Step 3: give greater recognition to curriculum design and development

Just as instructors within digital humanities and digital media must maintain their skills through ongoing professional development, designing and updating course materials in a rapidly evolving technical field is also a time-consuming process that requires constant research and updating. Under the current tenure and promotion system, curriculum design is unfairly regarded as a supplemental activity and as of lesser value than the traditional printed article.

Faculty members who are designing innovative online courses in various disciplines are at the forefront of an entirely new method of student instruction. Hybrid and online courses, because of the asynchronous interaction between teacher and student, require a different level of preparation and engagement by instructors. The interpersonal dynamics of the face-to-face classroom are radically altered when the interaction between student and teacher takes place in a virtual environment. New forms of online education, most notably MOOCs (Massive Open Online Courses), are being adopted at a startling pace within both private-sector and traditional academic circles; by relegating this new area of curriculum development to auxiliary status in the tenure and promotion process, many institutions are setting a precedent that may cause future complications. The secondary status given to curriculum development, regardless of innovation, will help ensure that only senior faculty with tenure will risk engaging in this new area of teaching and scholarship.

According to data released by the research firm Ambient Insight, the number of post-secondary students in the United States who will take some or all of their classes online is expected to climb sharply to more than 22 million by the year 2014. The CEO of Ambient Insight, Tyson Greer, has suggested that “the rate of growth in the academic segments is due in part to the success and proliferation of the for-profit online schools” (Nagel 2009, 2). Until recently there has been very little serious competition in the higher education arena for traditional academic institutions. Similar to many other industries, the introduction of technology, in this case online instruction, presents an opportunity for the significant digital disruption of higher education, especially if the curriculum development work of digital practitioners is not adequately recognized in their assessments by tenure and promotion committees and if the academy fails to provide incentives to academic curriculum designers to respond to the competitive threat from the private sector.

Step 4: give greater recognition to service supporting innovative uses of technology

Faculty with digital media expertise are in a unique position to educate students as well as faculty members in other disciplines in how best to utilize new technology in the academic, artistic, and economic sectors of society. As a result, colleges and universities are increasingly asking these educators to consult on and lead large-scale initiatives that benefit the institution. In many cases these educators are asked to serve in these highly specialized roles at a fraction of the price that an outside consultant would be paid. Even in cases where faculty members are helping to build, support, and promote pedagogical initiatives that enhance the reputation of the institution and numerous disciplines, the valuable services that they provide are rarely viewed as scholarship.

A perfect example of the type of service that should be recognized can be found in the recent launch of the City University of New York's Academic Commons project. The CUNY Academic Commons was the brainchild of a small number of non-tenured faculty and staff who had the pioneering idea to create an online academic social network exclusively for use by the university's faculty, staff, and graduate students. Built entirely on a foundation of open source software, the network was designed to create an environment for communication and collaboration among the scholars teaching within the 24 units that make up the CUNY system. Since its launch in 2009, the project has expanded to include the "Commons in a Box" initiative, an open-source venture that will enable other academic institutions to create and customize their own virtual spaces for academic research and collaboration. However, in discussions with Matthew Gold, the project leader for the initiative and the only key person on the project in a traditional tenure-track academic role, I learned that his contribution to the creation and expansion of the Commons was defined as "service to the university," and thus not given the same weight as a traditional refereed publication would have been in his faculty evaluation for tenure and promotion. Despite the fact that the project has brought considerable attention to CUNY, as organizations such as the MLA sign up to use the Commons in a Box software to support their own academic initiatives and institutional purposes (Roscorla 2011), Gold felt compelled to publish an article on his experience in order to have the project counted as true scholarship.
Gold's article, co-written with George Otte and entitled "The CUNY Academic Commons: Fostering Faculty Use of the Social Web" (Gold and Otte 2011), was a case study of the implementation of the Commons project, detailing the creation and impact of this new academically focused social network.

Gold's experience as a faculty member and digital practitioner is not unusual. Sean Takats, a history professor and director of research projects at the Roy Rosenzweig Center for History and New Media, details similar challenges in his blog post "A Digital Humanities Tenure Case, Part 2: Letters and Committees" (Takats 2013). Takats takes the bold step of pulling back the curtain and discussing in great detail the challenges he faced as a digital humanist on the tenure track. Takats was a project lead and co-director of Zotero, a digital software platform designed to assist academics in organizing and sharing research. The software he helped to bring to fruition has been widely recognized and adopted as an excellent resource within the digital humanities and communities well beyond. However, many of the digitally inspired accomplishments achieved by Takats were met with resistance by members of his college-wide tenure committee because "some on the committee questioned to what degree Dr. Takats' [sic] involvement in these activities constitutes actual research (as opposed to project management). Hence, some determined that projects like Zotero et al. while highly valuable, should be considered as major service activity instead" (Takats 2013, 1).

As technology continues to digitally disrupt the established methods of operating inside the academy, it will be imperative for institutions of higher education to take advantage of the innovative ideas developed by the multimodal "thought leaders" in our midst. In the coming years, projects like the CUNY Academic Commons and Zotero, which sit at the boundary between cutting-edge technology, ground-breaking pedagogy, and academic collaboration, will become increasingly prevalent. If these endeavors are to be successful, they will require expert stewardship that usually only comes from leaders familiar with both the technology and the pedagogy. Unless the academy begins to recognize in the tenure and promotion process the contributions of faculty who can shepherd these types of digital initiatives, institutions may find it increasingly difficult to persuade non-tenured educators to play active roles in the future.

Step 5: create discipline-specific communities of digital innovators and thought leaders

Anvil Academic, a new joint project by the National Institute for Technology in Liberal Education (NITLE) and the Council for Library and Information Resources (CLIR), is seeking to fill the void in objectively judging digital scholarship by offering a new virtual ecosystem where non-traditional scholarly work can be evaluated under the direction of traditional university presses and publishing outlets. The founders of the Anvil project hope to provide a true multimodal publishing platform that would enable all forms of digital media to be presented, reviewed, and sanctioned by well-established academic associations possessing the gravitas to substantiate the quality of digital scholarship (Kolowich 2012).

The Anvil project and similar initiatives, such as the CUNY Academic Commons, can help to address many of these problems by fostering virtual communities in which multimodal scholars collaborate and create more efficient methods of peer-to-peer communication specific to the digital disciplines. For example, the CUNY Games Network, a group on the CUNY Academic Commons dedicated to the study and pedagogical uses of interactive simulations and games, is helping to connect digital practitioners from across the CUNY system. These educators, many of whom may rarely have been able to interact with their colleagues on other CUNY campuses, are now collaborating on research, sharing curricular material, and engaging in ongoing discussions surrounding all aspects of gaming. The Academic Commons and similar projects can help to establish essential enclaves within the ranks of the digital disciplines, promoting reform and responding to the concerns that tenure and promotion committees may have about digital scholarship and peer review. For example, digital practitioners in the tenure review process could use similar online systems to establish portfolios that display their interactive creations and have them assessed by qualified peers in the larger academic community to ascertain the quality of the scholarship. Such online portfolios could also provide snapshots for assessing a candidate's professional growth over the period they are on the path toward tenure.

The underlying fears surrounding the establishment of discipline-specific communities invariably revolve around whether sufficient peer review would occur in such environments. In Planned Obsolescence, Kathleen Fitzpatrick explains how the open-source blogging system CommentPress, integrated into the larger MediaCommons academic network, enabled peers from within the digital humanities to provide an ongoing critique of her latest manuscript throughout various stages of the reviewing and publishing process. The asynchronous, communal, and open peer-to-peer review that took place within these digital confines would have been difficult to replicate in a traditional print setting. Fitzpatrick suggests that communal learning systems like CommentPress can become "useful tools not just for quickly and engagingly publishing a text, and for seeking feedback while a text is in draft form, but for facilitating an open mode of review" (Fitzpatrick 2011, 115) of digital publications. Because reviewers in these open systems, unlike those in standard double-blind review, are not anonymous and have placed their opinions and academic reputations in public view, their critiques can command a higher level of trust.

Step 6: broaden the definition of publications to include multimodal productions

The definition of academic publishing must be expanded to include new multimodal outlets that are poised to overtake print-based media. Paula Petrik notes that academics are traditionally "people of the book" and will have to adapt to a new digital paradigm in order to fairly evaluate "non-traditional forms and formats of scholarship" (Cheverie et al. 2009, 224). These "people of the book" will continue to have their perceptions of scholarship challenged as academics integrate greater amounts of technical, visual, audio, and web-based elements into their scholarly pursuits. For example, in the same way that high-impact sites like the Huffington Post have supplanted conventional printed newspapers and magazines, the rapid adoption of tablet computers and smartphones will redefine the ways students and educators consume and process information in the coming years. This transformation is already underway. Apple released its iBooks Author application in early 2012, designed to enable educators to produce and distribute content that previously required traditional publishing venues. In addition to conventional text, multimodal scholars can now combine videos of speeches, slideshow presentations, music and spoken audio, animated 2-D and 3-D illustrations, and interactive applications, all within a digital format that runs on tablet devices running Apple iOS. And Apple is not the only company banking heavily on the future of fully interactive digital publications: the applications within Adobe's Digital Publishing Suite offer similar functionality to iBooks Author, with the added benefit of being able to create content for alternative tablet devices from Microsoft, Google (Android), and others.

Now that faculty members have access to these alternative production applications, they can design customized textbooks that better support the specific curricular needs of their classes and programs. The impact at the level of curriculum design will be nothing short of revolutionary for digital practitioners innovative enough to incorporate these tools into their scholarly practice.

Professor Stephen Nichols of Johns Hopkins University, in a discussion of academic peer review, argues that the continuing use of terms such as "publications" as the primary seal of approval for tenure and promotion will discourage younger faculty members from engaging in digital scholarship, since it is viewed as of significantly lesser value than print-based, peer-reviewed journals (Cheverie et al. 2009). The bias against digital scholarship that Nichols describes creates a climate of fear that inhibits experimentation, which is detrimental not only to scholars in the digital disciplines but to the entire academy. A fear of testing the limits of academic and technical innovation runs contrary to everything the educational system should aspire to achieve, and it also has a negative impact on the evolution of pedagogical practice.

Ken Norman, a professor of psychology at the University of Maryland, agrees with Nichols. Based on research he conducted on university models for tenure and promotion, Norman concludes that junior faculty members generally "wait to get tenure before they become cyberized" because "positive tenure and promotion decisions are based on grants and publications in top-tier journals" (Cheverie et al. 2009, 227-28). While delaying the integration of technical innovation into their scholarship may not constitute a burden for faculty in liberal arts and science departments, it can be a substantial professional barrier for digital practitioners. The speed at which technological advances occur in the digital disciplines creates a finite window of time in which to study and implement digital research. Any delay in assimilating new developments into their scholarship places digital scholars at risk of having their research become obsolete before it can ever be published. It is precisely for this reason that the academy must recognize that educators are no longer limited to the printed word in order to participate in deep and meaningful scholarly production. If academic tenure and promotion committees adopt this position, they will be compelled to take the appropriate steps to acknowledge these educational trends and reward them accordingly.


The definition of scholarship can take many forms and will vary greatly by academic discipline. One of the fundamental goals of scholarship is to create intellectual work that advances the field of study in which the academic endeavor originates. The holy trinity of tenure and promotion, encompassing publishing, service, and teaching, has always been skewed heavily toward publishing. The impediments to scholarly acceptance of digital media educators closely mirror the challenges that faced earlier academic pioneers of ethnic, Black, and women's studies during the 1960s and 1970s (Jaschik 2009). It can be said that very little has changed since that time. The academy is an institution bound by tradition, and when new fields of study are developed, it often responds to emerging disciplines with hesitation and skepticism.

Under the current system there are numerous institutional biases and obstructions that unnecessarily complicate the pathways to tenure and promotion for digital faculty. Key among these barriers is the traditional peer-review system, which has essentially contracted out the decision-making process for tenure candidates to a select group of academic journals and presses. Because most tenure and promotion committees lack the expertise to critique every discipline, especially fields that span several areas of study, this aging paradigm is not practical for the emerging digital disciplines. Just as other industries outside the academy have been altered by major economic and technical changes, higher education may experience a similar transformation unless the academy begins to adapt (Pearce, Weller, Scanlon, and Kinsley 2010). Without such modifications, many digital scholars, in order to validate their own definition of intellectual excellence, will leave the academy in favor of the higher salaries they can command in the private sector.

Looking back on my own academic career, I am amazed at the naiveté with which I negotiated my academic contract and the methods by which my scholarship would be assessed. As the sole full-time faculty member in a new discipline established by my college, I was completely unaware of the territory that would have to be traversed to fashion appropriate standards for my scholarly evaluation. While my educational and professional experience had equipped me to teach in the digital disciplines, I was ill prepared as a digital media faculty member for navigating the terrain of the academic tenure and promotion process. If any of the recommendations from the MLA had been in place when I was hired to help establish a new digital technology major at my college, my journey through the tenure process might have been a more balanced and constructive experience.

I transitioned to the university from the private sector more than a decade ago, and I have found that my experience is not unique among educators working within the digital humanities and digital media fields. The tenure and promotion system should embrace expanded definitions of acceptable scholarly venues to advance the practice of multimodal scholarship, not only to attract and retain the next generation of digital professionals, but also in order not to discourage new or established faculty members from engaging in technology-based pedagogy and scholarship.


Cheverie, Joan F., Jennifer Boettcher, and John Buschman. 2009. “Digital Scholarship in the University Tenure and Promotion Process: A Report on the Sixth Scholarly Communication Symposium at Georgetown University Library.” Journal of Scholarly Publishing 40:210-30. OCLC 360067692.

Cross, Jeanne Glaubitz. 2008. “Reviewing Digital Scholarship: The Need for Discipline-based Peer Review.” Journal of Web Librarianship 2:1-29. OCLC 652131661.

Fitzpatrick, Kathleen. 2011. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: New York University Press. Kindle edition. OCLC 710019002.

Gold, Matthew K., and George Otte. 2011. "The CUNY Academic Commons: Fostering Faculty Use of the Social Web." On the Horizon 19: 24-32. OCLC 701118378.

Gold, Matthew K., ed. 2012. "The Digital Humanities Moment." In Debates in the Digital Humanities. Minneapolis, MN: University of Minnesota Press. Kindle edition. OCLC 784886612.

Harley, Diane, and Sophia Krzys Acord. 2012. "Peer Review in Academic Promotion and Publishing: Its Meaning, Locus, and Future." Center for Studies in Higher Education (CSHE): 1-117. OCLC 709559995. Accessed February 14, 2013:

Ippolito, Jon, Joline Blais, Owen Smith, Steve Evans, and Nate Stormer. 2009. “New Criteria for New Media.” Leonardo 42:71-5. OCLC 4893498214.

Jaschik, Scott. 2012. “Tenure in a Digital Era.” Inside Higher Ed. Accessed February 14, 2013:

Kolowich, Steve. 2012. "New Seal of Approval." Inside Higher Ed. Accessed April 17, 2012:

Modern Language Association (MLA). 2012. “Guidelines for Evaluating Work in Digital Humanities and Digital Media.” Accessed February 14, 2013:

Nagel, David. 2009. "Most College Students to Take Classes Online by 2014." Accessed February 14, 2013:

Pearce, Nick, Martin Weller, Eileen Scanlon, and Sam Kinsley. 2010. “Digital Scholarship Considered: How New Technologies Could Transform Academic Work.” In Education, 16. OCLC 728081434. Accessed February 14, 2013:

Roscorla, Tanya. 2011. “CUNY Plans to Share Social Network Tools That Break Down Silos.” Accessed February 14, 2013:

Takats, Sean. 2013. "A Digital Humanities Tenure Case, Part 2: Letters and Committees." Accessed February 14, 2013:


About the Author

James Richardson holds an M.P.S. in Interactive Telecommunications from New York University's Tisch School of the Arts and has served as a project manager and consultant for numerous Fortune 500 companies.

During his career he has managed the deployment of multimedia and telecommunication initiatives for companies such as MetLife, Century 21, ADP, Bankers Trust, Suze Orman Inc., and the City University of New York.

Professor Richardson is well versed in Internet technology, game design, digital audio and video production, e-commerce strategy, animation, and web development. His latest project involves creating an interactive iPad application to motivate at-risk youth to find their voice in the information age.

Incorporating the Virtual into the Physical Classroom: Online Mastery Quizzes as a Blended Assessment Strategy

Kyle Beidler, Chatham University
Lauren Panton, Chatham University


An increasing volume of research has supported the assumption that pre-lecture, online, and mastery quizzes can be beneficial pedagogical strategies. However, there has been limited documentation of attempts to combine these pedagogical tools as an assessment of individual course lectures. This paper presents a "blended" instructional approach, which combines an online mastery quiz format with traditional face-to-face meetings within the context of a small graduate course. Preliminary findings suggest that online mastery quizzes incorporated into traditional classroom instruction are a useful means of evaluating weekly course lectures and also provide a catalyst for classroom discussion.



Keywords: Landscape Architecture; Pedagogy; Assessment; Mastery Quizzes


Course quizzes represent a common assessment strategy and teaching technique that instructors have used for generations. Quiz formats have increasingly varied with the advent of digital technologies: there are now pre-lecture, out-of-class, and mastery quiz formats, implemented using both traditional and digital media. Across this growing range of quiz typologies, studies of quizzing have produced somewhat mixed findings from an educational perspective.

Paper-based quizzes given at the start of a class period have been used as a means of encouraging students to be both punctual and prepared for scheduled class meetings. Pre-lecture quizzes are also a common tool used to assess the students’ current understanding of the course material. Generally, such quizzes are believed to increase student engagement. However, research findings have varied in terms of student performance.

For example, Narloch and his colleagues found that students who received pre-lecture quizzes, as compared to no quiz, performed better on exam questions (Narloch, Garbin, and Turnage 2006, 111). This study also suggested that simple objective or low-level questions (e.g., fill-in-the-blank, matching) improved student performance on higher-level assessments such as essay questions (ibid., 112). These findings are similar to those of an additional study suggesting that low-level quiz questions can increase student exam performance. However, that same study contradicts the proposed correlation between low-level questions and higher-order cognitive skills such as deductive exploration (Haigh 2007).

In contrast, others have suggested that pre-lecture quizzes do not automatically lead to increases in student performance as indicated by final grades. A comparative study found that exam scores were not significantly improved in sections of a biology course that included weekly quizzes composed of fill-in-the-blank questions (Haberyan 2003). Connor-Greene found that daily essay quizzes can be a catalyst for thinking within the classroom, but cautioned that the relationship between quizzing and actual learning warrants further study (Connor-Greene 2000).

With the increase of computer technology in higher education, much research has also analyzed the perceived benefits of computerized and online quizzes. Early findings suggested that computerized quizzes can improve exam performance if students use the quizzes to test their knowledge rather than to learn the material (Brothen and Wambach 2001, 293). Others have suggested that online quizzing is as effective as in-class quizzing only after reducing the possibility of cheating by adjusting the question bank and the available time (Daniel and Broida 2004).

Additional studies of online quizzing found that students who elected to use online quizzes performed better on summative exams (Kibble 2007). Kibble's online quizzes were voluntary, however, and thus better-performing students were more likely to use them to improve their performance. A later study was able to control for this selection bias, as well as a number of confounding factors, by using a retrospective regression methodology. Its findings remained consistent with the majority of the literature and suggest that exposure to regular (low-mark) online quizzes has a significant and positive effect on student learning (Angus and Watson 2009, 271).

A study of online, out-of-class quizzes within the context of a small course found that digital quizzing was significantly related only to student engagement and perceptions toward learning, not to student performance (Urtel et al. 2006). Despite the lack of support regarding academic performance, the authors still concluded that the unintended benefits of the online format outweighed those of traditional in-class quizzing. This suggests that additional or secondary benefits may alone justify the use of online quizzes within the context of small courses.

A third quizzing format has been studied in both traditional and virtual contexts. Commonly referred to as "mastery" quizzes, this assessment strategy is not defined or applied consistently within the literature. The distinguishing feature shared among mastery quiz formats is that students have multiple attempts at any given quiz. Typically, in a virtual context, each mastery quiz randomly selects from a pool of previously prepared questions on a designated topic. The random selection of questions fosters a more dynamic interface because, assuming a sufficiently large question bank, multiple attempts are unlikely to be identical.
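The selection mechanic described above, drawing each attempt as a random subset of a larger question bank, can be sketched in a few lines of Python. The bank contents, quiz length, and question identifiers below are hypothetical illustrations, not drawn from the study's actual Moodle configuration.

```python
import random

def draw_mastery_quiz(question_bank, num_questions, rng=None):
    """Draw a random subset of questions for one quiz attempt.

    Because each attempt samples the bank independently and without
    replacement, two attempts are unlikely to be identical when the
    bank is sufficiently larger than the quiz length.
    """
    rng = rng or random.Random()
    return rng.sample(question_bank, num_questions)

# Hypothetical weekly bank of 20 question IDs, 5 questions per attempt.
bank = [f"wk3-q{i:02d}" for i in range(1, 21)]
attempt_1 = draw_mastery_quiz(bank, 5, rng=random.Random(1))
attempt_2 = draw_mastery_quiz(bank, 5, rng=random.Random(2))
print(attempt_1)
print(attempt_2)
```

With a bank of 20 and quizzes of 5, there are over 15,000 possible question combinations, which is why repeated attempts rarely coincide.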

In an early study of digital mastery quizzes, this pedagogical tool was used as an instructional supplement to an online course (Maki and Maki 2001). Students were required to pass a web-based mastery quiz prior to a set deadline. Students were allowed to repeat the quiz and earned course points for passing up to four mastery quizzes. The researchers found that performance on the mastery quizzes was correlated with the student’s performance on exams given in a physical classroom setting (212).

Additional studies have also supported the correlation between online mastery quizzes and exam performance (Johnson and Kiviniemi 2009). Johnson and Kiviniemi’s mastery quiz format required students to take an electronic quiz based on the weekly assigned readings. Administered by a web-based system, the software randomized both questions and answer choices to prevent students from memorizing response-options. However, students were not given a time limit, and there were no apparent controls to limit the potential for cheating in this study. Brothen and Wambach (2004) have suggested that online-quiz time limits are associated with better exam performance because they reduce the opportunity to look up answers in lieu of learning the material.

Other studies have defined a mastery quiz as an “unannounced spot quiz that is presented twice during class, once at the beginning of the lecture period and then again at the end” (Nevid and Mahon 2009, 29). This pre-lecture and post-lecture application of the mastery quiz concept allows students to acquire knowledge on the tested concepts and focuses their attention during the lecture period. The authors of the study found that students showed significant improvements assessed by pre-lecture and post-lecture comparisons. Credits earned on mastery quizzes also predicted exam performance on concepts covered by the mastery quizzes (Nevid and Mahon 2009).

Collectively, this body of literature largely suggests that quizzing is a beneficial pedagogical strategy, but warns that its relationship with student performance has been somewhat inconsistent. This begins to imply that quizzes may offer greater benefits as assessment tools than as teaching and learning interventions. However, none of the studies reviewed has focused on the use of mastery quizzes as a means of assessing an instructor's classroom activities. Therefore, this study highlights the application of, and lessons learned from, a "blended" quizzing approach that incorporated web-based, pre-lecture and post-lecture mastery quizzes within a physical classroom setting as a means of assessing the effectiveness of face-to-face lectures.1

Methods and Procedure

Data were collected during a single semester in a landscape architecture construction course at a small East Coast university. The program is offered only at the graduate level, and thus the course comprised a small set of graduate students (N = 11, 73% women). The class generally reflected the graduate school's ethnic (75% White) and age (mean age = 29.3) composition. Permission was received from the university's institutional review board to use course and survey data to analyze the effects of the mastery quizzes implemented in the course.

Previous pedagogical studies of landscape architecture construction studios have suggested that there are significant differences between the learning preferences of undergraduate and graduate students. In a 2003-04 survey, online lectures were found to be highly preferred by graduate students compared to undergraduate students (Li 2007). This finding was supported by a 2011-12 survey, which reported that undergraduate students significantly preferred in-class lectures using PowerPoint slides (Kim, Kim, and Li 2013). The authors of this multiyear study concluded that undergraduate landscape architecture students are "more likely to rely on the help from instructors or classmates rather than to prefer individual or independent learning" (ibid., 95).

Differences in learning styles between undergraduate and graduate cohorts have also been reported in the context of “e-learning” outside of the landscape architectural discipline. Novice undergraduate e-learners significantly differed from graduate e-learners in two indexed learning style domains, including information perception and information understanding (Willems 2011).

Given the context of the research within a graduate program, it was impossible for this study to make similar comparisons across learner cohorts. However, it is important to note that the course and its materials have been developed within a context of a first-professional curriculum. Therefore, the materials, concepts, and topics covered by this course do not dramatically vary whether it is offered on an undergraduate or graduate level. At either level, the learning objectives are largely dictated by accreditation standards and professional expectations.

For this study, we administered a total of 24 digital mastery quizzes throughout a single semester in a pre-lecture/post-lecture format. Specifically, online quizzes were developed using the university’s learning management system (Moodle). Each quiz was composed of low-level objective questions (true/false and multiple choice) and higher-level graphic problems (short-answer). The short-answer questions required students to solve a given problem presented in a graphic image. Thus, short-answer questions are classified here as requiring higher-level thinking skills because they required the students to “apply” concepts covered in previous lectures. In comparison, the lower-level questions simply asked students to “recall” new concepts presented in the week’s assigned reading.2

Six pre-lecture and six post-lecture quizzes were given at the start and end of each class prior to the mid-term examination. An additional six pre-lecture and six post-lecture quizzes covered the second half of the term and the material leading up to the final exam. In total, the quizzes accounted for 10% of each student's final grade. All quizzes were announced prior to each lecture.

To limit cheating and manage the classroom schedule, a time limit was set on each quiz. The online quizzes were administered and taken by the students in the physical classroom at the start and end of each lecture and questions were randomly selected from a weekly question bank. Each class period was scheduled for three hours per week and allowed for ample time to implement the quiz format. Students were required to have laptops for every class meeting.


Given the limited sample size (n = 11), it was not meaningful to test for a correlation between quiz and exam performance. However, as depicted in Figure 1, the descriptive statistics reveal a consistently higher post-lecture average quiz score. All quizzes were based on 10 available points. The average pre-lecture mastery quiz score for the semester was 71.8%. In comparison, the average post-lecture score was 88.9%.

Quiz/week Number    Pre-lecture Mean Score (SD)    Post-lecture Mean Score (SD)    Exam Mean (SD)
1                   8.32 (1.01)                    9.22 (1.02)
2                   6.26 (2.14)                    8.35 (1.37)
3                   6.50 (2.06)                    9.12 (1.34)
4                   6.45 (2.35)                    8.45 (2.10)
5                   6.80 (2.34)                    9.32 (0.83)
6                   7.88 (1.64)                    9.54 (0.89)
Midterm Exam                                                                       84.73 (14.33)
7                   8.36 (2.01)                    9.82 (0.57)
8                   7.36 (1.49)                    8.09 (1.73)
9                   8.09 (1.73)                    10.00 (0.00)
10                  4.27 (2.56)                    6.32 (2.83)
11                  8.55 (1.08)                    9.45 (0.78)
12                  7.30 (2.44)                    9.06 (0.91)
Final Exam                                                                         87.73 (5.06)

Figure 1. Pre-lecture and post-lecture quiz averages compared to exam scores.

All quiz scores are out of a possible 10 points. All exam scores are out of a possible 100 points.

Using a one-way analysis of variance (ANOVA), we found that the mean pre-lecture and mean post-lecture scores differed significantly (F = 66.086, p < .001). Furthermore, the particular week did not predict the difference between pre- and post-lecture scores (F = 0.899, p > .05). Given the controlled testing environment, these results begin to suggest a positive outcome in terms of the students' understanding of the material. This finding is supported by the results of the course evaluation, which indicated that all respondents (n = 9) believed the quizzes had aided their learning of the course material. The majority of respondents also agreed that the mastery quizzes aided in their identification of new topics. In addition, students believed that the quizzes aided their review of course topics and encouraged good reading habits.
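The one-way ANOVA used here reduces to a ratio of between-group to within-group variance, which can be computed in pure Python from the textbook sums-of-squares formulas. As a minimal sketch: the raw per-student scores are not published, so the example below feeds the procedure the twelve weekly mean scores from Figure 1 instead, which yields a smaller F statistic than the reported 66.086 but the same qualitative conclusion that pre- and post-lecture scores differ significantly.

```python
def one_way_anova(*groups):
    """Return (F, df_between, df_within) for k groups of observations."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Weekly mean scores from Figure 1 (stand-ins for the unpublished raw scores).
pre = [8.32, 6.26, 6.50, 6.45, 6.80, 7.88, 8.36, 7.36, 8.09, 4.27, 8.55, 7.30]
post = [9.22, 8.35, 9.12, 8.45, 9.32, 9.54, 9.82, 8.09, 10.00, 6.32, 9.45, 9.06]
f_stat, df_b, df_w = one_way_anova(pre, post)
print(f"F({df_b}, {df_w}) = {f_stat:.2f}")
```

Even on the aggregated weekly means, the resulting F comfortably exceeds the .05 critical value of roughly 4.3 for F(1, 22), mirroring the paper's finding.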

While the findings are not generalizable, these preliminary data suggest that positive learning outcomes can be measured between the pre- and post-lecture average quiz scores. Whether weekly mastery quizzes actually increase learning cannot be answered with this contextual data. As others have pointed out, many factors influence test scores, including the wording and formatting of individual questions (Urtel et al. 2006). Therefore, a more appropriate question for this type of data is: “How can weekly mastery-quiz results inform classroom instruction?”

Assessing the Effectiveness of Individual Lectures

As alluded to previously, the course data generated by the mastery quiz format can also be used to gauge teaching effectiveness. By graphically charting and comparing each weekly mean, it is possible to visualize the relative effectiveness of each course lecture (see Figure 2). This technique is especially useful in smaller courses with limited enrollment where more robust statistical analysis is not possible.


Figure 2. Graphic comparison of pre-lecture and post-lecture mean scores.

Figure 2 displays the consistent improvement suggested by the data in the previous table. As expected, the post-lecture average scores exceed the pre-lecture averages throughout the semester. More importantly, the distance between the charted lines begins to depict the relative effectiveness of each face-to-face lecture. In short, the degree of student improvement (and, arguably, the effectiveness of a given week’s lecture, material, and planned activities) is revealed in the space between the charted averages.

From a theoretical perspective, this simple interpretation of the descriptive statistics allows us to more closely assess the quality of the instruction as opposed to student performance. We would argue that this chart begins to identify which specific weeks of instruction need the greatest improvement. This concept can be more clearly depicted by charting the difference between pre-lecture and post-lecture scores against the semester average improvement of the mean scores (see Figure 3).

Figure 3. Average weekly improvement in post-lecture scores as compared to the semester’s average improvement on weekly quizzes.


On average throughout the semester, students scored 1.72 points higher on a post-lecture quiz than on the corresponding pre-lecture quiz. Figure 3 shows which weeks drop below this average. This analysis thus helps the instructor identify specific weeks in the lesson plan that should be targeted for improvement. The efficiency of this course assessment strategy is matched by the promptness with which students receive feedback from the digital quiz format.
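The week-flagging heuristic described above is simple enough to automate. A sketch using the weekly means from Figure 1 (the 1-based week numbers are our own labels):

```python
# Weekly mean quiz scores (out of 10), transcribed from Figure 1.
pre  = [8.32, 6.26, 6.50, 6.45, 6.80, 7.88, 8.36, 7.36, 8.09, 4.27, 8.55, 7.30]
post = [9.22, 8.35, 9.12, 8.45, 9.32, 9.54, 9.82, 8.09, 10.00, 6.32, 9.45, 9.06]

# Improvement (post minus pre) per week, and the semester average gain.
gains = [round(b - a, 2) for a, b in zip(pre, post)]
avg_gain = sum(gains) / len(gains)  # ≈ 1.72 points, as reported

# Weeks whose improvement fell below the semester average are candidates
# for revised lesson plans (week numbers are 1-based).
flagged = [week for week, g in enumerate(gains, start=1) if g < avg_gain]
print(flagged)  # → [1, 6, 7, 8, 11]
```

This reproduces the logic of Figure 3: weeks below the average-improvement line are flagged for instructional attention.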

Based on our experiences with this technique, we would argue that the digital mastery quiz format is a useful course assessment strategy that can guide instructional efforts. In addition, the efficiency of the digital format and the speed with which feedback is generated outweigh, for us, any remaining concerns regarding the statistical relationship between quiz results and student exam performance. The following section therefore highlights additional software techniques that aid in the interpretation of the data generated by digital mastery quizzes.

Google Motion Charts

As noted earlier, both pre-lecture and post-lecture quiz scores were recorded in the university’s learning management system and charted as a series of averages. While this proved a convenient way to track quiz scores, it created several challenges for analyzing individual student progress over time. Also, given the relatively small sample size, it became important to consider the data from multiple angles in order to make full use of it. With these two issues in mind, we searched for another analysis tool. After experimenting with several different visualization tools, Google Motion Chart was selected for its ability to provide animation and multi-dimensional analysis in an interactive, easy-to-understand way.

In addition, the Google Motion Chart was a freely available gadget within Google Docs (now Drive), making it an easy and viable tool for us, and others, to use. In 2007, Google acquired Trendalyzer, the software popularized by Hans Rosling, and incorporated it as a Google Gadget that can be inserted into any Google Spreadsheet.3 In essence, the motion chart is a Flash-based chart used to explore several indicators over time. Again, this made it the ideal tool for our purposes, as it provides up to four dimensions for analysis. As illustrated in Figures 4 and 5, the parameters we used were pre-lecture quiz scores (x-axis), post-lecture quiz scores (y-axis), and the difference between the pre- and post-lecture scores for individual students (color).

Figure 4. Data formatted in Google Spreadsheet and converted to a Google Motion Chart.


Once the data is converted to a Google Motion Chart, a “play” button appears in the lower left of the chart. When clicked, this button sets the data in motion. An optional “trails” feature draws lines that assist in tracking individual student progress over time (see Figure 5). The Google Motion Chart allows these variables to be quickly modified as needed by choosing a different variable from the drop-down list provided. Once the chart is set in motion, it becomes easy to focus on one aspect of the data set.
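Preparing the spreadsheet is mostly a matter of column order: as we understand the gadget, motion charts expect the entity identifier in the first column and a time value in the second, with the remaining numeric columns selectable as axes or color. A sketch of shaping per-student weekly records into that layout (the student names and scores below are illustrative, not data from our study):

```python
import csv
import io

# Hypothetical per-student records: (student, week, pre-score, post-score).
records = [
    ("Student A", 1, 8, 9),
    ("Student A", 2, 6, 9),
    ("Student B", 1, 7, 10),
    ("Student B", 2, 5, 8),
]

# Entity first, time second; remaining numeric columns become chart variables.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["student", "week", "pre_score", "post_score", "gain"])
for student, week, pre, post in records:
    writer.writerow([student, week, pre, post, post - pre])

csv_text = buf.getvalue()
```

The resulting CSV can be pasted into a Google Spreadsheet, after which the motion chart gadget is inserted over the data range; the `gain` column then serves as the color dimension described above.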

Figure 5. A Google Motion Chart illustrating a student’s scores over the course of the semester.


In our example, we focused on the difference between the pre-lecture and post-lecture scores, represented by color and charted by the gadget over time. The more blues and greens displayed, the smaller the difference between pre- and post-lecture scores (see Figure 6); the more yellows and reds, the greater the difference (see Figure 7). This quickly gives instructors a general sense of the score gaps, and thus of student performance on the quizzes.

Figure 6. A Google Motion Chart (captured as a movie) illustrating less positive learning outcomes, as evidenced by the cooler colors.

Figure 7. A Google Motion Chart (captured as a movie) illustrating more positive learning outcomes, as evidenced by the warmer colors.


Anecdotally, we found that the mastery quizzes did encourage regular, punctual attendance. All quizzes were electronically “opened” and “closed” to students based on the precise timing of the physical class meetings. The learning management software also allows the instructor to restrict access based on IP address; in larger courses, these settings could potentially increase attendance. However, whether the quizzes actually promoted the completion of reading assignments prior to class warrants further investigation.

The consistent improvement in post-lecture averages suggests that the mastery quiz format guided students’ understanding of the material by signaling important concepts. Students overwhelmingly expressed favorable attitudes toward the mastery quizzes in their evaluation of the technique. These results seem to suggest positive learning outcomes; however, the findings are not generalizable, and outcomes for different learner types should be considered in future studies.

Our experience highlights the usefulness of the digital mastery quiz format as a course assessment strategy. The efficiency and clarity with which digital quizzes provide feedback to the instructor regarding his or her relative success in the classroom present compelling justification for implementing this strategy in other courses. The benefits of the digital quiz format are further enhanced by the ability of current web-based software to aid in visualizing and analyzing the data.

The selection of the Google Motion Chart as our visualization tool provided a unique opportunity not only to see changes in both individual and class performance over time but, more importantly, to let the instructor monitor quiz results concurrently for each week of the semester. As additional data is added to the Google Spreadsheet, the motion chart should provide the instructor with a means of quickly analyzing student progress over time, making this a useful retrospective tool to help inform teaching decisions. This is just one simple interpretation of the data; we feel, however, that there is value in being able to visualize data in this manner. It can provide instructors with the ability to see classroom trends and patterns over time. However, we do not want these assessment benefits to overshadow the perceived pedagogical value of the quizzing format as a teaching technique.

The digital mastery quiz format also presented equally important instructional opportunities. The pre-lecture quizzes were designed to provide no feedback beyond notifying students whether they had answered each question correctly. This aspect of the quiz design was implemented to focus students’ attention on specific content they seemingly did not understand. Anecdotally, this design detail seemed to significantly increase both the number of questions at the start of the lecture and the overall engagement of the students during class, as compared to previous semesters in which the course was taught.

From this perspective, digital mastery quizzes presented a valuable catalyst for class discussion. As Connor-Greene (2000) suggests, assessment and testing can become a dynamic process, rather than a static measure of student knowledge, when used to generate classroom conversation. Therefore, we believe that the blended nature of the digital mastery format, as implemented in our study, was critical in meeting our educational objectives. Specifically, the pre-lecture quiz administered at the start of each class combined the efficiency of online quizzing with the opportunity for immediate and collaborative discussion in the physical classroom. This approach seemingly encouraged students to “test their knowledge” and then use the scheduled class period as an opportunity to follow up with questions in a more interactive and personal forum.

Finally, daily quizzes can also be a catalyst for multiple levels of thinking if more robust question types are included in the quiz design. We included both “recall” and “applied” short-answer questions in our quizzes. To encourage higher-order and critical thinking, future development of the mastery quiz format should focus on the quality and depth of thinking required by distinct question types. Assessment strategies and techniques must be consistent with the level of thinking an instructor is attempting to encourage in the classroom. In terms of digital and online quizzes, new electronic question types such as “drag-and-drop” responses are increasingly allowing instructors to develop higher-level assessments. Therefore, all educators could benefit from future research that deepens our understanding of the relationship between digital question types, quiz outcomes, and Bloom’s (1956) Taxonomy.


Angus, Simon, and Judith Watson. 2009. “Does regular online testing enhance student learning in the numerical sciences? Robust evidence from a large data set.” British Journal of Educational Technology no. 40 (2):255-272.

Bloom, Benjamin. 1956. Taxonomy of educational objectives, Handbook I: The cognitive domain. New York: David McKay.

Brothen, Thomas, and Cathrine Wambach. 2001. “Effective student use of computerized quizzes.” Teaching of Psychology no. 28 (4):292-294.

———. 2004. “The Value of Time Limits on Internet Quizzes.” Teaching of Psychology no. 31 (1):62-64.

Connor-Greene, Patricia. 2000. “Assessing and promoting student learning: Blurring the line between teaching and testing.” Teaching of Psychology no. 27 (2):84-88.

Daniel, David, and John Broida. 2004. “Using web-based quizzing to improve exam performance; Lessons learned.” Teaching of Psychology no. 31 (3):207-208.

Haberyan, Kurt. 2003. “Do weekly quizzes improve student performance on general biology exams.” The American Biology Teacher no. 65 (2):110-114.

Haigh, Martin. 2007. “Sustaining learning through assessment: An evaluation of the value of a weekly class quiz.” Assessment & Evaluation in Higher Education no. 32 (4):457-474.

Johnson, Bethany, and Marc Kiviniemi. 2009. “The effect of online chapter quizzes on exam performance in an undergraduate social psychology course.” Teaching of Psychology no. 36:33-37.

Kibble, Jonathan. 2007. “Use of unsupervised online quizzes as formative assessment in a medical physiology course: Effects of incentives on student participation and performance.” Advances in Physiology Education no. 31:253-260.

Kim, Young-Jae, Jun-Hyun Kim, and Ming-Han Li. 2013. Learning vehicle preferences and web-enhanced teaching in landscape architecture construction studios. Paper read at Council of Educators in Landscape Architecture Conference: Space, Time/Place, Duration, March 27-30, 2013 at Austin, Texas.

Li, Ming-Han. 2007. “Lessons learned from web-enhanced teaching in landscape architecture studios.” International Journal on E-Learning no. 6 (2):205-212.

Maki, William, and Ruth Maki. 2001. “Mastery quizzes on the web: Results from a web-based introductory psychology course.” Behavior Research Methods, Instruments, & Computers no. 33 (2):212-216.

Narloch, Rodger, Calvin Garbin, and Kimberly Turnage. 2006. “Benefits of prelecture quizzes.” Teaching of Psychology no. 33 (2):109-112.

NCAT, The National Center for Academic Transformation. 2012. Program in course redesign: The supplemental model [cited October 17, 2012]. Available from

Nevid, Jeffrey, and Katie Mahon. 2009. “Mastery quizzing as a signaling device to cue attention to lecture material.” Teaching of Psychology no. 36:29-32.

Urtel, Mark, Rafael Bahamonde, Alan Mikesky, Eileen Udry, and Jeff Vessely. 2006. “On-line quizzing and its effect on student engagement and academic performance.” Journal of Scholarship of Teaching and Learning no. 6 (2):84-92.

Willems, Julie. 2011. “Using learning styles data to inform e-learning design: A study comparing undergraduates, postgraduates and e-educators.” Australasian Journal of Educational Technology no. 27 (6):863-880.


About the Authors

Kyle Beidler is an Assistant Professor at Chatham University in the Landscape Architecture Program. His research and teaching interests include design education, neighborhood planning, sustainable site engineering practices and the integration of digital technologies with design communication. Kyle received his PhD in Environmental Design and Planning from Virginia Tech and recently completed Chatham’s Faculty Technology Fellows Program from which this project and article originated.

Lauren Panton is the Manager of Instructional Technology and Media Services for Chatham University. She leads the Faculty Technology Fellow Program, which supports faculty with technology-enhanced projects in teaching, learning, and scholarship. Her academic interests include the scholarship of teaching, as well as technologies related to data visualization, multiple modalities and blended learning.

  1. In the context of this paper, a supplemental model of blended learning is conceptualized as a pedagogical strategy that retains the “basic structure of the traditional course and uses technology resources to supplement traditional lectures and textbooks” (NCAT 2012).
  2. For a complete discussion regarding the relationship between quiz questions and Bloom’s Taxonomy of Educational Objectives, please see Connor-Greene (2000).
  3. Google has announced that in 2013 it will be deprecating Gadgets in Google Spreadsheets; however, the motion chart type will be incorporated as a regular chart option (to insert one of these charts, select Chart from the Insert menu). No specific date has been announced; please check the Google Drive support site for additional information.
