
Teaching Literature Through Technology: Sherlock Holmes and Digital Humanities

Abstract

How do we incorporate technology into the contemporary classroom? How do we balance the needs of teaching literature with teaching students to use that technology? This article takes up “Digital Tools for the 21st Century: Sherlock Holmes’s London,” an introductory digital humanities class, as a case study to address these questions. The course uses the Holmes stories as a corpus on which to practice basic digital humanities methodologies and tools, including visualizations, digital archives and editions, mapping (GIS), and distant reading, in order to better understand the texts themselves. This approach lets students find new patterns in well-known texts, explore the function of space in literature, and historicize their own technological moment.

 

 

As Digital Humanities materializes in undergraduate classrooms, faculty face the problem of how to teach introductory methods courses: the umbrella term digital humanities covers such a wide array of practices—from building digital editions and archives to big data projects—that even defining the term is no easy task. For anyone trying to create an interdisciplinary digital humanities class, the challenges multiply: the course needs to be applicable to students in such diverse fields as History, English, Anthropology, Music, Graphic Design, or Education, all of which examine different corpora; yet it still needs a unifying concept and corpus so the students can see how applying concepts such as digital mapping or distant reading can spark new insights.

To address these issues, I created “Digital Tools for the 21st Century: Sherlock Holmes’s London,” which uses Sir Arthur Conan Doyle’s Sherlock Holmes stories as a corpus on which to practice basic digital humanities methodologies and tools. The Holmes stories provide the perfect set of texts for a DH class, as they are flexible enough for us to use them in every unit: we use visualization tools (such as Voyant and word trees) to look for patterns in words and in sentence structure within a story, build a digital archive of Holmes artifacts, make TEI-encoded digital editions of Holmes stories, create maps of where characters travel, and topic model all 56 short stories to find thematic patterns. With this structure, students learn some of the most important digital humanities methodologies, analyze the Holmes stories from multiple perspectives, and use the character of Holmes as a model for both humanistic and scientific inquiry.
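For readers who want a concrete sense of the topic-modeling exercise mentioned above, the sketch below shows one common way to fit a topic model over the short stories using scikit-learn. It is not the course's lab code, and it assumes the 56 stories have been saved as plain-text files in a hypothetical stories/ directory.

```python
# Minimal topic-modeling sketch (illustrative, not the course's lab code).
# Assumes the 56 stories are plain-text files in a local "stories/" directory.
import glob

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

texts = []
for path in sorted(glob.glob("stories/*.txt")):
    with open(path, encoding="utf-8") as f:
        texts.append(f.read())

# Build a document-term matrix, dropping very common and very rare words.
vectorizer = CountVectorizer(stop_words="english", max_df=0.9, min_df=2)
counts = vectorizer.fit_transform(texts)

# Fit a 10-topic LDA model (the number of topics is an arbitrary starting point).
lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(counts)

# Print the ten highest-weighted words for each topic.
vocab = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [vocab[j] for j in topic.argsort()[-10:][::-1]]
    print(f"Topic {i}: {', '.join(top_words)}")
```

Each printed topic is only a ranked list of co-occurring words; as in the course, the interpretive work of naming and historicizing those clusters still falls to the reader.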

The stories also facilitate an interdisciplinary approach: they touch on issues of gender, class, race, the arts, politics, empire, and law. This ensures that students from almost any field can find something relevant to their major. Perhaps most helpfully for this class, Holmes solves his cases not just through his quasi-supernatural cognitive abilities, but also through his mastery of Victorian technology, including the photograph, railroad, and daily periodicals. This focus on technology enables students to address the similarities between the industrial and digital revolutions: the anxieties that accompany the rise of blogs and Twitter echo Victorian concerns about the proliferation of print and periodicals, as both audiences were wary about the increased public voice such technologies could invite. These connections help students historicize their own technological moment and better understand both the Victorian period and the discourses around modern technology. The course begins with close-reading and discussion of four Holmes stories to introduce students to the central themes of the class and of Victorian studies, and we use these stories as our core texts with which we practice digital humanities methodologies, so students can see first-hand how visualizations, maps, archives, and distant reading can lead us to new interpretations.

Teaching these methodologies, from digital archives to mapping, requires a tripartite structure that I have dubbed “Read, Play, Build.” First, students read articles from books and blog posts about the pros and cons of each methodological approach. They then examine current projects to discuss the ways each approach enhances scholarly fields and poses new research questions. Each unit concludes with an in-class lab component, in which students build small projects on the Holmes stories using a well-known tool and analyze the result. This structure ensures that students receive both theoretical and practical experience with each methodology and can see first-hand its strengths and weaknesses.

This structure is particularly successful in the digital archives unit, in part because the Holmes stories themselves show the benefit of compiling archives: Holmes maintains an archive, or “index,” which he consults regularly. Watson explains its significance in the story “A Scandal in Bohemia,” which we read in class: “For many years he had adopted a system of docketing all paragraphs concerning men and things, so that it was difficult to name a subject or a person on which he could not at once furnish information” (Conan Doyle [1891] 2006, 5). Holmes’s compulsion to categorize and preserve is a central humanist task—we use and compile archives of our own (whether digital or physical) in the course of our research all the time—and in this story, we see Holmes using the archive to solve cases. After seeing the archive in action, used and compiled by one of the great fictional geniuses of Western literature, the students are more ready to build their own archives.

Before building, however, we begin by examining a seminal work of archive theory: Jerome McGann’s 1996 essay, “Radiant Textuality.” From it, students learn how digital archives preserve works in danger of disintegrating, grant access to works from all over the world to make scholarship more equitable, and enable interdisciplinary and multimodal scholarship by including audio and video in ways that conventional print scholarship cannot. Students then examine McGann’s famous digital project mentioned in the article, The Rossetti Archive, which includes scans of every known painting, sketch, poem, manuscript, and translation produced by Pre-Raphaelite artist Dante Gabriel Rossetti, as well as essays on the importance and critical history of each item. Once the students have learned how to evaluate archives from their study of The Rossetti Archive, they apply this knowledge themselves by adding to a class archive of Holmes artifacts at holmesiana.net. This archive is powered by the content management system Omeka: a tool that lets people easily create websites without needing web design or development experience. Omeka is particularly useful for this project because, unlike other content management systems, it is specifically designed for curating digital collections of objects that resemble online museum exhibits. For this assignment, students choose three items related to Holmes (e.g. images, video or audio clips, or websites), upload them to the archive, group them into a collection, and then create an “exhibit,” or an essay that uses the items as illustrations. These exhibits range from analyses of portrayals of Irene Adler across multiple adaptations to discussions of the soundtrack in the Robert Downey Jr. Sherlock Holmes movie. This assignment does more than just teach students how to use a tool: it helps them see the broad appeal of Holmes stories, their role in contemporary popular culture, and, most importantly, the potential for digital tools to change the medium in which we make our scholarly arguments.

We conclude our archive unit by using Book Traces, a tool invented by Andrew Stauffer at the University of Virginia. Book Traces collects examples of nineteenth-century marginalia, or traces of previous readers—including dedications, inscriptions, pressed flowers, newspaper clippings—from nineteenth-century books found in the stacks (not Special Collections) of college and university libraries. When users find these traces, they upload images of the traces (and transcriptions if possible) to booktraces.org, thus contributing to a crowdsourced archive of how nineteenth-century readers interacted with books. We work with Stephan Macaluso, a librarian at SUNY New Paltz, who teaches the students how to recognize Spencerian handwriting (as opposed to Copperplate or the Palmer Method) so students can figure out whether the notes in the books were written before or after 1923. Students also learn how to identify notes written in steel-point or fountain pen rather than ballpoint to further aid them in dating the traces, before they are let loose in the stacks to find their examples. Even though only 2,000 books in our library are from before 1923, my students have had great success finding items for Book Traces. For instance, one student found the book Shakespeare: The Man and his Stage with the inscription "To Barry Lupino . . . a souvenir, Theatre Royal Huddersfield, July 16, 1923 from Alfred Wareing": with some research, she was able to determine that Lupino was a British actor, and Wareing, a theatrical producer known for mounting demanding productions and for creating the Theatre Royal. From this, my student concluded that the book had been a gift from the producer to an actor in the production, and that those involved had used it to inform their Shakespeare productions. During the course of this project in Fall 2015, my students uploaded the 400th unique volume to Book Traces, and were thanked by Andrew Stauffer himself over Twitter.

This project has multiple benefits. First, it introduces students to the library. Many of my students are in their first semester of college and have never done research, been in the stacks, or looked for a book by call number, and Book Traces turns a library day into a fun and educational scavenger hunt. Book Traces also teaches students about the importance of libraries, even in a digital world: since so many books are digital and since shelf space is expensive, many libraries are forced to sell or destroy books that have free copies online. This project demonstrates why each book is important, and why it is not sufficient to have an online edition only. The assignment also encourages students to rethink their definition of a book: it is not merely the words of a story or poem, but the physical object itself, with all the marks that tell its history and highlight the differences in how nineteenth-century and modern readers used books and understood works of literature. Finally, the project lets students participate in a high-profile digital humanities project: they apply their learning outside the confines of the classroom by crowdsourcing, collaborating with each other and with a librarian, communicating directly with a well-known scholar, and creating new knowledge that will further scholarship on the nineteenth century.

The class also uses simple visualization tools to learn more about the Sherlock Holmes stories. For example, as an introduction to visualizations, students read articles about the pros and cons of word clouds. For any readers new to this phenomenon, word clouds are visualizations of word frequency in texts, in which words appear larger the more times they occur. Students make word clouds of Holmes stories, write blog posts on their findings, and then discuss the results with their classmates. They have made several interesting observations, particularly with "A Scandal in Bohemia." "Scandal" is the only work in the Holmes corpus involving Irene Adler, the only woman who outwits Holmes. Adler's ingenuity causes Holmes to reevaluate his opinion of women: as Watson writes at the story's end, Holmes "used to make merry over the cleverness of women, but I have not heard him do it of late" (Conan Doyle [1891] 2006, 15). One might imagine that a story that revolves around a woman and the worth of women would mention words related to women (such as "woman," "women," or "Miss") at least as frequently as, if not more frequently than, words relating to men, especially since the story begins and ends by foregrounding Adler's gender.[1] However, the word cloud actually shows that, although much of the narrative revolves around finding Adler and her photograph, the story contains far more references to men than to women: "men," "man," "Mr.," and "gentleman" occur 45 times, whereas "woman," "women," "lady," and "miss" occur 27 times. This difference suggests that, while the text focuses on one woman's femininity, its sentences themselves focus more on the actions of men.

A word cloud of “A Scandal in Bohemia” in which “Holmes” is the largest word, closely followed by “man” and “photograph.”

Figure 1: Word Cloud of “A Scandal in Bohemia”
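A tally like the one above is easy to reproduce outside a word-cloud tool. The sketch below counts the same gendered terms in Python; the file name is a placeholder, and the exact totals will vary slightly with the edition and tokenization used.

```python
# Count gendered terms in a story (illustrative; the file name is a placeholder,
# and exact totals depend on the edition and tokenization).
import re
from collections import Counter

with open("scandal_in_bohemia.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)
male_terms = ["men", "man", "mr", "gentleman"]
female_terms = ["woman", "women", "lady", "miss"]

print("male-coded terms:  ", sum(counts[w] for w in male_terms))
print("female-coded terms:", sum(counts[w] for w in female_terms))
```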

 

Students wanted to investigate gender in “A Scandal in Bohemia” at a more sophisticated level than a word cloud allows, so they switched to “word trees,” which, as Wattenberg and Viégas (2008) have explained, are “graphical version[s] of the traditional ‘keyword-in-context’ method [that] enable[…] rapid querying and exploration of bodies of text.” Word trees provide a more granular display of sentence construction and patterns by showing how particular words appear in context: users upload a text, search for a word, and are shown a visualization of the words that appear immediately before or immediately after that word in the text. My students uploaded the text of “A Scandal in Bohemia” into Jason Davies’s word tree tool to compare how Conan Doyle used the words “Holmes” and “Adler.” For example, a search for “Holmes” demonstrates that his name is often followed by verbs of action: Holmes “whistled,” “caught,” “laughed,” “scribbled,” “dashed,” “rushed,” “staggered,” etc. Essentially, then, Holmes is characterized by his movements and actions in the story, even though his actions end up being fruitless, as Adler evades him.

A screenshot of Jason Davies’s Word Tree tool showing the word “Holmes” and all words that come immediately after it in the story “A Scandal in Bohemia”: “Holmes” is sometimes followed by punctuation (e.g. a period or comma), but usually by verbs (“whistled,” “caught,” “laughed,” etc.)

Figure 2: Word Tree with “Holmes” from “A Scandal in Bohemia”

 

The word “Holmes” is often preceded by “said,” “remarked,” “asked,” or by the rest of his name and title (e.g. “Mr. Sherlock Holmes”). Syntactically, then, throughout the story, Holmes is associated with his actions and speech, and has a high degree of agency.

A screenshot of Jason Davies’s Word Tree tool showing the word “Holmes” and all words that come immediately before it in the story “A Scandal in Bohemia”: “Holmes” is most often preceded by “Mr. Sherlock” or “said.”

Figure 3: Word Tree with “Holmes” from “A Scandal in Bohemia”

 

A search for “Adler,” however, shows that her name is only followed by a verb once: the word “is” (in the phrase “is married”). Even though she, like Holmes, takes action repeatedly in the story, those actions are not syntactically associated with her name, and in fact, her action is technically associated with losing her name (as she becomes Norton rather than Adler). “Adler” is most frequently followed immediately by punctuation: a question mark, commas, and periods. Unlike Holmes, then, her name is syntactically associated with pausing and stopping, and thereby with silence and passivity.

A screenshot of Jason Davies’s Word Tree tool showing the word “Adler” and all words that come immediately after it in the story “A Scandal in Bohemia”: “Adler” is usually followed by punctuation (comma, period, or question mark), and only once by a verb (“is”).

Figure 4: Word Tree with “Adler” from “A Scandal in Bohemia”

 

When we reverse the tree and see what comes before “Adler,” we see that every instance is about her name: “Adler” is always preceded by “of Irene,” “Irene,” “Miss,” or “née” (as in “Irene Norton, née Adler”). Again, unlike Holmes, whose name is prefaced by words associated with speech (and only sometimes by his full name), Adler is linked only to her name, not to her words, which further diminishes her agency.

A screenshot of Jason Davies’s Word Tree tool showing the word “Adler” and all words that come immediately before it in the story “A Scandal in Bohemia”: “Adler” is usually preceded by “of Irene,” “Irene,” “Miss,” or “nee.”

Figure 5: Word Tree with “Adler” from “A Scandal in Bohemia”

 

From these visualizations, we can see that, while the story does feature a proto-feminist plot, in which an intelligent, remarkable woman defeats Holmes, its sentence structure does not endorse that standpoint. Instead, the sentence construction renders Adler as a passive figure whose agency is only exercised in marriage. Word trees help us complicate the conventional feminist interpretation of the text.
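The first level of such a word tree is essentially a keyword-in-context tally, and readers can approximate it in a few lines of Python. The sketch below is not Davies's tool, which renders full branching trees interactively, and the file name is again a placeholder.

```python
# Rough keyword-in-context sketch: tally the tokens immediately before or after a name.
# This only approximates the first level of a word tree; the file name is a placeholder.
import re
from collections import Counter

def neighbors(text, target, direction="after"):
    # Keep words and sentence punctuation as separate tokens so that pauses show up too.
    tokens = re.findall(r"[A-Za-z']+|[.,;:?!]", text)
    offset = 1 if direction == "after" else -1
    hits = [
        tokens[i + offset]
        for i, tok in enumerate(tokens)
        if tok.lower() == target.lower() and 0 <= i + offset < len(tokens)
    ]
    return Counter(hits)

with open("scandal_in_bohemia.txt", encoding="utf-8") as f:
    story = f.read()

print(neighbors(story, "Holmes", "after").most_common(10))   # verbs of action and speech
print(neighbors(story, "Holmes", "before").most_common(10))  # "Sherlock," "said," etc.
print(neighbors(story, "Adler", "after").most_common(10))    # largely punctuation
print(neighbors(story, "Adler", "before").most_common(10))   # "Irene," "Miss," etc.
```

Because the tokenizer keeps punctuation as separate tokens, the same function surfaces the pattern discussed above: "Adler" is followed far more often by punctuation than by verbs.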

Visualization technologies can illuminate more than patterns in sentences: they can also provoke new insights about geography in texts. Holmes stories lend themselves to spatial analysis, as they are very invested in locations: Holmes, Watson, and the criminals they track down often travel across London and beyond, and the stories painstakingly describe the paths they traverse. “The Blue Carbuncle,” for example, mentions the exact streets Holmes and Watson walk through en route to Covent Garden. However, many students have never been to London, and they know even less about areas of London in the nineteenth century. To address this, students read articles about GIS (Geographic Information Systems) to understand how digital mapping projects require plotting data on maps and looking for patterns, rather than just digitizing historical maps. Then, they learn how to use a number of nineteenth-century mapping projects: “The Charles Booth Online Archive” is a digitized, searchable version of Booth’s poverty maps that lets users see the income level of each area of London; “Locating London’s Past” lets users map crime data from the Old Bailey archive in addition to data on coroners’ records, poor relief, and population data; “London Buildings and Monuments illustrated in the Victorian Web” includes images and descriptions of various London landmarks; and “Mapping Emotions in Victorian London” maps positive and negative emotions associated with streets and landmarks in London from passages in nineteenth-century novels.

Students then choose a location from one of the Holmes stories we discussed in class, research it using the previously mentioned websites to compare Conan Doyle’s fictionalized account of the area with the historical spot, and then map the locations with either Mapbox or Google Maps and look for patterns. They analyze the connections and often come up with interesting results. For example, one student considered the significance of Leadenhall Street in “A Case of Identity.” In that story, a wine importer (Mr. Windibank) disguises himself in order to court and then abandon his stepdaughter at the altar so that she would be heartbroken and never marry, and he could live off the interest of her inheritance. The fake fiancé claimed to work in Leadenhall Street and had the stepdaughter address her letters to him at the post office there. This student’s research showed that Leadenhall Street was the headquarters of the East India Company until it was disbanded in 1861, and this prompted her to observe parallels between the former East India Company and Mr. Windibank: both were importers of foreign goods, both were deeply invested in profits, and both acted cruelly to obtain those profits and to control others.[2] Consequently, what initially seemed to be a small, insignificant geographic reference turned into a detail that illuminated a character’s moral failings as well as a larger political argument.
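Students do this mapping through the Mapbox or Google Maps interfaces, but the underlying data can be thought of as a simple GeoJSON layer that either tool can import. The sketch below builds such a layer by hand; the coordinates are approximate and the file is purely illustrative, not part of the course materials.

```python
# Build a small GeoJSON layer of story locations by hand (coordinates are approximate;
# this illustrates the data format, not the course's actual workflow).
import json

locations = [
    {"name": "Baker Street", "story": "A Scandal in Bohemia", "lat": 51.5238, "lon": -0.1586},
    {"name": "Leadenhall Street", "story": "A Case of Identity", "lat": 51.5133, "lon": -0.0825},
    {"name": "Covent Garden", "story": "The Blue Carbuncle", "lat": 51.5117, "lon": -0.1240},
]

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [loc["lon"], loc["lat"]]},  # GeoJSON order is [lon, lat]
        "properties": {"name": loc["name"], "story": loc["story"]},
    }
    for loc in locations
]

with open("holmes_locations.geojson", "w", encoding="utf-8") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f, indent=2)
```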

These small-scale research projects are equally illuminating for fictional locations: students also realized that, in “A Scandal in Bohemia,” all of the streets and locations most associated with Irene Adler are fictional, whereas every other landmark in the story existed. This discovery prompted a passionate discussion about geography and feminism in the text: students debated whether Conan Doyle was trying to avoid being sued for libel, to claim that an intelligent woman like Adler could never really exist, or to dramatize Adler’s ability to slip through Holmes’s fingers by having her slip through our own (since she can never be mapped). Without these mapping technologies, students would not have realized the ways in which questions of geography are inextricably related to class, gender, and other political issues.

The Sherlock Holmes focus is useful beyond its application to visualization and spatial analysis: the character of Holmes himself provides a valuable introduction to the history of technology and close-reading. In “A Case of Identity,” for instance, Holmes proves that the stepfather is the criminal and that the stepfather and the missing fiancé are the same person by examining typewritten letters sent by both, in which certain print characters had the same idiosyncrasies due to wear and tear on the keys: as Holmes says, “a typewriter has really quite as much individuality as a man’s handwriting. Unless they are quite new, no two of them write exactly alike. Some letters get more worn than others, and some wear only on one side. . . [I]n this note . . . in every case there is some little slurring over of the ‘e,’ and a slight defect in the tail of the ‘r’” (Conan Doyle 1891, 14). This quotation not only shows how technology solves the case, but also how, even within the mechanical, the human (and humanist) shines through. Other Holmes stories likewise unite Victorian technology and writing: in “The Blue Carbuncle,” Holmes tracks down the owner of a Christmas goose (and missing bowler hat) by advertising in widely-circulated Victorian newspapers, specifically the “Globe, Star, Pall Mall, St. James’s, Evening News Standard, [and the] Echo” (Conan Doyle 1892, 8), papers whose distribution was only made possible by advances in paper and printing technology. Throughout the Holmes canon, then, cases revolve around Victorian technological advances because of how dramatically these innovations changed forms and methods of communication. To modern readers, and especially to students, these inventions hardly seem to count as technology, because “the technological” today is so often synonymous with “the digital.” The Holmes stories function as a corrective to that attitude: they encourage students to rethink their definition of “technology” to better understand the Victorian period as well as their own time.

Holmes’s facility with technology also furthers our image of Holmes as an expert thinker: his observational skills and “deductive reasoning” are already famous in popular consciousness. He repeatedly insists on the scientific method and the importance of unbiased data gathering, famously saying in “A Scandal in Bohemia,” “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts” (Conan Doyle [1891] 2006, 3). Consequently, Holmes becomes a useful model for a digital humanist, especially when students may be unused to thinking about data in a humanities context.

In spite of his emphasis on scientific reasoning, Holmes is, at heart, quite the humanist. In addition to his love of the violin, opera, theater, French literature and archiving, Holmes also provides students with a metaphorical model of a close reader; this can be especially useful when teaching an interdisciplinary class full of students for whom close reading is a confounding, magical process. One particular passage, from “The Blue Carbuncle,” helpfully dramatizes Holmes’s analytical abilities. When examining a bowler hat, he first makes pronouncements about it, and then breaks down the close-reading process, explaining each step: “This hat is three years old. These flat brims curled at the edge came in then. It is a hat of the very best quality. Look at the band of ribbed silk and the excellent lining. If this man could afford to buy so expensive a hat three years ago, and has had no hat since, then he has assuredly gone down in the world” (Conan Doyle 1892, 4). To Watson (in this metaphor, the non-English major), it seems almost magical that someone could get so much meaning from such a small object (in this metaphor, the paragraph). And yet, by observing the style, the band, and later the hair and the size, and comparing those observations to his mental repository of expected characteristics and actions for people, Holmes figures out who must have owned the hat, much as we observe imagery patterns, sentence construction, and other narrative devices and compare them to our repository of genre or stylistic expectations to come up with a reading of a paragraph (or work as a whole).

Sherlock Holmes, then, brings together the humanities, the sciences, and the technological; the local and the global; and, in the classroom, the past and the present. Although we only read a small subset of the Holmes stories, they provide the perfect corpus for working across disciplinary boundaries, including History, Literature, Gender and Sexuality Studies, Sociology, Law, and Anthropology. As short stories, they are an ideal length for courses that contain students from majors beyond English, and their comparative brevity enables us to devote more time to the theoretical and practical import of digital humanities methodologies. Teaching Holmes stories with digital tools lets students build on the traditional humanities skills of close-reading, noting patterns, and using archives. It augments that scholarly toolkit by guiding students to a better understanding of rhetorical patterns and spatial significance, while also introducing them to techniques that have broad applications in their own fields beyond the nineteenth-century focus of this class. The stories’ own investment in technology lays the groundwork for the course’s technological focus, enabling us to tie the digital humanities methodologies thematically to the Victorian works we study.

Naturally, this project has its challenges: for instance, students from the so-called “digital native” generation are often anything but. While some have specialized computer skills, including video editing or programming, most have only minimal experience, such as word processing or social media. Few have built their own projects, experimented with currently existing digital tools, or thought about how digital technologies can challenge conventional traditions of scholarship. Most students find digital technology intimidating, as they are aware of the limits of their knowledge and are afraid to experiment enough to figure out how programs work—an aspect of trial and error necessary to excel in digital humanities. To overcome these hurdles, I provide my students with detailed instructions, both written and verbal, for every lab day, and I meet outside of class with any students who need additional support. I also model trial and error throughout the semester: if a tool or project produces errors the first time, I explain the reasons I think it might be glitching and then walk them through the troubleshooting process (which often involves Google searches) so they can have the skills and confidence to try to address any stumbling blocks that might arise. I also update the syllabus each time I teach the course: for instance, I cut a unit on topic modeling the Holmes stories and graphing the results when I realized that students needed a comfort level with history and complex visualizations that was greater than I could provide in two weeks. I look forward to making additional alterations to the course as the field of digital humanities changes and new tools and methodologies become available.

The most recent incarnation of “Digital Tools for the 21st Century: Sherlock Holmes’s London,” from Fall 2015, has a course website that includes the syllabus, assignments, grading rubric, and student blogs; I hope these materials inspire others to borrow or build on my ideas. The approach can easily be adapted to a wide range of literary periods and authors, and is especially suited to works that reference real places and historical events, and to works that have become part of pop culture. This structure teaches students history, literature, and technology in greater depth and with a greater degree of interaction and intellectual curiosity than often occurs in a traditional classroom.

Appendix: Sample Assignments for “Digital Tools in the 21st Century: Sherlock Holmes’s London”

Online Assignment #2: Omeka Archive

Instructions:

As a class, we’re building an archive of items related to Sherlock Holmes stories (http://holmesiana.net/). Each student will contribute three related items, one collection (with your 3 items), and one exhibit (with a 300-word essay on the items) to the archive.

Due:

  1. Due 9/21 by 9:30am: Accept the Omeka invitation, create an account, and bring three related digital items for inclusion in a Holmes archive.
  2. Due 9/25 by 8pm: Add three items, one collection, and one exhibit that contains a 300-word essay using the objects you added as images. Post links to the items, collection, and exhibit to the class blog.

Items:

These items can be anything at all: illustrations from the original Holmes stories, photographs of locations mentioned in the stories or of props in the movies, images of movie or TV show posters of adaptations, pictures of games based on Holmes, audio clips of theme songs of different Holmes adaptations, video clips of the credit sequences of different adaptations, etc. The only requirements are that the items involve Holmes and that you have permission to put them online.

Copyright:

You CAN’T: upload a clip/poster from a TV show/movie/CD that you or a random YouTube user made

You CAN: use material from an official account (BBC, MGM) as long as you include a link (the image/clip URL) to the source.

Item Metadata:

For each item, you must write down the Title, Subject, Description, Creator, Publisher, a URL, and Item Type.

Getting Started:

  1. Check your email and look for an email from “A Study in Holmesiana Administrator via hotrods.reclaimhosting.com <swafforj@newpaltz.edu>” with the subject heading “Activate your account with the A Study in Holmesiana repository.” NOTE: Check your spam folder. It will probably be there.
  2. Open the email and click the link inside. You will be asked to create a password.
  3. Type your username (written in the email) and password to log in.

How to Add an Item:

(Modified from http://omeka.org/codex/Managing_Items_2.0)

  1. From your items page (holmesiana.net/admin/items), click the “Add an Item” button.
  2. This takes you to the admin/items/add page where you see a navigation bar across the top pointing you to different stages of adding an item.
  3. The first tab shows the Dublin Core metafields. Enter the Title, Subject, Description, Creator, and Publisher information.
  4. The Item Type Metadata tab lets you choose a specific item type for the object you are adding. Once you choose the type by using the drop-down menu, relevant metadata fields appear for you to complete.
    • If you have an image, select “Still Image” and put the URL of your image in the “External Image URL” field.
    • If you have a video, choose “Moving Image” and paste the embed code for the player in the “Player” field. Don’t know how to get the embed code? Follow these instructions.
  5. The Files tab lets you upload files to an item.
    • If you have an image, upload your image by clicking “Choose File” and selecting the file.
    • If you have a video from YouTube, you’ll need to make an image. Copy and paste the YouTube URL from the top of the webpage into this website, right-click on the image labeled “Normal Quality,” save it to the Desktop, and then upload it to Holmesiana.net through the Files menu.
  6. The Tags tab allows you to add keyword tags to your item.
  7. IMPORTANT: Click the “Public” checkbox under “Add Item.”
  8. Click “Add Item” to add your item to Holmesiana.net.

How to Build a Collection:

(Modified from http://omeka.org/codex/Managing_Collections)

  1. Next, you’ll group your items into a collection based on a similar theme.
  2. Come up with a name for your collection (e.g. “Irene Adler in Adaptations.”)
  3. Click on the “Collections” tab in the /admin interface top navigation bar. Any collections you create will be listed on the /admin/collections page.
  4. Click “Add a Collection.”
  5. Name and describe your collection.
  6. Click the “Public” checkbox to make this collection visible to the public.
  7. Be sure to click “Save Collection” to save your newly created collection.
  8. Next, assign items to a collection:
    1. Open an item you want to add to the collection.
    2. To the right of the page, under the “Add Item” button is a drop-down menu where you can assign your item to a collection. Remember, items can only belong to one collection.
  9. Be sure to click the “Add Item” button to save your data.

How to Build an Exhibit:

(Modified from http://omeka.org/codex/Plugins/ExhibitBuilder)

  1. Click on the Exhibits tab in the top nav bar of the Admin interface, and click the “Add Exhibits” button on the right side. You will arrive at an Exhibit Metadata page. Fill in the empty fields.
    • Exhibit Title: This is the title of the entire exhibit (e.g. “Irene Adler in Adaptations”).
    • Exhibit Slug (no spaces or special characters): This is the exhibit name as it appears in the website URL. For example, “music” is the slug in http://holmesiana.net/exhibits/show/music
    • Exhibit Credits: These will appear with description on the public site. Put your name here.
    • Exhibit Description: Write a brief introduction to the entire exhibit that appears on the public site.
    • Exhibit Tags: Tags help associate exhibits with other items in your archive.
    • Exhibit is featured: Leave this blank.
    • Exhibit is public/not public: Check “Public.”
    • Exhibit Theme: By default, “Current Public Theme” is selected. DO NOT CHANGE THE THEME.
    • Use Summary Page: Uncheck the box. This will get rid of a title page for your exhibit.
  2. To proceed with your exhibit, you must create a page, so click “Add Page.”
  3. Give your page a title and a slug (e.g. “Adler Adaptations” for the title and “adler” for the slug).
  4. Next, you choose the layout for your exhibit. Since you’ll be writing 300 words about the objects in your archive, you’ll want to select “File with Text.” This lets you include images and a description of them.
  5. Select “Add New Content Block.”
  6. Now, you’re ready to add your text and items.
  7. Click the “Add Item” button, select your first item, click “Select Item,” give it a short caption (under a sentence), and click “Apply.”
  8. Type your text about the first item in the box labeled “Text” below.
  9. Click on the arrow next to the words “Layout Options.” Adjust the drop-down menus to change the position and size of your image.
  10. Repeat steps 5-9 for each item in your exhibit. Remember to click “Save Changes” frequently.
  11. Congrats! You made an exhibit! Click “Save Changes” one last time, then go to http://holmesiana.net, click “Browse Exhibits,” and find yours. Click on it and make sure you’re happy with how it looks.

Online Assignment #3: Book Traces

Instructions:

  1. Search the library catalogue for books published between 1820 and 1923 from the library stacks (not special collections) based on a certain topic (see list of topics below). Look through these books to find one with marginalia (annotations or marks) or inserts from the 19th century. Take pictures of up to five instances of marginalia or inserts in the book, fill out the information about your book and its marginalia, including the “description” field, on http://www.booktraces.org/, upload your photos, and submit your entry.

NOTE: If you have looked through 20 books from the 19th century without finding marginalia, you may stop searching, but you must still write a blog post.

  2. Write a 300-word blog post about the traces you found; provide information about the book they occur in (title, author, publisher, etc.), describe the traces, include pictures of them, and include the link to your book on Book Traces. Explain what the traces have to do with the topic of the book and the pages on which they appear. If a trace includes a name or date, look up the person or date and provide any relevant info you can find (use Google). Here are sample blog posts about Book Traces written by undergraduates at Davidson College: http://sites.davidson.edu/dig350/category/prompts/prompt-3

If you can’t find any traces of previous readers, explain what search terms you used to find 19th century books, give some sample titles of books you looked at, and give some guesses about why books with your selected topic do not include any evidence of earlier readers.

The blog post and Book Traces submission are due by 9/30 by 8pm (6% of final grade).

USEFUL TIP: Book Traces states that participants have had the most luck finding markings in books from the PR or PS call numbers (British and American Literature), but it also recommends French and Spanish literature, history, religious texts, and philosophy.

Sample Traces:

Marginal notes, inscriptions, owner’s names, any other type of writing, drawing, bookmarks, inserts, clippings pasted into the book, photos, original manuscripts, letters.

Search Terms (grouped by category) in STL catalogue (http://library.newpaltz.edu):

  1. Literature, Poetry (or “Poetical Works”), Shakespeare, Impressions
  2. Women
  3. Antiquities
  4. Music (or Songs), Theater (and Theatre)
  5. Art, Fashion
  6. Travel, Empire, England
  7. Science, Mathematics
  8. Law
  9. Religion
  10. History

Online Assignment #8: Victorian London Locations

Instructions:

In class, you’ll be split into groups that correspond to the four Holmes stories we’ve read this semester. Choose a location from the Holmes story you’ve selected, research it using online scholarly sources on Victorian London, and write a blog post about how and why that background information about the location affects our understanding of the story.

Due:

11/8 by 8pm

Details:

  1. Choose one spot (not Baker Street) mentioned in the Holmes story you signed up for.
  2. Do a search for your street on this digital map of Victorian London (http://maps.nls.uk/geo/explore/#zoom=12&lat=51.5021&lon=-0.1032&layers=163) and then zoom in and take a screenshot.
  3. Use a combination of the following sites to learn about the area you have chosen:
  4. Write a blog post about what you’ve learned about the area and how it relates to the Holmes story:
    a. Include 3-4 specific details about the location.
    b. Explain the location’s importance to the Holmes story, in terms of plot and theme.
    c. Include screenshots and any other pictures you’ve found that you think will be helpful.

[1] The story begins with the sentence “To Sherlock Holmes she is always the woman” (1) and concludes with a restatement of that opening idea: “And when he speaks of Irene Adler, or when he refers to her photograph, it is always under the honorable title of the woman” (15).

[2] Conan Doyle again critiqued the East India Company’s unethical imperial project in a later work, The Mystery of Cloomber, set during the First Afghan War. In it, a Major-General in the East India Company massacres a group of Afridis on holy ground, and is punished for his crime years later.

Bibliography

Conan Doyle, Arthur. 1892. “The Adventure of the Blue Carbuncle.” The Strand Magazine 3 (13): 73-85. Internet Archive. Accessed October 15, 2015. https://archive.org/details/StrandMagazine13.

Conan Doyle, Arthur. 1891. “A Case of Identity.” The Strand Magazine 2 (9): 248-259. Internet Archive. Accessed October 15, 2015. https://archive.org/details/StrandMagazine9.

Conan Doyle, Arthur. (1891) 2006. “A Scandal in Bohemia.” Facsimile reproduction of The Strand Magazine 1: January 27, 2006. Stanford: Stanford Continuing Studies. Accessed October 15, 2015. http://sherlockholmes.stanford.edu/pdf/holmes_01.pdf.

McGann, Jerome. 1996. “Radiant Textuality.” Victorian Studies: An Interdisciplinary Journal of Social, Political, and Cultural Studies 39 (3): 379.

Wattenberg, M., and F. B. Viégas. 2008. “The Word Tree, an Interactive Visual Concordance.” IEEE Transactions on Visualization and Computer Graphics 14 (6): 1221–28. doi:10.1109/TVCG.2008.172. Accessed October 15, 2015. http://hint.fm/papers/wordtree_final2.pdf.

Joanna Swafford is the Assistant Professor for Interdisciplinary and Digital Teaching and Scholarship at SUNY New Paltz, specializing in Digital Humanities, Victorian Literature and Culture, Sound, and Gender Studies. Her articles have appeared or are forthcoming in Debates in Digital Humanities, Provoke!: Digital Sound Studies, Victorian Poetry, and Victorian Review. She is the project director for Songs of the Victorians (http://www.songsofthevictorians.com/), Augmented Notes (http://www.augmentednotes.com/), and Sounding Poetry, and is the co-founder and coordinator of the DASH (Digital Arts, Sciences, and Humanities) Lab at SUNY New Paltz. She is also Head of Pedagogical Initiatives for NINES.org (Networked Infrastructure for Nineteenth-Century Electronic Scholarship).


Making Reading Visible: Social Annotation with Lacuna in the Humanities Classroom

Abstract

Reading, writing, and discussion are the most common—and, most would agree, the most valuable—components of a university-level humanities seminar. In humanities courses, all three activities can be conducted with a variety of digital and analog tools. Digital texts can create novel opportunities for teaching and learning, particularly when students’ reading activity is made visible to other members of the course. In this paper, we[1] introduce Lacuna, a web-based software platform which hosts digital course materials to be read and annotated socially. At Stanford, Lacuna has been collaboratively and iteratively designed to support the practices of critical reading and dialogue in humanities courses. After introducing the features of the platform in terms of these practices, we present a case study of an undergraduate comparative literature seminar, which, to date, represents the most intentional and highly integrated use of Lacuna. Drawing on ethnographic methods, we describe how the course instructors relied on the platform’s affordances to integrate students’ online activity into course planning and seminar discussions and activities. We also explore students’ experience of social annotation and social reading.

In our case study, we find that student annotations and writing on Lacuna give instructors more insight into students’ perspectives on texts and course materials. The visibility of shared annotations encourages students to take on a more active role as peer instructors and peer learners. Our paper closes with a discussion of the new responsibilities, workflows, and demands on self-reflection introduced by these altered relationships between course participants. We consider the benefits and challenges encountered in using Lacuna, which are likely to be shared by individuals using other learning technologies with similar goals and features. We also consider future directions for the enhancement of teaching and learning through the use of social reading and digital annotation.

Introduction

Though reports of the death of the book have been greatly exaggerated, reading and writing are increasingly taking place on screens (Baron 2015). Through these screens, we connect with each other and to the media-rich content of the Web. Within university courses, however, there remain open questions about appropriate tools for students to collaboratively and critically engage with—rather than just view or download—multimedia course materials. The most popular platforms and media are generic tools that are not specifically designed to support the learning goals of humanities or reading-intensive courses. If there were a platform designed specifically to support college-level reading, what features should it have? How would such a platform alter the teaching and learning opportunities in a college humanities course?

In this article, we introduce one such platform, Lacuna, and consider its impact on teaching and learning in a seminar-style literature course. Lacuna is a web-based software platform designed to support the development of college-level reading, writing, and critical thinking. Sociocultural educational theories locate learning in the behaviors and language of individuals as they become adept at participating in the practices of a particular community (Lave and Wenger 1991, Collins et al. 1991, Vygotsky 1980). In addition to providing access to educational content, learning technologies can be designed to make existing expert practices in the community more accessible to novices (Pea and Kurland 1987). In particular, the interactive features in a learning technology can be designed as an embodiment of expert behaviors—for example, the strategies that skilled readers use when they engage with texts, in both print and digital form.

The key example of an expert inquiry practice for our purposes is annotation. Annotation here refers to any kind of “marking up” of a print or digital text, including underlining, highlighting, writing comments in the margins, tagging sections of text with metadata, and so on. Annotation is a practice that may not come as naturally to college students as their instructors would hope. And even when students (and instructors) do engage in annotation, they may not be cognizant of how different kinds of inscribing practices on a text affect their learning.

On Lacuna, course syllabus materials are digitized and uploaded to the platform. These materials can be organized by topic, class date, and other metadata such as medium (text, video, or audio). When students and instructors open up materials, they can digitally annotate selections from any text. Annotation on Lacuna is a social as well as an individual practice, leveraging the participatory possibilities of web-based technologies (Jenkins 2009). Lacuna users can choose to share annotations with one another and hover over highlighted passages to reveal others’ comments or questions. Social annotation makes explicit and visible for students the broad array of annotation practices within an interpretive community such as a classroom and helps students co-create interpretations of texts. Students’ annotation activity on Lacuna is also made visible through a separate instructor dashboard, which helps instructors track engagement throughout the course (using dynamic D3.js visualizations of annotation data). Finally, annotations can be connected across texts using the “Sewing Kit” in order to support intertextual analyses.
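Lacuna's internal schema lives in Drupal and is not reproduced here, but it may help to picture a shared annotation as a small record along the following lines. The field names are our own illustrative choices, not Lacuna's actual data model.

```python
# Illustrative data model for a shared annotation. Field names are hypothetical;
# they are not Lacuna's actual Drupal schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    document_id: str                 # which course text the annotation belongs to
    author: str                      # student or instructor who wrote it
    start_offset: int                # character range of the highlighted passage
    end_offset: int
    body: str                        # the comment or question itself
    tags: List[str] = field(default_factory=list)
    shared: bool = False             # private by default; visible to the class when shared
    linked_ids: List[str] = field(default_factory=list)  # links to annotations on other texts

note = Annotation(
    document_id="scandal-in-bohemia",
    author="student-17",
    start_offset=1042,
    end_offset=1110,
    body="Why is Adler introduced only as 'the woman'?",
    tags=["gender", "question"],
    shared=True,
)
print(note.shared, note.tags)
```

The shared flag and the cross-document links are the features that matter most for the practices described in this paper: they are what turn a private marginal note into material for class discussion.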

Since 2013, the technologists and researchers on the Lacuna team in the Poetic Media Lab have designed and developed the platform collaboratively with humanities instructors, based on the theories of learning and expert reading practices described in the following sections of this article. During this time, Lacuna has been used in over a dozen courses at Stanford and other universities, primarily in the humanities and social sciences. Across the courses, the primary authors of this article (Schneider and Hartman) have used ethnographic approaches, including classroom observations, student surveys and interviews with instructors and students, in order to understand the ways that Lacuna mediates relationships among course participants and course content.[2]

In this paper, our primary goal is to examine the shifts in pedagogical practices, and the related learning experiences, that are enabled by social annotation tools like Lacuna when in the hands of willing and engaged instructors. Learning takes place in a complex system of relationships, resources, and goals (Cole and Engestrom 1993, Greeno 1998). Across the courses which have used Lacuna, instructors have chosen to integrate the tool to various degrees. This was unsurprising, as decades of educational research have shown that introducing a new technology, no matter how well-designed, is an insufficient condition for change unless it is intentionally integrated into pedagogical practices (Cuban 2001, Collins et al. 2004, Brown 1992, Sandoval 2014). In this paper, we present a case study of a course taught by Amir Eshel and Brian Johnsrud, the co-directors of the Lacuna project, which exemplifies the classroom dynamics that become possible when social annotation is woven into the fabric of the course. While Eshel and Johnsrud were the original designers and first users of Lacuna, they were not involved in the present analysis of their own teaching. Within the course examined in this study, we present the full spectrum of the teaching and learning experience, from the time instructors spend preparing for class to perspectives from the students.

A secondary goal in this paper is to introduce Lacuna to other practitioners and researchers who may be interested in using the tool. As a web-based educational software platform, Lacuna is licensed by Stanford University for free and open-access use. Lacuna is run on the content management system Drupal, and the Stanford Poetic Media Lab has made Lacuna available to download with an installation profile on GitHub. Like other learning management systems, such as edX or Moodle, colleges, universities, or other institutions need to sign an institutional agreement taking responsibility for their use of the software, and students and other users agree to the Terms of Use when creating an account. Lacuna is also an ongoing open-source development project. Collaborating universities, such as Dartmouth and Princeton, are currently building out their own features and contributing them to GitHub, so the platform has ongoing refinement based on code submissions from different partners.

Our final goal for this paper is to develop broader questions about and insights into social annotation practices that could apply not only to Lacuna but also to other, similar tools. We hope that some of these questions and insights will come from readers of this article who are themselves exploring the relationship of technology, pedagogy, and learning in the humanities. Our article opens by describing the design of Lacuna in great detail, and then uses a similarly detailed approach to analyze a specific use of Lacuna. In providing these “thick descriptions” (Geertz 1973) of both the technology and its use, we hope that our readers will have the opportunity to reflect on and compare their experiences, goals, and tools to ours. By so doing, we can increase our collective knowledge about the benefits and tradeoffs of social annotation in the humanities classroom, with implications for other reading-intensive courses beyond the humanities.

Annotation as an Individual and Social Practice

For the reader, annotation serves a very personal role—we make marks in the margin or between the lines as an extension of our reactions at the moment of encountering a text. Annotations are also part of our process in preparing to write a paper, a “scholarly primitive” which becomes a building block of our observations about texts (Unsworth 2000). Annotation is one of the central practices used for critical reading in an academic context, as we identify, interpret, and question the layers of meaning in a single text and across multiple texts (Flower 1990, Scholes 1985, Lee and Goldman 2015). In humanities and seminar-style courses, we hope that our students are actively reading by interacting with texts in this way. Focusing on specific parts of a work, and then articulating why the selected passage is interesting, important, or confusing, are essential steps for students in constructing their own understanding of a text (Bazerman 2010, McNamara et al. 2006). Externalizing their thought processes through annotations makes it more likely that students will remember what they have read and gives them an artifact to work with later on.

With digital texts, annotations can be shared and made visible to other readers—annotation becomes a social act. While this may cause tensions with the personal nature of the annotation process, social annotation also opens up new channels for learning through dialogue and observation of others’ reading and interpretive practices. One hallmark of the humanities broadly, and seminar-style courses in particular, is the “dialogic” nature of the discussion: students are encouraged to explore multiple perspectives on contemporary issues and the texts under scrutiny (Bakhtin 1981, Morson 2004, Wegerif 2013). Each course member has the opportunity to use academic language and express their own ideas, leading to increasing command over new conceptual frameworks and allowing each student to participate more effectively in a “discourse community” (Graff 2008, Lave and Wenger 1991). The instructor guides negotiation between perspectives without insisting on consensus interpretations. Though there is little rigorous research on the impact of dialogic instruction in university courses, these principles have been associated with higher student performance in multiple large-scale studies of middle and high school language arts courses (Applebee et al. 2003, Nystrand 1997, Langer 1995).

With social annotation, dialogue moves from the classroom (or an online discussion forum) to the moment of reading itself. Multiple perspectives and voices become available on the text, both before the class meets and in subsequent re-readings of the texts. The visibility of these perspectives provides opportunities for students to engage productively with difference and reflect on their own practices. Through the dynamism of these differences emerges the co-construction of meaning, wherein the perspectives of each member, and the negotiations among these perspectives, contribute to a shared understanding of the meaning of the texts and topics under discussion (Morson 2004, Suthers 2006). A sense of my stance, my analyses, my strategies for dealing with difficult texts, can also become more salient in contradistinction to other visible stances (Gee 2015, Lee and Goldman 2015). The asynchronous nature of the online dialogue through annotations can also shift the dynamics of whose voices are heard within the discourse community of the class. Particularly when annotations are mandatory, even a typically quiet student or a non-native English speaker can use annotations to voice their perspective or to show instructors that they are engaging deeply with texts and ideas.

Social annotation technologies like Lacuna have been an ongoing fascination of researchers and technology developers since networked computing became common in the 1990s. University classrooms were particularly fertile ground for experiments in social annotation, especially as computer science professors at the cutting edge of developing digital systems found themselves in the position of teaching undergraduates through traditional, non-digital means. For example, CoNote was an early social annotation platform developed over twenty years ago at Cornell (Davis and Huttenlocher 1995). Aspects of the interface design and students’ ability to access CoNote were, of course, a product of the time—annotations were only allowed on pre-specified locations in a document, and nearly half of the students used CoNote in a computer lab because their dorms were not yet wired for the Web. The anecdotal experience of these students, however, foreshadows our own design goals with Lacuna. Students successfully used CoNote annotations as a site of document-centered conversations and collaborations. Frequently, the students were able to help each other more quickly than the course assistants. Students also self-reported in surveys that they felt better about being confused about course topics because they could see through annotations that other students were also confused (Davis and Huttenlocher 1995, Gay et al. 1999). The major lesson from this early work is the potential for peer support and community-building when conversations are taking place on the text—at the site where work is actually being done—rather than through other means such as a discussion forum. (See also van der Pol, Admiraal and Simons 2006 for an experiment demonstrating that discussions taking place through annotations tended to be more focused and topical, compared to the broad-ranging conversations on a course discussion forum).

Since the 1990s, a large number of social annotation tools have been developed, both as commercial ventures and as academic projects (e.g. Marshall 1998, Marshall and Brush 2004, Farzan and Brusilovsky 2008, Johnson, Archibald and Tenenbaum 2010, Zyton et al. 2012, Ambrosio et al. 2012, Gunawardena and Barr 2012, Mazzei et al. 2013; other systems, such as AnnotationStudio at MIT and MediaThread at Columbia University, have not published any peer-reviewed research on their platforms). Research conducted on these social annotation platforms has largely focused on the experiences of students or on reading comprehension outcomes tested through short reading and writing assignments. These results have ranged from positive to neutral (see Novak et al. 2012 for a meta-analysis), with major themes of students benefiting from one another’s perspectives, being motivated by annotating, and using annotations to guide their exam studying.

Other research has examined specific aspects of the social annotation dynamic in more detail. For example, Marshall and Brush (2004) examine the moment when an annotator chooses to share her annotation, finding that students chose to share ten percent or less of the annotations that they made on each assignment. When students did choose to share their annotations, they often cleaned them up before making them public—transforming shorthand notes to self into full sentences that would be intelligible to others in the class. These moves demonstrate a level of self-consciousness about the other readers in the course as members of a group conversation. Of course, social norms for sharing online may well have shifted since the early 2000s when the study was conducted. Another key moment in social annotation is when a reader chooses to read someone else’s annotation. Wolfe (2000, 2002, 2008) ran multiple studies manipulating the annotations that students can see, with a focus on exploring the influence of positive or negative (critical) annotations. As would be expected, her subjects paid more attention to the annotated passages than to the unannotated parts of the text. Moreover, with positive annotations or unannotated passages, students were more likely to focus on comprehending the text without questioning it. When faced with conflicting annotations on the same passage, however, students were more likely to work to develop their own evaluation of the statement in the text. The fact that annotations help prompt deeper responses to the reading was borne out in other studies on students’ writing from the annotated text. Freshman students who wrote essays based on an annotated text were more likely to seek to resolve contradictions in their essays, and less likely to simply summarize the text. In these studies, the presence and valence of annotations clearly altered students’ sensemaking processes and understanding of the texts.

Finally, from a pedagogical perspective, social annotations can open up new possibilities for instruction. While these possibilities are underrepresented in the prior literature, one exception is Blecking (2014), who used ClassroomSalon to teach a large-scale chemistry course. Her research reports that students’ annotations helped her and her teaching assistants diagnose student misconceptions and make instructional changes in response. In humanities courses where reading strategies are often an instructional goal, instructors can monitor students’ annotations in order to give direct feedback on students’ reading strategies and textual analysis. Instructors can, of course, also enter the dialogue on the text themselves, using annotations to guide students to specific points in the text. Additionally, social annotations can serve as an accountability mechanism for completing assigned reading in a timely fashion, because instructors will see students’ activity on the text and students will know that instructors can see this activity.

One might ask—as colleagues have asked us during talks about Lacuna—why there have been so many social annotation tools recently, and why we need another one. One major reason is that many of these tools have been used for STEM courses, with an emphasis on the question-answer interaction as students help each other comprehend concepts in the text. This type of interaction, with an emphasis on a single correct answer, lends itself to different interface interactions than the type of dialogic sensemaking in humanities courses. Even among tools that lent themselves to the goals of humanities courses, there appeared to be a lack of support for exploring intertextuality and synthesis. When the Poetic Media Lab first began designing Lacuna, there were no interfaces that allowed students to filter, order, sort, and group their annotations across multiple texts. Moreover, most existing digital annotation platforms did not have a way to conveniently make student activity throughout the course visible to instructors, as Lacuna’s instructor dashboard does. Finally, no platform that Lacuna’s initial design team surveyed included features that allowed students to write and publish work on the site. As discussed below, by including these features, Lacuna is designed to support an integrated reading and writing process, allowing students to sort, organize, and visualize their annotations, and then write and publish prose or media in the form of short responses or final papers, with a built-in automatic bibliography creator for materials hosted on the course site.

From a research perspective, prior work has offered only limited investigation of the day-to-day experience of teaching with a social annotation platform, or of how particular instructional decisions shape the experience of learners. Learning takes place in a complex system of relationships and resources (Cole and Engestrom 1993, Greeno 1998) and introducing new technologies can lead to unforeseen tensions as well as the expected opportunities. Understanding these dynamics in detail is vital for critically considering the possibilities and trade-offs in practice that social annotation platforms, like Lacuna, introduce. This is the goal of the empirical work presented in the “Teaching with Lacuna” and “Learning with Lacuna” sections, which follow the in-depth introduction of the platform in the next section.

What Does Lacuna Look Like?

Lacuna is an online platform for social reading, writing, and annotation. Like Blackboard, Canvas, and other familiar learning management systems, Lacuna serves as a central organizing space for a course. Instead of hosting readings to be downloaded, however, Lacuna provides a set of shared texts and other media that students and instructors read and annotate together on a web-based interface.[3] In the vocabulary of software design, Lacuna has a number of “affordances,” platform features that create or constrain possibilities for interaction (Norman 1999). These affordances shape, though do not dictate, the central interactions of the digital learning process, namely learners’ interaction with content and interpersonal interactions among learners and instructors (Garrison, Anderson and Archer 1999; 2010).

This section introduces the reader to the affordances of Lacuna in terms of three central practices of humanities and seminar-style courses: critical reading, dialogue, and writing. Through literature reviews and conversations with our faculty collaborators, the project team identified critical reading, dialogue, and writing as vital to the humanities and thus a shared goal—explicit or implicit—of the majority of courses using Lacuna. As researchers and designers, we find that framing the platform in terms of the major goals of the discipline helps us better understand what we might hope for in teaching and learning activities and in learning outcomes.

Annotation as Critical Reading and Dialogue

As discussed above, annotation is one of the central practices that experts use for critical and active reading in an academic context. Research on the reading practices of faculty and graduate students has shown that these readers make arguments about the rhetorical and figurative form of texts, usually by connecting the text to other pieces of literature and theory. As they read, faculty and students annotate the text with observations about potential themes, building evidence across specific moments in the text (Lee and Goldman 2015, Levine and Horton 2015, Hillocks and Ludlow 1984). Learning technologies can be designed to embody expert practices in a way that makes those practices more accessible to novices (Pea and Kurland 1987), which is why annotation is central to the design of Lacuna.

Figure 1 below shows the annotation prompt that appears when a reader on Lacuna highlights a passage.[4] Readers may choose to make a comment or to simply highlight the passage. Lacuna instructors frequently require students to produce a minimum number of written annotations per week towards their participation score in the course.

This image shows the annotation prompt that pops up when a reader highlights a passage of text on Lacuna. Three lines are highlighted in blue, and the annotation prompt includes a text box that has been filled with the reader’s comment on the highlighted passage: “insights into human nature”. Below the text comment, there are four possible categories that can be selected by the reader to categorize her annotation activity: Comment, Question, Analyze and Connect. There is also a line for adding tags to the annotation, and a box that may be checked if the reader wants to make the annotation public to others in the course.

Figure 1: Selecting and Annotating a Passage on Lacuna
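The article describes the annotation prompt from the reader’s point of view rather than Lacuna’s underlying data model. As a rough illustration of the information each annotation captures (author, passage, optional comment, category, tags, and a visibility flag), the following Python sketch defines a hypothetical Annotation record. The class name, field names, and defaults are our own assumptions, not Lacuna’s implementation.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List

    # Categories offered by the annotation prompt shown in Figure 1.
    CATEGORIES = {"Comment", "Question", "Analyze", "Connect"}

    @dataclass
    class Annotation:
        """One reader's note on a highlighted passage (illustrative only)."""
        author: str            # reader who made the annotation
        document: str          # title or ID of the annotated text
        start: int             # character offset where the highlight begins
        end: int               # character offset where the highlight ends
        comment: str = ""      # empty string means a bare highlight
        category: str = "Comment"
        tags: List[str] = field(default_factory=list)
        shared: bool = False   # visibility flag; the default has varied across versions
        created: date = field(default_factory=date.today)

        def __post_init__(self):
            if self.category not in CATEGORIES:
                raise ValueError(f"Unknown category: {self.category}")

    # Example: an annotation like the one shown in Figure 1.
    note = Annotation(author="Reader", document="Sample Essay",
                      start=120, end=310,
                      comment="insights into human nature",
                      shared=True)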

 

Lacuna gives students the option to keep their annotations private or share them with the class. When students choose to share their annotations, they are contributing to a form of online dialogue that can also be extended into the classroom (see figure 2). Readers can use the Annotation Filter to choose whether to see one another’s annotations. Faculty who use Lacuna often make note of students’ annotations and adapt their classroom instruction to meet students’ interests or struggles with texts. In the “Teaching with Lacuna” section, we will examine how this blurring of the line between the classroom and the online preparation space affected the experience of the instructors in preparing for and teaching one specific humanities seminar.

Screenshot shows three annotations on the same passage, from three separate students. On the text, the green used to highlight annotated passages is darker where more students have annotated. The reader has moused-over the highlighted passage to reveal the three annotations, which range in length from two words to multiple sentences. Two of the annotations are categorized as Comments and the third is categorized as Analyze. To the right of the text appears the Annotation Filter box, where the reader can choose whether to see all the annotations in the class, just their own annotations, or no annotations. The reader can also filter by specific users, or specific metadata on annotations in the form of tags or categories. In this screenshot, “all” annotations are selected on the filter.

Figure 2: Multiple Students Annotating the Same Passage in Lacuna
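Lacuna’s filtering logic is not specified in the article; the sketch below approximates the behavior described above and shown in Figure 2, operating on records like the hypothetical Annotation class sketched after Figure 1. The function name, the mode values, and the rule that a reader always sees her own annotations are assumptions on our part.

    def filter_annotations(annotations, reader, mode="all",
                           users=None, tags=None, categories=None):
        """Return the annotations a given reader would see under the chosen filter.

        mode: "all" (everyone's shared annotations plus the reader's own),
              "mine" (only the reader's own), or "none" (hide annotations).
        users, tags, and categories optionally narrow the result further,
        mirroring the filter box shown in Figure 2. Each annotation is expected
        to carry .author, .shared, .tags, and .category attributes.
        """
        if mode == "none":
            return []
        visible = []
        for a in annotations:
            own = (a.author == reader)
            if mode == "mine" and not own:
                continue
            if mode == "all" and not (own or a.shared):
                continue  # other readers' private annotations stay hidden
            if users is not None and a.author not in users:
                continue
            if tags is not None and not set(a.tags) & set(tags):
                continue
            if categories is not None and a.category not in categories:
                continue
            visible.append(a)
        return visible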

 

One of the features that sets Lacuna apart from other social annotation platforms is the “Annotation Dashboard,” which provides an aggregate visualization of students’ use of the platform (see figure 3). The dashboard is updated in real time and is interactive, allowing multiple ways of viewing the annotation data. Currently, the dashboard offers three types of analysis. “Filter by Time” is a bar graph that illustrates the relative number of annotations made on any given day of the course. “Annotation Details” uses pie charts to show how many annotations fall into each category, how long they are, and how many are shared versus private. Finally, “Network” is broken down further into “Resources” and “Students”; this section allows instructors to see how many annotations each resource received and by which students.

In this screenshot we can see the instructor dashboard for Lacuna. The dashboard is split into three different areas. In the top-left area, there is a blue bar graph labeled “Filter by Time.” The y-axis is labeled with numbers of annotations and the x-axis is labeled with dates. This section also contains a “Reset” button and a “View all annotations” button. Below the Time Filter, in the bottom-left area, there is the “Annotation Details” section. This contains three pie charts: “Category,” “Length,” and “Sharing.” Finally, on the right-hand side of the screen there is the “Network” section, with “Students” on the left and “Resources” on the right. Student names are obscured in this screenshot to preserve anonymity. Selections made in the Time Filter and Annotation Details section will dynamically affect the data displayed in the Network section: for example, selecting only the dates of Week 2 of the course in the Time Filter will cause the Network to show only the annotations made during that time period. In the Network section, there are pie charts for each student and each resource showing the number of annotations that each student has made on each resource and the total number of annotations on each resource. There is also a web of connections linking the student pie charts to the resource pie charts to show the number of annotations a student made on a particular resource. One of these connections has been moused-over to reveal that the student has made 81 annotations on the selected resource.

Figure 3: The Instructor Dashboard on Lacuna, showing student annotation activity throughout the Futurity course

 

Each of the dashboard visualizations interacts with all of the others. For example, clicking on a student name in the “Network” section causes only her data to appear in all three sections. We can then see which texts a student annotated most heavily, how many of her annotations were highlights and how many were comments or questions, and when she did the bulk of her highlighting. Clicking the “View annotations” button not only tells us how many annotations she made in total, but also takes us to a table in which we can view all of them. The dashboard therefore makes it quite easy to see not only if students have met a required number of annotations, but also which texts they have found most worthy of annotation, whether students are highlighting or engaging through commenting/questioning, and when students tend to do their reading. As we will see shortly, having this information has a significant impact on the instructor’s experience of teaching the course.
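The dashboard’s internals are not documented beyond the screenshots, but the aggregation it performs can be approximated in a few lines. In the sketch below, the function names and the length buckets are our own assumptions; only the three panels (time, details, network) and the per-student drill-down follow the description above.

    from collections import Counter, defaultdict

    def dashboard_summary(annotations):
        """Compute the counts behind the three dashboard panels (illustrative).

        Each annotation is expected to carry .created (a date), .category,
        .comment, .shared, .author, and .document attributes, as in the
        Annotation sketch given earlier.
        """
        by_day = Counter()          # "Filter by Time": annotations per calendar day
        by_category = Counter()     # "Annotation Details": Category pie chart
        by_length = Counter()       # "Annotation Details": Length pie chart (rough buckets)
        sharing = Counter()         # "Annotation Details": Sharing pie chart
        network = defaultdict(int)  # "Network": (student, resource) -> annotation count

        for a in annotations:
            by_day[a.created] += 1
            by_category[a.category] += 1
            words = len(a.comment.split())
            by_length["highlight" if words == 0 else "short" if words < 25 else "long"] += 1
            sharing["shared" if a.shared else "private"] += 1
            network[(a.author, a.document)] += 1
        return by_day, by_category, by_length, sharing, dict(network)

    def student_view(annotations, student):
        """Recompute the summary for a single student, as when an instructor
        clicks a name in the Network section and the other panels update."""
        return dashboard_summary([a for a in annotations if a.author == student])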

Annotation as Part of the Writing Process

Lacuna also includes features that position the annotating and critical reading process as part of a longer-term project of understanding multiple texts or writing a paper about them. Reading in humanities courses is usually part of an integrated reading-and-writing process, where students produce their own texts about the texts they have read or about the issues raised in the texts (Biancarosa and Snow 2004, Graham and Herbert 2010). Expert readers look for patterns, mapping out a text and drawing explicit connections to other texts they have read (Snow 2002, Lee and Goldman 2015). In Lacuna, annotation metadata allows readers to tag and categorize their annotations as a visible record of the mapping and connection processes (see figure 4). For example, readers can tag annotations with a particular theme or topic (e.g. “World War II”, “definition”). Lacuna readers can also categorize their annotations by the activity on the text (e.g. as a “Comment,” a “Question,” or an “Analysis”). Through these tags and categories, Lacuna readers begin to develop a structured characterization of the text. Tags on Lacuna can be suggested by students, or pre-specified by the instructor. By using both open and pre-specified tags, instructors can guide students’ reading while still allowing students to engage in personalized processes of intellectual discovery.

The screenshot shows an annotation box that pops up when a user highlights a passage. The user is tagging the annotation with a tag that begins with the letters “con”, and Lacuna suggests “conceptual models,” “connected learning,” and “content” as possible tags to select from.

Figure 4: Tagging a Passage on Lacuna, with Auto-Suggested Tags
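The article does not say how Lacuna matches partially typed tags; a simple case-insensitive prefix match over the tags already in use, as sketched below, would reproduce the behavior shown in Figure 4. The function name and the limit on suggestions are illustrative.

    def suggest_tags(prefix, known_tags, limit=5):
        """Suggest existing tags that begin with what the reader has typed so far."""
        p = prefix.lower()
        matches = sorted(t for t in set(known_tags) if t.lower().startswith(p))
        return matches[:limit]

    # Typing "con", as in Figure 4, might surface:
    print(suggest_tags("con", ["conceptual models", "connected learning",
                               "content", "memory", "World War II"]))
    # -> ['conceptual models', 'connected learning', 'content']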

 

In addition to tags, critical reading in Lacuna is linked with the writing process through two features: Responses and the Sewing Kit. “Responses” are pieces of student writing shared on the Lacuna platform. Responses can be directly linked to the texts and annotations that they reference. Lacuna also lets students annotate Responses, allowing their work to be interacted with in the same way as the work of established authors that is hosted elsewhere on the site. Enabling student writing to be annotated and commented on also opens the door to peer review by other students and real-time feedback on student work by the instructor.

The Sewing Kit allows for the automatic aggregation and sorting of all annotations in one place. Students can explore the Sewing Kit based on tags or keywords and create collections, called “Threads,” of quotations organized by theme (see figure 5). Threads can be used by individual readers as a thought-space for initial analyses. They can also be developed collaboratively to compile passages and annotations from multiple readers that are relevant to a theme discussed by the class.

The screenshot shows a Sewing Kit “Thread” from Futurity, in which multiple student annotations on one document have been collected around a single theme of “Memory.” The authors of the annotations are different students (names are obscured to preserve anonymity). The Sewing Kit shows annotations according to six types of metadata: Author of the annotation, the Annotation Text (the actual annotation), Category, Quote (the excerpt from the document), Tags, and Annotated Document. The annotations are all from the same document (a piece called “Putting the Pieces Together Again”). The annotations shown in this screenshot are either Comments or Questions.

Figure 5: A Sewing Kit “Thread” from Futurity, in which multiple student annotations on one document have been collected around a single theme of “Memory.”
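How the Sewing Kit selects annotations for a Thread is not spelled out in the article; the sketch below assumes a simple match on a tag or on a keyword in the comment text, gathered across readers and documents into a named collection. The build_thread name and the returned structure are our own illustration.

    def build_thread(annotations, theme, tag=None, keyword=None):
        """Gather annotations from any reader and any document into a named
        "Thread" when they carry a given tag or mention a keyword.

        Each annotation is expected to carry .tags and .comment attributes,
        as in the Annotation sketch given earlier.
        """
        members = []
        for a in annotations:
            if tag is not None and tag in a.tags:
                members.append(a)
            elif keyword is not None and keyword.lower() in a.comment.lower():
                members.append(a)
        return {"theme": theme, "annotations": members}

    # A thread like the "Memory" Thread in Figure 5 might be built with:
    #   build_thread(course_annotations, theme="Memory", tag="memory")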

 

The Sewing Kit is one of the most distinctive features of Lacuna, with few equivalents in other digital annotation tools. From a pedagogical perspective, manipulating online texts in a way that makes the complementary nature of reading and writing visible can support increased metacognition about the relationship between reading, annotating, analysis, and writing. The usefulness of being able to sort and search annotations across many texts will be apparent to anyone who has ever had to organize a large amount of reading for a project. Moreover, the visibility of each of these steps on Lacuna can be used to assess students’ developing understanding of texts, as well as their skills in interpreting and arguing for a particular interpretation of a text.

The features of Lacuna were designed in accordance with the pedagogical ideals of the humanities classroom: close reading, the exchange of ideas through discussion, and analytical writing that is anchored in the text itself. It was the hope of the research and design team that Lacuna would encourage certain expert practices in student users. In the following section, we will provide an in-depth analysis of one use of the tool in a humanities seminar that was co-conducted by Lacuna Co-Directors Amir Eshel and Brian Johnsrud. In this analysis, we will consider in detail the impact of Lacuna on both faculty instructional practices and student learning.

Findings

This section presents two complementary perspectives on the integration of Lacuna into an upper-level literature course. First, we describe the faculty perspective and provide a snapshot of how social annotations can be integrated into a classroom discussion. Second, we describe the student experience, drawing on surveys and interviews with two students in the case study course.

Teaching with Lacuna

There is no single way to teach using Lacuna—or any social annotation tool, for that matter. Of the dozen or more instructors who have used Lacuna at Stanford and other institutions, each has made his or her own instructional design choices about how deeply to integrate the platform into course activities. On the “light integration” end of the spectrum, some instructors used Lacuna as the equivalent of a course reader. In these classes, students were asked to read and annotate in the shared online space, but there were no clear expectations that they would interact with one another online through their annotations. There was also little acknowledgment of their online activities during class sessions. On the “deep integration” end of the spectrum, instructors read students’ annotations and responses in advance of class and integrated them into class discussion; in these courses, a minimum number of annotations per week was often expected and counted towards a participation score.

In this section, we will closely consider a “deep integration” course: “Futurity: Why the Past Matters”, co-taught by Amir Eshel and Brian Johnsrud, Co-Directors of Lacuna. The integration of Lacuna was evident both in how the instructors prepared for class and in activities and discussion during class. In many ways, “Futurity” exemplifies the ways in which social annotation tools like Lacuna can be intentionally used by instructors to create a more student-centered and learning-centered humanities seminar. By examining in detail the instructional and classroom experience in Futurity, we hope that our readers will have the opportunity to reflect on and compare their experiences, goals, and tools to ours.

The “Futurity” Course

“Futurity” is a comparative literature course deeply concerned with contemporary culture’s engagement with the past in order to imagine different futures. Focusing on specific historical moments of the last sixty years, the course topics explored the relationship between narrative, representation, interpretation, and agency. The course materials included fiction, non-fiction, film, television, and graphic novels[5], making use of Lacuna’s multimedia capabilities and allowing the class to consider how different media representations shape our understandings of the past.

Futurity was first taught using Lacuna in Winter 2014, using an early version of the platform. This article will focus on the 2015 iteration of the course, but it is worth noting that the 2014 version of Futurity played a crucial role in the development of Lacuna itself. Based on feedback from students, features which are often found in online and hybrid learning settings—a wiki and discussion forums—were eliminated in favor of discussion through annotation within the texts. The content of the course also shifted, based partially on the annotations left by students in 2014, which gave the faculty insight into which texts were most generative for discussion.

The 2015 course required 20 annotations per week from each student. This was reduced from the previous year’s requirements based on student feedback indicating that higher requirements led to annotating in order to get a good grade, rather than annotating as a way of increasing comprehension and engagement. The 2015 class was a small seminar, yet remarkable for the diversity of its students’ academic backgrounds and ages. Its 10 participants ranged from postdoctoral fellows in philosophy, to graduate students in comparative literature, to undergraduates in the interdisciplinary Science, Technology, and Society (STS) major, under which Futurity was cross-listed.

Integration of Lacuna into the “Futurity” Course

A picture of the Futurity classroom, taken by one of the authors (Hartman). Eleven students are seated around tables arrayed in a horseshoe fashion. They face a double screen at the front of the room. One screen has a PowerPoint projected onto it and one screen has Lacuna projected on to it. The instructors flank the screens, one sitting and one standing. Four students have laptops open, but all appear to be engaged in the conversation.

Figure 6: Teaching with Lacuna in Futurity. Lacuna is projected on the right-hand screen

 

One of our key research questions was how the visibility of students’ reading with Lacuna changed instructor practices. For Eshel and Johnsrud, a simple yet powerful shift was the ease of ascertaining which students had done the reading and how well they had understood the texts—questions which, as many instructors know, can consume considerable classroom time and assessment work (such as reading quizzes or short reading response papers). With Lacuna, the instructors could easily see whether students had annotated and how they had reacted to the readings. This meant that preparing for a class session of Futurity was significantly different from preparing for courses that did not use Lacuna. In an interview, Eshel noted that it made “class preparation and [his]… intimate knowledge of [his] students” much easier, and that the experience of teaching was intensified. Both Johnsrud and Eshel emphasized that having their students’ thinking rendered visible by the platform ahead of time increased their own engagement with the course. Students also appeared to be more prepared for class. This resulted, according to Eshel, in a “quicker pace” and in conversation that was “more intense and more meaningful.” Based on students’ annotations and written responses to the reading, the instructors were able to immediately dive into lecture or discussion.

Visible annotations also changed the focus of class preparation. Johnsrud described his and Eshel’s process of preparing for class with Lacuna as akin to drawing a Venn diagram, where one circle represented the students’ interests, as evidenced by annotations and responses, and the other was the topics that the instructors wanted to cover. Johnsrud and Eshel generally tried to focus the class discussion and any lecture material on the overlapping area. This approach could be challenging, however, simply for logistical reasons: the students in Futurity were just as likely to complete their reading at the last minute as any other group of students, which meant that not all annotations could be incorporated into the discussion. Other Lacuna instructors have dealt with this by setting a reading deadline twenty-four hours before class. In terms of topics, sometimes students’ interests and questions diverged from the themes that Eshel and Johnsrud wanted to cover. Incorporating students’ perspectives thus required considerable flexibility from Johnsrud and Eshel, as well as a willingness to cede some control of the classroom discussion agenda to the students’ questions or interests as reflected in their annotations.

Examining students’ online work in advance of class sessions was a task primarily taken on by Johnsrud and the course’s teaching assistant (TA). The TA would send emails to Johnsrud and Eshel that included information such as “hot spots” in the reading (that is, places where students had annotated heavily), trouble spots where students had visibly struggled with the text, interesting annotations or responses for starting a conversation, and overall trends he observed in their annotations. The TA frequently used the Sewing Kit to aggregate the annotations of multiple students under themes relevant to the course content, such as “Agency” or “Memory.” This took about 2-4 hours each week (the course met for 1.5 hour sessions, twice a week). Both instructors noted in interviews that such an approach could be demanding without a teaching assistant.

Eshel and Johnsrud also used annotations to get to know their students as readers and thinkers. Johnsrud said, “After Week 1, I could tell you so much about each student, how they think, what they struggle with, what kind of level they are at, that had nothing to do with any class behavior.” In order to bridge online and offline dialogue, Johnsrud or Eshel often focused discussion on a “hot spot” in the text, addressing overall themes in students’ comments. At times, Eshel or Johnsrud would ask a student to expand verbally upon a particular annotation they had written before class. Eshel and Johnsrud generally let the students know ahead of time if they were going to be using one of their annotations to generate discussion, so that the student did not feel they were being cold-called and had time to prepare a few thoughts.

These practices and their pedagogical outcomes are illustrated particularly well by a class session that took place during the fifth week of the 10-week quarter. For this class, students read a 1989 essay about the dissolution of Communism. Although Futurity was primarily a literature course, Eshel and Johnsrud often paired a literary text with a theoretical one and pushed students to place the two texts in dialogue with each other. For this class session, students annotated the essay 164 times, with just over half of the annotations (92) including comments (the remainder were highlights, which are a signal of engagement with the text, but engagement that may be less reflective than written comments). In their annotations, students took issue with the author’s ideas, particularly as they related to class and race in Western culture. The students’ disagreement with the author led to particularly rich annotations. Two examples of such annotations include:

“This seems completely outlandish and impractical. I disagree with Kojeve… how can he theorize on such a ‘universal homogenous state’ when all of history is speaking against such a utopia. If one can even call it that; isn’t it our differences and varying opinions which make the world fascinating? His theory seems impossible” (Jenna[6])

“This is a highly debatable and suspect statement. I wouldn’t say that US society is a class-less society. Granted its’ [sic] class structure is different from the class structure of, say, India. But there definitely is a class system, which many individuals do not even want to acknowledge. Consider a city like Baltimore and how even its city planning is based on a class categorization.” (Amanthi)

Prior to class, Johnsrud and Eshel had agreed upon certain annotations and themes that they wished to address. They spent the first twenty minutes on a mini-lecture contextualizing the importance of 1989 as a turning point in the end of the Cold War. They then gave students five minutes to look over their own annotations, re-clarify their thoughts about the text, and come up with a few points they wished to discuss. Ryan, a doctoral student, chose to focus on an annotation he had written in which he questioned the author’s phrase “the end of ideological evolution.” Ryan expanded upon his critique of the phrase in class, and Eshel pushed back, asking if an argument that is “wrong” or inaccurate can yet be a productive tool. There followed a discussion between Ryan and Eshel not only about the author’s ideas, but also about how to discuss a piece of criticism that might be at once useful and problematic. Eventually, Ryan welcomed James, an undergraduate in Comparative Literature, into the discussion by way of one of James’s annotations that he made before class, and which Ryan had viewed: “James had a great annotation about that,” Ryan said. James picked up the conversation from there. In this dialogue, both the instructors and the students had an awareness of one another’s online activity, which was elaborated upon during the in-person discussion. As many instructors of discussion-based courses know, one of the most difficult aspects of discussion can be encouraging students to respond to each other, and not solely to the instructor. In this case, Ryan’s awareness of his peers’ ideas prior to entering the classroom encouraged him to expand the conversation beyond his exchange with Eshel.

In addition to encouraging responses and dialogue among and between students, deliberate integration of online discussion into the classroom also appeared to have a democratizing effect. Later in the discussion, Eshel asked Amanthi, a doctoral student in comparative literature, to weigh in on the discussion. Drawing on her annotations, Amanthi neatly summarized her three main problems with the author’s argument about the notion of socioeconomic class. Eshel responded by contextualizing the author’s remarks in terms of the time the piece was written. Both instructors had wished to address the questions about class raised by Amanthi in her annotations, and they were able to do this by asking her to expand upon her online work. While the instructors may have been able to bring the topic into the discussion without looking to a student, doing so served to acknowledge the work the students did while reading and emphasize that the discussion was a dialogue between equals with valid perspectives.

This particular in-class discussion illustrates a few of the practices of integrating social annotations into the classroom. By using Lacuna as a window into students’ reading, Eshel and Johnsrud were able to pinpoint the exact places in the text that generated the most frustration, confusion, or disagreement in their students. While they were not necessarily surprised by students’ reactions to the text, as they had taught this essay previously to students who found it problematic, they were able to use specific criticisms, attached to individual claims and sentences on the text, as a springboard for discussion. To get the conversation rolling, the instructors were able to call on students they knew to have annotated heavily and thought deeply about the text. Those students were, in turn, able to manage the discussion themselves, such as when Ryan asked James to talk about his annotation. Students whose comments built on their annotations were often succinct and articulate, perhaps because they were better prepared to contribute than they would have otherwise been. Finally, the integration of students’ online ideas into the classroom had an equalizing effect; although both instructors had points they wished to raise, they were able to do so by calling on students who had themselves already raised those points in their annotations.

This Week 5 class session also demonstrates a type of negotiation that can take place between the students’ interests and the instructors’ instructional agenda in classes that integrate Lacuna into the classroom conversation. Throughout the discussion, the instructors attempted to steer the conversation away from the shortcomings of the essay and toward the reasons they had had the students read it. Eshel noted early on in the discussion that “A text like this is nothing but a tool . . . a tool we use to do all kinds of other things.” Eshel stated explicitly that he wanted the students to consider whether the author might be wrong and productive at the same time. But it was clear, from both the students’ annotations and the ensuing discussion, that many of them were resistant to this perspective on the text. The instructors acknowledged and built upon the work that students had done already, thereby creating more authentic dialogue; but the students, being aware of how much work both they and their peers had already done on the text, appeared at times to be less willing to follow where an instructor might lead them. While students’ initial interpretations of a text may also be codified before class with a print text, there is a possibility that digital and social annotation may prime students to form more fixed interpretations before class. This trade-off between guidance and discovery will be discussed more thoroughly in the concluding remarks.

Learning with Lacuna

From the foregoing analysis, it is clear that instructors can deliberately leverage students’ online activity with Lacuna to promote intellectual engagement and dialogue within their classrooms. What is the online reading experience like for students? Across surveys of students in Futurity and six other courses using Lacuna (N=45), digital annotation with Lacuna appears to have both benefits and drawbacks. Here, we briefly discuss student survey results before presenting an in-depth analysis of one-on-one interviews with students in the 2015 Futurity course.

For most of the students surveyed, annotation was a familiar strategy which they used frequently, according to self-reported habits. When asked about their goals in annotating, students largely described annotating with a particular purpose in mind, such as marking something they did not understand as they were reading or flagging a passage to return to at a later point. They also used highlighting and underlining to mark parts of the text that they wanted to remember or which simply seemed notable for their language. When it came to the physical experience of reading and annotating, it is worth noting that over half of the students surveyed expressed a preference for reading on paper, citing reduced eyestrain and the freedom to make multiple types of marks (such as lines, circles, or arrows) as the main benefits. But when comparing Lacuna to other digital reading experiences, students remarked favorably upon the ease of annotating, particularly in contrast with the poorly-scanned PDFs that they had encountered in other courses. They also appreciated the organizational benefit of all-in-one access to online texts.

It was social annotation, however, that emerged through the surveys as the most salient aspect of Lacuna, compared to both paper and digital reading environments. In open text responses describing their experiences, students reported an appreciation of the opportunity to hear one another’s perspectives and learn from one another as well as from the instructor. This was particularly true for less advanced students in courses such as Futurity, which included graduate students along with both major and non-major undergraduates. Students described that seeing others’ annotations drew attention to particular aspects of the text, clarifying aspects of the writing or helping them see what questions would be useful to ask of the text. In a course similar to Futurity, where the instructor frequently brought students’ annotations into class, several students commented appreciatively on the “continuity” between reading before class and the subsequent class discussion.

Survey respondents also emphasized that timing matters when it comes to the social experience. For example, one student said he was usually the first to read and comment, so he didn’t have the opportunity to experience others’ annotations unless he took the time to return to the text after class. On the flip side, one student honestly shared that he appreciated others’ annotations drawing attention to aspects of the text when he was reading last-minute before class. Multiple students preferred exploring others’ comments on a second read-through of the text, rather than the first, so they would have the chance to form their own impressions of the text. The annotation filter in Lacuna facilitates these modes of reading, allowing students and faculty to choose whether to see no annotations; only their own annotations; selected users’ annotations; or annotations from everyone in the course. (See figure 2, above.)

Surveys can provide a high-level perspective on the experience of a group, but interviews accompanied by work products—in this case, annotations on Lacuna—are a powerful research tool for going more deeply into the nuances of an experience. Reflecting the emphasis on social annotation in the surveys, the following section draws on interviews with two students in Futurity, “Jenna” and “Allegra,” in order to explore the processes by which social annotation creates opportunities for peer learning. Jenna and Allegra were selected to be interviewed as part of a larger research project looking across multiple courses using Lacuna. Based on recommendations of faculty and their observed levels of platform and classroom engagement, we felt that Jenna and Allegra were representative of students who were highly engaged with the course and the platform.

Exploring Social Annotation from the Student Perspective

Jenna and Allegra were both seniors at the time they were interviewed. As humanities majors, Jenna and Allegra were experienced annotators, building on years of instruction in high school and use of annotation in previous undergraduate courses. With Lacuna, however, they each noted that the platform allowed them to annotate more extensively than they were accustomed to doing on paper. The “endless” virtual margin and the speed of typing meant that for both students, the material features of the platform augmented aspects of a pre-existing individual practice. Even more salient, however, were the ways that the platform created a stronger sense of community and new opportunities for social learning. Jenna eloquently expressed the connection to other course participants that the platform enabled her to feel: “It’s like all of our head space is kind of in the same area. […] I’ll just be like oh, this is what Amanthi was thinking when she read this part. How interesting, it’s a Sunday afternoon and we’re both reading this. […] It’s like there is constant fluidity, between when I’m in class and outside of class.” Just as the instructors sought to connect online and offline activity, students like Jenna were making these connections themselves.

The collegial nature of the course community appeared to be a crucial element for supporting peer learning. “I have learned just as much from my peers in the course as [from] my instructors,” Jenna noted at two different points in her interview. She described social reading as an additive process, where her own understanding of the text was enhanced by the perspectives of others: “That’s the beauty of it. It’s because we have all of these minds bringing together these very fragmented understandings of the text. Then it just only adds to yours.” Pointing to examples from the course, Jenna clarified that these “understandings” can be references—to a film or to a Bible passage, for example—as well as interpretative statements. Moreover, each of these understandings, including her own, is incomplete – “fragmented” across multiple annotations and across multiple minds. Together, however, they represent a more complete understanding of the text than a single reader would be able to generate by herself.

Unpacking the social annotation process that enables this more complete understanding, however, reveals multiple opportunities for an individual to engage socially, or alternatively, remain solitary in their interpretive process. As explored in the Marshall and Brush (2004) research, the first decision in social annotation is whether to share at all. For some students this appears to be a more sensitive issue than for others, with concerns about looking stupid—or, as expressed by some graduate students in surveys and informal conversations, the fear of not looking sufficiently clever and impressive. But as the quarter progressed in Futurity, sharing became the norm, rather than the exception. This was due in part to the default setting of “public” on annotations, which meant that students needed to check a box to intentionally opt out of sharing each time they hit “save” on an annotation. Over time, students also had more practice exposing their opinions without negative feedback. Another incentive may have been the instructors’ use of annotations and students’ written responses in the classroom discussion. As Allegra noted, “It definitely feels good [when they mention my annotations in class]. They acknowledged that you did a good job […] and they also teach the class, like, in accordance to some extent with what you said about the text, which is also really cool.”

Students also shared their annotations because they “didn’t care” (Jenna) if someone saw what they wrote—perhaps a typical perspective from the social media generation—or because they had a specific audience in mind. In particular, our interviewees looked for opportunities to provide new information that would enhance the reading experiences of their peers. Allegra explained that she was far more likely to annotate rather than highlight if she was pointing out something that was not “obvious” in the text, such as references to outside texts or events: “[W]ith the Mrs. Dalloway annotation, for example […] I felt the need to point that out to people who might not have made that connection.” Allegra exhibits a relatively high level of awareness of what her peers are likely to know, as well as what kinds of insights count as novel rather than rudimentary. Jenna framed her contributions in a slightly more personal and conversational way. In her interview, she gave examples of annotations that felt important for her to make public on texts that she “disagreed with,” noting that she “really want[ed] people to know” about this opinion so it would “add something to the class discussion.”

The second aspect of social annotation is choosing to read others’ annotations. In the interviews, it became clear that in the dialogue taking place through social annotation, not all utterances are necessarily “heard” by others. If the student is reading early in the week or in the hour before class, there will be a different version of the text with a different number of annotations available. Moreover, the annotations available at the time of reading can be shown or hidden using filters on the text. Then, even if the reader chooses to show annotations with the filter, it is up to that reader to open any particular annotation by hovering over the highlighted text. Finally, once an annotation is read, the reader may choose to reply to it or make another note in parallel—or they can simply notice what the other annotator has written and then move on, rather than actively engaging with it. Each annotator has their own preferences about this, which may also vary by text. Describing their approaches generally, our interviewees had slightly different perspectives. Jenna reads others’ annotations when she gets “curious about what other people wrote on a given page, […] I try to do that pretty often.” Allegra said that she “always makes sure to click ‘all annotations’ [on the filters], when I’m reading so I can see what people have said already. That often informs the way I look at things in the text.” From these students’ experience, it is not clear whether different strategies for reading others’ annotations would be more or less effective for different kinds of texts, or for interpretive practices with different goals.

In discussing what made a “good” annotation, Jenna and Allegra generally focused on the informational content and novelty of the annotation. As an example of a beneficial annotation, Allegra pointed to an annotation on Ian McEwan’s novel Saturday, in which Jason had noted that McEwan is “orientalizing” the word “jihad,” creating distance between the reader and Arabic culture. “That wasn’t something I had thought about,” she explained. Jason’s interpretation added another lens for Allegra to analyze the work being done by the text and the choices made by the author. Even though the annotation was not addressed directly to her, it was another perspective that she could build on in her own interpretation of the text. Sometimes, however, Jenna and Allegra did not view other students’ annotations as particularly useful. For example, Allegra described somewhat disparagingly the “pointless,” single-word annotations that some students made, which were a reaction to the text without adding specific analytical detail. Jenna exhibited a similar response to “obvious” annotations, describing “a couple of times where people have been, like, this is a recurring trope, and I’m like…yeah. You didn’t need to tell me that.” Nearly in the same breath, however, both Allegra and Jenna acknowledged that others in the class could have benefited from the annotations that they did not find personally useful at the time. Jenna noted, for example, “Maybe for other people, they didn’t think of that as a trope […] So, it could definitely help someone else.” The unique knowledge and interests of each annotator, each of whom is also a reader of the others’ annotations, mean that it may be difficult to find annotations that are useful to all readers—a challenge not unique to social annotation but shared with all annotated editions of texts.

With these examples, a vital aspect of social annotation becomes evident: the act of annotating has multiple goals and as a result, there are multiple ways to understand whether annotation is a productive utterance in the online discourse community. Social annotation is a way of reading simultaneously for oneself and for the community. The individual reader, traditionally ensconced in a paper book, thinks entirely of himself. With social annotation, a diverse audience emerges—an audience including an instructor who is in a position of evaluation and other students who can be “told” new information. Moreover, both instructors and students are fellow participants in a dialogue which can be carried out in class as well as online. Finally, the reader is also an audience member herself, for the performances of others in her class. The mental model of the activity of social annotation, then, is multifaceted, requiring a level of self-awareness (and other-awareness) significantly beyond that of being a private reader.

Concluding Remarks

By equipping learners to engage individually and collectively with texts across media, Lacuna and other social annotation platforms are designed to encourage critical thinking and sensemaking, skills which are at the core of disciplinary work in the humanities and vital to 21st-century citizenship. Critical reading has long been a hallmark of the humanities and a skill which the traditional seminar has sought to foster in its students; however, the practice itself has often been all but invisible to instructors. By transforming reading into an activity that is done socially, rather than in solitude, Lacuna created a bridge between the physical classroom and online reading space in Futurity.

Social annotation in the Futurity course allowed the instructors to get to know their students better and to incorporate student perspectives more fully into the dialogue of the course. By glimpsing their peers’ interpretations of a text during class preparation, students were able to start engaging in dialogue before they entered the classroom. They became more comfortable with one another and had increased opportunities to learn from each other as well as from the professor, developing a multi-faceted perspective on texts. These changes in instructor and peer learning practices appear to have created strong student investment in the course and more authentic dialogue during class discussions. The social annotation affordances of Lacuna rendered students’ reading visible to instructors and other students, and thus expanded the dialogic space of the course.

But dialogue isn’t always easy. Social annotation appears to create new demands on students and instructors alike to negotiate one another’s perspectives and reflect on the goals of their participation and practices. For students, this negotiation and self-reflection largely takes place during reading. Encountering a chorus of voices on a text means that these voices must be sorted through, accepted, questioned, or ignored. Being a member of that chorus means constantly choosing whether to sing or be silent. These choices build on skills that students have likely developed through in-person discussions, as well as pre-existing solitary reading strategies, but combine them in new ways. In educational research, this type of self-monitoring and intentional use of resources is known as “self-regulation” (e.g. Bandura 1991, Schunk and Zimmerman 1994). Self-regulation is a relatively sophisticated set of competencies, which must be taught, practiced, and discussed. Similarly, social annotation is an activity which will likely function best when self-reflection about the practice is encouraged and there are ongoing conversations in a course about how to best engage in it.

Instructors working with social annotation tools like Lacuna are presented with the opportunity to incorporate students’ interests and struggles with texts into teaching, which can include the potentially discomfiting need to cede to the students some measure of control. Even if faculty are comfortable with this, it highlights the tension that must be negotiated between the desire to allow students the space for intellectual discovery and the desire to guide their learning along a pre-specified path. While the tension between student-led discovery and instructor-led guidance is present to some degree in any seminar, pedagogical opportunities to support discovery are heightened by the ways that Lacuna makes reading practices and student voices more visible on the text itself. To balance these goals, instructors who use Lacuna, or similar software which emphasizes student perspectives, would be well-served to reflect on their desired learning outcomes for the class and adjust their use of the platform accordingly. Such self-reflection is also useful when considering how much time an instructor wishes to spend combing through student annotations for use in the classroom; student annotations are effectively an additional text that an instructor needs to prepare each week, and the learning goals of a specific course will dictate how much time an instructor will wish to spend preparing that text.

Generally, the influence of Lacuna on the course dynamics of Futurity appeared to be positive. We observed and heard about high levels of student preparedness, active reading habits, and deep engagement in course topics among both students and instructors. While these changes were certainly shaped by the design and affordances of the platform, they cannot be regarded as given for all users of Lacuna or other social annotation tools. It is likely no coincidence that, of the dozen or so courses that have utilized Lacuna in recent years, the course with the deepest integration of the platform was the only one in which Lacuna was used two years in a row. The lessons learned from the first year of teaching were critical in shaping both the technological changes made to the Lacuna platform and the ways that Eshel and Johnsrud chose to leverage the platform when they taught the course again the following year. This illustrates the importance of intentionality, reflection, and iteration in both the design of the platform and instructors’ use of it—lessons which go beyond Lacuna and social annotation tools to learning technologies broadly. For designers, it is essential to think of instructional technologies as dynamic, rather than static; they must adjust to the pedagogical needs and goals of instructors. Instructors, in turn, must carefully consider how best to use a platform to achieve their goals. Thoughtful and reflective design of the technology, and thoughtful and reflective use of the tool in the classroom, are equally important to achieving a deep level of pedagogical impact.

Future Directions

Our case study has surfaced themes of authority, agency, and new forms of relationships in courses where technology makes student activity visible to instructors. We plan to investigate these themes further as we continue to research and develop the Lacuna platform and engage with researchers investigating comparable learning technologies. While the current study focused on classroom dynamics, a vital question that needs further consideration is the specific way in which student learning is influenced by the pedagogical moves that Lacuna enables. To pursue this avenue of research, we are in the process of developing rubrics for characterizing the reading strategies expressed in online annotations. Using annotations as evidence of critical reading and dialogic practices is an opportunity that is relatively unique to digital learning environments which capture traces of student activity. These data provide critical insights into student thinking, both on an individual and collective level, and can be used as a type of formative assessment for tracking learning over time (Thille et al. 2014).

At Stanford, Lacuna continues to be used for seminar-style courses similar to Futurity, as well as in courses in other departments and in larger, lecture-style courses. Lacuna is also being used at a variety of other universities; visit www.lacunastories.com for a full list of our collaborators. Each of these collaborators is doing exciting work to make the platform their own. We are particularly pleased to be supporting local community college instructors who teach composition, as well as reading and writing courses at the basic-skills level. In these partnerships, we are building on the insights from this case study and from other unpublished case studies and observations. For example, we encourage active reflection about annotation practices and goals, including strategies for gradually increasing the integration of Lacuna into homework assignments and classroom activities so that both instructors and students have the opportunity to adjust their habits.

In our current research and partnerships, we continue to iteratively refine the design of Lacuna, while building our theoretical conceptions of the co-creation of meaning through social annotations. Throughout this work, we seek to support learning and instructional practices in a way that balances the strengths of participatory digital media with the strengths of in-person human interactions.

Notes

[1] Note on authorship and affiliation: This paper presents a case study of a course taught by Amir Eshel and Brian Johnsrud, the co-directors of the Lacuna project in the Poetic Media Lab. While Eshel and Johnsrud were the original designers and first users of Lacuna, they were not involved in the present analysis of their own teaching. Rather, all interviews, surveys, and classroom observations, as well as the subsequent analysis of that qualitative data, were conducted exclusively by the primary authors (Schneider and Hartman). As members of the Poetic Media Lab, Schneider and Hartman are participant-observers who have served as instructional designers helping instructors plan their courses and who have analyzed research data to contribute to the ongoing improvement of the platform. This level of involvement is typical for researchers in the “design-based research” paradigm of the learning sciences (Brown 1992, Collins et al. 2004, Sandoval 2014). Some level of bias is inherent in participating in and observing a project at the same time. Nevertheless, as in any form of participant observation, the hope is that whatever considerations may be overlooked due to close proximity are more than compensated for by the first-hand observations of practice that such inquiry affords.

[2] Please see note 1 above on authorship and affiliation to learn more about the participant-observer relationships of Schneider, Hartman, Eshel, and Johnsrud to the analyses presented in this paper.

[3] The site can also be used for films, videos, audio, and images. The vast majority of the media on our faculty’s course syllabi, however, reflect the academy’s traditional focus on written texts. To reflect this trend and maintain clarity, we use the term “reading” throughout the paper; when we say “reading,” the claims may apply equally to viewing, listening, and so on.

[4] Figure 1 and the other screenshots in this paper are from the version of Lacuna used in the case study course described here. The most recent version of Lacuna refines the privacy settings for annotations: in addition to keeping annotations private or sharing them with the entire class, readers can now share annotations only with an instructor or with a specific group of peers. These changes were made in response to feedback from students and instructors who wanted more fine-grained control over who could see their annotations.

[5] A common question about Lacuna concerns the copyright status of materials. Lacuna supports the uploading of any digitized course or syllabus material, such as text, images, video, or audio files. As with any Learning Management System (LMS), such as Canvas, Blackboard, or edX, instructors are responsible for the copyright status of the materials they upload. With each upload, instructors are asked to indicate the copyright status of the material: open access, Creative Commons, limited copyright for educational purposes, and so on. Because the platform requires secure logins limited to students enrolled in a course, instructors at Stanford have had a good deal of success obtaining free or reduced copyright fees for course materials that do not fall under fair use for educational purposes. Publishers seem particularly accepting of digitized materials on Lacuna because they cannot easily be downloaded and disseminated as PDFs, which is how many other LMSs deliver content.

[6] All student names in this article are pseudonyms.

Bibliography

Ambrosio, Frank, William Garr, Eddie Maloney and Theresa Schlafly. 2012. “MyDante: An Online Environment for Collaborative and Contemplative Reading,” Journal of Interactive Technology and Pedagogy, no. 1.

Applebee, Arthur N., Judith A. Langer, Martin Nystrand, and Adam Gamoran. 2003. “Discussion-based approaches to developing understanding: Classroom instruction and student performance in middle and high school English.” American Educational Research Journal 40, no. 3: 685-730.

Bakhtin, Mikhail M. 1981. The dialogic imagination: Four essays (M. Holquist, Ed.; C. Emerson & M. Holquist, Trans.). Austin: University of Texas Press.

Bandura, Albert. 1991. “Social cognitive theory of self-regulation.” Organizational Behavior and Human Decision Processes 50, no. 2: 248-287.

Baron, Naomi S. 2015. Words onscreen: The fate of reading in a digital world. Oxford University Press.

Bazerman, Charles. 2010. The Informed Writer: Using Sources in the Disciplines, 5th Edition. Fort Collins: The WAC Clearinghouse.

Biancarosa, Gina, and Catherine E. Snow. 2004. Reading next: A vision for action and research in middle and high school literacy. Alliance for Excellent Education.

Brown, Ann L. 1992. “Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings.” The Journal of the Learning Sciences 2, no. 2: 141-178.

Cole, Michael, and Yrjö Engeström. 1993. “A cultural-historical approach to distributed cognition.” Distributed cognitions: Psychological and educational considerations, 1-46.

Collins, Allan, John Seely Brown, and Ann Holum. 1991. “Cognitive apprenticeship: Making thinking visible.” American Educator 15, no. 3: 6-11.

Collins, Allan, Diana Joseph, and Katerine Bielaczyc. 2004. “Design research: Theoretical and methodological issues.” The Journal of the Learning Sciences 13, no. 1: 15-42.

Cuban, Larry. 2009. Oversold and underused: Computers in the classroom. Harvard University Press.

Davis, James R., and Daniel P. Huttenlocher. 1995. “Shared annotation for cooperative learning.” In The first international conference on Computer Support for Collaborative Learning, pp. 84-88. L. Erlbaum Associates Inc.

Farzan, Rosta, and Peter Brusilovsky. 2008. “AnnotatEd: A social navigation and annotation service for web-based educational resources.” New Review of Hypermedia and Multimedia 14, no. 1: 3-32.

Flower, Linda. 1990. “Introduction: Studying cognition in context.” Reading-to-write: Exploring a cognitive and social process, 3-32.

Garrison, D. Randy, Terry Anderson, and Walter Archer. 1999. “Critical inquiry in a text-based environment: Computer conferencing in higher education.” The Internet and Higher Education 2, no. 2: 87-105.

Garrison, D. Randy, Terry Anderson, and Walter Archer. 2010. “The first decade of the community of inquiry framework: A retrospective.” The Internet and Higher Education 13, no. 1: 5-9.

Gay, Geri, Amanda Sturgill, Wendy Martin, and Daniel Huttenlocher. 1999. “Document‐centered Peer Collaborations: An Exploration of the Educational Uses of Networked Communication Technologies.” Journal of Computer‐Mediated Communication 4, no. 3.

Gee, James Paul. 2015. Literacy and Education. Routledge.

Graff, Gerald. 2008. Clueless in academe: How schooling obscures the life of the mind. Yale University Press.

Graham, Steve, and Michael Hebert. 2010. Writing to read: Evidence for how writing can improve reading: A report from Carnegie Corporation of New York. Carnegie Corporation of New York.

Greeno, James G. 1998. “The situativity of knowing, learning, and research.” American Psychologist 53, no. 1: 5-10.

Gunawardena, Ananda and John Barr. 2012. “Classroom salon: a tool for social collaboration.” In Proceedings of the 43rd ACM technical symposium on Computer Science Education, pp. 197-202. ACM.

Hillocks, George, and Larry H. Ludlow. 1984. “A taxonomy of skills in reading and interpreting fiction.” American Educational Research Journal 21, no. 1: 7-24.

Jenkins, Henry. 2009. Confronting the challenges of participatory culture: Media education for the 21st century. MIT Press.

Johnson, Tristan E., Thomas N. Archibald, and Gershon Tenenbaum. 2010. “Individual and team annotation effects on students’ reading comprehension, critical thinking, and meta-cognitive skills.” Computers in Human Behavior 26, no. 6: 1496-1507.

Langer, Judith A. 1995. Envisioning Literature: Literary Understanding and Literature Instruction. New York: Teachers College Press.

Lave, Jean, and Etienne Wenger. 1991. Situated learning: Legitimate peripheral participation. Cambridge university press.

Lee, Carol D., and Susan R. Goldman. 2015. “Assessing literary reasoning: Text and task complexities.” Theory Into Practice just-accepted.

Levine, Sarah, and William Horton. 2015. “Helping High School Students Read Like Experts: Affective Evaluation, Salience, and Literary Interpretation.” Cognition and Instruction 33, no. 2: 125-153.

Marshall, Catherine C. and AJ Bernheim Brush. 2004. “Exploring the relationship between personal and public annotations.” In Digital Libraries, 2004. Proceedings of the 2004 Joint ACM/IEEE Conference on, pp. 349-357. IEEE.

Mazzei, Andrea, Jan Blom, Louis Gomez, and Pierre Dillenbourg. 2013. “Shared annotations: the social side of exam preparation.” In Scaling up Learning for Sustained Impact, pp. 205-218. Springer Berlin Heidelberg.

McNamara, Danielle S., Tenaha P. O’Reilly, Rachel M. Best, and Yasuhiro Ozuru. 2006. “Improving adolescent students’ reading comprehension with iSTART.” Journal of Educational Computing Research 34, no. 2: 147-171.

Morson, Gary Saul. 2004. “The process of ideological becoming.” Bakhtinian perspectives on language, literacy, and learning, 317-331.

Norman, Donald A. 1999. “Affordance, conventions, and design.” Interactions 6, no. 3: 38-43.

Nystrand, Martin. 1997. Opening Dialogue: Understanding the Dynamics of Language and Learning In the English Classroom. New York: Teachers College Press.

Pea, Roy D., and D. Midian Kurland. 1987. “Cognitive technologies for writing.” Review of Research in Education, 277-326.

Sandoval, William. 2014. “Conjecture mapping: An approach to systematic educational design research.” Journal of the Learning Sciences 23, no. 1: 18-36.

Scholes, Robert E. 1985. Textual power: Literary theory and the teaching of English. Yale University Press.

Schunk, Dale H., and Barry J. Zimmerman. 1994. Self-regulation of learning and performance: Issues and educational applications. Lawrence Erlbaum Associates, Inc.

Snow, Catherine. 2002. Reading for understanding: Toward an R&D program in reading comprehension. Rand Corporation.

Suthers, Daniel D. 2006. “Technology affordances for intersubjective meaning making: A research agenda for CSCL.” International Journal of Computer-Supported Collaborative Learning 1, no. 3: 315-337.

Thille, Candace, Emily Schneider, René F. Kizilcec, Christopher Piech, Sherif A. Halawa, and Daniel K. Greene. 2014. “The future of data-enriched assessment.” Research & Practice in Assessment 9.

Unsworth, John. 2000. “Scholarly primitives: What methods do humanities researchers have in common, and how might our tools reflect this.” In Humanities Computing, Formal Methods, Experimental Practice Symposium, pp. 5-100.

van der Pol, Jakko, Wilfried Admiraal, and P. Robert-Jan Simons. 2006. “The affordance of anchored discussion for the collaborative processing of academic texts.” International Journal of Computer-Supported Collaborative Learning 1, no. 3: 339-357.

Vygotsky, Lev Semenovich. 1980. Mind in society: The development of higher psychological processes. Harvard University Press.

Wegerif, Rupert. 2013. Dialogic: Education for the Internet age. Routledge.

Wolfe, Joanna. 2008. “Annotations and the collaborative digital library: Effects of an aligned annotation interface on student argumentation and reading strategies.” International Journal of Computer-Supported Collaborative Learning 3, no. 2: 141-164.

Zyto, Sacha, David Karger, Mark Ackerman, and Sanjoy Mahajan. 2012. “Successful classroom deployment of a social document annotation system.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1883-1892. ACM.

Acknowledgements

Lacuna was built in the Poetic Media Lab, a digital humanities lab in Stanford’s Center for Spatial and Textual Analysis (CESTA). The platform’s development was overseen by Michael Widner and carried out by him and a number of undergraduate and graduate research assistants at Stanford, with occasional assistance from external developers and project collaborators. The Lacuna project has received funding from the Wallenberg Foundation and the following departments and offices at Stanford University: the Vice Provost for Online Learning; the Vice Provost for Teaching and Learning; the Dean of Research; the Vice Provost for Undergraduate Education; the Division of Literatures, Cultures, and Languages; Stanford Community Engagement grants; and the Robert Bowman Denning Fund for Humanities and Technology. Additional support for Emily Schneider was provided by the Lytics Lab and the Anne T. and Robert M. Bass Stanford Graduate Fellowship.

About the Authors

Emily Schneider is a doctoral candidate in Learning Sciences and Technology Design at Stanford. She is the Director of Research and Pedagogy of the Lacuna project and a co-founder of Stanford’s Lytics Lab. Her work focuses on the design and evaluation of interactive online learning platforms. Currently, she is developing “critical reading analytics” for identifying and supporting the strategies used by learners when they critically engage with digital texts. More broadly, she is passionate about collaboration, open educational resources, and striking a balance between technology-enhanced and human-centered learning. Emily holds a B.A. in English Literature from Swarthmore College.

Stacy Hartman received her PhD in German Studies from Stanford University in 2015. Her dissertation explored the subversion and disruption of readerly empathy in post-1945 German novels and films. More broadly, she is interested in the relationship between reader and text, and in the ways in which readers construct texts both singularly and socially. It was this interest that led her to work on Lacuna as both researcher and instructional designer during her time at Stanford. Currently, she is a project coordinator at the Modern Language Association, where she works on initiatives related to humanities careers.

Amir Eshel is the Edward Clark Crossett Professor of Humanistic Studies, Professor of German Studies, and Director of the Department of Comparative Literature at Stanford University. His research focuses on contemporary literature and the arts, with an emphasis on twentieth- and twenty-first-century German, Anglo-American, and Hebrew literature. As the faculty director of Stanford’s research group on The Contemporary and of the Poetic Media Lab at Stanford’s Center for Spatial and Textual Analysis (CESTA), he is interested in how the contemporary cultural imagination addresses modernity’s traumatic past and its philosophical, political, and ethical implications. Most recently, he is the author of Futurity: Contemporary Literature and the Quest for the Past (The University of Chicago Press, 2013).

Brian Johnsrud is the Co-Director of the Poetic Media Lab at Stanford University, the digital humanities lab that initially designed and created Lacuna for academic and educational use. Brian earned his grades 6-12 teaching certification, along with a Master’s endorsement in Library and Media Science for secondary education, and he has taught middle and high school in a variety of schools and educational settings. His doctoral research focused on how people engage with narratives across media in the 21st century.
