
A spiral of books on library shelves appears almost as though a pie chart.

Supporting Data Visualization Services in Academic Libraries


Data visualization is not part of traditional forms of library research support, but it is an emerging area of increasing importance given the growing prominence of data in, and as a form of, scholarship. In an era of misinformation, visual and data literacy are necessary skills for the responsible consumption and production of data visualizations and the communication of research results. This article summarizes the findings of Visualizing the Future, an IMLS National Forum Grant (RE-73-18-0059-18) to develop a literacy-based instructional and research agenda for library and information professionals, with the aim of creating a community of praxis focused on data visualization. The grant aims to create a diverse community that will advance data visualization instruction and use beyond hands-on, technology-based tutorials toward a nuanced, critical understanding of visualization as a research product and form of expression. This article reviews the need for data visualization support in libraries, summarizes environmental scans of data visualization work in libraries, emphasizes the importance of the people involved in these services, discusses the components necessary to establish them, and concludes with the literacies associated with supporting data visualization.


Now, more than ever, accurately assessing information is crucial to both public and academic discourse. Universities play an important role in teaching students how to understand and generate information, but at many institutions, learning how to effectively communicate research findings is treated as idiosyncratic to each field or as the express domain of a particular department (e.g., applied mathematics or journalism). Data visualization, the use of spatial elements and graphical properties to display and analyze information, may follow such disciplinary customs. However, there are many commonalities in how we visualize information and data, and the academic library, at the heart of the university, can play a significant role in teaching these skills. In the following article, we outline a number of challenges in teaching complex technological and methodological skills like visualization, along with a rationale for, and a strategy to implement, these types of services in academic libraries. The same argument can be made for any academic support unit, whether based in a college, a library, or independently.

Why Do We Need Data Visualization Support in Libraries?

In many ways the argument for developing data visualization services in libraries mirrors the discussion surrounding the inclusion and extension of digital scholarship support services throughout universities. In academic settings, libraries serve as a natural hub for services that can be used by many departments and fields. Often, expertise in data visualization (like GIS or text mining) is tucked away in a particular academic department, making it difficult for students and researchers from other fields to access it.

As libraries already play a key role in advocacy for information literacy and ethics, they may also serve as unaffiliated, central places to gain basic competencies in associated information and data skills. Training patrons how to accurately analyze, assess, and create data visualizations is a natural enhancement to this role. Building competencies in these areas will aid patrons in their own understanding and use of complex visualizations. It may also help to create a robust learning community and knowledge base around this form of visual communication.

In an age of “fake news” and “post-truth politics,” visual literacy, data literacy, and data visualization have become exceedingly important. Without knowing the ways that data can be manipulated, patrons are less capable of assessing the utility of the information being displayed or making informed decisions about the visual story being told. Presently, many academic libraries are investing resources in data services and subscriptions. Training students, faculty, and researchers to effectively visualize these data sources increases their use and utility. Finally, having data visualization skills within the library also comes with an operational advantage, allowing more effective sharing of data about the library itself.

We are the Visualizing the Future Symposia, an Institute of Museum and Library Services National Forum Grant-funded group created to develop instructional and research materials on data visualization for library professionals and a community of practice around data visualization. The grant was designed to address the lack of community around data visualization in libraries. More information about the grant is available at the Visualizing the Future website. While we have only included the names of the three main authors, this article is a product of the entire cohort, which includes: Delores Carlito, David Christensen, Ryan Clement, Sally Gore, Tess Grynoch, Jo Klein, Dorothy Ogdon, Megan Ozeran, Alisa Rod, Andrzej Rutkowski, Cass Wilkinson Saldaña, Amy Sonnichsen, and Angela Zoss.

We are currently halfway through our grant work and, in addition to providing publicly available resources for teaching visualization, are also in the process of synthesizing and collecting shared insights into developing and providing data visualization instruction. This present article represents some of the key findings of our grant work.

Current Environment

In order to identify some broad data visualization needs and values, we reviewed three environmental scans. The first was carried out at Duke University by Angela Zoss (2018), one of the co-investigators on the grant, based on a survey that received 36 responses from 30 separate institutions. The second, by S.K. Van Poolen (2017), focuses on an overview of the discipline and includes results from a survey of Big Ten Academic Alliance institutions and others. The final report, by Ilka Datig for Primary Research Group Inc. (2019), provides a number of in-depth case studies. While none of the studies claims to provide an exhaustive list of every person or institution providing data visualization support in libraries, in combination they offer an effective overview of the state of the field.


The combined environmental scans represent around thirty-five institutions, primarily academic libraries in the United States. However, the Zoss survey also includes data from the Australian National University, a number of Canadian universities, and the World Bank Group. The universities represented vary greatly in size and include large research institutions, such as the University of California Los Angeles, and small liberal arts schools, such as Middlebury and Carleton College.

Some appointments were full-time, while others reported visualization as a part of other job responsibilities. In the Zoss survey, roughly 33% of respondents reported the word “visualization” in their job title.

Types of activities

The combined scans include a variety of services and activities. According to the Zoss survey, the two most common activities (i.e., the activities the most respondents said they engaged in) were providing consultations on visualization projects and giving short workshops or lectures on data visualization. Other services offered include providing internal data visualization support for analyzing and communicating library data; training on visualization hardware and spaces (e.g., large-scale visualization walls, 3D CAVEs); and managing such spaces and hardware.

Resources needed

These three environmental scans also collectively identify a number of resources that are critical for supporting data visualization in libraries. One key element is training, for new librarians or librarians new to this type of work, both in visualization itself and in teaching and consulting on data visualization. The scans also note that effectively teaching and supporting visualization software requires resources: access to the software and learning materials, but also ample time for librarians to learn, create, and experiment so that they can be effective teachers. Finally, they outline the need for communities of practice across institutions and for shared resources to support visualization.

It’s About the People

In all of our work and research so far, one element seems worth stressing and calling out on its own: it is the people who make data visualization services work. Even visualization services focused on advanced instructional spaces or immersive, large-scale displays require expertise to help patrons learn how to use the space, to maintain and manage technology, to schedule events that create interest, and, especially in the case of advanced spaces, to create and manage content that suggests the possibilities. An example is the North Carolina State University Libraries’ Andrew W. Mellon Foundation-funded project “Immersive Scholar” (Vandegrift et al. 2018), which brought visiting artists to collaborate with staff on immersive artistic visualization projects for the library’s large-scale displays.

We encourage any institution that is considering developing or expanding data visualization services to start by defining skill sets and services they wish to offer rather than the technology or infrastructure they intend to build. Some of these skills may include programming, data preparation, and designing for accessibility, which can support a broad range of services to meet user needs. Unsupported infrastructure (stale projects, broken technology, etc.) is a continuing problem in providing data visualization services, and starting any conversation around data visualization support by thinking about the people needed is crucial to creating sustainable, ethical, and useful services.

As evidenced by both the information in the environmental scans and the experiences of Visualizing the Future fellows, one of the most consistently important ways that libraries are supporting visualization is through consultations and workshops that span technologies from Excel to the latest virtual reality systems. Moreover, using these techniques and technologies effectively requires more than just technical know-how; it requires in-depth considerations of design aesthetics, sustainability, and the ethical use and re-use of data. Responsible and effective visualization design requires a variety of literacies (discussed below), critical consideration of where data comes from, and how best to represent data—all elements that are difficult to support and instruct without staff who have appropriate time and training.


Data visualization services in libraries exist both internally and externally. Internally, data visualization is used for assessment (Murphy 2015), marketing librarians’ skills and demonstrating the value of libraries (Bouquin and Epstein 2015), collection analysis (Finch 2016), internal capacity building (Bouquin and Epstein 2015), and in other areas of libraries that primarily benefit the institution. 

External services, in contrast, support students, faculty, researchers, non-library staff, and community members. Some examples of services include individual consultations, workshops, creating spaces for data visualization (both physical and virtual), and providing support for tools. Some libraries extend visualization services into additional areas, like the New York University Health Sciences Library’s “Data Visualization Clinic,” which provides a space for attendees to share and receive feedback on their data visualizations from their peers (LaPolla and Rubin 2018), and the North Carolina State University Libraries’ Coffee and Viz Series, “a forum in which NC State researchers share their visualization work and discuss topics of interest” that is also open to the public (North Carolina State University Libraries 2015).

In order to offer these services, libraries need staff who have some interest and/or experience with data visualization. Some models include functional roles, such as data services librarians or data visualization librarians. These functional librarian roles ensure that the focus is on data and data visualization, and that there is dedicated, funded time available to work on data visualization learning and support. It is important to note that if there is a need for research data management support, it may require a position separate from data visualization. Data services are broad and needs can vary, so some assessment on the community’s greatest needs would help focus functional librarian positions. 

Functional librarian roles may lend themselves to external facing support and community building around data visualization outside of internal staff. A needs assessment can help identify user-centered services, outreach, and support that could help create a community around data visualization for students, faculty, researchers, non-library staff, and members of the public. Having a community focused on data visualization will make sure that services, spaces, and tools are utilized and meeting user needs. 

There is also room to develop non-librarian, technical data visualization positions, such as data visualization specialists or tool-specific specialist positions. These positions may not always have an outreach or community building focus and may be best suited for internal library data visualization support and production. Offering data visualization support as a service to users is separate from data visualization support as a part of library operations, and the decision on how to frame the positions can largely be determined by library needs. 

External data visualization services can include workshops, training sessions, consultations, and classroom instruction. These services can be focused on specific tools, such as Tableau, R, Gephi, and so on. They can be focused on particular skills, such as data cleaning and normalizing, dashboard design, and coding. They can also address general concerns, such as data visualization transparency and ethics, which may be folded into all of the services.
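To make the “data cleaning and normalizing” skill concrete, the following is a minimal sketch of the kind of preparation step such a workshop might cover, written in Python with the pandas library. The dataset, column names, and values are all invented for illustration:

```python
# Hypothetical example: tidying a small, messy dataset before visualization.
# Inconsistent capitalization, stray whitespace, and numbers stored as text
# are the kinds of problems that break a chart before design even begins.
import pandas as pd

raw = pd.DataFrame({
    "department": ["  History", "history", "Biology ", "BIOLOGY"],
    "consults":   ["3", "5", "2", "2"],  # numbers arrived as strings
})

# Normalize text categories and convert counts to a numeric type.
clean = raw.assign(
    department=raw["department"].str.strip().str.title(),
    consults=pd.to_numeric(raw["consults"]),
)

# Aggregate to a tidy summary table ready for charting.
summary = clean.groupby("department", as_index=False)["consults"].sum()
print(summary)
```

Even a short example like this can anchor a workshop discussion of why inconsistent categories and text-typed numbers undermine a visualization long before questions of design arise.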

There are some challenges in determining which services to offer:

  • Is there an interest in data visualization in the community? This question should be answered before any services are offered to ensure services are utilized. If there are any liaison or outreach librarians at your institution, they may have deeper insight into user needs and connections to the leaders of their user groups.
  • Are there staff members who have dedicated time to effectively offer these services and support your users?
  • Is there funding for tools you want to teach?
  • Do you have a space to offer these services? This does not have to be anything more complicated than a room with a projector, but if these services begin to grow, it is important to consider the effectiveness of these services with a larger population. For example, a cap on the number of attendees for a tool-specific workshop might be needed to ensure the attendees receive enough individual support throughout the session.

If all of these areas are not addressed, there will be challenges in providing data visualization services and support. Successful data visualization services have adequate staffing, access to the required tools and data, space to offer services (not necessarily a data wall or makerspace, but simply a space with sufficient room to teach and collaborate), and a community that is already interested in and in need of data visualization services.


The skills that are necessary to provide good data visualization services are largely practical. We derive the following list from our collective experience, both as data visualization practitioners and as part of the Visualizing the Future community of practice. While the following list is not meant to be exhaustive, these are the core competencies that should be developed to offer data visualization services, either from an individual or as part of a team. 

A strong design sense: Without an understanding of how information is effectively conveyed, it is difficult to create or assess visualizations. Thus, data visualization experts need to be versed in the main principles of design (e.g. Gestalt, accessibility) and how to use these techniques to effectively communicate visual information.

Awareness of the ethical implications of data visualizations: Although the finer details are usually assessed on a case-by-case basis, a data visualization expert should be able to recognize when a visualization is misleading and have the agency to decline to create biased products. This is a critical part of enabling the practitioner to be an active partner in the creation of visualizations.

An understanding of, if not expertise in, a variety of visualization types: for example, network visualizations, maps, glyphs, and Chernoff faces. There are many specialized forms of data visualization, and no individual can be an expert in all of them, but a data visualization practitioner should at least be conversant in many. Although universal expertise is impractical, a working knowledge of when particular techniques should be used is a very important literacy.

A similar understanding of a variety of tools: Some examples include Tableau, PowerBI, Shiny, and Gephi. There are many different tools in current use for creating static graphics and interactive dashboards. Again, universal expertise is impractical, but a competent practitioner should be aware of the tools available and capable of making recommendations outside their expertise.

Familiarity with one or more coding languages: Much complex data visualization happens, at least partially, at the command line, so an effective practitioner should be at least familiar with the languages most commonly used (likely R or Python). Not every data visualization expert needs to be a programmer, but familiarity with the potential of these tools is necessary.
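As a concrete illustration of this literacy, here is a minimal, hypothetical example of the kind of scripted chart a practitioner might walk a patron through, written in Python with matplotlib; the workshop topics and attendance numbers are invented:

```python
# Hypothetical example: a scripted bar chart of workshop attendance.
# The point is that a few lines of code yield a reproducible graphic
# that is easy to revise when the underlying data change.
import matplotlib
matplotlib.use("Agg")  # render without a display, e.g. on a server
import matplotlib.pyplot as plt

topics = ["Excel", "Tableau", "R", "Python"]
attendance = [42, 35, 28, 31]  # invented numbers

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(topics, attendance)
ax.set_xlabel("Workshop topic")
ax.set_ylabel("Attendees")
ax.set_title("Data visualization workshop attendance (example data)")
fig.tight_layout()
fig.savefig("attendance.png")
```

A script like this can be rerun whenever the data change, which is one practical argument for pairing tool-based workshops with basic coding literacy.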


The challenges inherent in building and providing data visualization instruction in academic libraries provide an opportunity to address larger pedagogical issues, especially around emerging technologies, methods, and roles in libraries and beyond. In public library settings, the need for services may be even greater, with patrons unable to find accessible training when they need to analyze, assess, and work with diverse types of data and tools. While the focus of our grant work has been on data visualization, the findings reflect the general difficulties of balancing the need and desire to teach tools and invest in infrastructure with the value of teaching concepts and investing in individuals. It is imperative that work teaching and supporting emerging technologies and methods focus on supporting the people and the development of literacies rather than just teaching the use of specific tools. To do so requires the creation of spaces and networks to share information and discoveries.


Bouquin, Daina and Helen-Ann Brown Epstein. 2015. “Teaching Data Visualization Basics to Market the Value of a Hospital Library: An Infographic as One Example.” Journal of Hospital Librarianship 15, no. 4: 349–364. https://doi.org/10.1080/15323269.2015.1079686.

Datig, Ilka. 2019. Profiles of Academic Library Use of Data Visualization Applications. New York: Primary Research Group Inc.

Finch, Jannette L. and Angela R. Flenner. 2016. “Using Data Visualization to Examine an Academic Library Collection.” College & Research Libraries 77, no. 6: 765–778. https://doi.org/10.5860/crl.77.6.765.

Vandegrift, Micah, Shelby Hallman, Walt Gurley, Mildred Nicaragua, Abigail Mann, Mike Nutt, Markus Wust, Greg Raschke, Erica Hayes, Abigail Feldman, Cynthia Rosenfeld, Jasmine Lang, David Reagan, Eric Johnson, Chris Hoffman, Alexandra Perkins, Patrick Rashleigh, Robert Wallace, William Mischo, and Elisandro Cabada. 2018. Immersive Scholar. Released on GitHub and Open Science Framework. https://osf.io/3z7k5/.

LaPolla, Fred Willie Zametkin, and Denis Rubin. 2018. “The ‘Data Visualization Clinic’: A Library-led Critique Workshop for Data Visualization.” Journal of the Medical Library Association 106, no. 4: 477–482. https://doi.org/10.5195/jmla.2018.333.

Murphy, Sarah Anne. 2015. “How data visualization supports academic library assessment.” College & Research Libraries News 76, no. 9: 482–486. https://doi.org/10.5860/crln.76.9.9379.

North Carolina State University Libraries. “Coffee & Viz.” Accessed December 4, 2019. https://www.lib.ncsu.edu/news/coffee–viz.

Van Poolen, S.K. 2017. “Data Visualization: Study & Survey.” Practicum study at the University of Illinois. 

Zoss, Angela. 2018. “Visualization Librarian Census.” TRLN Data Blog. Last modified June 16, 2018. https://trln.github.io/data-blog/data%20visualization/survey/visualization-librarian-census/.

About the Authors

Negeen Aghassibake is the Data Visualization Librarian at the University of Washington Libraries. Her goal is to help library users think critically about data visualization and how it might play a role in their work. Negeen holds an MS in Information Studies from the University of Texas at Austin.

Matthew Sisk is a spatial data specialist and Geographic Information Systems Librarian based in Notre Dame’s Navari Family Center for Digital Scholarship. He received his PhD in Paleolithic Archaeology from Stony Brook University in 2011 and has worked extensively in GIS-based archaeology and ecological modeling. His research focuses on human-environment interactions, the spatial scale of environmental toxins, and community-based research.

Justin Joque is the Visualization Librarian at the University of Michigan. He completed his PhD in Communications and Media Studies at the European Graduate School and holds a Master of Science in Information (MIS) from the University of Michigan.

A screenshot of a highlighted section of the research essay; students’ annotations comment on the driving question and data collection in a word processor.

Visualizing Essay Elements: A Color-Coding Approach to Teaching First-year Writing

Ruth Li

In this piece, the author shares a strategy for teaching first-year writing in which students color-code and annotate sample rhetorical analysis and research-based essays for elements including citations, quotations, transition words, vocabulary, and structure.


A scan and transcription of a letter from Christopher Town.

Digital Paxton: Collaborative Construction with Eighteenth-Century Manuscript Collections


Digital Paxton is a digital collection, scholarly edition, and, most crucially for this issue, a burgeoning teaching platform devoted to the archives of Pennsylvania’s first major pamphlet war. In this co-authored piece, Will Fenton will introduce the massacre that sparked that debate, the limitations of the existing approach, and the affordances of his digital humanities project. Following Fenton’s comments on collaboration and acknowledgement, Kate Johnson and Kelly Schmidt will provide a case study in digital humanities pedagogy, demonstrating how they used a class transcription assignment as an opportunity to improve and expand the educational offerings of Digital Paxton. Through their analyses, Fenton, Johnson, and Schmidt will show how their collaboration demonstrates the value of digital projects and transcription assignments for students’ critical thinking and media literacy.

The Paxton Massacre

In December 1763, following years of backcountry warfare, a mob of settlers in the Paxton Township—just outside what is today Harrisburg—murdered twenty unarmed Conestoga Indians along the Pennsylvania frontier. Soon after, hundreds of these “Paxton Boys” marched on Philadelphia to menace a group of Moravian Indians who had, in response to the violence, been placed under government protection. Although the confrontation was defused through the diplomacy of Benjamin Franklin, the incident ventilated long-festering religious and ethnic grievances, pitting the colony’s German and Scots-Irish Presbyterian frontiersmen against Philadelphia’s English Quakers and their Susquehannock trading partners.

Supporters and critics of the Paxton Boys spent the next year battling in print: the resulting public debate constituted one-fifth of Pennsylvania’s printed material in 1764 (Olson 1999, 31). Pamphlets, which were inexpensive and quick to produce, were the medium of choice—hence the debate is often called the Paxton pamphlet war. But many other printed and unprinted materials circulated simultaneously, including broadsides, political cartoons, letters, diaries, and treaty minutes. Although this debate was ostensibly about the conduct of the Paxton vigilantes, it quickly migrated to other issues facing colonial Pennsylvania, including suspicions of native others, anxieties about porous borders, a yawning divide between urban and rural populations, and the proliferation of what we might today call “fake news.”

While most researchers explore the pamphlet war through John Raine Dunbar’s scholarly edition, The Paxton Papers (1957), much of the debate cannot be found in Dunbar’s edition.[1] There are dozens of alternate editions, answers, and responses to the pamphlets Dunbar identified, and, if one examines the originals, one uncovers engravings, artworks, and other forms of materiality that cannot be captured in textual transcription alone. Perhaps most importantly, the current approach to the Paxton debate, which prioritizes printed materials—namely pamphlets, broadsides, and political cartoons—inadvertently reinforces colonial and cosmopolitan biases. That is, much of the Paxton debate happened outside Philadelphia’s print shops. If researchers are to reckon with the massacre’s geographic, ethnic, and class complexities, they ought to consider manuscript collections that give voice to the backcountry settlers and indigenous peoples at the center of this tragic episode.

Digital Paxton

Digital Paxton seeks to expand awareness of and access to such heterogeneous records. The project began as a digital collection of pamphlets available through the Library Company of Philadelphia and Historical Society of Pennsylvania. As partners in the project, those institutions are responsible for digitizing at their own expense more than half of the records available in Digital Paxton. Subsequent partnerships have brought scans of contemporaneous Pennsylvania Gazette issues at the American Antiquarian Society; Friendly Association correspondence from the Haverford College Quaker and Special Collections; letters from the John Elder and Timothy Horsfield Papers at the American Philosophical Society; and congregational diaries from the Moravian Archives of Bethlehem. Each expansion has underscored that the 1764 pamphlet war included much more than pamphlets.[2]

As important as the diversity of materials is the structure of the collection. The design of the online publishing platform Scalar encourages researchers to draw connections between and across collections. Specifically, Scalar’s flat ontology enables all objects (images, transcriptions, sequences of images) to occupy the same hierarchy: no object is more of a subject than another. In practical terms, this means that researchers encounter Governor Penn’s letters in the same pathway as they do letters between Quaker leaders and native partners, accounts of diplomatic conferences, and the writings of Wyalusing leaders. At a technical level, then, the platform supports the philosophical goals articulated by the editors of the Yale Indian Papers Project: the digital collection as a common pot, a “shared history, a kind of communal liminal space, neither solely Euro-American nor completely Native” (Grant-Costa, Glaza, and Sletcher 2012, 2). This is the allure of the digital edition: when thoughtfully structured, digital editions better accommodate a constellation of material forms, voices, and perspectives than traditional print editions.

Although Digital Paxton is foremost a digital collection, the project includes a scholarly apparatus similar to Dunbar’s Paxton Papers. However, whereas Dunbar’s introduction is singular and possesses the patina of definitiveness, this project is multi-authored, interdisciplinary, and less didactic. Practically speaking, each of the project’s twelve historical overviews, lesson plans, and conceptual keyword essays serves as a freestanding entry point to the digital collection. That is, if a history student were interested in Conestoga Indiantown, she might choose to read Darvin Martin’s essay, “A History of Conestoga Indiantown,” use its links to explore the digital collection, and perform additional research using the various linked resources listed under further reading. Or, if a literature student wanted to think more carefully about what “elites” meant in the eighteenth century, she might begin with Scott Paul Gordon’s essay, “Elites.”

Students may use the project’s introduction or interpretative pathways to traverse the project; however, rather than promoting a singular, definitive approach to the massacre and pamphlet war, Digital Paxton embraces what Adele Perry (2005) and others have called polyvocality. By layering materials and contexts, each account becomes less definitive: more partial, contingent, and subject to scrutiny. This approach guards against rote thinking: the Paxton massacre is a story of genocidal violence and indigenous dispossession, but it is also a story of identity politics, self-governance, resistance, and active peace-making.

As Chimamanda Ngozi Adichie argued in a famous TED talk, narrative multiplicity acknowledges the complexity and dignity of human experience. “Stories matter. Many stories matter. Stories have been used to dispossess and to malign, but stories can also be used to empower and to humanize. Stories can break the dignity of a people, but stories can also repair that broken dignity,” explained Adichie. “[W]hen we reject the single story, when we realize that there is never a single story about any place, we regain a kind of paradise” (Adichie 2009). While regaining paradise is well beyond the scope of this project, grappling with the complexities, erasures, and ambiguities of historical memory falls within its purview, thanks to the generous contributions of scholarly and archival collaborators.

Collaboration and Acknowledgement

Given that Digital Paxton is very much a bootstrap operation—cobbled together without any significant external funding—recognition of labor is the least that can be offered to collaborators. To this point, the first two points of the “Collaborators’ Bill of Rights” have informed the project’s approach to collaboration and acknowledgement:

1) All kinds of work on a project are equally deserving of credit (though the amount of work and expression of credit may differ). And all collaborators should be empowered to take credit for their work.

2a) Descriptive Papers & Project reports: Anyone who collaborated on the project should be listed as author in a fair ordering based on emerging community conventions.

2b) Websites: There should be a prominent ‘credits’ link on the main page with primary investigators (PIs) or project leads listed first. This should include current staff as well as past staff with their dates of employment (Clement, Croxall, et al. 2011).

Digital Paxton is the fruition—however nascent—of contributions from dozens of archivists, curators, scholars, and technologists, whose labor is subsidized by archives, cultural institutions, research libraries, and universities. Although this project was sparked by personal research interests, little would be available today without the resources, labor, and expertise of those individuals and institutions. Acknowledgement, on the project’s Credits page and in publications and talks, is one form of (admittedly paltry) recompense.

Collaborators take many forms, and there is perhaps no cohort more vital to this project’s future—and that of the humanities more broadly—than that of student-collaborators. This project embraces Mark Sample’s notion of “collaborative construction,” through which students produce new knowledge in concert with one another, their professor, and the project, broadly conceived. “A key point of collaborative construction is that the students are not merely making something for themselves or their professor,” explains Sample. “They are making it for each other, and in the best scenarios, for the outside world” (Sample 2011).

The second half of this article seeks to put this philosophy into practice through a case study. In the spring of 2017, two faculty members, Benjamin Bankhurst (Shepherd University) and Kyle Roberts (Loyola University Chicago), who were co-teaching an undergraduate history course, “Digitizing the American Revolution,” sought to introduce students to digital humanities tools and methods. They opted to create an assignment through which students would learn to transcribe eighteenth-century letters using scanned manuscript materials from Digital Paxton. Each student was responsible for transcribing a manuscript page. After Bankhurst or Roberts vetted students’ work, transcriptions were loaded into Digital Paxton, with a credit to each student-transcriber.[3]

The project was successful on several counts. First, it expanded the number of transcribed (and searchable) resources in Digital Paxton. Second, it produced teaching materials that can be repurposed in future transcription assignments. And third, it attracted a new community of researchers to the site. This interest is certainly measurable in the students who participated in the assignment, many of whom now regularly share Digital Paxton updates on social media platforms. Perhaps most importantly, Roberts’s graduate students—Kate Johnson, Marie Pellissier, and Kelly Schmidt—took ownership of the project in ways that made it both more effective and more scalable. Drawing on their classroom experience and reviews of best practices in transcription pedagogy, they offered recommendations on how to modify the Digital Paxton site to facilitate easier transcription, created documents guiding students through common hurdles in the transcription process, and offered feedback on improving the exercise as a classroom assignment. Johnson and Schmidt will now describe their experience with the transcription project and the challenges and opportunities it provided.

A Case Study in Digital Pedagogy

As members of Roberts’s class, we were asked to transcribe a page from Digital Paxton’s digital collection. We enjoyed the process of learning how to identify and transcribe unfamiliar eighteenth-century characters consistently, as well as the sense that we were contributing to a larger project of significant historical value to scholars and the general public. However, along with our undergraduate classmates, we encountered challenges as we struggled to interpret the manuscripts. We felt that we could help expand the project by creating a guide for people planning to transcribe individually or in a crowdsourced or classroom setting.

The assignment began with an introduction to Digital Paxton from its creator, Will Fenton (via Skype). As a class we explored the site together and received a contextual overview of the Paxton pamphlet war. The contextual information helped us better understand the significance of our assignment in relation both to our course and to the work of historians more broadly. Moreover, the personal touch of talking to the website’s creator cultivated greater interest in the project.

The directions for the assignment were simple: transcribe one assigned page from the Friendly Association manuscripts (Haverford College Quaker and Special Collections) and write a three-paragraph essay about what the page contained, whose voice it was written in, who it excluded, and how it felt to participate in this transcription process as a historian. Students did not use any transcription aids. We each viewed the manuscript page in a web browser (or printed it out), then typed transcriptions using a word processor. However, these seemingly simple directions proved more complicated for students who were uncertain how to format their transcriptions consistently or account for peculiar eighteenth-century abbreviations. Some students opted to peer-review one another’s assignments before turning them in, which helped improve consistency and their understanding of the materials with which they were working.

For some students, the public nature of the transcription increased their commitment to the assignment. In her essay on “Teaching the Digital Caribbean,” Kelly Baker Josephs discusses how adding the public as an audience for coursework creates a “performance” aspect that changes the course experience (Josephs 2018). We saw this with our class, as several students put more time and effort into the assignment, such as peer reviewing each other’s transcriptions, expressly because it would be shared publicly on a website.

Student Responses

Each student turned in a short essay detailing the content of their transcription, its biases, and their experience transcribing it. In addition, we had a class discussion on the greatest challenges in transcribing and practices that might improve the transcription process and make the final product more useful. One student, who described working with the source as both “tedious and exciting,” encapsulated the gist of most anonymous student responses to the assignment.[4] The most frequent obstacles identified were difficulty reading the handwriting, deciphering inconsistent capitalization and spelling, differentiating between vowels as well as lowercase “L’s” and “F’s,” and unfamiliarity with the long “S.” While the scans were clear, some students had trouble reading their assigned text because authors often used both sides of the page, the ink bleeding through from one side to the other. One student suggested that reading the text and then rereading it before transcribing made it easier to understand the content. Others said that they needed more knowledge not only of paleography and period syntax, but also context about the history of the time period, region, and specific event in which these papers were situated. Without such broader knowledge, students sometimes struggled to transcribe local place names, like “Minisinks,” and the names of subjects in the documents, especially Native Americans, such as “Scarroyada.”

Nevertheless, many of the same students who struggled to decipher the eighteenth-century English and handwriting still expressed an appreciation for, and a better understanding of, the work of historians. One student wrote, “I’m quite honored and impressed that I had the opportunity to participate in the understanding and detailing of history, especially in the turn of the Revolution.” Several others professed a “newfound respect for historians” and claimed that they felt like they were “doing the work of a real historian.”

Most of the students were not history majors, and for many, this was the first time they had engaged with primary sources. Most of their previous coursework in history had focused on secondary source readings about big ideas and events, which students assessed through essay-writing assignments. One respondent noted that, “working with primary sources feels much more immersive and enlightening, in terms of being able to see a glimpse of what their life was like and the issues they dealt with in their time.”

While the process of transcribing manuscripts was monotonous, students said that work with handwritten letters changed the way they engaged with materials. One student said, “It felt good to work with a primary source such as this letter, and be able to see the firsthand view of the writer and a glimpse of their world.” Several students also welcomed access to Native American voices, who are often silenced in settlement narratives. This recognition encouraged them to grapple with the possibility that some of these documents may not have been telling the whole truth about the event. One student even mused that soon historians might have to decipher audio sources rather than interpret handwriting.

These student responses align with pedagogical scholarship. Notably, William Kashatus posits that close analysis of primary sources gives students a more personal understanding of history. Because primary sources can “evoke emotional responses,” students are better able to “identify with the human factor in history, including the risks, frailties, courage, and contradictions of those who shaped the past” (2002, 7). According to Kashatus, students are better able to recognize the biases in historical records and to assess their own contemporary biases, and those of modern-day media, when they have engaged in close readings of historical sources in the classroom (2002, 7–8). Student feedback from our classroom assignment reflects that students felt they gained a sense of intimacy with historical writers. Avishag Reisman and Sam Wineburg, writing about the new Common Core standards, have argued that working with primary source materials challenges students to think carefully about what does and does not count as evidence. Reisman and Wineburg argue that primary source materials compel students to “interrogate the reliability and truth claims” rather than to simply “cull” evidence (2012, 25–26). Through transcription work, students must read the text word for word, compelling them to think more critically about what is being expressed and not to take a document’s message at face value.

Gathering Survey Data

Although the students’ comments were helpful, we realized we needed more feedback before we pursued any future crowdsourced transcription projects. To that end, we administered an anonymous one-page survey to the participants of a transcribe-a-thon event at Loyola organized by the Center for Textual Studies and Digital Humanities in conjunction with a nationwide event. Approximately 70 students, staff, and faculty attended, 43 of whom elected to complete the survey. Additionally, we administered the survey to 21 students enrolled in a 100-level “Interpreting Literature” class. For the transcribe-a-thon, participants used a subscription-based transcription program called FromThePage.

The survey consisted of nine questions: six multiple-choice and three open-ended. Questions solicited feedback on the ease of participants’ use of the transcription program and on the experience of transcribing itself. Two questions asked about the participants’ perceived value of the experience of transcribing. At the end, participants were asked to provide an email address if they were interested in future transcription projects. The anonymous survey results highlighted which elements of transcription work most engaged participants and which challenges or barriers thwarted their participation. Thus, the survey offered concrete data to support ideas that emerged from student feedback in the Loyola/Shepherd assignment. From these conclusions we gained insights into what would make a successful transcription project for interacting with digitized early American documents, and those insights informed the guides we created for Digital Paxton.

One key difference was experiential: students preferred the communal work of a transcribe-a-thon to the solitary work of a for-credit assignment. While the majority of both sets of students said that they found the experience valuable, more transcribe-a-thon participants recorded satisfaction. Additionally, a much higher percentage of transcribe-a-thon participants expressed interest in future transcription projects (82% of event participants compared to 38% of classroom participants).

We evaluated these discrepancies using responses to the open-ended questions, which included a question about the most valuable part of the experience. The classroom assignment included some, but not all, of the contextualizing elements present at the event, such as the talks and recitations of historical speeches and songs. These elements, combined with the celebratory atmosphere of the event (held as a birthday celebration for Frederick Douglass), helped to affirm the sense that participants were both learning and contributing to a living project. The survey results and our experience with the transcribe-a-thon show that transcription projects not only get students working with primary materials, contributing to scholarly work, and learning to use digital tools; they also inspire students to participate in future projects.

Translating Feedback into Practice

Student feedback and survey responses provided some clear takeaways for Digital Paxton. Although incorporating a transcription project into a class’s curriculum and awarding class credit and public access incentivized students’ contributions, assignments needed to be structured to foreground both historical and logistical context for transcriptions. Additionally, assignments needed to emphasize the importance of student transcriptions to the long-term goals of the project. When we began contributing transcriptions to Digital Paxton, the project did not have guidelines for transcriptions or a built-in transcription platform.

We developed a “Transcription Best Practices” guide for Digital Paxton, now available in both the Transcription and Pedagogy sections of the site for educators who want to introduce similar assignments in their classrooms. In it, we attempted to anticipate contextual questions that might arise during an assignment. We used the feedback from the Loyola/Shepherd assignment to pinpoint the most important contextual clues needed. We included images of eighteenth-century writing conventions, such as the elongated “s” and the shortening of common words like “which” to “w/ch.” By equipping potential transcribers with the materials they need to understand the papers in their historical and cultural context—the guidelines, site introduction, and historical overviews—we met a need expressed in our survey results.

Digital Paxton’s overview of the conflict provides context for an event with which students are only vaguely familiar, but it does not necessarily supply students with definitive answers. Students build intimacy with the text by describing it, assessing the writer’s choices of language and style as they go. Writing out the text seemed to improve students’ reading comprehension. By adding transcription guidelines, we further sought to help students avoid getting bogged down by complications of language or handwriting. In their response essays, students used the text they transcribed as “evidence” of where the author stood ideologically within the conflict and of how the conflict unfolded. As one student described it, the source was one piece in the larger puzzle of their understanding.

Selecting a platform and developing a process through which future cohorts could contribute to the project were more complicated. After all, our approach—toggling between a web browser and a word processor—would not work well for larger classes or transcription projects. We had three key stipulations for a prospective transcription platform: it had to be easily accessible to and usable for transcribers, well supported, and interoperable with Scalar. We identified two platforms that met most of our requirements: Scripto and FromThePage. Both enabled users to record transcriptions alongside scanned pages, a priority for students in the “Digitizing the American Revolution” course. Scripto offered a free, open-source transcription tool, but it was not being fully supported by its developers, and we did not know whether it would continue to be supported in the future. Moreover, Scripto required scanned pages to be migrated from Scalar to Omeka. We selected FromThePage because it was well supported, did not require an Omeka installation, and Fenton could use his university library’s subscription (Fordham University).

On the logistical side, the survey responses also helped us understand the barriers to using online transcription tools. The most prevalent issue was the readability of the scanned text, followed by challenges navigating the transcription platform. While there are limits to how much can be done to address manuscript readability, especially when it comes to eighteenth-century material, we took the latter concern into account when we created “Using FromThePage.” In that documentation we sought to provide clear, concise instructions on how to use FromThePage in conjunction with Digital Paxton. This effort included screenshots illustrating how to register as a user and how to locate pages available for transcribing, a key issue for participants at the transcribe-a-thon. By anticipating user experience issues, we hope to enable students to lose themselves in the rich texts and contexts of Digital Paxton, rather than spend valuable time and energy troubleshooting the mechanics of the process.

Future Collaborations

While our experiment in student manuscript transcription was not without its limitations, the process of pursuing student involvement and recording student feedback has made Digital Paxton a more effective teaching tool. Thanks to the labors of Kate Johnson, Kelly Schmidt, and Marie Pellissier, the project now includes best practices for transcribing eighteenth-century manuscripts (Transcription Best Practices), an assignment for integrating a similar exercise into a university classroom (Transcription Assignment), and a platform through which any educator may bring Friendly Association manuscripts into her classroom (Transcriptions).

From our research and practical experience, we have found that transcription of primary sources encourages students to read texts more closely, to view writers as human beings (rather than detached historical figures), to confront archival gaps, silences, and erasures, and to view their work as a contribution to a collaborative project. In a recent post for the National Archives, Meredith Doviak wrote that with increased digital access to collections, students now have more opportunity to become “active critics and curators of those literary productions rather than mere explicators of them” (2017). Transcription projects can serve as vehicles through which students act as participants in knowledge creation, honing valuable critical thinking skills and a historically informed sense of media literacy that will serve them well inside and outside the classroom.


[1] Nearly every study of the Paxton crisis cites Dunbar’s 60-year-old edition, and for good reason: it collects 28 noteworthy pamphlets and provides a useful introduction to the debate. Time has, however, revealed the edition’s limitations, foremost, its narrow selection of materials. Alison Gilbert Olson (1999) has since identified at least 63 pamphlets and 10 cartoons, and the distinction between pamphlets and political cartoons is itself ambiguous, given that many cartoons were nested inside of pamphlets, many of which circulated in multiple editions.
[2] Students can surface new perspectives from indigenous peoples and backcountry settlers by attending to a diverse set of records, all of which are available as open-access, print-quality images. Today, the project features more than 2,500 images, including 16 artworks, three books, 17 broadsides, 128 manuscripts, 26 newspaper and periodical issues, 69 pamphlets, and nine political cartoons, many of which have never before been digitized.
[3] For example, visitors will find a credit to Emina Hadzic at the bottom of her transcription of “Various Memoranda” (http://digitalpaxton.org/works/digital-paxton/various-memoranda-1760—1-1). She was also acknowledged (and tagged) in social media posts on Facebook, Twitter, and Instagram.
[4] Quotations in this section come from anonymous student answers to a course survey and are reproduced with names withheld by mutual agreement. “Explore Common Sense Survey,” administered by Kate Johnson, Marie Pellissier, and Kelly Schmidt. February 1, 2018.


Adichie, Chimamanda Ngozi. 2009. “The Danger of a Single Story.” Filmed July 2009 at TED Global. TED video, 18:43.

Clement, Tanya, Brian Croxall, et al. 2011. “Collaborators’ Bill of Rights.” Off the Tracks: Laying New Lines for Digital Humanities Scholars. Media Commons Press.

Doviak, Meredith. 2017. “Teaching from the Archives.” Education Updates (blog). National Archives. February 9, 2017.

Grant-Costa, Paul, Tobias Glaza, and Michael Sletcher. 2012. “The Common Pot: Editing Native American Materials.” Scholarly Editing: The Annual of the Association for Documentary Editing 33: 1–17.

Josephs, Kelly Baker. 2018. “Teaching the Digital Caribbean: The Ethics of a Public Pedagogical Experiment.” The Journal of Interactive Technology & Pedagogy 13.

Kashatus, William C. 2002. Past, Present & Personal: Teaching Writing in U.S. History. Portsmouth: Heinemann Educational Books.

Olson, Alison Gilbert. 1999. “The Pamphlet War over the Paxton Boys.” The Pennsylvania Magazine of History and Biography 123, no. 1/2: 31–55. http://www.jstor.org/stable/20093260.

Perry, Adele. 2005. “The Colonial Archive on Trial: Possession, Dispossession, and History in Delgamuukw v. British Columbia.” In Archive Stories: Facts, Fictions, and the Writing of History, 325–50. https://doi.org/10.1215/9780822387046-015.

Reisman, Avishag, and Sam Wineburg. 2012. “Text Complexity in the History Classroom: Teaching To and Beyond the Common Core.” Social Studies Review 51, no. 1: 24–29.

Sample, Mark. 2011. “Building and Sharing (When You’re Supposed to Be Teaching).” Journal of Digital Humanities 1, no. 1 (Winter).

About the Authors

Will Fenton is the Director of Scholarly Innovation at the Library Company of Philadelphia, the Creative Director of Redrawing History: Indigenous Perspectives on Colonial America, funded by The Pew Center for Arts and Heritage, and the founder and editor of Digital Paxton. Will earned his Ph.D. at Fordham University, where he specialized in early American literature and the digital humanities. He is the recipient of prestigious fellowships from the American Philosophical Society; Haverford College Quaker and Special Collections; the Humanities, Arts, Science, and Technology Alliance and Collaboratory; the Library Company of Philadelphia; the Modern Language Association; and the Omohundro Institute of Early American History & Culture. His writings have appeared in American Quarterly, Common-Place, and ESQ, as well as in numerous public platforms, including Inside Higher Ed, Slate, and PC Magazine.

Kelly Schmidt, co-creator of ExploreCommonSense.com, is Research Coordinator for the Slavery, History, Memory, and Reconciliation Project, co-sponsored by Saint Louis University and the Jesuits of the Central and Southern United States. She is a PhD candidate at Loyola University Chicago, where her research focuses on slavery, race, and abolition. Kelly has pursued her interests in museum work, public history, and digital humanities at several institutions, including the Heritage Village Museum, Cincinnati Museum Center, National Underground Railroad Freedom Center, Auschwitz-Birkenau State Museum, and the Colonial Williamsburg Foundation.

Kate Johnson is an archival assistant at the University of Northern Colorado’s Archives and Special Collections. She earned her M.A. in Public History from Loyola University Chicago and her B.A. in History and German from the University of Northern Colorado. Her research interests are in women’s history, cultural history, and early America. She has worked in museums and public history institutions for over ten years, including positions at Thomas Jefferson’s Monticello, the Women and Leadership Archives, and the Frances Willard House Museum. She is a co-creator of the site ExploreCommonSense.com and currently serves as an appointed member of the National Council on Public History’s Digital Media Group.

Distorted image of institutional logo

Born-Digital Archives in the Undergraduate Classroom


This case study describes a first-year seminar titled “Born Digital,” taught by a university library faculty member within a digital humanities curricular initiative at a small liberal arts college. This course explored the concept of “born-digital archives” and asked the following questions: How will future scholars understand the twenty-first century world of fragmented and fragile knowledge production and storage? What can creators do to ensure their content will continue to serve as record of their community? How do archivists adjust to a new paradigm where collecting decisions must be made in an instant?

The course embedded significant training in digital competencies and information literacy skills within a seminar on digital memory and archival theory. We examined issues related to the ethics of appraisal, privacy, digital obsolescence, underrepresented communities, media studies, and collective memory. A series of hands-on lab sessions gave students the technical skills to create their own web archives on the Archive-It platform. For undergraduates, a course on born-digital archives can provide a critical window into understanding modern archival practices and concerns, as well as our personal and collective responsibilities as media producers and consumers. This article addresses the lessons learned when adapting professional practices for an undergraduate audience.


“The average lifespan of a webpage is 100 days.” This striking statistic has made its way into several popular magazine articles in the last few years. These articles, published in places like The Atlantic (LaFrance 2015) and The New Yorker (Lepore 2015) are alarmist in tone, but they do dispel the notion that the web is a place of permanence. The mourning period for Geocities may be over, but the recent shuttering of Storify, and Photobucket’s “breaking of the Internet” by blocking image links for thousands of users following a subscription restructuring (Notopoulos 2017) remind us that our content will not be available in perpetuity. Even the source of this statistic was hard to track down due to link rot.[1]
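Link rot of the kind described here can also be surveyed programmatically. As a rough illustration of the idea (not part of the original articles, and using only Python's standard library), a minimal sketch might classify each URL's health like this:

```python
from urllib import request, error

def classify_status(code):
    """Map an HTTP status code (or None for no response) to a rough label."""
    if code is None:
        return "unreachable"
    return "ok" if 200 <= code < 400 else "broken"

def check_link(url, timeout=10):
    """Issue a HEAD request and classify the result. Network failures
    (DNS errors, timeouts) count as 'unreachable'; 4xx/5xx as 'broken'."""
    try:
        with request.urlopen(request.Request(url, method="HEAD"),
                             timeout=timeout) as resp:
            return classify_status(resp.status)
    except error.HTTPError as exc:  # server answered, but with an error code
        return classify_status(exc.code)
    except (error.URLError, TimeoutError):
        return classify_status(None)
```

A HEAD request avoids downloading page bodies; a real citation audit would also need redirect tracking, politeness delays, and retries.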

It was experiences similar to this one—the troublesome journey through dead links to verify a citation—that inspired the creation of a first-year undergraduate seminar on the topic of born-digital archives, as a way to engage students in the realities of accessing and constructing a historical record. One of the exciting outcomes of the popularity of digital humanities projects in the undergraduate classroom is the increased engagement with the material and staff of local archives and special collections. For college students born in the twenty-first century, these DH projects create a tangible connection with a past where letters, ledgers, and newspapers were the primary modes of mass communication and record keeping. But what about the artifacts of our time? We produce millions of records on a daily basis in the form of email, social media, and the detritus of a 24-hour news cycle. Will these records even survive 100 days? How will future scholars understand the twenty-first century world of fragmented, fragile, and ephemeral knowledge production and storage? What can creators do to ensure their content will survive as a record of their community? How do archivists adjust to a new paradigm where collecting decisions must be made in an instant? Digital archivists are starting to figure out how to handle the vast volumes of data at risk. Just as importantly, they are working to establish best practices for ethical collecting. Is anything on the web fair game for capture? Is it right to ignore robots.txt? For undergraduates, a course on born-digital archives can provide a critical window into understanding modern archival practices, as well as their own responsibilities as media producers and consumers.
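The robots.txt convention raised above is machine-readable, and Python's standard library can evaluate it directly. This small sketch (the rules and URLs are invented for illustration) shows how an ethically minded crawler could check whether a capture is permitted before fetching a page:

```python
from urllib import robotparser

# A hypothetical robots.txt that keeps crawlers out of a /private/ area.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Consult the rules before capturing a page.
print(rp.can_fetch("*", "https://example.org/private/diary.html"))  # False
print(rp.can_fetch("*", "https://example.org/public/essay.html"))   # True
```

Whether an archive *should* honor these rules when a site's historical value is at stake is exactly the ethical question the course asks; the code only shows what the convention mechanically expresses.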

This View from the Field will describe a first-year seminar titled “Born Digital,” taught by a university library faculty member within a digital humanities curricular initiative at Washington and Lee University.[2] Since this course was taught at the introductory level in a multi-disciplinary environment, its methods and assignments could be adapted to a variety of classes. The course embedded significant training in digital competencies and information literacy skills within a seminar on digital memory and archival theory. We began with reflective conversations on the experience of being a “digital native,” and then moved on to exploring the concepts and skills necessary to create a born-digital archive using the Archive-It platform.[3] This case study will share the lessons learned while adapting professional archival practices for an undergraduate audience.

Course Design and Framing

How do born-digital objects and records change the way we approach teaching? There is an abundance of literature on teaching with archival material and digital technologies. A search for model courses returns digital history courses similar to Shawn Graham’s “Crafting Digital History”[4] and graduate-level courses on digital preservation from library and information programs. Creating a seminar on born-digital archives required adapting these graduate-level models to an undergraduate audience unfamiliar with the professional and methodological practices of archivists and historians.

Because our course explored new territory, it was essential to find readings that exposed students to the rich scholarly conversation around archival principles without weighing them down with jargon. Several texts met these criteria and were instrumental in shaping the course. Abby Smith Rumsey’s When We Are No More (2016) provides a high-level view of our relationship with information. From the ancient Greeks to the development of modern science, Rumsey contextualizes the modern information revolution for students who were born after the invention of Google and reminds us that “we have a lot of information from the past about how people have made these choices before” (Rumsey 2016, 7). For the nuts and bolts of digital preservation, we relied on Trevor Owens’s Theory and Craft of Digital Preservation (2017), available as a pre-print at the time of the course. Not only is Owens well respected in the digital preservation world, but his writing is engaging and approachable for undergraduates. Owens’s purpose for the text, offering “a path for getting beyond the hyperbole and the anxiety of ‘the digital’ and establish[ing] a baseline of practice” (Owens 2017, 6), fit well with the goals of the course. Our final course text, The Web as History: Using Web Archives to Understand the Past and Present (Brügger and Schroeder 2017), was essential for modeling the way scholars make meaning from born-digital archives. Ian Milligan’s chapter, “Welcome to the web: The online community of Geocities during the early years of the World Wide Web,” contextualizes Geocities in its time and provides examples of computational approaches to web archives (Brügger and Schroeder 2017).

The learning objectives for the course, listed below, drew from overlapping frameworks.

  • Students will learn and be able to apply the principles of archival theory and practice.
  • Students will think critically about the use and creation of digital records in their own lives and communities.
  • Students will analyze “born digital” archives through the lens of their chosen discipline(s).
  • Students will practice methods for collecting and preserving born-digital archives by conducting their own digital preservation project.

These objectives gesture toward the established digital humanities learning outcomes from A Short Guide to the Digital_Humanities[5] (Burdick et al. 2012), adopted by our curricular initiative. These outcomes emphasize the ability to assess information technologies and to practice design thinking. The Association of College and Research Libraries’ Framework for Information Literacy for Higher Education served as this course’s backbone (Association of College and Research Libraries 2015).[6] Students were asked to think critically about information in every assignment. From writing an annotated bibliography to creating metadata for their web archive, students moved from savvy information consumers to thoughtful information producers. The lab exercises drew from Bryn Mawr’s Digital Competencies initiative and framework. Students developed “digital survival skills” such as file structure navigation and troubleshooting, along with digital writing and publishing skills like HTML and CSS (Bryn Mawr College n.d.).

Structure and Assignments

This course[7] took place during a twelve-week term in the winter of 2018. We met for ninety minutes twice a week and divided the week into discussion and lab days. Thematically, the course began with three weeks of introductions to the major concepts of the course: the idea of the “digital native,” collective memory, record keeping, and archives as institutions. The first assignment was a personal essay on these concepts and provided an initial indication of students’ comprehension and writing ability. Starting with this framing gave students an opportunity to share personal information and ultimately created a strong sense of community within the class.

In week four, we transitioned out of the personal sphere with a visit to the university library’s Special Collections and Archives department. After an introduction to the unit and its operations, students formed small groups and selected from a small pool of manuscript collections. For the second assignment, each group unpacked its collection to learn about the collection’s creator, context, and provenance. The hands-on experience with archival sources readied them to consider individual archival principles like original order and respect des fonds (the idea that archival records should be grouped by creator). We also discussed the role and resources of the Special Collections and Archives department within our institutional context.

After week seven, we devoted each week to discussing one aspect of the records management lifecycle—appraisal, acquisition, arrangement and description, access, and outreach. Students worked toward their final project through a series of assignments: an annotated bibliography of existing born-digital collections and scholarly articles on a potential topic, a proposal for their born-digital collection, a process log, a short presentation, and a final reflection. Their final project was conducted through an educational partnership with Archive-It, a web archiving service. For a fee, we received 15GB of space in an Archive-It account and a live training session from an Archive-It staff member. Students selected ten websites on a topic of their choosing, from NFL protests to cryptocurrency.[8] They crawled each of their URLs to create a snapshot that would be preserved by the Internet Archive. The process log was the primary graded product to ensure that platform difficulties did not unevenly affect students.

Labs and Technical Skills

Throughout the course, we held a series of lab days to learn the technical skills necessary for the web archiving project. Lab days were relaxed, and instructions were available on the course website so students could work at their own pace. Grouping students by operating system helped with peer-to-peer problem solving when technical errors occurred. On the first day, we built simple websites with HTML and CSS—essential languages for troubleshooting captured websites in Archive-It. Another lab session focused on the command line, using existing tutorials like “The Command Line Crash Course” (Shaw n.d.).[9] This skill proved useful when a guest speaker led a workshop on Twarc, a command-line tool for capturing social media data (specifically Twitter), created by Documenting the Now.[10] One of the most engaging lab days was spent making glitch art to complement our discussion of file fixity in digital objects. We modified images and audio by opening the files in a text editor and scrambling the content to demonstrate the fragility of digital files.
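The connection between the glitch-art exercise and file fixity can be made concrete in a few lines of code. The sketch below is an illustration, not part of the course materials: it pairs a SHA-256 checksum (a standard fixity value in digital preservation) with a single flipped byte of the kind the text-editor scrambling produced. The function names are my own.

```python
import hashlib

def fixity(path):
    """Return a file's SHA-256 checksum -- the value a fixity check compares over time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def glitch(src, dst, offset=0):
    """Copy src to dst with one byte inverted -- a minimal version of the lab's scrambling."""
    with open(src, "rb") as f:
        data = bytearray(f.read())
    data[min(offset, len(data) - 1)] ^= 0xFF  # flip every bit of a single byte
    with open(dst, "wb") as f:
        f.write(bytes(data))

# A one-byte change leaves the file the same size but yields an entirely
# different checksum, which is why fixity checking catches silent corruption.
```

Running `glitch()` on an image and opening the copy shows visible distortion; `fixity()` reports a changed checksum even when the corruption is invisible to the eye.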

All of the labs contributed to improving computer and web literacies. Despite their reputation as digital natives, most of the first-year students did not know much about how the web worked. Working with HTML or the command line was an exciting look behind the curtain. Not only did the labs improve specific skills, they helped students become comfortable learning and troubleshooting digital tools.


Assessment

Students successfully achieved the goals of this course. The primary challenge from the instructor’s point of view was translating professional concepts to a first-year audience. The projects and lab activities were essential in bringing archival principles to life. The opportunity to work with manuscript collections was a highlight for many students and let them experience the realities of archival work. By using the Archive-It platform, students created something that would live beyond them and the bounds of the course. Working with their own topic was both exciting and challenging: it created a strong level of investment but required explicit training in formulating an appropriate research agenda.

Overall, most students easily met the first two learning objectives of learning archival principles and thinking critically about their own digital footprint. Student performance was uneven regarding the more analytical objectives, such as analyzing existing born-digital archives and creating their own collection. Project-based assignments were new to these first-year students, as was the emphasis on process over product. Student evaluations were positive, with most citing the value of learning about an underrepresented field and gaining a new perspective. From the instructor’s perspective, however, the best method of assessment would be to track the information literacy practices of the students throughout their college career. As the digital humanities curriculum initiative transitions into a digital culture and information minor, this type of assessment will hopefully become possible.


Conclusion

A course centered on archival research, whatever form it may take, is an ideal vehicle for teaching a range of scholarly practices and content areas. It is important for current students to be able to assess and understand the digital content they consume and produce every day. A course on born-digital archives opens the possibilities beyond specific manuscript collections or institutional records to anything on the web. Students held a range of opinions on the trustworthiness of the government and private institutions as preservers of the cultural record, but they all recognized the value of taking ownership of their data and preventing gaps and biases in collections. Their reflections consistently mentioned the importance of community-created and -controlled archives. Hopefully this case study will inspire other instructors to make use of born-digital archives in their teaching.


Notes

[1] “The Signal,” the Library of Congress’s blog on digital stewardship, cites a Washington Post article (Ashenfelder 2011) as the source for this statistic, but their embedded link results in a 404 for an individual’s blog. Tracking down the Washington Post article in a subscription-based newspaper database indicates that the quote was attributed to Brewster Kahle, founder of the Internet Archive, though no context or evidence is given.

[2] More information is available at https://digitalhumanities.wlu.edu/.

[3] Archive-It is a subscription-based web archiving service offered by the Internet Archive. The university library sponsored an “Educational Partnership” account for this course. Archive-It works with a variety of partners, including K-12 schools. They can be found at http://archive-it.org/.

[4] Available at http://site.craftingdigitalhistory.ca/.

[5] Available at http://jeffreyschnapp.com/wp-content/uploads/2013/01/D_H_ShortGuide.pdf.

[6] Available at http://www.ala.org/acrl/standards/ilframework.

[7] The course website is hosted on the GitBook platform and synced with the instructor’s GitHub account: https://mackenziekbrooks.gitbooks.io/dh-180-born-digital/content/.

[8] The final projects can be accessed here: https://archive-it.org/organizations/1374.

[9] Available at https://learnpythonthehardway.org/book/appendixa.html.

[10] Documenting the Now is a collaborative effort to build community and tools around social media preservation. It can be accessed at https://www.docnow.io/.


Bibliography

Ashenfelder, Mike. 2011. “The Average Lifespan of a Webpage.” The Signal. November 8, 2011. http://blogs.loc.gov/thesignal/2011/11/the-average-lifespan-of-a-webpage/.

Association of College and Research Libraries. 2015. “Framework for Information Literacy for Higher Education.” February 9, 2015. http://www.ala.org/acrl/standards/ilframework.

Brügger, Niels, and Ralph Schroeder, eds. 2017. The Web as History: Using Web Archives to Understand the Past and the Present. London: UCL Press. http://discovery.ucl.ac.uk/1542998/1/The-Web-as-History.pdf.

Bryn Mawr College. n.d. “Digital Competencies.” Accessed June 29, 2018. https://www.brynmawr.edu/digitalcompetencies.

Burdick, Anne, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp, eds. 2012. Digital Humanities. Cambridge, Mass.: MIT Press.

LaFrance, Adrienne. 2015. “Raiders of the Lost Web.” The Atlantic, October 14, 2015. https://www.theatlantic.com/technology/archive/2015/10/raiders-of-the-lost-web/409210/.

Lepore, Jill. 2015. “What the Web Said Yesterday.” The New Yorker, January 19, 2015. https://www.newyorker.com/magazine/2015/01/26/cobweb.

Notopoulos, Katie. 2017. “Photobucket Is Holding People’s Photos For ‘Ransom.’” BuzzFeed. July 7, 2017. https://www.buzzfeed.com/katienotopoulos/photobucket-just-killed-a-chunk-of-internet-history.

Owens, Trevor. 2017. The Theory and Craft of Digital Preservation. Baltimore: Johns Hopkins University Press. https://osf.io/preprints/lissa/5cpjt.

Rumsey, Abby Smith. 2016. When We Are No More: How Digital Memory Is Shaping Our Future. New York: Bloomsbury Publishing USA.

Shaw, Zed A. n.d. “Appendix A: Command Line Crash Course.” Learn Python the Hard Way. Accessed November 25, 2018. https://learnpythonthehardway.org/book/appendixa.html.

About the Author

Mackenzie Brooks is Assistant Professor and Digital Humanities Librarian at Washington and Lee University. There, she teaches in the Digital Culture and Information minor and coordinates Digital Humanities initiatives. Her research focuses on digital pedagogy, scholarly text encoding, and metadata.
