Doing DH as/or Digital Scholarship: An Ethnography of Scholarship, Properly Done

Review of Amongst Digital Humanists: An Ethnographic Study of Digital Knowledge Production by Smiljana Antonijević. Palgrave Macmillan, 2015. http://dx.doi.org/10.1057/9781137484185

Amongst Digital Humanists: An Ethnographic Study of Digital Knowledge Production (2015) by Smiljana Antonijević is among the first long-form ethnographic studies of theory and practice in Digital Humanities (DH) scholarship. The study reads as a direct response to Christine Borgman’s 2009 call for a “social studies of digital humanities.” Borgman writes:

Why is no one following digital humanities scholars around to understand their practices, in the way that scientists have been studied for the last several decades? This body of research has informed the design of scholarly infrastructure for the sciences, and is a central component of cyberinfrastructure and eScience initiatives . . . The humanities community should invite more social scientists as research partners and should make themselves available as objects of study. In doing so, the community can learn more about itself and apply the lessons to the design of tools, services, policies, and infrastructure (Borgman 2009, para 76).

Borgman’s comments and Antonijević’s book provoke a central question, however: Why must DH invite social scientists to study DH practices? What’s the downside? Why can’t we study and learn more about ourselves, ourselves?

DH has been considering the state of the field, its infrastructures, and its modes of training and sustainability for decades (see Bowles 1965; Hockey 1986; and Selfe 1988). In considering and explaining the field, DH scholarship has long included methods that Borgman might describe as “following digital humanities scholars around,” such as interviews, observations, and surveys (Svensson 2009, 2010, 2011, 2012; Hayles 2012; Nyhan 2012; Keener 2015, to name several). Unlike the social science studies about cyberinfrastructure that Borgman cites as examples, however, the DH scholarship to which I am pointing is not presented as social science even when it employs similar methods of data collection and analysis. In part, this difference is due to the particularities of how analyzing and writing about field research and data collection as qualitative empirical study is done in the social sciences (Becker 1996). When Borgman refers to particular scholarly practices such as “social studies” in the academy, she is pointing to long-standing and rigorous theory-driven practices that require theoretical development, expertise, long years of study, and a deep immersion in the community under examination.

As a social scientific study, Amongst Digital Humanists provides a rich collection of ethnographic data about the daily practices of scholars who produce knowledge by employing digital technologies in the humanities. In the first chapter, Antonijević describes the history of DH and the many debates that inspired her study, as well as her methods and her particular epistemological perspective. The three central chapters focus on her findings, which include a snapshot of how scholars from different fields inside and outside the humanities engage with digital technologies as a form of “capacity building,” one constantly shaped by the organizational factors that underlie digital scholarship: training, professionalization, and sustainability. In its call for “universal humanism” and “pluralistic futures” (156), Antonijević’s final chapter uses these findings to argue for a delineation between DH and digital scholarship (DS) that she maintains would ensure a more diverse and inclusive discourse community around DH/DS in higher education rather than the “exclusionary, accusatory, or dismissing discourses and actions” that DH currently perpetuates (156). Her pointed critiques of DH include (a) that participants are undertrained; (b) that their scholarship lacks clear evaluation criteria and venues; and (c) that their once well-funded projects are unsustainable, all of which makes for a prevailing DH culture against which, she asserts, we must “fight” (155).

Unfortunately, Antonijević’s conclusions about DH seem to reflect how she framed her study and the behaviors and interactions of the homogenous communities she chose to follow, rather than the more comparative engagement with epistemes and disciplines across DH that she claims to pursue at the book’s outset. Ethnographers typically believe that if “properly done,” ethnographic methods are the best methods for understanding “real-world social processes” (Forsythe 1999, 129), but to do ethnography “properly,” the ethnographer must use data-gathering methods framed by a particular philosophical stance and conceptual structure. Influenced by the writings of Strathern (2005) and Bourdieu (1988), who stress comparative approaches and understand the world as constructed and situated, Antonijević asserts that her methods promote a constructed and situated appraisal of DH (34). Yet her laudable call for pluralism and her desire to “challenge assumptions of epistemic cultural essentialism” (31) in DH are undercut by the study’s scope, which overlooks and underplays alternative DH communities, histories, and methods. Instead of a comparative approach, Antonijević props up a particularly narrow version of DH by willfully ignoring this wider range of DH work, thereby creating a convenient, straw-man version of DH to attack.

In particular, who and what she chooses to study helps prop up this limited view of DH. While rich in breadth of experience, her examples are narrowly situated both geographically – primarily Western European universities (with one US site) – and in terms of resources, as each of these sites includes robust and well-funded research programs. In her attempt to give “an empirical basis for inquiry into the changing landscape of the humanities” (34), Antonijević describes in explicit detail her visits between 2010 and 2013 to 23 educational, research, and funding institutions in the US and Europe. She describes surveys, interviews, and observations with 258 participants, including researchers, faculty, students, university administrators, librarians, software developers, policy makers, and funders. The projects she studied include Alfalab: eHumanities Tools and Resources (Royal Netherlands Academy of Arts and Sciences, or KNAW); Digitizing Words of Power (KNAW and the University of Amsterdam); Humanities Information Practices (KNAW, Oxford Internet Institute, and University College London); and Digital Scholarly Workflow (Penn State University). In the conditions she describes, DH does look like a cog in a vacuous political engine run by a terrible oligarchy of newly clothed emperors (Kirschenbaum 2014).

In Antonijević’s version of DH, alternate kinds of DH work are largely ignored. Examples of DH that could thwart some of her conclusions include (but are not limited to) programs that place greater importance on teaching than on research initiatives, such as Bard College’s Experimental Humanities Concentration and Initiative, situated within a small liberal arts college focused primarily on the arts, and the digital humanities community at Salem State University, which seeks “to create DH opportunities for underserved student populations and a model for building DH at regional comprehensive universities” (Risam, Snow, and Edwards 2017). Other examples include international groups such as the Centre for Educational Technology at the University of Cape Town, where researchers think critically about educational technologies, and transnational groups such as RedHD (Red de Humanidades Digitales) in Mexico, which seeks to think about DH from a global perspective, one that requires DH to account for how limited access to computing power and technology shapes DH work across many less wealthy global communities (Galina 2013).

In contrast to the narrow range of sites that Antonijević has chosen to study, the definition of DH work she uses to frame her study covers too broad a swath. The digital scholarship she describes has, at its core, a fundamental separation between digital tools and the research workflow. In her attempt to capture how digital technologies are being used in humanities research, for example, Antonijević begins her interviews by asking respondents what digital tools they use in each of eleven research “phases” that she shows them on a visual prompt. These phases are: collect, find, analyze, write, communicate, organize, annotate, cite, reflect, archive, and share. She writes about this initial tactic:

The use of the visual prompt enabled me thus to develop a more detailed overview of the variety of digital tools scholars use in their research practices, and to understand the influence these tools have on segments of scholarly practice that become routinized and thus invisible to analytical activity. (39)

Antonijević thus includes and describes scholars who use digital tools while they conduct research, but she interviews scholars who do not consider those tools analytically as part of their research. This reflects a definition of the DH scholar that stands in direct opposition to a prevailing notion in DH about what counts as good DH scholarship.

In contrast, many have shown that digital humanities is a field in which a critical awareness of how digital information technologies influence perspectives in and on humanities research is essential to rigorous DH scholarship. DH scholarship typically makes visible how code and platforms (Chun 2013; Manovich 2013), media archaeology (Kirschenbaum 2007; Parikka 2012), digital scholarly communications (Fitzpatrick 2007), publishing (McPherson 2014), gaming (Flanagan 2009; Jagoda 2013), geospatial analysis (Elliott and Gillies 2009), interactive multimedia design (Balsamo 2011), machine learning (Heuser and Le-Khac 2012; Piper 2017), statistical analysis (Burrows 2004; Ramsay 2011), and visualization (Drucker 2011; D’Ignazio and Klein 2016) influence scholarship in the humanities. It is quite true that the scholars Antonijević interviewed and observed use digital tools, but it is neither accurate nor productive to say that the broad swath of participants in her study portrays the “digital humanists” that the title Amongst Digital Humanists promises to better understand.

To be fair, Amongst Digital Humanists demonstrates well that research practices employing digital technologies in the humanities can be particularly difficult to study with ethnographic methods. Scholarly practices in the humanities often occur privately, independently, idiosyncratically, and outside of the more public and regimented lab spaces traditionally studied in Science and Technology Studies (STS) (see, for example, Knorr-Cetina and Mulkay 1983; Knorr-Cetina 1999; Latour 1988; Latour and Woolgar 1986). In particular, Antonijević, Beaulieu (2004; 2010), and Borgman (2009) mention the difficulty of “rendering ‘public’ philosophical, historical, or literary knowledge” (Beaulieu 2010, 456) that is produced through research practices in the humanities. Doing social science studies—like doing DH—requires an epistemological framework and a community of practice, and the kinds of ethnographic methods that Antonijević employs are part of a century-old tradition that is changing and becoming increasingly difficult as the modes of work that ethnographers study change in the digital age (Forsythe 1999). Scholars in STS have shown that digital technologies can make certain kinds of work and infrastructures unobservable even when (as in Star 1999) well-meaning subjects seek to describe their work for the ethnographer. Yet, even given these difficulties, it is the study’s narrowness (in terms of projects) and breadth (in terms of defining DH) that ultimately weakens the foundation of her concluding arguments about DH.

Finally, with her limited sample of DH projects and communities and a narrow viewpoint of DH history, Antonijević misses the main point of and the opportunities present in the vast range of DH scholarship that seeks to expose and critique the bindings between knowledge production and intellectual communities or epistemes, technologies, and cultures. Whether one speaks of DH or DS, the politics of institutions that support digital technologies can never be untethered from how knowledge production happens in higher education. In response to a recent piece in the Los Angeles Review of Books (Allington, Brouillette, and Golumbia 2016) that positions DH as the site for a neoliberalist take-over of the humanities in higher education, for example, Alan Liu (amid a more general outcry by many scholars both in DH and without) revisits his own misunderstood critiques of DH in order to consider what he calls the “critical potential of DH” (Liu 2016b). Liu shakes an admonitory finger at the LARB authors for not seeing “that digital humanists have real critical goals too” (2016c). He then points to his newest book project where he lays out these goals:

I call for digital humanities research and development informed by, and able to influence, the way scholarship, teaching, administration, support services, labor practices, and even development and investment strategies in higher education intersect with society, where a significant channel of the intersection between the academy and other social sectors, at once symbolic and instrumental, consists in shared but contested information-technology infrastructures. (Liu 2016a).

Like the LARB authors, Antonijević sets up the contours of her study to portray DH as the site of what ails the humanities in higher education in general. Unlike Liu, Antonijević observes but does not see DH; as such, she chooses not to see its potential.

A social studies of the Digital Humanities could help us acknowledge truly rigorous DH scholarship as well as steer us away from scholarship poorly done, but straw-man theories about the enemies among us distract us from the fundamental concerns around academic rigor, professionalization, funding, and public engagement at the heart of the humanities in higher education today. Antonijević’s claim to understand the world as constructed and situated stands in direct contrast to her assertion on the book’s last page that diversity and inclusivity in higher education means understanding that “regardless the size of any current disciplinary ‘tent,’ digital knowledge production is intellectually, technically, and culturally unbounded” (156). Indeed, the interviews and observations collected in Amongst Digital Humanists show that digital knowledge production in digital humanities is necessarily culturally bounded. Antonijević’s DH is a convenient, tactical, and situated version of DH that serves the book’s ultimate ends: to tell us about uncritical scholarship (scholarship done improperly). As a result, Amongst Digital Humanists provides a detailed snapshot of how different kinds of scholars in the humanities use digital technologies in well-resourced research communities in Europe and the United States on individual, disciplinary, and organizational levels. But Antonijević’s shortsighted version of DH shortchanges the impact that such a rich and complex view of digital scholarly work, in the context of humanities knowledge production, could have made on better understanding the real-world social processes of critical digital humanities.

 

Bibliography

Allington, Daniel, Sarah Brouillette, and David Golumbia. 2016. “Neoliberal Tools (and Archives): A Political History of Digital Humanities.” Los Angeles Review of Books. May 1, 2016. https://lareviewofbooks.org/article/neoliberal-tools-archives-political-history-digital-humanities/.

Balsamo, Anne. 2011. Designing Culture: The Technological Imagination at Work. Durham, NC: Duke University Press.

Beaulieu, Anne. 2004. “Mediating Ethnography: Objectivity and the Making of Ethnographies of the Internet.” Social Epistemology 18 (2–3): 139–63. https://doi.org/10.1080/0269172042000249264.

———. 2010. “From Co-Location to Co-Presence: Shifts in the Use of Ethnography for the Study of Knowledge.” Social Studies of Science 40 (3): 453–70. https://doi.org/10.1177/0306312709359219.

Becker, Howard S. 1996. “The Epistemology of Qualitative Research.” In Contemporary Field Research: Perspectives and Formulations, edited by Robert M. Emerson, 2nd ed. Prospect Heights, IL: Waveland Press.

Borgman, Christine L. 2009. “The Digital Future Is Now: A Call to Action for the Humanities.” Digital Humanities Quarterly 003 (4).

Bourdieu, Pierre. 1988. Homo Academicus. Stanford, CA: Stanford University Press.

Bowles, Edmund A. 1965. “The Role of the Computer in Humanistic Scholarship.” In Proceedings of the Fall Joint Computer Conference, AFIPS (American Federation of Information Processing Societies), 269–76. https://doi.org/10.1145/1463891.1463922.

Burrows, John. 2004. “Textual Analysis.” In A Companion to Digital Humanities, edited by Susan Schreibman, Ray Siemens, and John Unsworth. Blackwell Companions to Literature and Culture. Oxford: Blackwell Publishing. http://digitalhumanities.org:3030/companion/view?docId=blackwell/9781405103213/9781405103213.xml&chunk.id=ss1-4-4.

Chun, Wendy Hui Kyong. 2013. Programmed Visions: Software and Memory. Cambridge, MA: The MIT Press.

D’Ignazio, Catherine, and Lauren F. Klein. 2016. “Feminist Data Visualization.” Presentation at the IEEE VIS Conference in Baltimore, MD, October 23-28, 2016. http://www.kanarinka.com/wp-content/uploads/2015/07/IEEE_Feminist_Data_Visualization.pdf.

Drucker, Johanna. 2011. “Humanities Approaches to Graphical Display.” Digital Humanities Quarterly 005 (1). http://digitalhumanities.org/dhq/vol/5/1/000091/000091.html.

Elliott, Tom, and Sean Gillies. 2009. “Digital Geography and Classics.” Digital Humanities Quarterly 003 (1). http://www.digitalhumanities.org/dhq/vol/3/1/000031/000031.html.

Fitzpatrick, Kathleen. 2007. “Kathleen Fitzpatrick.” Planned Obsolescence. Accessed June 2, 2007. http://www.plannedobsolescence.net/kathleen-fitzpatrick.

Flanagan, Mary. 2009. Critical Play: Radical Game Design. Cambridge, Mass: The MIT Press.

Forsythe, Diana E. 1999. “‘It’s Just a Matter of Common Sense’: Ethnography as Invisible Work.” Computer Supported Cooperative Work (CSCW) 8 (1–2): 127–45. https://doi.org/10.1023/A:1008692231284.

Galina, Isabel. 2013. “Is There Anybody Out There? Building a Global Digital Humanities Community.” Plenary at the Digital Humanities Conference in Lincoln, Nebraska, July 19, 2013. http://humanidadesdigitales.net/blog/2013/07/19/is-there-anybody-out-there-building-a-global-digital-humanities-community/.

Heuser, Ryan, and Long Le-Khac. 2012. A Quantitative Literary History of 2,958 Nineteenth-Century British Novels: The Semantic Cohort Method. Stanford Literary Lab Pamphlet 4.

Hockey, Susan. 1986. “Workshop on Teaching Computers and the Humanities Courses.” Literary and Linguistic Computing 1 (4): 228–29. https://doi.org/10.1093/llc/1.4.228.

Jagoda, Patrick. 2013. “Gamification and Other Forms of Play.” boundary 2: an international journal of literature and culture 40 (2): 113–44. https://doi.org/10.1215/01903659-2151821.

Keener, Alix. 2015. “The Arrival Fallacy: Collaborative Research Relationships in the Digital Humanities.” Digital Humanities Quarterly 009 (2).

Kirschenbaum, Matthew. 2007. Mechanisms: New Media and the Forensic Imagination. Cambridge, MA: The MIT Press.

———. 2014. “What Is ‘Digital Humanities,’ And Why Are They Saying Such Terrible Things About It?” differences 25 (1): 46–63. https://doi.org/10.1215/10407391-2419997.

Knorr-Cetina, Karin. 1999. Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA: Harvard University Press.

Knorr-Cetina, Karin and Michael Mulkay, eds. 1983. Science Observed: Perspectives on the Social Study of Science. London: SAGE Publications.

Latour, Bruno. 1988. Science in Action: How to Follow Scientists and Engineers Through Society. Reprint edition. Cambridge, MA: Harvard University Press.

Latour, Bruno, and Steve Woolgar. 1986. Laboratory Life: The Construction of Scientific Facts. 2nd edition. Princeton, NJ: Princeton University Press.

Liu, Alan. 2016a. “Drafts for Against the Cultural Singularity (Book in Progress).” Alan Liu (blog). May 2, 2016. http://liu.english.ucsb.edu/drafts-for-against-the-cultural-singularity/.

Liu, Alan (@alanyliu). 2016b. “Draft excerpts from my book in progress bearing on critical potential of DH–what such critique could uniquely be: http://bit.ly/1rscrXj” Twitter. May 2, 2016. https://twitter.com/alanyliu/status/727293560068825088.

———. 2016c. “The LARB piece wants a monopoly on critique for its kind of critique. Won’t see nuance that digital humanists have real critical goals too.” Twitter. May 2, 2016. https://twitter.com/alanyliu/status/727292931078397952.

Manovich, Lev. 2013. Software Takes Command. New York: Continuum Publishing Corporation.

McPherson, Tara. 2014. “Designing for Difference.” differences 25 (1): 177–88. https://doi.org/10.1215/10407391-2420039.

Parikka, Jussi. 2012. What Is Media Archaeology? 1st edition. Cambridge, UK: Polity.

Piper, Andrew. 2017. “Think Small: On Literary Modeling.” PMLA 132 (3): 651–58. https://doi.org/10.1632/pmla.2017.132.3.651.

Ramsay, Stephen. 2011. Reading Machines: Toward an Algorithmic Criticism. Urbana: University of Illinois Press.

Risam, Roopika, Justin Snow, and Susan Edwards. 2017. “Building An Ethical Digital Humanities Community: Librarian, Faculty, and Student Collaboration.” College & Undergraduate Libraries 24 (2–4): 337–49. https://doi.org/10.1080/10691316.2017.1337530.

Selfe, Cynthia. 1988. “Computers in English Departments: The Rhetoric of Technopower.” ADE Bulletin, 63–67. https://doi.org/10.1632/ade.90.63.

Star, Susan Leigh. 1999. “The Ethnography of Infrastructure.” American Behavioral Scientist 43 (3): 377–91. https://doi.org/10.1177/00027649921955326.

Strathern, Marilyn. 2005. Comment on Michael Carrithers, “Anthropology as a Moral Science of Possibilities.” Current Anthropology 46 (3): 433–56. https://doi.org/10.1086/428801.

Svensson, Patrik. 2009. “Humanities Computing as Digital Humanities.” Digital Humanities Quarterly 003 (3).

———. 2010. “The Landscape of Digital Humanities.” Digital Humanities Quarterly 004 (1).

———. 2011. “From Optical Fiber To Conceptual Cyberinfrastructure.” Digital Humanities Quarterly 005 (1).

———. 2012. “Envisioning the Digital Humanities.” Digital Humanities Quarterly 006 (1).

Watkins, Evan. 1989. Work Time: English Departments and the Circulation of Cultural Value. Stanford, CA: Stanford University Press.

 

About the Author

Tanya E. Clement is an Associate Professor in the School of Information at the University of Texas at Austin. She has a PhD in English Literature and Language and an MFA in fiction. Her primary area of research centers on scholarly information infrastructure as it impacts academic research, research libraries, and the creation of research tools and resources in the digital humanities. She has published widely in DH. Some of her digital projects include High Performance Sound Technologies for Access and Scholarship (HiPSTAS), In Transition: Selected Poems by the Baroness Elsa von Freytag-Loringhoven and BaronessElsa: An Autobiographical Manifesto.

Of Software and Sepulchers: Modeling Ancient Tombs from Oaxaca, Mexico

Abstract

Digital art history is more and more frequently on offer, or even required, as a course in graduate and undergraduate art history programs. There are a burgeoning number of ways of “doing digital art history,” from narrative mapping with Omeka-Neatline[1] and other tools, to creating wikis on art history topics,[2] to using or even creating virtual tours of museums or historic sites.[3] Yet one of the most valuable, although often daunting, tools available to the art historian interested in working digitally is 3D digital modeling. This article discusses a long-term foray into 3D digital modeling conducted over the course of two summers by the authors—former Assistant Professor of Art History at Cornell College Ellen Hoobler and three undergraduate students who worked with her, Ve’Amber Miller and Catherine Quinn (summer 2014) and Arturo Hernández, Jr. (summer 2015).

This paper offers suggestions and information for professors who seek to use 3D modeling in their own work, particularly in conjunction with undergraduate students. Overall, the paper argues that the greatest benefits of many 3D modeling projects that involve students and faculty are the pedagogical “byproducts” gleaned through the process of working toward a digital humanities product, rather than the digital artifacts or “products” of the investigation themselves. These pedagogical “byproducts” are potentially the greatest benefits particularly for projects undertaken with modest budgets or with less technical expertise (i.e., not in institutions with highly developed digital support programs, or undertaken by professors in computer science). This is not to negate such products: this work generated several dozen 3D models of ancient objects as .stl files; a visualization of an entire section of a tomb including some of those models; an interactive visualization of a tomb with several models inside it; a video illustrating the position of objects within a different tomb; and a website highlighting some of the findings of the two summers’ work. All of these would be usable by specialists in the field of Mesoamerican art history, or potentially even by professors teaching a survey course on the topic. However, the skills that the participants took away were myriad, and very likely more important than the products themselves. Even in an imperfectly realized 3D modeling project in digital art history (or digital humanities more broadly), students learned art-historical skills, such as close looking and archival research, and, in this case, particularly how the recreation of context draws upon multiple fields of investigation and methodologies, which they may choose for themselves.
The project also fostered important life skills including the basics of project management, forming and fostering relationships for collaboration, and learning how to begin and drive a project when there is no clear way forward.

This idea is influenced by a seminal article by Lisa Snyder of the University of California, Los Angeles’s visualization and modeling team, “Virtual Reality [VR] for Humanities Scholarship.” She discusses an important issue, one that the team kept returning to: understanding whether one is working in a more process-based or product-based mode. “Process-based questions are addressed through the analytical act of creating the virtual artifact or environment with little or no expectations for the longevity of the data beyond the life of the project. Product-based questions may include process-based elements during the construction of the VR environment, but are more focused on interaction with the finished product and long-term public dissemination of the research” (Snyder 2012, 396). Hoobler began the project, and outlined it to her collaborators, confident that it would yield an important product; she did not fully understand the value of working through the process of making the 3D models when, in fact, the process and the opportunities it offered for thinking through ancient spaces ultimately proved equally or more important than the models themselves. All projects are to some degree a balance between these two modes, though the emphasis falls more heavily on process than on product when students (and professors) are new to the software necessary to carry out such projects. Obviously, neither of these collaborations yielded products on the scale of Snyder’s own interactive VR reconstruction of the 1893 Chicago World’s Fair (http://www.ust.ucla.edu/ustweb/Projects/columbian_expo.htm) or other projects at well-funded large universities. However, work from the two summers did ultimately yield permanent products that will be helpful in future research.

The project that the collaborators undertook grew out of Hoobler’s dissertation research on the tombs of the site of Monte Albán in Oaxaca, southern Mexico, built by the Zapotec peoples of the area from ca. 500 BCE–850 CE (see Figure 1, and Hoobler 2011 for more information). The tombs had never been fully published by their excavator, the archaeologist Alfonso Caso. During extensive archival research, Hoobler discovered and then digitized a trove of some 8,000 catalogue cards made by Caso and his collaborators in the 1930s and ’40s. The cards detailed the position of every object Caso and his collaborators had excavated from a given tomb, in centimeters from the back and side walls of the tomb (see Figure 2). (At the time Caso was working, all objects were removed from the tombs and ultimately sent to the National Museum of Anthropology in Mexico City.) Using these cards, Hoobler created two-dimensional diagrams of the tombs for her dissertation (see Figure 3). When she finished graduate school in the Department of Art History and Archaeology at Columbia University in 2011, the free 3D modeling program Google SketchUp (now Trimble SketchUp) did exist, but there was no training, and certainly no mandate, for graduate students in art history to work with such software.
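The cards’ wall-relative measurements translate straightforwardly into plan coordinates for a diagram or model. The sketch below is a minimal, hypothetical illustration of that transformation (the catalogue number, field names, and example measurements are invented, not drawn from Caso’s actual cards): treating the intersection of the back and side walls as the origin, each card’s two distances become an (x, y) position on the tomb’s floor plan.

```python
# Hypothetical sketch: place an object recorded on a catalogue card into a
# tomb's floor plan. Each card gives the object's distance (in centimeters)
# from the tomb's back wall and from one side wall.

from dataclasses import dataclass

@dataclass
class CardEntry:
    catalogue_no: str
    from_side_wall_cm: float   # distance from the side wall
    from_back_wall_cm: float   # distance from the back wall

def to_plan_coords(entry: CardEntry) -> tuple:
    """Map a card's wall-relative measurements to (x, y) in meters,
    with the back corner of the tomb as the origin."""
    return (entry.from_side_wall_cm / 100.0, entry.from_back_wall_cm / 100.0)

# Example: a bowl recorded 40 cm from the side wall, 120 cm from the back wall
bowl = CardEntry("7-123", from_side_wall_cm=40, from_back_wall_cm=120)
print(to_plan_coords(bowl))  # (0.4, 1.2)
```

The same coordinates could then be used to position dots on a two-dimensional diagram or to place 3D object models on a tomb’s floor in modeling software.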

Fig. 1 – Large open plaza of archaeological site of Monte Albán, pyramid mounds close by and mountains visible in the distance.

Figure 1: Large open plaza of archaeological site of Monte Albán.

 

Fig. 2 – Large index card with typed information about an object – photograph of a bowl at top right and a watercolor of the same object below it.

Figure 2: Large index card with typed information about an object.

 

Fig. 3 – Simplified diagram of floor plan of a tomb with side niches. Numbered dots on plan show location of objects.

Figure 3: Simplified diagram of floor plan of a tomb with side niches. Numbered dots on plan show location of objects.

 

In summer 2013, after becoming an Assistant Professor at Cornell College in Iowa, Hoobler attended an NEH Advanced Topics in the Digital Humanities Summer Institute on Humanities Heritage 3D Visualization: Theory and Practice,[4] where she experimented with some of the many tools by then readily available, free or at low cost, to those interested in 3D modeling. Having worked with the University of Arkansas’s dedicated, well-funded, and well-developed Center for Advanced Spatial Technologies (CAST) lab,[5] Hoobler set an original, perhaps overly ambitious, goal for the project that was fully product-based: to allow users a phenomenological experience of one of these burial chambers by creating a virtual tomb, including all its contents, with which the user could interact. This goal proved unattainable during the two summers of work because it required computers with much greater computing power than were available on a typical small liberal arts college campus. Furthermore, such a project would have required much greater technological skill on Hoobler’s part and probably thousands of hours of work from the team. Still, a great deal of progress was made. Not only did all participants learn a great deal about working with 3D modeling software (including Maxon Cinema 4D), but the process was also enlightening in regard to the important role that 3D scanning and modeling might play in cultural heritage in the future. From a scholarly standpoint, it was very clear to Hoobler that the process of modeling the tomb and its contents, and virtually placing them, allowed—and even forced—the modeler to engage in the art-historical technique known as close looking. Experiencing this process also made apparent many characteristics of the burial chambers that the creation of two-dimensional diagrams had not revealed.

Based on the NEH Summer Institute training, Hoobler sought and received grants from Cornell College and the McElroy Fund/Iowa College Foundation for Ve’Amber Miller and Catherine Quinn to work with her in summer 2014.

Ve’Amber Miller (Cornell College ’15) comments:

“As an Archaeology major—and someone who already had an interest in how to engage technology with the past—there was no hesitation in wanting to join this team. At first it was daunting even as I was going into my final year of undergraduate studies because I did not have experience in 3D modeling, but over the course of our work I found the support from everyone and the story these artifacts told was more than enough to push me through. Being able to place the items that had been recreated back into a virtual tomb made the history even more real; 3D printing those same objects–some that had been destroyed decades ago–so that others could hold them in their hands made it more real for them as well. The most important thing is that I learned from this project, and will only do better in the future so that history becomes an interactive and engaging experience for everyone.”

As of this writing in fall 2017, Miller is working as a Park Guide at the Pullman National Monument. Prior to this position, she worked at Weir Farm Historic Site, where she put the skills and experience acquired during this project to use in creating a virtual gallery of art[6] and an ESRI StoryMap based on "Julian Alden Weir's Student Years in Europe," using digitized documents and artifacts from Weir Farm's collection.

Her collaborator that summer, Catherine Quinn, Cornell College ’15, was an art history major who had been interested in technology for years, having gained experience through graphic design courses in high school and customizing a gallery website while interning at the Center on Contemporary Art. Quinn said:

“Working on this project with Dr. Hoobler was exciting for a number of reasons. It was one of the first opportunities I had as an art history major to apply what I had learned in the classroom while contributing to a real world, ongoing, body of research. In addition, I was able to combine multiple fields of interest (art history and technology) while being introduced to others (archaeology), and building extensively on my existing knowledge through hands-on learning. Finally, being able to work on this particular site was especially meaningful due to the fact that we were working with the intention of offering our research to the Community Museum in Oaxaca. As one of my first forays into ‘digital humanities,’ this project has left me inspired by all the ways I see technology providing not only new insight but also accessibility to people, and I foresee it having a beneficial influence on how we curate museum collections, design interactive exhibits, and present research.”

Quinn is currently based in Seattle and applying for graduate programs in digital cultural heritage and related disciplines.

That first summer, predictable challenges ensued. The software had been updated between 2013 and 2014 and many functions had changed. In addition, due to the vicissitudes of funding, the two students were starting at different times, with Miller beginning several weeks after Quinn. Thus, Quinn and Hoobler struggled together with the Maxon software, and Quinn largely trained herself on the finer points of working with it, using tutorials and message boards she found on the Internet. Quinn then trained Miller when she joined the team. (One counterintuitive point about working with students on summer research is that it is much easier to have two or perhaps three students working in collaboration: when there is a question on how to do something, they work through it together, usually showing their professor how to do it once they have figured it out.)

Miller, Quinn, and Hoobler then worked through the challenges of free-hand modeling of objects illustrated in profile on the catalogue cards (see Figures 4 and 5). This was exciting work, since some of the objects shown on the cards were made out of unfired clay or stucco and apparently were destroyed in transit to Mexico City. For this reason, many of the objects documented in the tombs cannot be found in the warehouse of the National Museum of Anthropology and History in Mexico City. Thus, while virtual, these proxies, derived from catalogue cards, are the only place where these objects “exist” in the world.

Figure 4: Screengrab showing the Maxon Cinema4D software program with a bowl modeled in the active window.

 

Figure 5: Screengrab showing the Maxon Cinema4D software program with stone beads modeled in the active window.

While a broader discussion of the theorization of replicas is beyond the scope of this article, two points are worth noting. The first concerns copyright. Unlike a project involving modern or contemporary art, the objects found in the tombs were created over a thousand years ago and have no clear author whose descendants could be traced to seek permission for replication. Even if one considers the archaeologist Alfonso Caso, who led the team that conducted the excavations, to be the author of these objects, they were "reactivated" some 70–80 years ago. The National Museum of Anthropology and History might also assert rights to the objects, since it is their physical repository. However, as per the College Art Association's Code of Best Practices in Fair Use for the Visual Arts, the modeling of such works falls well within the furthering of ". . . the teacher's substantive pedagogical objectives," as described in Section Two, Teaching About Art (College Art Association 2015, 10).

The second point has to do with the ontological status of the replica. Many scholars have written eloquently about replication, that moment "when ideality and reality touch each other," as the philosopher Søren Kierkegaard put it (Kierkegaard 1983, 131). Probably the most influential text on this topic is Walter Benjamin's 1935 essay "The Work of Art in the Age of Mechanical Reproduction." Benjamin discussed in depth the concept of authenticity in copying works of art, arguing that there is a unique authority to the original, which he called its "aura," that would be dissipated in an age of mechanical reproduction (Benjamin [1935] 1968, 224). However, given the number of visitors the Louvre receives each year, the elusive aura seems not to have withered away; it is simply never inherent to a copy, no matter how perfect the mode of replication. Scholars have generally asserted that the original work of art will not be replaced by digital facsimiles, and these copies may in fact increase the desire to experience the original (Hall 1999, 277; Cuno 2014). The topic of replication will continue to be the subject of debate and discussion in art history. However, the team was working from originals that in many cases were ceramic vessels mass-produced in workshops, minimally decorated and similarly sized and shaped for stacking and transport, and thus fitting much more easily within the sphere of "visual culture" than fine art. Such questions, while fascinating, are therefore less relevant to the argument at hand.

Moving from Process to Product: Manage Your Expectations

During the process of making the models, students and teacher alike were learning many skills. Some of these were art historical. Many art historians have described art history, like many other humanities disciplines, as having a toolkit of methods from which to choose rather than a prescribed set of steps to follow as in the sciences (Long and Schonfeld 2014, 10). In general, art historians' methods are informed by the kinds of research questions they seek to answer, which vary from project to project. Three methods used for this project were close looking, library and archival research, and contextual analysis. Close looking, or viewing and analyzing objects through very close and sustained study, is the basis of connoisseurship and authentication as well as formal and iconographic analysis. Even with objects of the type the team was working with (i.e., bowls and stone objects made by anonymous artisans hundreds of years ago), patterns and insights emerge through careful looking. Small wonder, then, that many art historians execute their own illustrations, and many seasoned professors encourage their students to draw works in order to commit their contours to memory. The use of 3D modeling demands a similar quality of focused attention to replicating an object or space, although one is now "drawing" with a mouse rather than a pencil. Since the team was working largely from photographs of catalogue cards, it was very important that the students were able to see originals of the kinds of objects being modeled, even if not necessarily the exact works, at the Museo de las Culturas de Oaxaca, Santo Domingo, in Oaxaca City during the research period.

In terms of archival research, there was no time for additional research, and so little of this material has been digitized that students had to dig through several large and unwieldy archaeological publications published in Spanish in the 1950s and '60s. This acquainted them with basic but less-discussed principles of archival research, such as figuring out which sections of a text are essential to translate versus those that can be skimmed.

Finally, the team was certainly undertaking a contextual analysis of the tombs, trying to understand the original placement (and by extension, use) of the objects in these spaces. All the collaborators discussed how these tombs were in a sense similar to ritual caches excavated and documented by Leonardo López Luján at the Aztec Templo Mayor in Mexico City. There, López Luján has been able to show how the intentional deposition of objects in a specific sequence in different parts of the site was the result of ritual actions undertaken in support of concrete purposes related to propitiation of the gods. Yet, the Zapotec case is less clear, given that tombs were filled some seven centuries or more before the invasion of the Spanish, and deities worshipped in one way in the sixteenth century may have been venerated in an altogether different one centuries earlier. However, reconstructing the tombs and their contents does bring us closer to understanding the lives of ancient peoples that we know comparatively little about.

While the processual byproducts related to art history were quickly realized, the products were not. The original plan was for the objects in the tomb to be modeled in Maxon Cinema4D, for the tomb itself to be built in the Unity game engine, and for the object models then to be imported into the tomb.[7] However, the version of Unity the team was working with was not compatible with WebGL, and the model became difficult to view in most browsers shortly after it was created, a frustrating but very common experience in digital humanities projects.

Quinn did in fact create a very satisfactory model of the tomb in Unity, but at this point more challenges emerged. While the resulting models of the objects in the tombs were excellent, their high resolution meant that no on-campus computer could handle both the tomb model and the virtual versions of all the objects in it. Later, in summer 2015, Arturo Hernández modeled objects in the same data-heavy manner, but then realized that the models could be made workable by removing details not obvious to the naked eye. In retrospect, the team should have more carefully sought out a best-practices statement for working around the issue of large file sizes (see Figure 6 and online gallery[8]). Ultimately, Quinn was able to determine how many objects could be loaded into the model so that the digital reconstruction could be interacted with without crashing the program. Reaching this more process-based objective provided a better understanding of how the interiors of the tombs were illuminated. Unity allows the user to move the sun across the sky, and although previous scholars had insisted that there was almost no illumination in the tombs and that the Zapotecs would have had to use torches (Martha Carmona Macías 2007, personal communication), the virtual model made it clear that for much of the day there was sufficient light in the tomb for mourners to conduct simple rituals. In the future, a more complete model that takes into account the height of ancient house walls might disprove this idea, but for now it seems likely that rituals might have started during the day. The Unity model also had the benefit of offering a product, a kind of "proof-of-concept" for the whole project. Even with the thoroughly modern, overall-clad default figure Unity offers for interacting with the scene (no stock characters fit for ca. 300 CE Mesoamerica, surprisingly!), there was invariably a great deal of interest in and admiration for the video of the tomb as modeled in Unity. This shows how important it is to build in some kind of visual aid, particularly in art history.
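Hernández's workaround, stripping out detail invisible to the naked eye, is in essence what 3D graphics calls mesh decimation. As an illustration only (a toy mesh, not project data, and a far simpler algorithm than what tools such as Cinema4D or Unity pipelines actually use), one standard strategy, vertex clustering, can be sketched in a few lines: snap every vertex to a coarse grid, merge vertices that land in the same grid cell, and discard any triangle that collapses as a result.

```python
def decimate(vertices, triangles, cell=1.0):
    """Vertex-clustering decimation: snap vertices to a grid of the given
    cell size, merge vertices that land in the same cell, and discard
    triangles that collapse (two or more corners merged)."""
    cluster_of = {}   # grid cell -> index of the merged vertex
    remap = []        # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cluster_of[key])
    new_triangles = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:   # keep only non-degenerate faces
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles

# Toy mesh: two nearly coincident vertices merge, collapsing one triangle.
verts = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0.1, 0.1, 0)]
tris = [(0, 1, 2), (0, 3, 1)]
v2, t2 = decimate(verts, tris, cell=1.0)
print(len(verts), "->", len(v2), "vertices;", len(tris), "->", len(t2), "triangles")
```

The coarser the grid cell, the more aggressively detail is removed, which is exactly the trade-off the team was navigating between visual fidelity and a file size the campus computers could handle.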

Figure 6: Screengrab showing the Unity game engine, with several panes open; at the top, the interior of a structure with several vessels visible.

Miller also completed a section of one of the tombs as a final iteration of the project during an independent study with Hoobler. When Hoobler made the original 2D tomb models, she had noted that objects of particular importance were placed in niches in the side and back walls. It was particularly important to try to visualize these privileged spaces, yet the diagrams Hoobler produced in 2011 were extremely crude (see Figure 7). During the summer, Miller and Quinn had modeled perhaps the most curious and unusual object the team dealt with, a hardstone "billy club" found in one of the niches of Tomb 118 that is extremely atypical of the tombs. In the later independent study, Miller continued work on shedding light on the niches. In particular, she 3D modeled an entire niche of a different tomb, Tomb 104, which held quite a few of the most elaborately decorated ceramics from the site. Some of these were plates, but others were odd pitcher-like vessels. Miller accomplished a great deal with this, even giving their surfaces the appearance of the painted glyphs present on these vessels. Despite the imprecise information on the catalogue cards related to this space, Miller was able to generate decent models of the bone needles, likely used for ritual bloodletting, that were found in conjunction with the vessels. The much more naturalistic representation of the niche that Miller was able to generate is the product of these projects most easily transferable to traditional scholarship about the Zapotec (see Figure 8).

Figure 7: Simplified diagram of floor plan of a tomb with overlaid diagram of niche; several line drawings crudely show placement of contents.

 

Figure 8: Realistic image of a niche within the tomb, showing several vessels inside of it, some brightly painted.

 

Both of these examples show how important it is to have realistic ideas and goals of what can be produced in a single summer, particularly by people who are new to working with the software in question. Upon reflection, it is clear that it was unrealistic to expect that the team could model a whole tomb and 30+ objects as well as make such a model interactive and functional. However, even if the original goal to have a highly detailed model containing all the objects from that tomb was not achievable, three significant products were created: models of individual objects; a mimetic model of a portion of a tomb holding iconographically rich materials; and the modeling of an interactive tomb, albeit without many objects in it. Additionally, a tremendous amount was learned with regard to the development of processes that can result in better products in the future (see Figure 9).

Figure 9: Screenshot of a video, marked “Before” at lower left, and showing an empty stone chamber.

Projects Are Iterative, But Preparation Is Crucial

In summer 2015, Hoobler teamed up with Arturo Hernández, Jr. (Cornell College ’16), a studio art and computer science double major. Hernández commented:

“I joined the Digital Humanities Zapotec Tomb Project because I was thrilled about learning how to use new pieces of software and hardware; additionally, I wanted to further explore and learn about my culture and heritage. This project and team allowed me to contribute ideas and learn more about the importance of digitizing artifacts.

I woke up always looking forward to creating objects and exploring different techniques during the process, as well as researching different applications. One of the rewarding feelings was seeing results from 3D printing some of the objects and analyzing their past or their functionality.”

Before starting his research with Hoobler, Hernández had taken a computer graphics class in which he programmed tools for a small 3D modeling program. Moving from the back end of 3D graphics to the user-facing side consequently helped him see the bigger picture of 3D applications. Since graduating from Cornell, Hernández has continued to be involved in projects related to technology and Latin America. He worked for a year at Abriendo Mentes, a non-profit organization, teaching basic computer skills to rural and underserved populations in Costa Rica. He is currently based in Los Angeles and is pursuing further computer training and certification to continue in technology, ideally with a focus on international work.

Hernández learned the software extremely quickly. He was able to model asymmetrical and eccentrically shaped pieces very effectively, and ultimately was able to solve one of the most difficult problems of the previous year: the question of how to add multiple objects into a virtual tomb.

This came about partially by chance and entirely on Hernández's own initiative. In summer 2015, the funding for Hoobler and Hernández to work together came through the Cornell Summer Research Institute (CSRI) sponsored by Cornell College, which offered newly formalized ways for students and faculty to collaborate, with students receiving various opportunities to showcase their work to participants from across all departments. In one workshop that included Professor of Physics Derin Sherman, Hernández explained the problem with the models: they could not both hold the digital models of the vessels and remain interactive, because the file size became unmanageable for any computer on campus. Sherman commented offhand that perhaps Hernández could simply use video software to make the concept clear, as Sherman had done to make a particular physics concept more understandable for students. This sparked Hernández's imagination, and after some research he found Blender,[9] a free, open-source software package that allows for importing models and video and integrating them in a process known as motion tracking. The result can be seen online[10] and in Figures 10 and 11. However, even though initial use of Blender suggested a possible way to work with the tombs, prior planning was still necessary to use this new tool in the most productive way.

Figure 10: Screenshot of a video, marked “After” at lower left, showing a stone chamber with small ceramic figures and vessels inside.

 

Figure 11: Young man with a camera kneels by a table with several plastic objects on it, taking a photograph of them.

Technology and Community Engagements: 3D Modeling to Printing

It is important to note that the precise limits of the site of Monte Albán were in fact set in the 1930s by several towns surrounding the core of the site, each of which donated some of their communal landholdings to create an archaeological zone. As a result, even putting aside all national legislation, there is an inherent conflict as to how the artifacts of Monte Albán might be shared simultaneously with all these communities, including villages further from the site's core, which in ancient times helped to make some of the vessels and other offerings found in the tombs. It is possible, as other scholars have discussed, that digital versions of heritage objects might allow them to be shared among different stakeholders. (This has been discussed in various articles in a 2012 issue of the Journal of Material Culture, particularly Brown and Nicholas 2012 and Newell 2012, as well as in Bell et al., Hennessy et al., and the entire 2013 special issue of Museum Anthropology Review titled "After the Return: Digital Repatriation and the Circulation of Indigenous Knowledge.")

One of the ultimate goals for work on the tombs has been to return, at least virtually, some of the material culture of Oaxaca taken from the state's small communities after the archaeological excavations of the 1930s. It was during this period that most of the excavated objects were sent to the warehouses of the National Museum of Anthropology and History in Mexico City. In response to this loss of local culture, since the 1980s some of the towns have sought to keep such material from leaving by creating local community museums (Hoobler 2006). While the team had discussed creating virtual versions of the community museums in summer 2014, the idea was ultimately not pursued. However, because Hernández is fully fluent in Spanish and bicultural, with knowledge of Mexican and even specifically Oaxacan culture, a community engagement component could be added during the 2015 portion of the project.

Interested in testing these possibilities, Hoobler decided to make a test case for virtual sharing by working with the Community Museum of the town of San Juan Guelavía, close to Oaxaca City. Knowing that a large segment of the town's population was fluent in Zapotec languages and/or English (the town has had a large number of migrants to the United States; see Cohen and Browning 2007), Hoobler decided to offer some materials that might help facilitate increased understanding of the ancient ancestors of the Zapotecs. Such materials included coloring sheets with line drawings of actual ancient vessels. Hernández created trilingual Zapotec-Spanish-English game boards for a version of the Lotería game, which is similar to Bingo but played with images and words. Hernández also created models for and supervised the 3D printing of replicas of artifacts found in the tombs (see Figures 11 and 12), including some plastic vessels; "ear flares" (ornaments like the jadeite ones found in the tombs) fitted with clip earring backs; and an incense burner. This last object was particularly satisfying because when Hernández and Hoobler visited the community museum, they found a case holding actual pre-Columbian objects found in the town. It included the handle for such an incense burner, but the bowl of the burner had been broken off. (See Figure 13 for a contrast between the ancient and modern objects.) The 3D-printed burner was an object that young people in the town could actually handle without fear of causing damage, allowing them to better understand the purpose of a formerly enigmatic object in their museum.

Figure 12: Closer view of the table in figure 11, showing different 3D printed objects, including vessels and small figurines.

 

Figure 13: Photograph of a display case with a glass top, and ceramic fragments inside. Inset with second image, a plastic vessel similar to one of the ones seen inside the case.

While this was a gratifying episode, the interaction largely ended there because Hoobler had not undertaken long-term planning for a continuous relationship with the museum. There had been discussion between Hernández and Hoobler at the beginning of the summer about training young people in the community to work with digital 3D modeling themselves, but as it was unclear at that point what the capabilities of the computer at the community museum were, this idea was discarded. However, as Pohawpatchoko et al. (2017) discuss, this would likely be the richest option for receiving useful input regarding the value of reproducing ancient objects for the community. As has happened in past collaborations with indigenous groups, there were “good intentions” on the part of the North American university team but not many actual solutions (La Salle 2010). Community engagement is incredibly rewarding, but can be difficult or feel awkward when not executed within a longstanding relationship. Without an already-established personal relationship with the town and community museum committee, this attempt was limited in its success. However, this experiment provided Hernández with valuable experience working with local communities that he would use in his subsequent work in Costa Rica, a project that did prove to be more successful.

This episode brings up a final point about 3D modeling and its use in art-historical scholarship and teaching. Modeling can be an end unto itself, and as the price of 3D printers becomes more affordable, it can be used effectively in conjunction with 3D printing. This is helpful in several ways. First, it allows us to rethink traditional methods of studying art. Within art history, primacy has been given to the visual qualities of a work of art, yet sculpture and many other objects were very much prized for their tactile qualities as well. Theoretically, 3D modeling and 3D printing would allow students to recapture some of the tactile experience of an object. There may be some exceptions. The vessel modeled by Miller in 2014, for instance, was particularly hard to model with digital means because it had asymmetrical walls that curved irregularly; yet those walls also referenced the fingers that had shaped the original (see Figure 4). Thus, though it was printed in plastic filament on a CubePro printer and its fine-grained texture was in no way accurate, even the plastic proxy in some way called attention to the hands that had shaped it. Unfortunately, in practice, many smaller schools are not buying the kind of high-end 3D printers that can print texture similar to that of the original, and at larger universities there is sometimes a siloing of resources, with the result that "less technological" departments such as art history may not be able to use this sophisticated equipment, whose raw materials are similarly quite costly.

Second, 3D modeling and printing give students a sense of the scale of objects when they are printed at full size. Just as viewers of the Mona Lisa are always surprised by how small the painting is, it is helpful to understand, in a phenomenological sense, the contents of tombs at a human scale. For example, in summer 2015, one of the objects given to the community museum was a "mystery vessel" found in several of the tombs (see the gray object at left in Figure 13). Its side walls are so low that it is hard to imagine what it could have been used for; it is not an incense burner or other easily recognizable artifact. The hope in giving it to the community museum was that an older member of the community might recognize it, as has happened previously (Lind and Urcid 2010, 276–77). However, because low-end 3D printers are usually only capable of printing objects that fit within a roughly 10-inch cube, it is nearly impossible to recreate the sense of scale one experiences from a huge pre-Columbian olla. Low-end printers also create plastic objects that are aggressively monochromatic and very toy-like. More sophisticated, expensive printers can generate objects in materials such as ceramic, metal, or paper with very delicate tints mimicking the original object, but lower-end printers create models that look very Lego-like (Figure 13) and can even run the risk of seeming disrespectful when working with cultural heritage objects.
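The build-volume constraint is easy to quantify. The sketch below is purely illustrative: the 254 mm (10-inch) build volume matches the low-end printers described above, but the object dimensions are placeholders rather than measurements of actual Oaxacan vessels. It computes the largest uniform scale at which an object's bounding box fits a printer.

```python
def print_scale(obj_dims_mm, build_volume_mm=(254.0, 254.0, 254.0)):
    """Largest uniform scale at which an object's bounding box fits the
    printer's build volume; a result of 1.0 or more means the object can
    be printed at full size. Axes are matched largest-to-largest, since
    the object can be rotated to its best orientation on the print bed."""
    dims = sorted(obj_dims_mm, reverse=True)
    vol = sorted(build_volume_mm, reverse=True)
    return min(v / d for d, v in zip(dims, vol))

# A modest bowl (placeholder dimensions, in mm) fits at full size:
print(print_scale((180, 180, 90)) >= 1.0)
# A large olla must be shrunk to roughly a third of its true size,
# losing exactly the sense of scale discussed above:
print(round(print_scale((600, 600, 750)), 2))
```

Anything printed below a scale of 1.0 sacrifices the phenomenological experience of scale, which is why the mystery vessel could be shared but a full-size olla could not.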

The Importance of Reflection in the Process

Since Hoobler originally conceived of the project as more product-based and was focused on the end result, she had not built in as much time and opportunity as she later would have wanted for self-reflection on the part of the students (or herself). This matters because it is increasingly clear that metacognition, or reflecting on the process of learning itself, is crucial to learning. Self-reflection is helpful for a number of reasons. First, as Paige Morgan notes, it allows participants to broaden the question of whether a project is "done" beyond a yes/no binary (Morgan 2014): a team can recognize and acknowledge progress even if it does not accomplish all that it planned to do. Second, progress can be measured in real time, perhaps weekly; since digital work is new to many professors, they may find that they were overly optimistic about what the team could accomplish in the time allotted. Third, writing and documentation can explicitly describe and justify the choices made at points where complete data were not available.

Interestingly, although self-reflection was not structured as part of the project, all the students sought out the opportunity to reflect on the process and showcase their work. Miller created a website for the project (www.digitalzapotectombs.com) that Hernández added to and Hoobler has maintained. Professors who work with undergraduates on academic projects should provide them with this opportunity, which allows them to keep track of how they arrived at certain solutions, maintain a record of challenges they have surmounted, and chart the progress they have made. A website or other public venue also allows them to have a permanent record of their work, accessible by potential employers or graduate school admissions personnel. In general, professors should build this kind of periodic reflection into the project timeline, perhaps by encouraging blogging. This would allow the students (and the professor) to reflect more effectively on how much progress they were actually able to make in a single summer, semester, independent study period, or over the course of a longer-term project.

Despite not keeping an ongoing record of progress in a blog format, the lessons learned by both the students and the professor throughout the process are evident. Working collaboratively with peers from such diverse fields meant that each team member was able to bring their own set of skills and background knowledge to the project, and that others were able to learn from them. By extension, cross-disciplinary relationships were formed with other students and staff on campus, as well as at other institutions, as they were brought in to consult on various issues. Hernández, Miller, and Quinn were able to take ownership of their contributions to the project, each utilizing their "very particular set of skills" and working more as collaborators with Professor Hoobler. Hoobler would argue that it was a good and productive experience for the students to see their professor not as the "sage on the stage" but as a coworker: not infallible, and sometimes not even the authority on the project. All participants saw firsthand how important project management and planning skills are, and yet how one discrete portion of a project can be completed in a relatively short time.

Why Use 3D Modeling in Art History?

To conclude, there are many affordances of 3D modeling for art historians. Current modeling technologies allow for what Johanna Drucker has called digitized, rather than digital, art history, the latter defined as “analytic techniques enabled by computational technology” (Drucker 2013: 7). The use of 3D modeling is the digital equivalent of sketching the objects you are studying: it forces sustained close looking and a quality of focused attention to representing a given object or space. Unlike a sketch, however, a model obligates you to be much more concrete in representing your understanding of the physical context of objects and buildings as you think through their placement and surroundings, sometimes including terrain, neighboring structures, etc. This type of modeling is particularly helpful for considering ancient spaces where context is unclear: a 3D environment allows the scholar to reunite fragments that are lost, dispersed, or damaged. When printed, 3D models also afford the user a physical object that gives them the experience of scale and basic tactile qualities, opening up questions of use and function for mystery objects. As 3D printing technology continues to improve, it will likely also offer a very close proxy for the object in terms of colors and textures.

However, the pedagogical benefits of 3D modeling projects may actually outweigh these digital “products.” The opportunity for students to form close working relationships with each other and faculty in a setting where their professor is not an infallible authority and they may well have to “teach the teacher” at points is important. In this particular case, students also gained valuable international experience, confronting cultural differences and communication barriers at times. In general, students learn about and then grapple with thorny problems to which there are no easy solutions. It is important that they complete at least one aspect of the project, perhaps a prototype that can act as a future “calling card” for them, and ideally it should be part of a broader process of reflection on the project. Such a “proof of concept” has broader pedagogical value too—the professor can then use it in their classes, discussing how it was made by students, and making such processes feel manageable and “relatable” for other students.

Of course, the main takeaway of this article for professors is to choose incredibly smart, positive, conscientious students who are much better than you are with software as your collaborators and then—just get out of the way. Your students will find ways to take ownership and make the project, or at least parts of it, happen in ways you never expected, but which will teach you about technological (and other) solutions you never dreamed existed.

Acknowledgments

The authors owe many thanks to the Cornell College Summer Research Institute, the Cornell College Student-Faculty Research Fund, and the RJ McElroy Fund / Iowa College Foundation grant. At Cornell College, we would like to thank Brooke Bergantzel, Instructional Technology Librarian; Amy Gullen, Consulting Librarian for the Sciences and Technology; Christina Penn-Goetsch, Professor of Art History; Joe Dieker, Vice President for Academic Affairs and Dean of the College; Ben Greenstein, Professor of Geology and Associate Dean of the College; and Derin Sherman, Professor of Physics. We would also like to thank the committee of the Museo Comunitario San Juan Guelavía, and particularly Juan Manuel Martínez García.

Ellen Hoobler would also like to thank Mandar Sharad Banavadikar for his patience and understanding during these two summers of work, as well as Angel David Nieves, Ph.D., Associate Professor at Hamilton College, for his wise counsel and support during this process.

Bibliography

Bell, Joshua, Kimberly Christen, and Mark Turin. 2013. “Introduction: After the Return.” Museum Anthropology Review 7, nos. 1–2 (special issue, “After the Return: Digital Repatriation and the Circulation of Indigenous Knowledge”): 1–21.

Benjamin, Walter. (1935) 1968. “The Work of Art in the Age of Mechanical Reproduction.” In Illuminations: Essays and Reflections, edited by Hannah Arendt, translated by Harry Zohn, 217–51. New York: Schocken.

Brown, Deidre, and George Nicholas. 2012. “Protecting Indigenous Cultural Property in the Age of Digital Democracy: Institutional and Communal Responses to Canadian First Nations and Māori Heritage Concerns.” Journal of Material Culture 17, no. 3: 307–24.

Cohen, Jeffrey, and Anjali Browning. 2007. “The Decline of a Craft: Basket Making in San Juan Guelavía, Oaxaca.” Human Organization 66, no. 3: 229–39.

College Art Association. 2015. “Code of Best Practices in Fair Use for the Visual Arts.” February 2015. Accessed November 25, 2017. http://www.collegeart.org/pdf/fair-use/best-practices-fair-use-visual-arts.pdf.

Cuno, James. 2014. “Beyond Digitization—New Possibilities in Digital Art History.” The Iris: Behind the Scenes at the Getty (January 29, 2014). Accessed November 1, 2017. http://blogs.getty.edu/iris/beyond-digitization-new-possibilities-in-digital-art-history/.

Drucker, Johanna. 2013. “Is there a ‘Digital’ Art History?” Visual Resources: An International Journal of Documentation 29, nos. 1–2: 5–13.

Hall, Debbie. 1999. “The Original and the Reproduction: Art in the Age of Digital Technology.” Visual Resources 15, no. 2: 269–78.

Hennessy, Kate, Natasha Lyons, Stephen Loring, Charles Arnold, Mervin Joe, Albert Elias, and James Pokiak. 2013. “The Inuvialuit Living History Project: Digital Return as the Forging of Relationships Between Institutions, People, and Data.” Museum Anthropology Review 7, nos. 1–2: 44–73.

Hoobler, Ellen. 2003. “‘To Take Their Heritage In Their Hands’: Indigenous Self-Representation and Decolonization in the Community Museums of Oaxaca, Mexico.” American Indian Quarterly 30, no. 3: 441–60.

———. 2011. “The Limits of Memory: Alfonso Caso and Narratives of Tomb Assemblage from Monte Albán, Oaxaca, Mexico, 500–800 and 1931–49 CE.” PhD diss., Columbia University.

La Salle, Marina J. 2010. “Community Collaboration and Other Good Intentions.” Archaeologies 6, no. 3: 401–22.

Lind, Michael, and Javier Urcid. 2010. The Lords of Lambityeco: Political Evolution in the Valley of Oaxaca During the Xoo Phase. Boulder, CO: University Press of Colorado.

Long, Matthew P., and Roger C. Schonfeld. 2014. “Preparing for the Future of Research Services for Art History: Recommendations from the Ithaka S+R Report.” Art Documentation: Journal of the Art Libraries Society of North America 33, no. 2: 192–205. doi:10.1086/678316.

Morgan, Paige. 2014. “How to Get a Digital Humanities Project Off the Ground” (June 5, 2014). Accessed November 1, 2017. http://www.paigemorgan.net/how-to-get-a-digital-humanities-project-off-the-ground/.

Ngata, Wayne, Hera Ngata-Gibson, and Amiria Salmond. 2012. “Te Ataakura: Digital Taonga and Cultural Innovation.” Journal of Material Culture 17, no. 3: 229–44.

Newell, Jenny. 2012. “Old Objects, New Media: Historical Collections, Digitization and Affect.” Journal of Material Culture 17, no. 3: 287–306.

Pohawpatchko, Calvin, Chip Colwell, Jami Powell and Jerry Lassos. 2017. “Developing a Native Digital Voice: Technology and Inclusivity in Museums.” Museum Anthropology 40, no. 1: 52–64.

Snyder, Lisa M. 2012. “Virtual Reality for Humanities Scholarship.” New Technologies in Medieval and Renaissance Studies 3: 396.

About the Authors

Since February 2017, Ellen Hoobler has been the William B. Ziff, Jr. Associate Curator of the Art of the Americas at the Walters Art Museum in Baltimore, MD. Before becoming a curator, she was an Assistant Professor of Art History at Cornell College in Mount Vernon, IA, from 2012 to 2017. A specialist in the art of ancient Oaxaca, she is interested in the possibilities of digital technologies for furthering understanding of ancient cultures and cultural heritage monuments. She can be reached by email at emh2104@gmail.com.

Catherine Quinn graduated from Cornell College in 2015 with honors in Art History. A native of Seattle, she currently works in corporate America while serving as a docent for the Seattle Art Museum’s SAMbassador program, where she enjoys interacting with visitors, discussing art, and keeping her art history skills sharp. She is planning to attend graduate school in fall 2018 to continue learning about digital humanities, with the goal of pursuing a career in digital humanities and cultural heritage. Catherine can be reached by email for comments or questions at Catherine.j.quinn@gmail.com.

Ve’Amber Miller graduated from Cornell College in 2015 with degrees in Archaeology and in English and Creative Writing. As of early 2018, she works as a Park Guide at Pullman National Monument, where she enjoys telling the history of a historic Chicago, IL neighborhood through tours and educational outreach programs. She is hoping to attend graduate school in fall 2018 to learn more about how technology and cultural institutions are part of the future of public history. To learn more about Ve’Amber and her qualifications, please visit her LinkedIn profile at https://www.linkedin.com/in/ve-amber-miller-b37b2255/.

Arturo Hernández, Jr. is a freelance designer, technologist, and visual artist living in Los Angeles. He graduated from Cornell College in 2016 with majors in Computer Science and Studio Art, and was from 2016–2017 a teacher with the non-profit organization Abriendo Mentes in Costa Rica. There, he taught basic technology skills to rural Costa Ricans. For comments and opportunities, Arturo can be reached via his website: http://www.arturohernandezjr.com/. Samples of his technology work related to this project can be seen at https://github.com/ahernandez16/Monte-Alban-Zapotec-Tombs.


Confessions of a Premature Digital Humanist

Abstract

Traditional interpretations of the history of the Digital Humanities (DH) have largely focused on the field’s origins in humanities computing and literary studies. The singular focus on English departments and literary scholars as progenitors of DH obscures the field’s multidisciplinary origins. This article analyzes the contributions made by the US social, public, and quantitative history subfields during the 1970s and 1980s to what would ultimately become the Digital Humanities. It uses the author’s long career as a social, quantitative, and public historian (including his early use of mainframe computers in the 1970s to analyze historical data) and his role and experiences as co-founder of CUNY’s pioneering American Social History Project to underscore the ways digital history has provided a complementary pathway to DH’s emergence. The piece also explores the importance of digital pedagogy to DH’s current growth and maturation, emphasizing various DH projects at the CUNY Graduate Center that have helped deepen and extend the impact of digital work in the academy.

“And you may ask yourself—Well… How did I get here?”
Talking Heads, “Once In a Lifetime” (1981)

 
Much actual and virtual ink has been spilled over the past few years recounting how the field of Digital Humanities came into being. As a social historian and someone who has been involved in digital work of one sort or another since the mid 1970s, I am somewhat bemused by what Geoffrey Rockwell has aptly termed the “canonical Roberto Busa story of origin” offered by English department colleagues (Rockwell 2007). That canonical DH history usually starts with the famous Father Roberto Busa developing his digital concordances of St. Thomas Aquinas’s writings beginning in 1949 (the first of which was published in 1974) with critical technical support provided by Thomas Watson, head of IBM.[1] It quickly moves from there to recount the emergence of humanities computing (as it was originally known) in the 1980s, followed by the development of various digitized literary archives launched by literary scholars such as Jerry McGann (Rossetti) and Ed Folsom (Whitman) in the 1990s (Hockey 2004). In this recounting, academics in English, inspired by Father Busa, pushed ahead with the idea of using computers to conceive, create, and present the digital concordances, literary editions, and, ultimately, fully digitized and online archives of materials, using common standards embodied in the Text Encoding Initiative (TEI), which was established in 1987.[2] The new field of Digital Humanities is said to have emerged after 2004 directly out of these developments in the literary studies field, what Willard McCarty terms “literary computing” (McCarty 2011, 4).[3]

As a historian who believes in multi-causal explanations of historical phenomena (including what happens intellectually inside of universities), I think there are alternative interpretations of this origin story that help reveal a much more complicated history of DH.[4] I will argue in this piece that the history field—particularly historians working in its social, public, and quantitative history sub-fields—also made a substantial and quite different contribution to the emergence of the Digital Humanities that parallels, at times diverges from, and even anticipates the efforts of literary scholars and literary studies.[5] I will first sketch broader developments in the social, public, and quantitative history sub-fields that began more than four decades ago. These transformations in the forms and content of historical inquiry would ultimately lead a group of historians to contribute to the development of DH decades later. I will also use my own evolution over this time period (what I dub in the title of this piece my “premature” Digital Humanism), first as a social and labor historian, then as a media producer, digital historian, and finally now as a teacher of digital humanities and digital pedagogy, to illustrate the different pathways that led many historians, myself included, into contributing to the birth and evolution of the Digital Humanities. I will use my ongoing collaborations with my colleagues at the American Social History Project (which I co-founded more than 35 years ago) as well as with Roy Rosenzweig and the Center for History and New Media to help tell this alternate DH origins story. In the process, I hope to complicate the rather linear Father Busa/humanities computing/TEI/digital literary archives origin story of DH that has come to define the field.

Social and Labor History

Social history first emerged in the pre-World War II era with the founding in 1929 in France of the Annales school of historical inquiry by Lucien Febvre and Marc Bloch and carried forward by Fernand Braudel in the 1950s and Emmanuel Le Roy Ladurie in the 1970s. The field of social history found fertile new ground in the United States during the 1960s and 1970s. The “new” social history was very much a product of the rejection of traditional political history narratives and a search for new methodologies and interdisciplinary connections. Social history examined the lives and experiences of “ordinary people”—workers, immigrants, enslaved African Americans, women, urban dwellers, farmers, etc.—rejecting the narrow focus on the experiences of Great White Men that had dominated both academic and popular history writing for decades if not centuries. This shift to history “from the bottom up” necessitated new methodological approaches to uncover previously unused source materials that could convey a fuller sense of what happened in the past. Archives and libraries had traditionally provided historians access to large collections of private and public correspondence of major politicians, important military leaders, and big businessmen (the gendered term being entirely appropriate in this context) as well as catalogued and well-archived state papers, government documents, and memoirs and letters of the rich and famous. But if the subject of history was now to change to a focus on ordinary people, how were historians to recount the stories of those who left behind few if any traditional written records? New methodologies would have to be developed to ferret out those hidden histories.[6]

The related sub-field of labor history, which, like social history, was also committed to writing history “from the bottom up,” illustrates these methodological dilemmas and possibilities. Older approaches to US labor history had focused narrowly on the structure and function of national labor unions and national political parties, national labor and party leaders, and what happened in various workplaces, drawing on government reports, national newspapers, and union records. The new labor history, which was pioneered in the early 1960s, first by British Marxist historians such as Eric Hobsbawm and E. P. Thompson, sought to move beyond those restricted confines to tell the previously unknown story of the making of the English working class (to appropriate the title of one of Thompson’s most important works). Hobsbawm and especially Thompson relied heavily in their early work on unconventional local and literary sources to uncover this lost history of English working people. The new labor history they pioneered was soon adapted by US labor historians, including David Montgomery, David Brody, and Herbert Gutman and by graduate students, deploying an array of political and cultural sources to reveal the behaviors and beliefs of US working people in all of their racial and ethnic diversity. The new US labor history embraced unorthodox historical methodologies including: oral history; a close focus on local and community studies, including a deep dive into local working-class newspapers; broadened definitions of what constituted work (e.g. women’s housework); and working-class family and community life and self-activity (including expressions of popular working-class culture and neighborhood, political, and religious associations and organizations). 
I committed myself to the new labor history and its innovative methodologies in graduate school at UCLA in the early 1970s when I began to shape my doctoral dissertation, which sought to portray the ways black, white, and immigrant coal miners in the West Virginia and Colorado coal fields managed to forge interracial and interethnic local labor unions in the late nineteenth and early twentieth centuries (Brier 1992).

Public History

A second activist and politically engaged approach to communicating historical scholarship—public history—also emerged in the 1970s. Public history grew in parallel to and was made possible by the new academic field of social history. To be sure, while social history spoke largely to the history profession, challenging its underlying methodological and intellectual assumptions, public history and the people who self-identified as public historians often chose to move outside the academy, embedding themselves and their public history work inside unions, community-based organizations, museums, and political groups. Public historians, whether they stayed inside the academy or chose to situate themselves outside of it, were committed to making the study of the past relevant (to appropriate that overused Sixties’ phrase) to individuals and groups that could and would most benefit from exposure to and knowledge about their “lost” pasts (Novick 1988, 512–21).

Public history’s emergence in the mid-1970s signaled that at least one wing of the profession, albeit the younger, more radical one, was committed to finding new ways and new, non-print formats to communicate historical ideas and information to a broad public audience through museum exhibits, graphic novels, audio recordings and radio broadcasts, and especially film and television. A range of projects and institutions that were made possible by this new sub-field of public history began to take shape by the late 1970s. I worked with fellow radical historians Susan Porter Benson and Roy Rosenzweig and the three of us put together in 1986 the first major collection of articles and reports on US public history projects and initiatives. Entitled Presenting the Past, the collection was based on a special theme issue of the Radical History Review (the three of us were members of the RHR editorial collective) that we had co-edited five years earlier.[7] Focusing on a range of individual and local public history projects, Presenting the Past summarized a decade of academic and non-academic public history work and projects in the United States (Benson, Brier, and Rosenzweig 1986).[8]

Stephen Robertson, who now heads the Roy Rosenzweig Center for History and New Media (CHNM)[9] at George Mason University, has correctly noted, in a widely read 2014 blog post,[10] that we can and should trace the origins of the much newer sub-field of digital history, a major contributor to the Digital Humanities’ growth, to the public history movement launched a quarter century earlier (Robertson 2014). Robertson goes on to suggest that this early focus on public history led digital historians to ask different questions than literary scholars: historians concentrated on producing digital history in a variety of presentational forms and formats, whereas literary scholars emphasized defining and theorizing the new Digital Humanities field and producing online literary archives. This alternative focus on public presentations of history (i.e., presentations intended for the larger public outside of the academy and the profession) may explain why digital historians seem much less interested in staking out their piece of the DH academic turf, while literary scholars seem more inclined both to theorize their DH scholarship and to assert that DH’s genesis can be located in literary scholars’ early digital work.

Quantitative History

A third, and arguably broader, methodological transformation in the study and writing of US history in these same years was the emergence of what was called quantitative history. “Cliometrics” (as some termed it, a bit too cutely) held out the possibility of generating new insights into historical behavior through detailed analyses of the myriad historical data available in a variety of official sources. This included, but was certainly not limited to, raw data compiled by federal and state agencies in resources like census manuscripts.[11] Quantitative history, which had its roots in the broader turn toward social science taken by a number of US economic historians beginning in the late 1950s, had in fact generated by the early 1970s a kind of fever dream among many academic historians and their graduate students (and a raging nightmare for others) (Thomas 2004).[12] Edward Shorter, a historian of psychiatry (!), for example, authored the widely read The Historian and The Computer: A Practical Guide in 1971. Even the Annales school in France, led by Ladurie, was not immune from the embrace of quantification. Writing in a 1973 essay, Ladurie argued that “history that is not quantifiable cannot claim to be scientific” (quoted in Noiret 2012). Quantitative history involved generating raw data from a variety of primary source materials (e.g., US census manuscripts) and then using a variety of statistical tools to analyze that data. The dreams and nightmares that this new methodology generated among academic historians were fueled by the publication of two studies that framed the prominence and ultimate eclipse of quantitative history: Stephan Thernstrom’s Poverty and Progress, published in 1964, and Robert Fogel and Stanley Engerman’s Time on the Cross, which appeared a decade later (Thernstrom 1964; Fogel and Engerman 1974).

Thernstrom’s study used US census manuscripts (the original hand-coded forms for each resident produced by census enumerators) from 1850 to 1880 as well as local bank and tax records and city directories to generate quantitative data, which he then coded and subjected to various statistical measures. Out of this analysis of data he developed his theories of the extent of social mobility, defined occupationally and geographically, that native-born and Irish immigrant residents of Newburyport, Massachusetts enjoyed in those crucial years of the nation’s industrial takeoff. The critical success of Thernstrom’s book helped launch a mini-boom in quantitative history. A three-week seminar on computing in history drew thirty-five historians in 1965 to the University of Michigan; two years later a newsletter on computing in history had more than 800 subscribers (Graham, Milligan, and Weingart 2015). Thernstrom’s early use of quantitative data (which he analyzed without the benefit of computers) and the positive critical reception it received helped launch the quantitative history upsurge that reshaped much US social and urban history writing in the following decade. Without going into much detail here or elaborating on my own deep reservations about Thernstrom’s methodology[13] and the larger political and ideological conclusions he drew from his analysis of the census manuscripts and city directories, suffice it to say that Thernstrom’s work was widely admired by his peers and emulated by many graduate students, helping him secure a coveted position at Harvard in 1973.[14]

The other influential cliometric study, Fogel and Engerman’s Time on the Cross, was widely reviewed (including in Time magazine) after it appeared in early 1974. Though neither author was a social historian (Fogel was an economist, Engerman an economic historian), they were lavishly praised by many academics and reviewers for their innovative statistical analysis of historical data drawn from Southern plantation records (such as the number of whippings meted out by slave owners and overseers to enslaved African Americans). Their use of statistical data led Fogel and Engerman to revise the standard view of the realities of the institution of slavery. Unlike the conclusions reached by earlier historians such as Herbert Aptheker and Kenneth Stampp that centered on the savage exploitation and brutalization of slaves and their active resistance to the institution of slavery, Fogel and Engerman concluded that the institution of slavery was not particularly economically inefficient, as traditional interpretations argued, that the slaves were only “moderately exploited,” and that they were only occasionally abused physically by their owners (Aptheker 1943 [1963]; Stampp 1956 [1967]). Time on the Cross was the focus of much breathless commentary both inside and outside of the academy about the appropriateness of the authors’ assessments of slavery and how quantitative history techniques, which had been around for several decades, would help historians fundamentally rewrite US history.[15] If this latter point sounds eerily prescient of the early hype about DH offered by many of its practitioners and non-academic enthusiasts, I would argue that this is not an accident. The theoretical and methodological orthodoxies of academic disciplines are periodically challenged from within, with new methodologies heralded as life- (or at least field-) changing transformations of the old. Of course, C. 
Vann Woodward’s highly critical review of Fogel and Engerman in the New York Review of Books and Herbert Gutman’s brilliant book-length takedown of Time on the Cross soon raised important questions and serious reservations about quantitative history’s limitations and its potential for outright distortion (Woodward 1974; Gutman 1975; Thomas 2004). Gutman’s and Woodward’s sharp critiques aside, many academic historians and graduate students (myself included) could not quite resist dabbling in (if not taking a headlong plunge into) quantitative analysis.

Using a Computer to do Quantitative History

Though I had reservations about quantitative history—my skepticism stemming from a general sense that quantitative historians overpromised easy answers to complex questions of historical causation—I decided to broaden the fairly basic new labor history methodology that I was then using in my early dissertation research, which had been based on printed historical sources (government reports, nineteenth-century national newspaper accounts, print archival materials, etc.). I had been drawn to coal miners and coal mining unionism as a subject for my dissertation because of the unusual role that coal miners played historically as prototypical proletarians and labor militants, not only in the United States, but also across the globe. I was interested in understanding the roots of coal miners’ militancy and solidarity in the face of the oppressive living and working conditions they were forced to endure. I also wanted to understand how (or even if) white, black, and immigrant mineworkers had been able to navigate the struggle to forge bonds of solidarity during trade union organizing drives. In the course of my doctoral dissertation research I had discovered a substantial body of quantitative data: an enumeration of all coal strikes (1,410 in number) that occurred in the United States in the 1881–94 period, detailed in the annual reports of the US Commissioner of Labor.[16] This was what we would now call a “dataset,” a term not yet used in my wing of the academy in 1975. This critical fourteen-year historical period witnessed the rise and fall of several national labor union organizations among coal miners, including the Knights of Labor, the most consequential nineteenth-century US labor organization, and the birth of the United Mine Workers of America, the union that continues to represent to this day the rapidly dwindling number of US coal miners.

Jon Amsden, an economic and labor historian and UCLA faculty member, and I decided to statistically analyze this data about the behavior and actions of striking coal miners in these years. The dataset of more than 1,400 strikes, presented in large statistical tables, was simply too large, however, to analyze through conventional qualitative methods to divine patterns and trends. Amsden and I consequently made a decision in 1975 to take the plunge into computer-assisted data analysis. The UCLA Computer Center was a beehive of activity in these early years of academic computing, especially focused on the emerging field of computer science.[17] The center was using an IBM 360 mainframe computer, running Fortran and the Statistical Package for the Social Sciences (the now venerable SPSS, originally released in 1968, and first marketed in 1975) to support social scientific analyses (Noiret 2012).

Figure 1: IBM 360 Computer, circa 1975

 
Amsden and I began by recording some of the characteristics involved in each of the 1,410 coal strikes that occurred in those 14 years: year of the strike, cause or objective of the strike, and whether a formal union was involved. To make more detailed comparisons we drew a one-in-five systematic random sample of the coal strikes. This additional sampled data included the number of workers involved in each strike, strike duration, and miners’ wages and hours before and after the strike. We laboriously coded each strike by hand on standard 80-character IBM Fortran coding sheets.
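For readers unfamiliar with systematic sampling, the one-in-five procedure described above can be sketched in modern terms. The snippet below is a hypothetical reconstruction, not the authors' 1975 code: the field names and the random seed are invented for illustration.

```python
# Hypothetical sketch of a one-in-five systematic random sample, the kind
# of sampling procedure described above. All names and data are invented.
import random

def systematic_sample(records, interval=5, seed=None):
    """Pick a random starting point within the first `interval` records,
    then take every `interval`-th record thereafter."""
    rng = random.Random(seed)
    start = rng.randrange(interval)
    return records[start::interval]

# Stand-in for the 1,410 coal strikes of 1881-94 (illustrative fields only).
strikes = [{"id": i, "year": 1881 + i % 14} for i in range(1410)]
sample = systematic_sample(strikes, interval=5, seed=42)
print(len(sample))  # 282 strikes: exactly one in five of 1,410
```

Because 1,410 divides evenly by five, the sample size is the same (282) whichever random starting point is drawn, which is part of what made systematic sampling attractive for hand-coded card data.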

Figure 2: IBM Fortran Coding Sheet

 
We then had a keypunch operator at the UCLA Computer Center (no doubt a woman, sadly unknown and faceless to us, righteous labor historians though we both were!)[18] transfer the data on each strike entry to individual IBM Fortran punch cards, originally known as Hollerith cards (Lubar 1992). That process generated a card stack large enough that we carried it around in a flat cardboard box the size of a large shoe box.

Figure 3: Fortran Punch Card

 
We regularly visited the UCLA Computer Center in the afternoon to have our card stack “read” by an IBM card reading machine and to have the IBM 360 generate the specific statistical tabulations and correlations we requested, trying to uncover trends and comparative relationships among the data.[19] The nature of this work on the mainframe computer did not require us to learn Fortran (I know DHer Steve Ramsay would disapprove![20]), though Amsden and I did have to brush up on our basic statistics to figure out how to analyze and make sense of the computer output. We picked up our results (the “read outs”) the next morning, printed on large, continuous sheets of fanfold paper.

Figure 4: IBM 360 Fanfold Paper

It was a slow and laborious process, with many false starts and badly specified or pointless computing requests (e.g., poor choices of data points to try to correlate).

Ultimately, however, this computerized analysis of strike data yielded significant statistical correlations that helped us uncover previously unknown or only partially visible patterns and meanings in coal miners’ self-activity, and allowed us to generate new insights (or confirm existing ones) into the changing levels of class consciousness exhibited by miners. Our historical approach to quantitative analysis was, if I can be permitted a bit of hyperbole, an early anticipation of Franco Moretti’s “distant reading” techniques in literary scholarship (Moretti 2005): we used statistical methods to examine all strikes in an industry, rather than relying on the very “close reading” of one, two, or a handful of important strikes that most labor historians, myself included, typically undertook in our scholarly work. Amsden and I wrote up our results in 1975, and our article appeared in 1977 in the Journal of Interdisciplinary History, a relatively new journal that featured interdisciplinary and data-driven scholarship. The article received respectful notice as a solid quantitative contribution to the field and was reprinted several times over the next three decades (Amsden and Brier 1977).[21]

One of our key statistical findings was that the power and militancy of coal miners increased as their union organizations strengthened (no surprise there) and that heightened union power between 1881 and 1894 (a particularly contentious period in US labor history) generated more militant strikes in the coal industry. Our data analysis revealed that these militant strikes often moved beyond narrow efforts to secure higher wages, allowing miners across the country to pose more fundamental challenges to the coal operators’ near total control over productive relations inside the coal pits. Below are two figures from the published article, both generated by SPSS: a scatter diagram (a new technique for historians to employ, at least in 1975) and one of the tables. The two figures convey the kinds of interesting historical questions we were able to pose quantitatively and how we were able to represent the answers to those questions graphically.

Figure 5: Scatter Diagram of Multi-establishment US Coal Strikes, 1881 to 1894

Figure 5 above shows the growth in the number of multi-establishment coal strikes and the increasing number of mines involved in strike activity over time, a good measure of increasing union power and worker solidarity over the critical 14-year period covered in the dataset.

Table 3: Index of Strike Solidarity, comparing Union-Called Coal Strikes with Non-Union Strikes

Table 3 employs a solidarity index that Amsden and I developed from our analysis of the coal strike statistics, based on the ratio of the number of strikers to the total number of employees at the mines that had been struck. The data revealed that union-called strikes consistently involved a higher percentage of the overall mining workforce than non-union strikes did, and with less variation from the norm. This table lay at the heart of why I had decided to study coal miners and their unions in the first place: I hoped to analyze why and how miners consistently put themselves and their unions at the center of militant working-class struggles in industrializing America. I might have reached some of these same conclusions by analyzing traditional qualitative sources or by looking closely at one or a handful of strikes. However, Amsden and I had managed to employ statistical analysis in new ways (at least in the history field) that allowed us to “see” these developments and trends in the data nationally and regionally. We were therefore able to argue that the evolving consciousness of miners over time was reflected in their strike demands and in their ability to successfully spread the union message across the country. I should note here that the United Mine Workers of America had become by far the largest union in these early years of the American Federation of Labor. In sum, we believed we had developed a new statistical methodology for analyzing and understanding late nineteenth-century working-class behavior. We had used a computer to help answer conceptual questions that were important in shaping our historical interpretation. This effort proved to be a quite early instance of the use of digital techniques to ask, and at least partially answer, key historical (and, by definition, humanities) questions.
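
The index itself is simple arithmetic: strikers divided by total employees at the struck mines. Its comparative logic can be sketched as follows; the (strikers, employees) pairs are invented for illustration only and are not drawn from our published table.

```python
from statistics import mean, stdev

def solidarity_index(strikers, total_employees):
    """Ratio of strikers to all employees at the struck mine(s)."""
    return strikers / total_employees

# Invented (strikers, employees) pairs for illustration only.
union_strikes = [(900, 1000), (450, 500), (800, 1000)]
nonunion_strikes = [(300, 1000), (250, 500), (600, 1000)]

union = [solidarity_index(s, e) for s, e in union_strikes]
nonunion = [solidarity_index(s, e) for s, e in nonunion_strikes]

# The pattern our table showed: union-called strikes drew in a larger
# share of the workforce, with less variation from strike to strike.
print(mean(union), mean(nonunion))
print(stdev(union), stdev(nonunion))
```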

From Quantitative History to the American Social History Project

Around the time our coal strike article was published in 1977, I decided to follow my public history muse, morphing from a university-based history scholar and professor-in-training, albeit one who had begun to use new digital technologies, into an activist public historian. I had moved to New York City soon after completing the computer-aided coal strike project to learn how to produce history films. This was a conscious personal and career choice: I left the academy to become an independent filmmaker, driven by my commitment to having historical ideas make a greater public and political impact. On my first job in New York in 1977, as research director for a public television series of dramatic films on major moments in US labor history, I met Herbert Gutman, one of the deans of the new labor and social history, whose work I had read and admired as a graduate student. I spent the next two years researching and producing historical documentaries and other kinds of dramatic films.

Figure 7: The author in 1980 doing research for an educational television project on NYC history at the Columbia Univ. library. (Picture credit: Julie List)

Two years after meeting Gutman I was invited by Herb, who taught at the CUNY Graduate Center, to co-teach a summer seminar for labor leaders for which he had secured funding from the National Endowment for the Humanities (NEH). The NEH summer seminars, in an innovative combination of academic and public history, were designed to communicate to unionized workers the fruits of the new social and labor history that Herb had done so much to pioneer and to which I had committed my nascent academic career in graduate school at UCLA. With the success of these summer seminars, which we taught at the CUNY Graduate Center in 1979 and 1980, Gutman and I decided to create the American Social History Project (ASHP) at CUNY. We reasoned that reaching 15 workers each summer in our seminars, though immensely rewarding for all involved (including the two teachers), was not as efficient as creating a new curriculum that we could make available to adult and worker education programs and teachers across the country. The project quickly received major grants in 1981 and 1982, totaling $1.2 million, from the NEH and the Ford Foundation, and under Herb’s and my leadership we rapidly hired a staff of a dozen historians, teachers, artists, and administrators to create a multimedia curriculum, entitled “Who Built America?” (WBA?). The curriculum mixed the writing of a new two-volume trade book focused on working people’s contributions to US history with a range of new multimedia productions (initially 16mm films and slide/tape shows, VHS videos and, later, a range of digital productions, including two Who Built America? CD-ROMs and several web sites such as “History Matters”). 
ASHP also had a second, clear orientation, in addition to developing multimedia materials: We built a vibrant education program that connected the project in its first few years with CUNY community college faculty and New York City high school teachers who used our media materials (including specially designed accompanying viewer guides) in their classes, which helped deepen and refine Who Built America?’s pedagogical impact on students. We hoped this multimedia curriculum and ASHP’s ongoing engagement with teachers would broaden the scope and popular appeal of working-class and social history and would be widely adopted in high school, community college, and worker education classrooms around the country as well as by the general public.[22]

I should note here that my early exposure to electronic tools, including being a “ham” radio operator and electronics tinkerer in high school in the early 1960s and using mainframe computers at UCLA in 1975, inclined me to become an early and enthusiastic adopter of and proselytizer for personal computers when they became publicly available in the early 1980s. I insisted in 1982, for example, against resistance from some of my ASHP colleagues who expected to have secretarial help in writing and editing their WBA? chapter drafts, that we use personal computers (I was a Kaypro II guy!) to facilitate the drafting and editing of the Who Built America? textbook, work on which began that year (ASHP 1990, 1992).[23]

Figure 8: Kaypro II Computer

ASHP stood outside of the academic history profession as traditionally understood and practiced in universities at that time. As a grant-funded, university-based project with a dozen staff members, many of us ABDs in history working on the project full-time (not on traditional nine-month academic schedules), ASHP staff were clearly “alt-ac”ers several decades before anyone coined that term. We wore our non-traditional academic identities proudly and even a bit defiantly. Gutman and I nonetheless realized that ASHP needed a direct link to an academic institution like CUNY to legitimize the project and to establish an institutional base that would allow it to survive and thrive, which led us to situate ASHP inside CUNY. The American Social History Project, in fact, celebrated its 35th anniversary at CUNY in October 2016.[24] That was a consequential decision, obviously, since ASHP might not have survived without the kind of institutional and bureaucratic support that CUNY (and the Graduate Center) have provided over the past three and a half decades. At the same time, ASHP also stood outside of the academic history profession in believing in and producing our work collaboratively, which militated against the “lone scholar in the archive” cult that still dominates most academic scholarship and continues to fundamentally determine the processes of promotion and tenure inside the academy. Public history, which many ASHP staff members came out of, had argued for and even privileged such collaborative work, which in a very real sense was a precursor to the more collaborative work and projects that now define much of the new digital scholarship in the Digital Humanities and in the “alt-ac” careers that have proliferated in its wake.
Well before Lisa Spiro (2012) enumerated her list of key DH “values”—openness, collegiality and connectedness, diversity, and experimentation—we had embodied those very values in how we structured and operated the American Social History Project (and continue to do so), a set of values that I have also tried to incorporate and teach in all of my academic work ever since.

ASHP’s engagement with collaborative digital work began quite early. In 1990 we launched a series of co-ventures with social historian Roy Rosenzweig (who had been a valued and important ASHP collaborator from the outset of the project a decade earlier, including as a co-author of the Who Built America? textbook) and Bob Stein, the head of The Voyager Company, the pioneering digital publisher. Roy and I had begun in the late 1980s to ruminate about the possibilities of computer-enhanced historical presentations when Bob Stein approached me in 1990 with a proposal to turn the first volume of the WBA? trade book (which had just been published) into an electronic book (ASHP 1990).[25] Applying the best lessons Roy and I and our ASHP colleagues had learned as public historians who were committed to using visual, video, audio, and textual tools and resources to convey important moments and struggles in US history, we worked with Voyager staff to conceive, design, and produce the first Who Built America? CD-ROM in 1993, covering the years 1876 to 1914 (ASHP 1993).[26] As noted earlier, our use of multimedia forms was an essential attribute that we learned as practitioners of public history, a quite different orientation than that relied on by literary DHers who work with text analysis.

The disk, co-authored by Roy Rosenzweig, Josh Brown, and me, was arguably the first electronic history book and one of the first e-books ever to appear. The WBA? CD-ROM won critical and popular acclaim and a number of prestigious awards, inside the academy and beyond (Thomas 2004). It also generated, perhaps because of its success, a degree of political notoriety: its inclusion in the tens of thousands of educational packs of CD-ROMs that Apple gave away to K-12 schools purchasing Apple computers in 1994-95 led to a coordinated attack on WBA?, ASHP, and Apple by the Christian Right and the Moral Majority. The Radical Right was troubled by the notion, conveyed in several of the literally hundreds of primary historical documents we included in the CD-ROM, that “gay cowboys” might have been involved in the “taming” of the West or that abortion was common in early twentieth-century urban America. The right-wing attacks were reported in the mainstream press, including the Wall Street Journal and Newsweek.

Figure 9: “Putting the ‘PC’ in PCs,” Newsweek, February 20, 1995

The Right, however, ironically failed in all the furor to notice the CD-ROM’s explicitly pro-worker/anti-capitalist politics! The Right tried to get Apple to remove the WBA? CD-ROM from the education packs, but Apple ultimately backed ASHP and WBA?, though only after much contention and negative publicity.[27]

Despite this political controversy, the first WBA? CD-ROM and early historical web projects like Ed Ayers’s Civil War-era The Valley of the Shadow (1993) helped imagine new possibilities for digital scholarship and digital presentations of historical work. I would suggest that the appearance of the first WBA? CD-ROM nearly a quarter century ago was one of the pioneering instances of the new digital history that contributed a decade later to the emergence of the Digital Humanities, making Roy, Josh, and me and our ASHP colleagues what I have termed in the title of this article and elsewhere in print “premature digital humanists.”[28] That said, I do believe we missed an opportunity to begin to build connections to other scholars outside of history who were undertaking similar digital work around the same time that we completed the WBA? CD-ROM in 1993. Jerry McGann, for example, was beginning his pioneering work at the University of Virginia on the Rossetti Archive and was writing his landmark study “The Rationale of HyperText” (McGann 1995). And while we became aware of each other’s work over the next half dozen years, we never quite came together to ponder the ways in which our very disparate disciplinary approaches to digital scholarship and presentation might have productively been linked up or at least put into some kind of active dialogue. As a result, digital history and digital literary studies occupied distinct academic silos, following quite different paths and embracing very different methodologies and ideas. And neither digital history nor digital literary studies had much in common with the digital new media artists who were also working in this same period and even earlier, grouped around the pioneering journal Ars Electronica.[29] This was a missed opportunity that I believe has hindered Digital Humanities from being more of a big tent and, more importantly, allowing it to become a more robust interdisciplinary force inside the academy and beyond.

In any case my digital history colleagues and I continued to pursue our own digital history work. Roy Rosenzweig, who taught at George Mason University, founded the Center for History and New Media in 1994 a year after the first WBA? CD-ROM appeared. Our two centers next collaborated on several award-winning digital history projects, including the History Matters website mentioned earlier, which made many of the public domain primary source documents presented originally in the WBA? CD-ROM available online. This proved to be a particularly useful and accessible way for teachers at both the high school and college levels to expose their students to a rich array of primary historical sources. And, following the September 11, 2001 terrorist attacks in New York and Washington, DC, our two centers were invited by the Sloan Foundation to collaborate on the development of the September 11 Digital Archive (9/11DA). As Josh Brown and I argued in an article on the creation of the 9/11DA, September 11th was “the first truly digital event of world historical importance: a significant part of its historical record—from e-mail to photography to audio to video—was expressed, captured, disseminated, or viewed in (or converted to) digital forms and formats” (Brier and Brown 2011, 101). It was also one of the first digital projects to be largely “crowdsourced,” given our open solicitation of ordinary people’s digital reminiscences, photos, and videos of the events of September 11th and its aftermath. As historians faced with the task of conceiving and building a brand new digital archive from scratch that focused on a single world historical event, we were also forced to take on additional roles as archivists and preservationists, something we had previously and happily left to professional librarians. 
We had to make judgments about what to include and exclude in the 9/11 archive, how and whether to display it online, and how to contextualize those resources. When voluntary online digital submissions of materials by individuals proved insufficient to offer a fully rounded picture of what happened, we also had to target particular groups (including Muslims, Latinos, and the Chinese community in lower Manhattan) with special outreach efforts in order to include their collective and individual stories and memories in the 9/11DA. Our prior work in and long-term engagement with public history proved essential in this process. We ended up putting the archive online as we were building it, getting the initial iteration of the site up on the web in January 2002, well before the lion’s share of individual digital submissions started pouring in. The body of digital materials that came to constitute the September 11 Digital Archive ultimately totaled nearly a quarter million discrete digital items, making it one of the largest and most comprehensive digital repositories of materials on the September 11 attacks.[30]

While literary scholars confront similar issues of preserving and providing access to the materials they present in digital archives, they usually have the good fortune to rely on extant and often far more circumscribed print sources as the primary materials they digitize, annotate, and present to fellow scholars and the general public. Public historians who collect digital historical data to capture the recent past or even the present, as we were forced to do in the September 11 Digital Archive, do not have the luxury of basing our work on a settled corpus of information. We also faced the extremely delicate task of putting contemporary people’s voices online, making their deepest and most painful personal insights and feelings available to a public audience. Being custodians of that kind of source material brings special responsibilities and sensitivities that most literary digital humanists do not confront when constructing their digital archives. Our methodologies and larger public imperatives as digital historians are therefore different from those of digital literary scholars. This is especially true of our work on the 9/11DA and on other digital history archiving projects like CHNM’s “Hurricane Digital Memory Bank” (on the devastating 2005 Gulf Coast hurricanes Katrina and Rita) and ASHP’s current CUNY Digital History Archive project. The latter focuses on student and faculty activism across CUNY beginning in the late 1960s and presents historical materials that are deeply personal and politically consequential.[31]

It is important to note that while ASHP continued to collaborate on several ongoing digital history projects with CHNM (headed first by Dan Cohen and Tom Scheinfeldt after Roy’s death in 2007, and, since 2013, by Stephen Robertson), the two centers have moved in different directions in terms of doing digital history. CHNM’s efforts have focused largely on the development of important digital software tools. CHNM’s Zotero, for example, is used to help scholars manage their research sources, while its Omeka software offers a platform for publishing online collections and exhibitions. CHNM has also established a strong and direct connection to the Digital Humanities field, especially through its THATCamps, which are participant-directed digital skills workshops and meetings.[32] On the other hand, ASHP has stayed closer to its original purpose of developing a range of well curated and pedagogically appropriate multimedia historical source materials for use by teachers and students at both the high school and college levels, intended to help them understand and learn about the past. Emblematic of ASHP’s continuing work are The Lost Museum: Exploring Antebellum American Life and Culture and HERB: Social History for Every Classroom websites as well as Mission US, an adventure-style online series of games in which younger players take on the role of young people during critical moments in US history.[33]

From ASHP to ITP and the Digital Humanities

I moved on in my own academic career after formally leaving ASHP as its executive director in 1998, though I remained actively involved in a number of ongoing ASHP digital projects. These included the development of a second WBA? CD-ROM, covering the years from 1914 to 1946, which was published in 2001 (ASHP 2001) and is still available, as well as the aforementioned 9/11 Digital Archive and the CUNY Digital History Archive. As I morphed over three decades from analog media producer, to digital media producer, to digital archivist/digital historian, I became keenly aware of the need to extend the lessons of the public and digital history movements I helped to build to my own and my graduate students’ classroom practices. That was what drove me to develop the Interactive Technology and Pedagogy (ITP) certificate program at the CUNY Graduate Center in 2002. My goal was to teach graduate students that digital tools offered real promise beyond the restricted confines of research in a single academic field: they could help us reimagine and reshape college classrooms and the entire teaching and learning experience, as my ASHP colleagues and I began doing more than 30 years ago with the Who Built America? education program. I always tell ITP students that I take the “P” in our name (“Pedagogy”) as seriously as I take the “T” (“Technology”) as a way to indicate the centrality of teaching and learning to the way the certificate program was conceived and has operated. I have coordinated ITP for almost 15 years now and will be stepping down as coordinator at the end of the spring 2017 term.
I believe that the program has contributed as much to digital pedagogy and to the Digital Humanities as anything else I’ve been involved in, not only at the CUNY Graduate Center where I have been fortunate to have labored for almost all of my academic career, but also in the City University of New York as a whole.[34] One of the ITP program’s most important and ongoing contributions to the Digital Humanities and digital pedagogy fields has been the founding in 2011 of the online Journal of Interactive Technology and Pedagogy, which is produced twice-yearly and is directed by an editorial collective of digital scholars and digital pedagogues, including faculty, graduate students, and library staff.

Working with faculty colleagues like Matt Gold, Carlos Hernandez, Kimon Keramidas, Michael Mandiberg, and Maura Smale, with many highly motivated and skilled graduate students (too numerous to name here), and committed digital administrators and leaders like Luke Waltzer, Lisa Brundage, and Boone Gorges, as well as my ongoing work with long-time ASHP colleagues and comrades Josh Brown, Pennee Bender, Andrea Ades Vasquez, and Ellen Noonan, I have been blessed with opportunities to help create a robust community of digital practice at the Graduate Center and across CUNY. This community of scholars and digital practitioners has helped develop a progressive vision of digital technology and digital pedagogy that I believe can serve as a model for Digital Humanities work in the future. Though far from where I began forty years ago as a doctoral student with an IBM 360 computer and a stack of Fortran cards, my ongoing digital work at CUNY seems to me to be the logical and appropriate culmination of a career that has spanned many identities, including as a social and labor historian, public historian, digital historian, digital producer, and, finally, as a digital pedagogue who has made what I hope has been a modest contribution to the evolution and maturation of the field of Digital Humanities.

Notes

[1] Busa, an Italian Jesuit priest, traveled to New York City in 1949 and convinced IBM founder Thomas Watson to let him use IBM’s mainframe computer to generate a concordance of St. Thomas Aquinas’s writing, Busa’s life work. The best book on the key role of Father Busa is Steven E. Jones. 2016. Roberto Busa, S.J., and The Emergence of Humanities Computing: The Priest and the Punched Cards. New York: Routledge. Geoffrey Rockwell argues that an alternative to starting the history of DH with Busa is to look to the work of linguists who constructed word frequency counts and concordances as early as 1948 using simulations of computers (Rockwell 2007). Willard McCarty, one of the founders of humanities computing, has recently suggested that we could probably trace DH’s origins all the way back to Alan Turing’s “Machine” in the 1930s and 1940s. See McCarty, Willard. 2013. “What does Turing have to do with Busa?” Keynote for ACRH-3, Sofia Bulgaria, December 12. http://www.mccarty.org.uk/essays/McCarty,%20Turing%20and%20Busa.pdf.

[2] The origins of the TEI are described at http://www.tei-c.org/About/history.xml.

[3] See especially the following contributions on DH’s origins in Debates in the Digital Humanities: Matthew Kirschenbaum’s “What is DH and What’s It Doing in English Departments?” http://dhdebates.gc.cuny.edu/debates/text/38; and Steven E. Jones’s “The Emergence of the Digital Humanities (as the Network Is Everting)” http://dhdebates.gc.cuny.edu/debates/text/52. Kenneth M. Price and Ray Siemens reproduce a similar chronology of the literary origins of DH in their 2013 introduction to Literary Studies in the Digital Age (https://dlsanthology.commons.mla.org/introduction/). Willard McCarty is apparently working on his own history of literary computing from Busa to 1991. It is interesting to note, on the other hand, that Franco Moretti, a literary scholar, a key player in DH, and author of one of the field’s foundational texts, Graphs, Maps, Trees: Abstract Models for Literary History, readily acknowledges that academic work in quantitative history (which I discuss later in this essay) helped shape his important concept of “distant reading” (Moretti 2005, 1-30). Distant reading is a fundamental DH methodology at the core of digital literary studies.

[4] I am obviously not tilling this ground alone. There are several major projects underway to dig out the origins/history of Digital Humanities. One of the most promising is the efforts of Julianne Nyhan and her colleagues at the Department of Information Studies, University College London. Their “Hidden Histories: Computing and the Humanities c.1949-1980” project is based on a series of more than 40 oral history interviews with early DH practitioners with the intention of developing a deeper historical understanding of the disciplinary and interdisciplinary starting and continuation points of DH (Nyhan, et al. 2015; Nyhan and Flinn 2016).

[5] My colleague Michael Mandiberg has astutely noted that DH has other important origins and early influences besides literary studies and history. He suggests that DH “has been retracing the steps of new media art,” evidenced by the founding of Ars Electronica in 1979. https://www.aec.at/about/en/geschichte/.

[6] One of the pioneers of this new social history methodology, the Philadelphia Social History Project, based at the University of Pennsylvania, employed early mainframe computers in the late 1970s to create relational databases of historical information about the residents of Philadelphia (Thomas 2004).

[7] Radical History Review 25 (Winter 1980-81). The RHR issue had two other co-editors: Robert Entenmann and Warren Goldstein.

[8] The Presenting the Past collection included essays by Mike Wallace, Michael Frisch, and Roy Rosenzweig analyzing how historical consciousness has been constructed by history museums and mainstream historical publications, as well as essays by Linda Shopes, James Green, and Jeremy Brecher on how local groups in Baltimore, Boston, and in Connecticut’s Brass Valley created alternative ways and formats to understand and present their community’s history of oppositional struggles.

[9] Roy founded CHNM in 1994. The center was appropriately named for him following his death in 2007.

[10] A much-expanded version of Robertson’s original blog post appeared in the 2016 edition of Debates in the Digital Humanities (Gold and Klein 2016): http://dhdebates.gc.cuny.edu/debates/text/76.

[11] A useful introduction to quantification in history can be found at “What Is Quantitative History?” on the History Matters website: http://historymatters.gmu.edu/mse/numbers/what.html. Historian Cameron Blevins also discusses the origins of quantitative history in his essay in Debates in the Digital Humanities 2016: http://dhdebates.gc.cuny.edu/debates/text/77.

[12] Carl Bridenbaugh, a traditional historian of colonial American history, sharply attacked those who would “worship at the shrine of the Bitch goddess QUANTIFICATION” (quoted in Novick 1988, 383–84; capitalization in the original).

[13] I devoted a chapter of my dissertation to a critique of Thernstrom’s conclusions in Poverty and Progress and subsequent publications about the political impact of a large “floating proletariat” on working-class social mobility in US history, which he concluded served to undercut working-class consciousness. My dissertation argued otherwise.

[14] Thernstrom had been teaching at UCLA, where I first encountered him while working on my doctorate. He departed for Harvard in 1973 just in time for Roy Rosenzweig to become one of his doctoral students. Roy completed his dissertation in 1978 on workers in Worcester, Massachusetts, which incorporated little of Thernstrom’s quantitative methodology, but instead employed much of Herbert Gutman’s social and labor history approach. See Rosenzweig, Roy. 1985. Eight Hours for What We Will: Workers and Leisure in an Industrial City, 1870-1920. New York: Cambridge Univ. Press.

[15] Peter Passell, a Columbia economist, in a review of Time on the Cross, declared: “If a more important book about American history has been published in the last decade, I don’t know about it” (Passell 1974). The authors, Passell concluded, “have with one stroke turned around a whole field of interpretation and exposed the frailty of history done without science.”

[16] The strikes were detailed in the third and tenth printed annual reports of the US Commissioner of Labor. U.S. Commissioner of Labor, Third Annual Report. . .1887: Strikes and Lockouts (Washington D.C.: U.S. GPO, 1888); U.S. Commissioner of Labor, Tenth Annual Report. . .1894: Strikes and Lockouts (Washington D.C.: U.S. GPO, 1896).

[17] UCLA was one of the first campuses on the West Coast to develop a computer center, growing out of its early ARPANET involvement. With Stanford, UCLA had participated in the first host-to-host computer connection on ARPANET in October 1969. See http://internetstudies.ucla.edu/. I have no idea what model number of IBM 360 UCLA was using in 1975, but it may well have been the last in the line, the Model 195. See http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_FS360.html. See also Roy Rosenzweig’s (1998) important review essay on the history of the Internet, “Wizards, Bureaucrats, Warriors, and Hackers: Writing the History of the Internet”: http://rrchnm.org/essay/wizards-bureaucrats-warriors-hackers-writing-the-history-of-the-internet/.

[18] Melissa Terras and Julianne Nyhan, in an essay in Debates in the Digital Humanities 2016, tell a similar story about the unknown female keypunch operators Father Busa employed. http://dhdebates.gc.cuny.edu/debates/text/57.

[19] These included regression analyses, standard deviations, and F- and t-tests of variance.

[20] In a short blog post, Ramsay argued that DHers needed to “make things,” to learn how to code to really consider themselves DHers; it caused quite a flap. See Ramsay, Stephen. 2011. “Who’s In and Who’s Out.” Stephen Ramsay Blog. http://stephenramsay.us/text/2011/01/08/whos-in-and-whos-out/.

[21] The 1977 article was reprinted in Rabb, Theodore and Robert Rotberg, eds. 1981. Industrialization and Urbanization: Studies in Interdisciplinary History. Princeton, NJ: Princeton University Press, and in excerpted form in Brenner, A., B. Day and M. Ness, eds. 2009. The Encyclopedia of Strikes in American History. Armonk, NY: M.E. Sharpe. One of the deans of U.S. labor history, David Montgomery, referenced our data and article and employed a similar set of statistical measures in his important article on nineteenth-century US strikes: Montgomery, David. 1980. “Strikes in Nineteenth-Century America.” Social Science History 4: 91-93.

[22] I continued to serve as ASHP’s executive director until 1998, when my shoes were ably filled by my long-time ASHP colleague, Joshua Brown, who continues to head the project to this day. I went on to serve as a senior administrator (Associate Provost and then Vice President) at the Graduate Center until 2009, when I resumed my faculty duties there.

[23] I needed special permission from our funder, the Ford Foundation, to spend ten thousand dollars of our grant to buy four Kaypro II computers (running the CP/M operating system and the WordStar word processing program) on which the entire first volume of WBA? was produced. I keep my old Kaypro II, a 30-pound “luggable,” and a large box of 5.25” floppy disks to show my students what early personal computers looked and felt like. My fascination with and desire to hold on to older forms of technology (I also drive a fully restored 1972 Oldsmobile Cutlass Supreme) apparently resonates with contemporary efforts to develop an archeology of older media formats and machines at places like the Media Archaeology Laboratory at the University of Colorado. See http://mediaarchaeologylab.com/.

[24] This decision to formally establish ASHP as part of the CUNY Graduate Center proved particularly important, given Herb Gutman’s untimely death in 1985 at age 56. ASHP became part of the Center for Media and Learning (CML) that we founded at CUNY in 1990, which has also provided the institutional home for the Graduate Center’s New Media Lab (NML), which I co-founded in 1998 and continue to co-direct. The NML operates under the aegis of the CML.

[25] I recounted Roy’s and my visit in 1989 to a Washington, DC trade show of computer-controlled training modules and programs in my tribute to him after his death in 2007. See http://thanksroy.org/items/show/501.

[26] Because the first WBA? CD-ROM was produced for earlier Mac (OS 9) and PC (Windows 95) operating systems, it is no longer playable on current computer systems, yet another orphaned piece of digital technology in a rapidly evolving computing landscape.

[27] Michael Meyer, “Putting the ‘PC’ in PCs,” Newsweek (February 20, 1995): 46; Jeffrey A. Trachtenberg, “U.S. History on a CD-ROM Stirs Up a Storm,” Wall Street Journal (February 10, 1995): B1-B2; and Juan Gonzalez, “Apple’s Big Byte Out of History,” New York Daily News (February 8, 1995): 10. We managed to fend off the right-wing attack with what was then an unheard-of barrage of email messages that we were able to generate from librarians and school teachers all over the world. It’s important to recall that email was still a relatively new technology for most users in 1995 (AOL, Prodigy, and CompuServe only began offering widespread Internet access around that time). The librarians emailed Apple in droves, convincing the company that unless it kept the WBA? CD-ROM in its education packs, the librarians would be unable to recommend future purchases of Apple computers for their schools. After a panel of unnamed educators endorsed the value of the WBA? CD-ROM, Apple resumed distributing the disk in its education bundles for another year, bringing the total number of distributed WBA? CD-ROMs to almost 100,000 copies.

[28] I appropriated the “premature” phrase and explained its historical origins in the mid-1930s fight against fascism in a footnote to my article, “Where’s the Pedagogy? The Role of Teaching and Learning in the Digital Humanities” (Gold 2012, fn12). The standard work on digital history is Dan Cohen and Roy Rosenzweig. 2005. Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web. Philadelphia: University of Pennsylvania Press.

[29] Lev Manovich (2001) in The Language of New Media notes that artists began using digital technology during the 1990s to extend and enhance their work, a key moment in what he describes as “the computerization of culture” (221).

[30] It remains, to this day, among the top 15 of the nearly 200 million results returned by a Google search for “September 11.”

[31] See CHNM’s Sheila Brennan and Mills Kelly’s essay on the Hurricane Digital Memory Bank, “Why Collecting History Online is Web 1.5,” on the CHNM website at http://chnm.gmu.edu/essays-on-history-new-media/essays/?essayid=47. The initial online iteration of the CUNY Digital History Archive can be found at http://cdha.cuny.edu/.

[32] Descriptions and details about CHNM’s various projects described here can be found at http://chnm.gmu.edu/.

[33] Descriptions and details about ASHP’s various projects described here can be found on the ASHP website: http://ashp.cuny.edu/.

[34] My contribution to the 2012 edition of Debates in the Digital Humanities was an article entitled “Where’s the Pedagogy? The Role of Teaching and Learning in the Digital Humanities,” which argued that DHers need to pay more attention to pedagogy in their work. http://dhdebates.gc.cuny.edu/debates/text/8.

Bibliography

American Social History Project. 1990, 1992. Who Built America? Working People and the Nation’s Economy, Politics, Culture, and Society. New York: Pantheon.

———. 1993. Who Built America? From the Centennial Celebration of 1876 to the Great War of 1914 (CD-ROM). Santa Monica, CA: Voyager Co.

———. 2001. Who Built America? From the Great War of 1914 to the Dawn of the Atomic Age (CD-ROM). New York: Worth Publishers.

American Social History Project and Center for History and New Media. 1998. History Matters: The U.S. History Survey on the Web. http://historymatters.gmu.edu.

Amsden, Jon and Stephen Brier. 1977. “Coal Miners on Strike: The Transformation of Strike Demands and the Formation of a National Union.” The Journal of Interdisciplinary History 8: 583–616.

Aptheker, Herbert. 1943 (1963). American Negro Slave Revolts. New York: International Publishers.

Benson, Susan Porter, Stephen Brier, and Roy Rosenzweig. 1986. Presenting the Past: Essays on History and the Public. Philadelphia: Temple University Press.

Brier, Stephen. 1992. “‘The Most Persistent Unionists’: Class Formation and Class Conflict in the Coal Fields and the Emergence of Interracial and Interethnic Unionism, 1880 –1904.” PhD diss., UCLA.

Brier, Stephen and Joshua Brown. 2011. “The September 11 Digital Archive: Saving the Histories of September 11, 2001.” Radical History Review 111 (Fall 2011): 101-09.

Fogel, Robert William and Stanley L. Engerman. 1974. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown and Company.

Gold, Matthew, ed. 2012. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.

Gold, Matthew and Lauren Klein, eds. 2016. Debates in the Digital Humanities 2016. Minneapolis: University of Minnesota Press.

Graham, S., I. Milligan, and S. Weingart. 2015. “Early Emergences: Father Busa, Humanities Computing, and the Emergence of the Digital Humanities.” The Historian’s Macroscope: Big Digital History. http://www.themacroscope.org/?page_id=601.

Gutman, Herbert. 1975. Slavery and the Numbers Game: A Critique of Time on the Cross. Urbana, IL: University of Illinois Press.

Hockey, Susan. 2004. “The History of Humanities Computing.” In A Companion to Digital Humanities, edited by Susan Schreibman, Ray Siemens, and John Unsworth. Oxford: Blackwell. http://www.digitalhumanities.org/companion/view?docId=blackwell/9781405103213/9781405103213.xml&chunk.id=ss1-2-1.

Lubar, Steven. 1992. “‘Do Not Fold, Spindle or Mutilate’: A Cultural History of the Punch Card.” Journal of American Culture 15: 43–55.

Manovich, Lev. 2001. The Language of New Media. Cambridge: The MIT Press.

McCarty, Willard. 2011. “Beyond Chronology and Profession: Discovering How to Write a History of the Digital Humanities.” Willard McCarty web page. University College London. http://www.mccarty.org.uk/essays/McCarty,%20Beyond%20chronology%20and%20profession.pdf.

McGann, Jerome. 1995. “The Rationale of Hypertext.” http://www2.iath.virginia.edu/public/jjm2f/rationale.html.

Moretti, Franco. 2005. Graphs, Maps, Trees: Abstract Models for a Literary History. Brooklyn, NY: Verso.

Noiret, Serge. 2012 [2015]. “Digital History: The New Craft of (Public) Historians.” http://dph.hypotheses.org/14.

Novick, Peter. 1988. That Noble Dream: The ‘Objectivity Question’ and the American Historical Profession. New York: Cambridge Univ. Press.

Nyhan, Julianne, Andrew Flinn, and Anne Welsh. 2015. “Oral History and the Hidden Histories Project: Towards Histories of Computing in the Humanities.” Digital Scholarship in the Humanities 30: 71–85. http://dsh.oxfordjournals.org/content/30/1/71/.

Nyhan, Julianne and Andrew Flinn. 2016. Computation and the Humanities: Towards an Oral History of Digital Humanities. Cham, Switzerland: Springer Open. http://link.springer.com/book/10.1007%2F978-3-319-20170-2.

Passell, Peter. 1974. “An Economic Analysis of that Peculiarly Economic Institution.” New York Times. April 28. http://www.nytimes.com/1974/04/28/archives/an-economic-analysis-of-that-peculiarly-economic-institution-vol-ii.html.

Robertson, Stephen. 2014. “The Differences between Digital History and Digital Humanities.” Stephen Robertson’s Blog. May 23. https://drstephenrobertson.com/blog-post/the-differences-between-digital-history-and-digital-humanities/.

Rockwell, Geoffrey. 2007. “An Alternate Beginning to Humanities Computing.” Geoffrey Rockwell’s Research Blog. May 2. http://theoreti.ca/?p=1608.

Rosenzweig, Roy. 1998. “Wizards, Bureaucrats, Warriors, and Hackers: Writing the History of the Internet.” American Historical Review 103: 1530-52. http://rrchnm.org/essay/wizards-bureaucrats-warriors-hackers-writing-the-history-of-the-internet/

Shorter, Edward. 1971. The Historian and the Computer: A Practical Guide. Englewood Cliffs, NJ: Prentice-Hall.

Spiro, Lisa. 2012. “‘This is Why We Fight’: Defining the Values of the Digital Humanities.” In Debates in the Digital Humanities, edited by Matthew Gold. Minneapolis: University of Minnesota Press. http://dhdebates.gc.cuny.edu/debates/text/13.

Stampp, Kenneth. 1956 (1967). The Peculiar Institution: Slavery in the Ante-Bellum South. New York: Knopf.

Thernstrom, Stephan. 1964. Poverty and Progress: Social Mobility in a Nineteenth Century City. Cambridge: Harvard University Press.

Thomas, William G., II. 2004. “Computing and the Historical Imagination.” In A Companion to Digital Humanities, edited by Susan Schreibman, Ray Siemens, and John Unsworth. Oxford: Blackwell.

Woodward, C. Vann. 1974. “The Jolly Institution.” New York Review of Books. May 2.

Acknowledgments

The author thanks Jon Amsden, Josh Brown, Matt Gold, Steven Lubar, Michael Mandiberg, Julianne Nyhan, Stephen Robertson, and Luke Waltzer for helpful comments and suggestions on an earlier draft of this essay.

About the Author

Stephen Brier is a social and labor historian and educational technologist who teaches in the PhD program in Urban Education and is the founder and coordinator of the Interactive Technology and Pedagogy doctoral certificate program, both at the CUNY Graduate Center. He served for eighteen years as the founding director of the American Social History Project/Center for Media and Learning and as a senior administrator for eleven years at the Graduate Center. Brier helped launch the Journal of Interactive Technology and Pedagogy in 2011 and served as a member of the journal’s editorial collective until 2017.

3

A Survey of Digital Humanities Programs

Abstract

The number of digital humanities programs has risen steadily since 2008, adding capacity to the field. But what kind of capacity, and in what areas? This paper presents a survey of DH programs in the Anglophone world (Australia, Canada, Ireland, the United Kingdom, and the United States), including degrees, certificates, and formalized minors, concentrations, and specializations. By analyzing the location, structure, and disciplinarity of these programs, we examine the larger picture of DH, at least insofar as it is represented to prospective students and cultivated through required coursework. We also explore the activities that make up these programs, which speak to the broader skills and methods at play in the field, as well as some important silences. These findings provide some empirical perspective on debates about teaching DH, particularly the attention paid to theory and critical reflection. Finally, we compare our results (where possible) to information on European programs to consider areas of similarity and difference, and sketch a broader picture of digital humanities.

Introduction

Much has been written about what lies inside (and outside) the digital humanities (DH). A fitting example might be the annual Day of DH, when hundreds of “DHers” (digital humanists) write about what they do and how they define the field (see https://twitter.com/dayofdh). Read enough of their stories and certain themes and patterns may emerge, but difference and pluralism will abound. More formal attempts to define the field are not hard to find—there is an entire anthology devoted to the subject (Terras, Nyhan, and Vanhoutte 2013)—and others have approached DH by studying its locations (Zorich 2008; Prescott 2016), its members (Grandjean 2014a, 2014b, 2015), their communication patterns (Ross et al. 2011; Quan-Haase, Martin, and McCay-Peet 2015), conference submissions (Weingart 2016), and so forth.

A small but important subset of research looks at teaching and learning as a lens through which to view the field. Existing studies have examined course syllabi (Terras 2006; Spiro 2011) and the development of specific programs and curricula (Rockwell 1999; Siemens 2001; Sinclair 2001; Unsworth 2001; Unsworth and Butler 2001; Drucker, Unsworth, and Laue 2002; Sinclair & Gouglas 2002; McCarty 2012; Smith 2014). In addition, there are pedagogical discussions about what should be taught in DH (Hockey 1986, 2001; Mahony & Pierazzo 2012; Clement 2012) and its broader relationship to technology, the humanities, and higher education (Brier 2012; Liu 2012; Waltzer 2012).

This study adds to the literature on teaching and learning by presenting a survey of existing degree and certificate programs in DH. While these programs are only part of the activities that make up the broader world of DH, they provide a formal view of training in the field and, by extension, of the field itself. Additionally, they reflect the public face of DH at their institutions, both to potential students and to faculty and administrators outside of DH. By studying the requirements of these programs (especially required coursework), we explore the activities that make up DH, at least to the extent that they are systematically taught and represented to students during admissions and recruitment, as well as where DH programs position themselves within and across the subject boundaries of their institutions. These activities speak to broader skills and methods at play in DH, as well as some important silences. They also provide an empirical perspective on pedagogical debates, particularly the attention paid to theory and critical reflection.

Background

Melissa Terras (2006) was the first to point to the utility of education studies in approaching the digital humanities (or what she then called “humanities computing”). In the broadest sense, Terras distinguishes between subjects, which are usually associated with academic departments and defined by “a set of core theories and techniques to be taught” (230), and disciplines, which lack departmental status yet still have their own identities, cultural attributes, communities of practice, heroes, idols, and mythology. After analyzing four university courses in humanities computing, Terras examines other aspects of the community such as its associations, journals, discussion groups, and conference submissions. She concludes that humanities computing is a discipline, although not yet a subject: “the community exists, and functions, and has found a way to continue disseminating its knowledge and encouraging others into the community without the institutionalization of the subject” (242). Terras notes that humanities computing scholars, lacking prescribed activities, have freedom in developing their own research and career paths. She remains curious, however, about the “hidden curriculum” of the field at a time when few formal programs yet existed.

Following Terras, Lisa Spiro (2011) takes up this study of the “hidden curriculum” by collecting and analyzing 134 English-language syllabi from DH courses offered between 2006 and 2011. While some of these courses were offered in DH departments (16, 11.9%), most were drawn from other disciplines, including English, history, media studies, interdisciplinary studies, library and information science, computer science, rhetoric and composition, visual studies, communication, anthropology, and philosophy. Classics, linguistics, and other languages were missing. Spiro analyzes the assignments, readings, media types, key concepts, and technologies covered in these courses, finding (among other things) that DH courses often link theory to practice; involve collaborative work on projects; engage in social media such as blogging or Twitter; focus not only on text but also on video, audio, images, games, maps, simulation, and 3D modeling; and reflect contemporary issues such as data and databases, openness and copyright, networks and networking, and interaction. Finally, Spiro presents a list of terms she expected to see more often in these syllabi, including “argument,” “statistics,” “programming,” “representation,” “interpretation,” “accessibility,” “sustainability,” and “algorithmic.”

These two studies form the broad picture of DH education. More recent studies have taken up DH teaching and learning within particular contexts, such as community colleges (McGrail 2016), colleges of liberal arts and science (Alexander & Davis 2012; Buurma & Levine 2016), graduate education (Selisker 2016), libraries (Rosenblum et al. 2016; Varner 2016; Vedantham & Porter 2016) and library and information science education (Senchyne 2016), and the public sphere (Brennan 2016; Hsu 2016). These accounts stress common structural challenges and opportunities across these contexts. In particular, many underscore assumptions made about and within DH, including access to technology, institutional resources, and background literacies. In addition, many activities in these contexts fall outside of formal degrees and programs or even classroom learning, demonstrating the variety of spaces in which DH may be taught and trained.

Other accounts have drawn the deep picture of DH education by examining the development of programs and courses at specific institutions, such as McMaster University (Rockwell 1999), University of Virginia (Unsworth 2001; Unsworth and Butler 2001; Drucker, Unsworth, and Laue 2002), University of Alberta (Sinclair & Gouglas 2002), King’s College London (McCarty 2012), and Wilfrid Laurier University (Smith 2014), among others. Abstracts from “The Humanities Computing Curriculum / The Computing Curriculum in the Arts and Humanities” Conference in 2001 contain references to various institutions (Siemens 2001), as does a subsequent report on the conference (Sinclair 2001). Not surprisingly, these accounts often focus on the histories and peculiarities of each institution, a “localization” that Knight (2011) regards as necessary in DH.

Our study takes a program-based approach to studying teaching and learning in DH. While formal programs represent only a portion of the entire DH curricula, they are important in several respects: First, they reflect intentional groupings of courses, concepts, skills, methods, techniques, and so on. As such, they purport to represent the field in its broadest strokes rather than more specialized portions of it (with the exception of programs offered in specific areas, such as book history and DH). Second, these programs, under the aegis of awarding institutions and larger accrediting bodies, are responsible for declaring explicit learning outcomes of their graduates, often including required courses. These requirements form one picture of what all DHers are expected to know upon graduation (at a certain level), and this changing spectrum of competencies presumably reflects corresponding changes in the field over time. Third, formal DH programs organize teaching, research, and professional development in the field; they are channels through which material and symbolic capital flow, making them responsible, in no small part, for shaping the field itself. Finally, these programs, their requirements, and coursework are one way—perhaps the primary way—in which prospective students encounter the field and make choices about whether to enroll in a DH program and, if so, which one. These programs are also consulted by faculty and administrators developing new programs at their own institutions, both for common competencies and for distinguishing features of particular programs.

In addition to helping define the field, a study of formal DH programs also contributes to the dialogue around pedagogy in the field. Hockey, for example, has long wondered whether programming should be taught (1986) and asks, “How far can the need for analytical and critical thinking in the humanities be reconciled with the practical orientation of much work in humanities computing?” (2001). Also skeptical of mere technological skills, Simon Mahony and Elena Pierazzo (2012) argue for teaching methodologies or “ways of thinking” in DH. Tanya Clement examines multiliteracies in DH (e.g., critical thinking, commitment, community, and play), which help to push the field beyond “training” to “a pursuit that enables all students to ask valuable and productive questions that make for ‘a life worth living’” (2012, 372).

Others have called on DH to engage more fully in critical reflection, especially in relation to technology and the role of the humanities in higher education. Alan Liu notes that much DH work has failed to consider “the relation of the whole digital juggernaut to the new world order,” eschewing even clichéd topics such as “the digital divide,” “surveillance,” “privacy,” and “copyright” (2012, 491). Steve Brier (2012) points out that teaching and learning are an afterthought for many DHers, a lacuna that misses the radical potential of DH for transforming teaching and professional development. Luke Waltzer (2012) observes that DH has done little to help protect and reconceptualize the role of the humanities in higher education, long under threat from austerity measures and perceived uselessness in the neoliberal academy (Mowitt 2012).

These and other concerns point to longstanding questions about the proper balance of technological skills and critical reflection in DH. While a study of existing DH programs cannot address the value of critical reflection, it can report on the presence (or absence) of such reflection in required coursework and program outcomes. Thus, it is part of a critical reflection on the field as it stands now, how it is taught to current students, and how such training will shape the future of the field. It can also speak to common learning experiences within DH (e.g., fieldwork, capstones), as well as disciplinary connections, particularly in program electives. These findings, together with our more general findings about DH activities, give us pause to consider what is represented in, emphasized by, and omitted from the field at its most explicit levels of educational training.

Methods

This study involved collection of data about DH programs, coding descriptions of programs and courses using a controlled vocabulary, and analysis and visualization.

Data Collection

We compiled a list of 37 DH programs active in 2015 (see Appendix A), drawn from listings in the field (UCLA Center for Digital Humanities 2015; Clement 2015), background literature, and web searches (e.g., “digital humanities masters”). In addition to degrees and certificates, we included minors and concentrations that have formal requirements and coursework, since these programs can be seen as co-issuing degrees with major areas of study and as inflecting those areas in significant ways. We did not include digital arts or emerging media programs in which humanities content was not the central focus of inquiry. In a few cases, the listings or literature mentioned programs that could not be found online, but we determined that these were not extant programs—some were initiatives or centers misdescribed, others were programs in planning or simply collections of courses with no formal requirements—and thus fell outside the scope of this study. We also asked for the names of additional programs at a conference presentation, in personal emails, and on Twitter. Because our sources and searches are all in English, the list of programs we compiled consists entirely of programs taught in Anglophone countries, which limits what we can say about global DH.

For each program, we made a PDF of the webpage on which its description appears, along with a plain text file of the description. We recorded the URL of each program and information about its title; description; institution; school, division, or department; level (graduate or undergraduate); type (degree or otherwise); year founded; curriculum (total credits, number and list of required and elective courses); and references to independent research, fieldwork, and final deliverables. After identifying any required courses for each program, we looked up descriptions of those courses in the institution’s course catalog and recorded them in a spreadsheet.

Coding and Intercoder Agreement

To analyze the topics covered by programs and required courses, we applied the Taxonomy of Digital Research Activities in the Humanities (TaDiRAH 2014a), which attempts to capture the “scholarly primitives” of the field (Perkins et al. 2014). Unsworth (2000) describes these primitives as “basic functions common to scholarly activities across disciplines, over time, and independent of theoretical orientation,” obvious enough to be “self-understood,” and his preliminary list includes ‘Discovering’, ‘Annotating’, ‘Comparing’, ‘Referring’, ‘Sampling’, ‘Illustrating’, and ‘Representing’.

We doubt that any word—or classification system—works in this way. Language is always a reflection of culture and society, and with that come questions of power, discipline/ing, and field background. Moreover, term meaning shifts over time and across locations. Nevertheless, we believe classification schema can be useful in organizing and analyzing information, and that is the spirit in which we employ TaDiRAH here.

TaDiRAH is one of several classification schema in DH and is itself based on three prior sources: the arts-humanities.net taxonomy of DH projects, tools, centers, and other resources; the categories and tags originally used by the DiRT (Digital Research Tools) Directory (2014); and headings from “Doing Digital Humanities,” a Zotero bibliography of DH literature (2014) created by the Digital Research Infrastructure for Arts and Humanities (DARIAH). The TaDiRAH version used in this study (v. 0.5.1) also included two rounds of community feedback and subsequent revisions (Dombrowski and Perkins 2014). TaDiRAH’s controlled vocabulary terms are arranged into three broad categories: activities, objects, and techniques. Only activities terms were used in this study because the other terms lack definitions, making them subject to greater variance in interpretation. TaDiRAH contains forty activities terms organized into eight parent terms (‘Capture’, ‘Creation’, ‘Enrichment’, ‘Analysis’, ‘Interpretation’, ‘Storage’, ‘Dissemination’, and ‘Meta-Activities’).

TaDiRAH was built in conversation with a similar project at DARIAH called the Network for Digital Methods in the Arts and Humanities (NeDiMAH) and later incorporated into that project (2015). NeDiMAH’s Methods Ontology (NeMO) contains 160 activities terms organized into five broad categories (‘Acquiring’, ‘Communicating’, ‘Conceiving’, ‘Processing’, ‘Seeking’) and is often more granular than TaDiRAH (e.g., ‘Curating’, ‘Emulating’, ‘Migrating’, ‘Storing’, and ‘Versioning’ rather than simply ‘Preservation’). While NeMO may have other applications, we believe it is too large to be used in this study. There are many cases in which programs or even course descriptions are not as detailed as NeMO in their language, and even the forty-eight TaDiRAH terms proved difficult to apply because of their number and complexity. In addition, TaDiRAH has been applied in DARIAH’s DH Course Registry of European programs, permitting some comparisons between those programs and the ones studied here.

In this study, a term was applied to a program/course description whenever explicit evidence was found that students completing the program or course would be guaranteed to undertake the activities explicitly described in that term’s definition. In other words, we coded for minimum competencies that someone would have after completing a program or course. The narrowest term was applied whenever possible, and multiple terms could be applied to the same description (and, in most cases, were). For example, a reference to book digitization would be coded as ‘Imaging’:

Imaging refers to the capture of texts, images, artefacts or spatial formations using optical means of capture. Imaging can be made in 2D or 3D, using various means (light, laser, infrared, ultrasound). Imaging usually does not lead to the identification of discrete semantic or structural units in the data, such as words or musical notes, which is something DataRecognition accomplishes. Imaging also includes scanning and digital photography.

If there was further mention of OCR (optical character recognition), that would be coded as ‘DataRecognition’ and so on. To take another example, a reference to visualization and other forms of analysis would be coded both as ‘Visualization’ and as its parent term, ‘Analysis’, if no more specific child terms could be identified.

In some cases, descriptions would provide a broad list of activities happening somewhere across a program or course but not guaranteed for all students completing that program or course (e.g., “Through our practicum component, students can acquire hands-on experience with innovative tools for the computational analysis of cultural texts, and gain exposure to new methods for analyzing social movements and communities enabled by new media networks.”). In these cases, we looked for further evidence before applying a term to that description.

Students may also develop specializations in a variety of areas, but this study is focused on what is learned in common by any student who completes a specific DH program or course; as such, we coded only cases of requirements and common experiences. For the same reason, we coded only required courses, not electives. Finally, we coded programs and required courses separately to analyze whether there was any difference in stated activities at these two levels.

To test intercoder agreement, we selected three program descriptions at random and applied TaDiRAH terms to each. In only a handful of cases did all three of us agree on our term assignments. We attribute this low level of agreement to the large number of activities terms in TaDiRAH, the complexity of program/course descriptions, questions of scope (whether to use a broader or narrower term), and general vagueness. For example, a program description might allude to work with texts at some point, yet not explicitly state text analysis until later, only once, when it is embedded in a list of other examples (e.g., GIS, text mining, network analysis), with a reference to sentiment analysis elsewhere. Since texts could involve digitization, publishing, or other activities, we would not code ‘Text analysis’ immediately, and we would only code it if students were guaranteed exposure to such methods in the program. To complicate matters further, there is no single term for text analysis in TaDiRAH—it spans four terms (‘Content analysis’, ‘Relational analysis’, ‘Structural analysis’, and ‘Stylistic analysis’)—and one coder might apply all four terms, another only some, and the third might use the parent term ‘Analysis’, which also includes spatial analysis, network analysis, and visualization.

Even after reviewing these examples and the definitions of specific TaDiRAH terms, we could not reach a high level of intercoder agreement. However, we did find comparing our term assignments to be useful, and we were able to reach consensus in discussion. Based on this experience, we decided that each of us would code every program/course description and then discuss our codings together until we reached a final agreement. Before starting our preliminary codings, we discussed our understanding of each TaDiRAH term (in case it had not come up already in the exercise). We reviewed our preliminary codings using a visualization showing whether one, two, or three coders applied a term to a program/course description. In an effort to reduce bias, especially framing effects (cognitive biases that result from the order in which information is presented), the visualization did not display who had coded which terms. If two coders agreed on a term, they explained their codings to the third and all three came to an agreement. If only one coder applied a term, the other two explained why they did not code for that term and all three came to an agreement. Put another way, we considered every term that anyone applied, and we considered it under the presumption that it would be applied until proven otherwise. Frequently, our discussions involved pointing to specific locations in the program/course descriptions and referencing TaDiRAH definitions or notes from previous meetings in which interpretations had been discussed.
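
Our reconciliation step can be sketched in a few lines of Python (a minimal sketch; the coder ids and term assignments below are hypothetical examples, not our actual data):

```python
# Sketch of the consensus-review step: pool every term any coder applied
# and count supporters without revealing who applied what (to limit bias).
# Coder ids and terms below are hypothetical examples.

def discussion_queue(codings):
    """codings: dict mapping coder id -> set of TaDiRAH terms applied
    to one program/course description. Returns {term: coder_count},
    anonymized so reviewers see counts but not identities."""
    counts = {}
    for terms in codings.values():
        for term in terms:
            counts[term] = counts.get(term, 0) + 1
    return counts

codings = {
    "coder_a": {"Imaging", "Visualization"},
    "coder_b": {"Imaging", "Programming"},
    "coder_c": {"Imaging"},
}

queue = discussion_queue(codings)
# 'Imaging' was applied by all three coders; 'Visualization' and
# 'Programming' by one coder each, so the other two coders would explain
# why they did not apply them before the group reached agreement.
```

Every term in the queue is discussed under the presumption that it applies until proven otherwise, matching the procedure described above.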

In analyzing our final codings, we used absolute term frequencies (the number of times a term was applied overall) and weighted frequencies (a proxy for relative frequency, used here to characterize individual programs and courses). To compute weighted frequencies, each of the eight parent terms was given a weight of 1, which was divided equally among its subterms. For example, the parent term ‘Dissemination’ has six subterms, so each was assigned an equal weight of one-sixth, whereas ‘Enrichment’ has three subterms, each assigned a weight of one-third. These weights were summed by area to show how much of an area (relatively speaking) is represented in program/course descriptions, regardless of area size. If all the subterms in an area are present, that entire area is present—just as it would be if we had applied only the broader term in the first place. These weighted frequencies are used only where programs are displayed individually.
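
The weighting just described can be sketched as follows (a minimal sketch; the subterm counts for ‘Dissemination’ and ‘Enrichment’ follow the text above, while the coded scenarios are hypothetical):

```python
# Weighted frequencies: each parent term carries total weight 1, divided
# equally among its subterms. Subterm counts follow the text
# ('Dissemination' has six subterms, 'Enrichment' three).
subterm_totals = {"Dissemination": 6, "Enrichment": 3}

def area_weight(parent, num_coded_subterms):
    """Share of a parent area represented by the coded subterms:
    each subterm contributes 1/n, where n is the parent's subterm count."""
    return num_coded_subterms / subterm_totals[parent]

# Two of six 'Dissemination' subterms coded: one-third of that area.
partial = area_weight("Dissemination", 2)
# All three 'Enrichment' subterms coded: the whole area is present (1.0),
# just as if the parent term alone had been applied.
full = area_weight("Enrichment", 3)
```

Summing these per-area scores across a program yields the values displayed in the individual-program heatmaps, with no area able to exceed 1 regardless of how many subterms it has.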

Initially, we had thought about comparing differences in stated activities between programs and required courses. While we found some variations (e.g., a program would be coded for one area of activities but not its courses and vice versa), we also noticed cases in which the language used to describe programs was too vague to code for activities that were borne out in required course descriptions. For this reason and to be as inclusive as possible with our relatively conservative codings, we compared program and course data simultaneously in our final analysis. Future studies may address the way in which program descriptions connect to particular coursework, and articulating such connections may help reveal the ways in which DH is taught (in terms of pedagogy) rather than only its formal structure (as presented here).

Analysis and Visualization

In analyzing program data, we examined the overall character of each program (its title), its structure (whether it grants degrees and, if so, at what level), special requirements (independent study, final deliverables, fieldwork), and its location, both in terms of institutional structure (e.g., departments, labs, centers) and discipline(s). We intended to analyze more thoroughly the number of required courses as compared to electives, the variety of choice students have in electives, and the range of departments in which electives are offered. These comparisons proved difficult: even within an American context, institutions vary in their credit hours and the formality of their requirements (e.g., choosing from a menu of specific electives, as opposed to any course from a department or “with permission”). These inconsistencies multiply greatly in an international context, and so we did not undertake a quantitative study of the number or range of required and elective courses.

Program data and codings were visualized using the free software Tableau Public. All images included in this article are available in a public workbook at https://public.tableau.com/views/DigitalHumanitiesProgramsSurvey/Combined. As we discuss in the final section, we are also building a public-facing version of the data and visualizations, which may be updated by members of the DH community. Thus, the data presented here can and should change over time, making these results only a snapshot of DH in some locations at the present.

Anglophone Programs

The number of DH programs in Anglophone countries has risen sharply over time: the first program in our data dates to 1991, and since 2008 several new programs have been added each year (see Figure 1). This growth speaks to increased capacity in the field, not just by means of centers, journals, conferences, and other professional infrastructure, but also through formal education. Based on informal observation since our data collection ended, we believe this trend continues.

A bar chart showing the number of new Anglophone DH programs each year from 1991 to 2015. A line showing the cumulative total of programs increases sharply at 2008.
Figure 1. Digital humanities programs in our collected data by year established

Program Titles

Most of the programs in our collected data (22, 59%) are titled simply “Digital Humanities,” along with a few variations, such as “Book History and Digital Humanities” and “Digital Humanities Research” (see Figure 2). A handful of programs are named for particular areas of DH or related topics (e.g., “Digital Culture,” “Public Scholarship”), and only a fraction (3 programs, 8%) are called “Humanities Computing.” We did not investigate changes in program names over time, although this might be worthwhile in the future.

A stacked bar chart comparing the titles of Anglophone DH programs. The segments that make up each bar are color coded by degree type (e.g., doctoral, master’s, bachelor’s, certificate, other).
Figure 2. Titles of digital humanities programs in our collected data

Structure

Fewer than half of the DH programs in our collected data grant degrees: some at the bachelor’s (8%) and doctoral (8%) levels, and most at the master’s level (22%) (Figure 3). The majority of DH programs are certificates, minors, specializations, and concentrations—certificates being much more common at the graduate level and accounting for nearly one-third of all programs in our collected data. The handful of doctoral programs are all located in the UK and Ireland.

A stacked bar chart showing the number of Anglophone DH programs at the undergraduate and graduate levels. The segments that make up each bar are color coded by degree type (e.g., doctoral, master’s, bachelor’s, certificate, other).
Figure 3. Digital humanities programs in our collected data (by degree and level)

 

In addition to degree-granting status, we also examined special requirements for the 37 DH programs in our study. Half of those programs require some form of independent research (see Figure 4). All doctoral programs require such research; most master’s programs do as well. Again, we only looked for cases of explicit requirements; it seems likely that research of some variety is conducted within all the programs analyzed here. However, we focus this study on explicit statements of academic activity in order to separate the assumptions of practitioners of DH about its activities from what appears in public-facing descriptions of the field.

Half of DH programs in our collected data require a final deliverable, referred to variously as a capstone, dissertation, portfolio, or thesis (see Figure 5). Again, discrepancies between written and unwritten expectations in degree programs abound—and are certainly not limited to DH—and some programs may not have explicitly stated this requirement, so deliverables may be undercounted. That said, most graduate programs require some kind of final deliverable, and most undergraduate and non-degree-granting programs (e.g., minors, specializations) do not.

Finally, about one-quarter of programs require fieldwork, often in the form of an internship (see Figure 6). This fieldwork requirement is spread across degree types and levels.

A stacked bar chart showing whether Anglophone DH programs require independent research as a part of their degree requirements. The segments that make up each bar are color coded by degree type (e.g., doctoral, master’s, bachelor’s, certificate, other).
Figure 4. Independent research requirements of digital humanities programs in our collected data

 

A stacked bar chart showing the final deliverable requirement (dissertation, portfolio, etc.) of Anglophone DH programs. The segments that make up each bar are color coded by degree type (e.g., doctoral, master’s, bachelor’s, certificate, other).
Figure 5. Final deliverable required by digital humanities programs in our collected data

 

A stacked bar chart showing whether Anglophone DH programs require fieldwork as a part of their degree requirements. The segments that make up each bar are color coded by degree type (e.g., doctoral, master’s, bachelor’s, certificate, other).
Figure 6. Fieldwork requirements of digital humanities programs in our collected data

 

Location and Disciplinarity

About one-third of the DH programs in our dataset are offered outside of academic schools/departments (in centers, initiatives, and, in one case, jointly with the library), and most issue from colleges/schools of arts and humanities (see Figure 7). Although much DH work occurs outside of traditional departments (Zorich 2008), formal training in Anglophone countries remains tied to them. Most DH concentrations and specializations are located within English departments, evidence for Kirschenbaum’s claim that DH’s “professional apparatus…is probably more rooted in English than any other departmental home” (2010, 55).

A bar chart showing location of Anglophone DH programs within an institution (college/school, center, department, etc.)
Figure 7. Institutional location of digital humanities programs in our collected data

The elective courses of DH programs span myriad departments and disciplines. The familiar humanities departments are well represented (art history, classics, history, philosophy, religion, and various languages), along with computer science, design, media, and technology. Several programs include electives drawn from education departments and information and library science. More surprising departments (and courses) include anthropology (“Anthropological Knowledge in the Museum”), geography (“Urban GIS”), political science (“New Media and Politics”), psychology (“Affective Interaction”), sociology (“Social and Historical Study of Information, Software, and Networks”), even criminology (“Cyber Crime”).

The number of electives required by each program and the pool from which they may be drawn vary greatly among programs, and in some cases the pool is so open-ended that it is nearly impossible to document thoroughly. Some programs have no elective courses and focus only on shared, required coursework. Others list dozens of potential elective courses as suggestions rather than an exhaustive list. Because course offerings, especially in cross-disciplinary areas, change from term to term and different courses may be offered under a single, general course listing such as “Special Topics,” the list of elective courses we have collected is only a sample of the types of courses students in DH programs may take, and we do not analyze them quantitatively here.

Theory and Critical Reflection

To analyze the role of theory and critical reflection in DH programs, we focused our analysis on two TaDiRAH terms: ‘Theorizing’,

a method which aims to relate a number of elements or ideas into a coherent system based on some general principles and capable of explaining relevant phenomena or observations. Theorizing relies on techniques such as reasoning, abstract thinking, conceptualizing and defining. A theory may be implemented in the form of a model, or a model may give rise to formulating a theory.

and ‘Meta: GiveOverview’, which

refers to the activity of providing information which is relatively general or provides a historical or systematic overview of a given topic. Nevertheless, it can be aimed at experts or beginners in a field, subfield or specialty.

In most cases, we used ‘Meta: GiveOverview’ to code theoretical or historical introductions to DH itself, though any explicit mention of theory was coded (or also coded) as ‘Theorizing’. We found that all DH programs, whether in program descriptions or required courses, included some mention of theory or historical/systematic overview (see Figure 8).

A table of Anglophone institutions and DH programs showing whether researchers coded ‘Theory’ or ‘GiveOverview’ for the program or required course descriptions.
Figure 8. Theory and critical reflection in digital humanities programs in our collected data

Accordingly, we might say that each program, according to its local interpretation, engages in some type of theoretical or critical reflection. We cannot, of course, say much more about the character of this reflection, whether it is the type of critical reflection called for in the pedagogical literature, or how this reflection interfaces with the teaching of skills and techniques in these programs. We hope someone studies this aspect of programs, but it is also worth noting that only 6 of the 37 programs here were coded for ‘Teaching/Learning’ (see Figure 12). Presumably, most programs do not engage theoretically with issues of pedagogy or the relationship between DH and higher education, commensurate with Brier’s claim that these areas are often overlooked (2012). Such engagement may occur in elective courses or perhaps nowhere in these programs.

European Programs

All of the 37 programs discussed above are located in Anglophone countries, most of them in the United States (22 programs, 60%). We note that TaDiRAH, too, originates in this context, as do our English-language web searches for DH programs. While this data is certainly in dialogue with the many discussions of DH education cited above, it limits what we can say about DH from a global perspective. It is important to understand the various ways DH manifests around the globe, both to raise awareness of these approaches and to compare the ways in which DH education converges and diverges across these contexts. To that end, we gathered existing data on European programs by scraping DARIAH’s Digital Humanities Course Registry (DARIAH-EU 2014a) and consulting the European Association for Digital Humanities’ (EADH) education resources webpage (2016). This DARIAH/EADH data is not intended to stand in for the entirety of global DH, as it looks exclusively at European programs (and even then it is limited in interpretation by our own language barriers). DH is happening outside of this scope (e.g., Gil 2017), and we hope that future initiatives can expand the conversation about DH programs worldwide—possibly as part of our plans for data publication, which we address at the end of this article.

DARIAH’s database lists 102 degree programs, 77 of which were flagged in page markup as “outdated” with the note, “This record has not been revised for a year or longer.” While inspecting DARIAH data, we found 43 programs tagged with TaDiRAH terms, and we eliminated 17 entries that were duplicates, had broken URLs and could not be located through a web search, or appeared to be single courses or events rather than formal programs. We also updated information on a few programs (e.g., specializations classified as degrees). We then added 5 programs listed by EADH but not by DARIAH, for a grand total of 93 European DH programs (only 16 of which were listed jointly by both organizations). We refer to this dataset as “DARIAH/EADH data” in the remainder of this paper. A map of these locations is provided in Figure 9, and the full list of programs considered in this paper is given in the appendices.
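
As a rough sketch, the deduplication and filtering just described might look like the following (the record fields and filter criteria here are illustrative assumptions, not DARIAH’s actual page markup or schema):

```python
# Rough sketch of the registry cleaning described above: drop duplicates,
# broken URLs, and single courses/events, then merge in EADH-only listings.
# Field names ('url', 'kind', 'url_resolves') are hypothetical.

def clean_registry(dariah_records, eadh_records):
    seen, kept = set(), []
    for rec in dariah_records:
        if rec["url"] in seen:                      # duplicate entry
            continue
        if not rec.get("url_resolves", True):       # broken URL, not found via search
            continue
        if rec.get("kind") in {"course", "event"}:  # not a formal program
            continue
        seen.add(rec["url"])
        kept.append(rec)
    for rec in eadh_records:                        # EADH programs not in DARIAH
        if rec["url"] not in seen:
            seen.add(rec["url"])
            kept.append(rec)
    return kept

dariah = [
    {"url": "http://example.edu/a", "kind": "program"},
    {"url": "http://example.edu/a", "kind": "program"},  # duplicate, dropped
    {"url": "http://example.edu/b", "kind": "course"},   # single course, dropped
]
eadh = [{"url": "http://example.edu/c", "kind": "program"}]
cleaned = clean_registry(dariah, eadh)  # keeps /a and /c
```

In practice we also corrected some records by hand (e.g., specializations classified as degrees), a step not captured in this sketch.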

A map of Europe showing the number of DH programs in each country, based on DARIAH/EADH listings.
Figure 9. Geographic location of programs in DARIAH/EADH data

 

The DARIAH/EADH data lists 93 programs spread across parts of Europe, with the highest concentration (33%) in Germany (see Table 1). We caution here and in subsequent discussions that DARIAH and EADH may not have applied the same criteria for including programs as we did in our data collection, so results are not directly comparable. Some programs in informatics or data asset management might have been ruled out using our data collection methods, which were focused on humanities content.

Table 1. Summary of programs included in our collected data and DARIAH/EADH data

Country           Our collected data, N (%)   DARIAH/EADH data, N (%)
Australia         1 (3%)                      –
Austria           –                           1 (1%)
Belgium           –                           2 (2%)
Canada            6 (16%)                     –
Croatia           –                           3 (3%)
Finland           –                           1 (1%)
France            –                           8 (9%)
Germany           –                           31 (33%)
Ireland           3 (8%)                      4 (4%)
Italy             –                           4 (4%)
Netherlands       –                           16 (17%)
Norway            –                           1 (1%)
Portugal          –                           1 (1%)
Spain             –                           2 (2%)
Sweden            –                           1 (1%)
Switzerland       –                           6 (7%)
United Kingdom    5 (14%)                     12 (13%)
United States     22 (60%)                    –

Program Titles

A cursory examination of the DARIAH/EADH program titles reveals more variety, including many programs in computational linguistics and informatics (see Appendix B). We did not analyze these titles further because of language barriers. And again, we caution that some of these programs might not have been included according to the criteria for our study, though the vast majority appear relevant.

Structure

Most programs in the DARIAH/EADH data are degree-granting at the level of master’s (61%) or bachelor’s (25%) (see Figure 10). While we are reasonably confident in these broad trends, we are skeptical of the exact totals for two reasons. In DARIAH’s Registry, we noticed several cases of specializations being labeled as degrees. Though we rectified these cases where possible, language barriers prevented us from more thoroughly researching each program—another challenge that a global study of DH would encounter. On the other hand, it’s also possible that non-degree programs were undercounted in general, given that the Registry was meant to list degrees and courses. Based on our inspection of each program, we do not believe these errors are widespread enough to change the general distribution of the data: more European programs issue degrees, mostly at the master’s level.

A stacked bar chart showing the number of European DH programs at the undergraduate and graduate levels, as listed by DARIAH/EADH. The segments that make up each bar are color coded by degree type (e.g., doctoral, master’s, bachelor’s, certificate, other).
Figure 10. Digital humanities programs (by degree and level, DARIAH/EADH data)

Location and Disciplinarity

Most European programs are also located in academic divisions called colleges, departments, faculties, or schools (see Figure 11), depending on country. Only a handful of programs are located in institutes, centres, or labs, even less frequently than in our collected data.

A bar chart showing location of European DH programs within an institution (college/department/faculty/school, centre, institute, etc.), as listed by DARIAH/EADH.
Figure 11. Institutional location of digital humanities programs (DARIAH/EADH data)

We did not analyze disciplinarity in the DARIAH/EADH data because the programs span various countries, education systems, and languages—things we could not feasibly study here. However, 43 programs in the DARIAH/EADH data were tagged with TaDiRAH terms, allowing for comparison with programs in our collected data. These speak to what happens in DH programs in Europe, even if their disciplinary boundaries vary.

DH Activities

To analyze the skills and methods at play in DH programs, we examined our TaDiRAH codings in terms of overall term frequency (see Figure 12) and weighted frequency across individual programs (see Figures 13 and 14). Several trends were apparent in our codings, as well as in the DARIAH-listed programs that were tagged with TaDiRAH terms.

In our data on Anglophone DH programs, analysis and meta-activities (e.g., ‘Community building’, ‘Project management’, ‘Teaching/Learning’) make up the largest share of activities, along with creation (e.g., ‘Designing’, ‘Programming’, ‘Writing’). This is apparent in absolute term frequencies (see Figure 12, excepting ‘Theorizing’ and ‘Meta: GiveOverview’) and in a heatmap comparison of programs (see Figure 13). Again, the heatmap uses weighted frequencies to adjust for the fact that some areas have few terms, while others have more than twice as many as the smallest. It is worth noting that ‘Writing’ is one of the most frequent terms (11 programs), but this activity certainly occurs elsewhere and is probably undercounted because it was not always explicitly mentioned in program descriptions. The same may be true for other activities.

A series of bar charts showing the number of times each TaDiRAH term appeared in the datasets. Terms are listed under their parent terms, and subtotals are given for each parent term. Data collected by researchers (Anglophone programs) are displayed in blue, and DARIAH data are displayed in orange.
Figure 12. TaDiRAH term coding frequency (grouped)

 

A heatmap of Anglophone DH programs and TaDiRAH parent terms. The saturation of each cell shows the number of times that terms within that parent term were coded for that particular program, whether in program descriptions or course descriptions.
Figure 13. Digital humanities programs in our collected data and their required courses (by area)

Many program specializations seem to follow from the flavor of DH at particular institutions (e.g., the graduate certificate at Stanford’s Center for Spatial and Textual Analysis, the University of Iowa’s emphasis on public engagement), commensurate with Knight’s (2011) call for “localization” in DH.

In contrast with the most frequent terms, some terms were never applied to program/course descriptions in our data, including ‘Translation’, ‘Cleanup’, ‘Editing’, and ‘Identifying’. Enrichment and storage activities (e.g., ‘Archiving’, ‘Organizing’, ‘Preservation’) were generally sparse (only 1.9% of all codings), even after compensating for the fact that these areas have fewer terms. We suspect that these activities do occur in DH programs and courses—in fact, they are assumed in broader activities such as thematic research collections, content management systems, and even dissemination. Their absence from program/course descriptions seems consistent with claims made by librarians that their expertise in technology, information organization, and scholarly communication is undervalued in the field, whether instrumentalized as part of a service model that excludes them from the academic rewards of and critical decision-making in DH work (Muñoz 2013; Posner 2013) or devalued as a form of feminized labor (Shirazi 2014). Ironically, these abilities are regarded as qualifications for academic librarian positions and as marketable job skills for humanities students and, at the same time, as a lesser form of academic work, often referred to as faculty “service” (Nowviskie 2012; Sample 2013; Takats 2013). We suspect that many program descriptions replicate this disconnect by de-emphasizing some activities (e.g., storage, enrichment) in favor of others (e.g., analysis, project management).

Generally, there seems to be less emphasis on content (‘Capture’, ‘Enrichment’, and ‘Storage’ terms) and more focus on platforms and tools (‘Analysis’ and ‘Meta-Activities’ terms) within programs in our collected data. In interpreting this disparity, we think it’s important to attend to the larger contexts surrounding education in various locations. The Anglophone programs we studied are mostly located in the United States, where “big data” drives many decisions, including those surrounding higher education. As boyd and Crawford note, this phenomenon rests on the interplay of technology, analysis, and “[m]ythology: the widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy” (2013: 663). Within this context, programs advertising analysis, visualization, and project management may appear more attractive to prospective students and supporting institutions, two important audiences of program webpages. This influence does not mean that such activities do not occur or are not important to DH, but it again turns attention to questions about the way in which these skills are developed and deployed and whether that occurs against a backdrop of critical reflection on methods and tools. How these broad program-level descriptions play out in the context of particular courses and instruction is beyond the scope of this program-level study, but we think that surfacing the way programs are described is an important first step to a deeper analysis of these questions.

When comparing our 37 programs to the 43 TaDiRAH-tagged European ones, several differences emerge—though we caution that these findings, in particular, may be less reliable than others presented here. In our study, we coded for guaranteed activities, explicit either in program descriptions or in required course descriptions. In DARIAH’s Registry, entries are submitted by users, who are given a link to another version of TaDiRAH (2014b) and instructed to code at least one activities keyword (DARIAH-EU 2014b). We do not know the criteria each submitter used for applying terms, and it’s likely that intercoder agreement would be low in the absence of pre-coordination. For example, programs in the Netherlands are noticeably sparser in their codings than programs elsewhere—perhaps because they were submitted by the same coder, or by coders with a shared understanding that differed from others’ (see Figure 14).

A heatmap of DH programs and TaDiRAH parent terms, as listed by DARIAH. The saturation of each cell shows the number of times that terms within that parent term were coded for that particular program.
Figure 14. Digital humanities programs (by area, TaDiRAH-tagged subset of DARIAH data)

We tried to compare our codings directly with DARIAH data by looking at five programs listed in common. Only one of these programs had TaDiRAH terms in DARIAH data: specifically, all eight top-level terms. When examining other programs, we found several tagged with more than half of the top-level terms and one tagged with 40 of 48 activities terms. These examples alone suggest that DARIAH data may be maximally inclusive in its TaDiRAH codings. Nevertheless, we can treat this crowdsourced data as reflective of broad trends in the area and compare them, generally, to those found in our study. Moreover, there does not appear to be any geographic or degree-based bias in the DARIAH data: the 43 tagged programs span ten different countries and both graduate and undergraduate offerings, degree and non-degree programs.

Comparing term frequencies in our collected data and DARIAH/EADH data (see Figure 12), it appears that enrichment, capture, and storage activities are more prevalent in European programs, while analysis and meta-activities are relatively less common (see Table 2). While both datasets have roughly the same number of programs (37 and 43, respectively), the DARIAH data contains over twice as many term codings as our study. For this reason, we computed a relative expression of difference by dividing the total percent of a TaDiRAH area in DARIAH data by the total percent in our study. Viewed this way, ‘Enrichment’ has over five times as many weighted codings in DARIAH as in our study, followed by ‘Capture’ with over twice as many; ‘Analysis’, ‘Interpretation’, and ‘Meta-activities’ are less common. Thus, within the limitations mentioned above, Anglophone and European programs appear to emphasize different areas while still overlapping in most of them. This difference might be caused by the inclusion of more programs related to informatics, digital asset management, and communication in the DARIAH data than in our collected data, or by the presence of more extensive cultural heritage materials, support for them, and their integration into European programs. At a deeper level, this difference may reflect a different way of thinking or talking about DH, or the histories of European programs, many of which were established before the programs in our collected data.

Table 2. Summary of TaDiRAH term coding frequencies (grouped)

TaDiRAH parent term (includes subterms) | In our collected data, N (%) | In DARIAH, N (%) | Factor of difference overall (weighted)
Capture | 13 (6.1%) | 73 (15.7%) | 5.6 (2.55)
Creation | 35 (16.5%) | 74 (15.9%) | 2.1 (0.96)
Enrichment | 4 (1.9%) | 48 (10.3%) | 12.0 (5.46)
Analysis | 47 (22.2%) | 77 (16.5%) | 1.6 (0.75)
Interpretation | 27 (12.7%) | 40 (8.6%) | 1.5 (0.67)
Storage | 11 (5.2%) | 43 (9.2%) | 3.9 (1.78)
Dissemination | 24 (11.3%) | 63 (13.5%) | 2.6 (1.19)
Meta-Activities | 51 (24.1%) | 48 (10.3%) | 0.9 (0.43)
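The weighted comparison can be reproduced directly from the raw counts in Table 2. A minimal sketch in Python (the counts are from the table; the computation follows the description in the text, and the variable names are our own):

```python
# For each TaDiRAH area, compute the raw count ratio (DARIAH / ours)
# and the weighted ratio of within-dataset percentages, as in Table 2.

counts = {  # area: (N in our collected data, N in DARIAH)
    "Capture": (13, 73),
    "Creation": (35, 74),
    "Enrichment": (4, 48),
    "Analysis": (47, 77),
    "Interpretation": (27, 40),
    "Storage": (11, 43),
    "Dissemination": (24, 63),
    "Meta-Activities": (51, 48),
}

ours_total = sum(o for o, _ in counts.values())    # 212 codings in our data
dariah_total = sum(d for _, d in counts.values())  # 466 codings in DARIAH

for area, (ours, dariah) in counts.items():
    overall = dariah / ours                                   # raw count ratio
    weighted = (dariah / dariah_total) / (ours / ours_total)  # percent ratio
    print(f"{area}: {overall:.1f} ({weighted:.2f})")
    # e.g. Enrichment: 12.0 (5.46), matching Table 2
```

Because the weighted ratio normalizes each count by its dataset's total, it corrects for the DARIAH data containing over twice as many terms overall.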

Reflections on TaDiRAH

Since TaDiRAH aims to cover the field comprehensively—and even to be machine readable—we believe the challenges we faced in applying it may prove instructive for revising the taxonomy for wider application and for considering how DH is described more generally.
Most examples of hard-to-code language were technical (e.g., databases, content management systems, CSS, and XML) and blurred the lines between capture, creation, and storage and, at a narrower level, between web development and programming. Given the rate at which technologies change, it may be difficult to arrive at stable terms for DH. At the same time, we may need to recognize that some of the most ubiquitous technologies and platforms in the field (e.g., Omeka, WordPress) actually subsume various activities and require myriad skills. This, in turn, might draw attention to skills such as knowledge organization, which seem rarely taught or mentioned explicitly.

A separate set of hard-to-code activities included gaming and user experience (UX). We suspect this list might grow as tangential fields intersect with DH. Arguably, UX falls under ‘Meta: Assessing’, but there are design and web development aspects of UX that distinguish it from other forms of assessment, aspects that fit better under ‘Creation’. Similarly, gaming might be encompassed by ‘Meta: Teaching/Learning’, which

involves one group of people interactively helping another group of people acquire and/or develop skills, competencies, and knowledge that lets them solve problems in a specific area of research,

but this broad definition omits distinctive aspects of gaming, such as play and enjoyment, that are central to the concept. Gaming and UX, much like the technical cases discussed earlier, draw on a range of different disciplines and methods, making them difficult to classify. Nevertheless, they appear in fieldwork and are even taught in certain programs/courses, making it important to represent them in the taxonomy of DH.

With these examples in mind and considering the constantly evolving nature of DH and the language that surrounds it, it is difficult and perhaps counterproductive to suggest any concrete changes to TaDiRAH that would better represent the activities involved in “doing DH.” We present these findings as an empirical representation of what DH in certain parts of the world looks like now, with the hope that it will garner critical reflection from DH practitioners and teachers about how the next generation of students perceives our field and the skills that are taught and valued within it.

Conclusion and Further Directions

Our survey of DH programs in the Anglophone world may be summarized by the following points.

  • The majority of Anglophone programs are not degree-granting; they are certificates, minors, specializations, and concentrations. By comparison, most European programs are degree-granting, often at the master’s level.
  • About half of Anglophone programs require some form of independent research, and half require a final deliverable, referred to variously as a capstone, dissertation, portfolio, or thesis. About one-quarter of programs require fieldwork, often in the form of an internship.
  • About one-third of Anglophone DH programs are offered outside of academic schools/departments (in centers, initiatives, and, in one case, jointly with the library). By comparison, most European programs are located in academic divisions; only a handful are offered in institutes, centres, or labs.
  • Analysis and meta-activities (e.g., community building, project management) make up the largest share of activities in Anglophone programs, along with creation (e.g., designing, programming, writing). By contrast, activities such as enrichment, capture, and storage seem more prevalent in European programs. Some of these areas may be over- or under-represented for various cultural reasons we’ve discussed above.

As with any survey, there may be things uncounted, undercounted, or miscounted, and we have tried to note these limitations throughout this article.

One immediate application of this data is as a resource for prospective students and for those planning and revising formal programs. At minimum, this data provides general information about these 37 programs, along with some indication of special areas of emphasis—a complement to the DARIAH/EADH data. As we discussed earlier, this list should be more inclusive of DH throughout the globe, and that probably requires an international team fluent in the various languages of the programs. Following our inspection of DARIAH’s Registry, we believe it is difficult to control the accuracy of such data in a centralized way. To address both of these challenges, we believe that updates to this data are best managed by the DH community, and to that end, we have created a GitHub repository at https://github.com/dhprograms/data where updates can be forked and pulled into a master branch. This branch will be connected to Tableau Public for live versions of visualizations similar to the ones included here. Beyond this technical infrastructure, our next steps include outreach to the community to ensure that listings are updated and inclusive in ways that go beyond our resources in this study.

Second, there are possibilities for studying program change over time using the archive of program webpages and course descriptions generated by this study. Capture of program and course information in the future might allow exploration of the growth of the field as well as changes in its activities. We believe that a different taxonomy or classification system might prove useful here, as well as a different method of coding. These are active considerations as we build the GitHub repository. We also note that this study may induce some effect (hopefully positive) in the way that programs and courses are described, perhaps pushing them to be more explicit about the nature and extent of DH activities.

Finally, we hope this study prompts the community to pause and consider how DH is described and represented, and how it is taught. If there are common expectations not reflected here, perhaps DHers could be more explicit about how we, as a community, describe the activities that make up DH work, at least in building our taxonomies and describing our formal programs and required courses. Conversely, if some activities seem overrepresented here, we might consider why those activities are prized in the field (and why others are not) and whether this is the picture we wish to present publicly. We might further consider this picture in relation to the cultural and political-economic contexts in which DH actually exists. Are we engaging with these larger structures? Do the activities of the field reflect this? Is that engagement visible in our teaching and learning, and in the ways we describe them?

Acknowledgements

We are grateful to Allison Piazza for collecting initial data about some programs, as well as Craig MacDonald for advice on statistical analysis and coding methods. Attendees at the inaugural Keystone Digital Humanities Conference at the University of Pennsylvania Libraries provided helpful feedback on the ideas presented here. JITP reviewers Stewart Varner and Kathi Berens were helpful interlocutors for this draft, as were anonymous reviewers of a DH2017 conference proposal based on this work.

Bibliography

Alexander, Bryan and Rebecca Frost Davis. 2012. “Should Liberal Arts Campuses Do Digital Humanities? Process and Products in the Small College World.” In Debates in the Digital Humanities, edited by Matthew K. Gold. Minneapolis: University of Minnesota Press. Retrieved from http://dhdebates.gc.cuny.edu/debates/text/25.

boyd, danah and Kate Crawford. 2012. “Critical Questions for Big Data.” Information, Communication & Society 15(5): 662–79. Retrieved from http://dx.doi.org/10.1080/1369118X.2012.678878.

Brennan, Sheila A. 2016. “Public, First.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press. Retrieved from http://dhdebates.gc.cuny.edu/debates/text/83.

Brier, Stephen. 2012. “Where’s the Pedagogy? The Role of Teaching and Learning in the Digital Humanities.” In Debates in the Digital Humanities, edited by Matthew K. Gold, 390–401. Minneapolis: University of Minnesota Press. Retrieved from http://dhdebates.gc.cuny.edu/debates/text/8.

Buurma, Rachel Sagner and Anna Tione Levine. 2016. “The Sympathetic Research Imagination: Digital Humanities and the Liberal Arts.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press. Retrieved from http://dhdebates.gc.cuny.edu/debates/text/74.

Clement, Tanya. 2012. “Multiliteracies in the Undergraduate Digital Humanities Curriculum.” In Digital Humanities Pedagogy: Practices, Principles and Politics, edited by Brett D. Hirsch, 365–88. Open Book Publishers. Retrieved from http://www.openbookpublishers.com/product/161/digital-humanities-pedagogy–practices–principles-and-politics.

———. 2015. “Digital Humanities Inflected Undergraduate Programs.” Tanyaclement.org. January 8, 2015. Retrieved from http://tanyaclement.org/2009/11/04/digital-humanities-inflected-undergraduate-programs-2.

DARIAH-EU. 2014a. “Digital Humanities Course Registry.” Retrieved from https://dh-registry.de.dariah.eu.

———. 2014b. “Manual and FAQ.” Digital Humanities Course Registry. Retrieved from https://dh-registry.de.dariah.eu/pages/manual.

“DiRT Directory.” 2014. Retrieved from http://dirtdirectory.org.

“Doing Digital Humanities – A DARIAH Bibliography.” 2014. Zotero. Retrieved from https://www.zotero.org/groups/doing_digital_humanities_-_a_dariah_bibliography/items/order/creator/sort/asc.

Dombrowski, Quinn, and Jody Perkins. 2014. “TaDiRAH: Building Capacity for Integrated Access.” dh+lib. May 21, 2014. Retrieved from http://acrl.ala.org/dh/2014/05/21/tadirah-building-capacity-integrated-access.

Drucker, Johanna, John Unsworth, and Andrea Laue. 2002. “Final Report for Digital Humanities Curriculum Seminar.” Media Studies Program, College of Arts and Science: University of Virginia. Retrieved from http://www.iath.virginia.edu/hcs/dhcs.

European Association for Digital Humanities. 2016. “Education.” February 1, 2016. Retrieved from http://eadh.org/education.

Gil, Alex. “DH Organizations around the World.” Retrieved from http://testing.elotroalex.com/dhorgs. Accessed April 10, 2017.

Grandjean, Martin. 2014a. “The Digital Humanities Network on Twitter (#DH2014).” Martin Grandjean. July 14, 2014. Retrieved from http://www.martingrandjean.ch/dataviz-digital-humanities-twitter-dh2014.

———. 2014b. “The Digital Humanities Network on Twitter: Following or Being Followed?” Martin Grandjean. September 8, 2014. Retrieved from http://www.martingrandjean.ch/digital-humanities-network-twitter-following.

———. 2015. “Digital Humanities on Twitter, a Small-World?” Martin Grandjean. July 2, 2015. Retrieved from http://www.martingrandjean.ch/digital-humanities-on-twitter.

Hockey, Susan. 1986. “Workshop on Teaching Computers and Humanities Courses.” Literary & Linguistic Computing 1(4): 228–29.

———. 2001. “Towards a Curriculum for Humanities Computing: Theoretical Goals and Practical Outcomes.” The Humanities Computing Curriculum / The Computing Curriculum in the Arts and Humanities Conference. Malaspina University College, Nanaimo, British Columbia.

Hsu, Wendy F. 2016. “Lessons on Public Humanities from the Civic Sphere.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press. Retrieved from http://dhdebates.gc.cuny.edu/debates/part/13.

Kirschenbaum, Matthew G. 2010. “What Is Digital Humanities and What’s It Doing in English Departments?” ADE Bulletin 150: 55–61.

Knight, Kim. 2011. “The Institution(alization) of Digital Humanities.” Modern Language Association Conference 2011. Los Angeles. Retrieved from http://kimknight.com/?p=801.

Liu, Alan. 2012. “Where Is Cultural Criticism in the Digital Humanities?” In Debates in the Digital Humanities, edited by Matthew K. Gold, 490–509. Minneapolis: University of Minnesota Press. Retrieved from http://dhdebates.gc.cuny.edu/debates/text/20.

Mahony, Simon, and Elena Pierazzo. 2012. “Teaching Skills or Teaching Methodology.” In Digital Humanities Pedagogy: Practices, Principles and Politics, edited by Brett D. Hirsch, 215–25. Open Book Publishers. Retrieved from http://www.openbookpublishers.com/product/161/digital-humanities-pedagogy–practices–principles-and-politics.

McCarty, Willard. 2012. “The PhD in Digital Humanities.” In Digital Humanities Pedagogy: Practices, Principles and Politics, edited by Brett D. Hirsch. Open Book Publishers. Retrieved from http://www.openbookpublishers.com/product/161/digital-humanities-pedagogy–practices–principles-and-politics.

McGrail, Anne B. 2016. “The ‘Whole Game’: Digital Humanities at Community Colleges.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press. Retrieved from http://dhdebates.gc.cuny.edu/debates/text/53.

Mowitt, John. 2012. “The Humanities and the University in Ruin.” Lateral 1. Retrieved from http://csalateral.org/issue1/content/mowitt.html.

Muñoz, Trevor. 2013. “In Service? A Further Provocation on Digital Humanities Research in Libraries.” dh+lib. Retrieved from http://acrl.ala.org/dh/2013/06/19/in-service-a-further-provocation-on-digital-humanities-research-in-libraries.

“NeDiMAH Methods Ontology: NeMO.” 2015. Retrieved from http://www.nedimah.eu/content/nedimah-methods-ontology-nemo.

Nowviskie, Bethany. 2012. “Evaluating Collaborative Digital Scholarship (or, Where Credit is Due).” Journal of Digital Humanities 1(4). Retrieved from http://journalofdigitalhumanities.org/1-4/evaluating-collaborative-digital-scholarship-by-bethany-nowviskie.

Perkins, Jody, Quinn Dombrowski, Luise Borek, and Christof Schöch. 2014. “Project Report: Building Bridges to the Future of a Distributed Network: From DiRT Categories to TaDiRAH, a Methods Taxonomy for Digital Humanities.” In Proceedings of the International Conference on Dublin Core and Metadata Applications 2014, 181–83. Austin, Texas.

Posner, Miriam. 2013. “No Half Measures: Overcoming Common Challenges to Doing Digital Humanities in the Library.” Journal of Library Administration 53(1): 43–52. doi:10.1080/01930826.2013.756694. Retrieved from http://www.escholarship.org/uc/item/6q2625np.

Prescott, Andrew. 2016. “Beyond the Digital Humanities Center: The Administrative Landscapes of the Digital Humanities.” In A New Companion to Digital Humanities, 2nd ed., 461–76. Wiley-Blackwell.

Quan-Haase, Anabel, Kim Martin, and Lori McCay-Peet. 2015. “Networks of Digital Humanities Scholars: The Informational and Social Uses and Gratifications of Twitter.” Big Data & Society 2(1): 2053951715589417. doi:10.1177/2053951715589417.

Rockwell, Geoffrey. 1999. “Is Humanities Computing an Academic Discipline?” Presented at An Interdisciplinary Seminar Series, Institute for Advanced Technology in the Humanities, University of Virginia, November 12.

Rosenblum, Brian, Frances Devlin, Tami Albin, and Wade Garrison. 2016. “Collaboration and CoTeaching Librarians Teaching Digital Humanities in the Classroom.” In Digital Humanities in the Library: Challenges and Opportunities for Subject Specialists, edited by Arianne Hartsell-Gundy, Laura Braunstein, and Liorah Golomb, 151–75. Association of College and Research Libraries.

Ross, Claire, Melissa Terras, Claire Warwick, and Anne Welsh. 2011. “Enabled Backchannel: Conference Twitter Use by Digital Humanists.” Journal of Documentation 67(2): 214–37. doi:10.1108/00220411111109449.

Sample, Mark. 2013. “When does Service become Scholarship?” [web log]. Retrieved from http://www.samplereality.com/2013/02/08/when-does-service-become-scholarship.

Selisker, Scott. 2016. “Digital Humanities Knowledge: Reflections on the Introductory Graduate Syllabus.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press. Retrieved from http://dhdebates.gc.cuny.edu/debates/text/68.

Senchyne, Jonathan. 2016. “Between Knowledge and Metaknowledge: Shifting Disciplinary Borders in Digital Humanities and Library and Information Studies.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press. Retrieved from http://dhdebates.gc.cuny.edu/debates/text/81.

Shirazi, Roxanne. 2014. “Reproducing the Academy: Librarians and the Question of Service in the Digital Humanities.” Association for College and Research Libraries, Annual Conference and Exhibition of the American Library Association. Las Vegas, Nev. Retrieved from http://roxanneshirazi.com/2014/07/15/reproducing-the-academy-librarians-and-the-question-of-service-in-the-digital-humanities.

Siemens, Ray. 2001. “The Humanities Computing Curriculum / The Computing Curriculum in the Arts and Humanities: Presenters and Presentation Abstracts.” November 9–10, 2001. Retrieved from https://web.archive.org/web/20051220181036/http://web.mala.bc.ca/siemensr/HCCurriculum/abstracts.htm#Hockey.

Sinclair, Stefan. 2001. “Report from the Humanities Computing Curriculum Conference,” Humanist Discussion Group. November 16, 2001. Retrieved from http://dhhumanist.org/Archives/Virginia/v15/0351.html.

Sinclair, Stèfan, and Sean W. Gouglas. 2002. “Theory into Practice: A Case Study of the Humanities Computing Master of Arts Programme at the University of Alberta.” Arts and Humanities in Higher Education 1(2): 167–83. doi:10.1177/1474022202001002004.

Smith, David. 2014. “Advocating for a Digital Humanities Curriculum: Design and Implementation.” Presented at Digital Humanities 2014. Lausanne, Switzerland. Retrieved from http://dharchive.org/paper/DH2014/Paper-665.xml.

Spiro, Lisa. 2011. “Knowing and Doing: Understanding the Digital Humanities Curriculum.” Presented at Digital Humanities 2011. Stanford University.

TaDiRAH. 2014a. “TaDiRAH – Taxonomy of Digital Research Activities in the Humanities.” GitHub. May 13, 2014. Retrieved from https://github.com/dhtaxonomy/TaDiRAH.

———. 2014b. “TaDiRAH – Taxonomy of Digital Research Activities in the Humanities.” July 18, 2014. Retrieved from http://tadirah.dariah.eu/vocab/index.php.

Takats, Sean. 2013. “A Digital Humanities Tenure Case, Part 2: Letters and Committees.” [web log]. Retrieved from http://quintessenceofham.org/2013/02/07/a-digital-humanities-tenure-case-part-2-letters-and-committees.

Terras, Melissa. 2006. “Disciplined: Using Educational Studies to Analyse ‘Humanities Computing.’” Literary and Linguistic Computing 21(2): 229–46. doi:10.1093/llc/fql022.

Terras, Melissa, Julianne Nyhan, and Edward Vanhoutte. 2013. Defining Digital Humanities: A Reader. Ashgate Publishing, Ltd.

UCLA Center for Digital Humanities. 2015. “Digital Humanities Programs and Organizations.” January 8, 2015. Retrieved from https://web.archive.org/web/20150108203540/http://www.cdh.ucla.edu/resources/us-dh-academic-programs.html.

Unsworth, John. 2000. “Scholarly Primitives: What Methods Do Humanities Researchers Have in Common, and How Might Our Tools Reflect This?” Presented at Symposium on Humanities Computing: Formal Methods, Experimental Practice, King’s College London. Retrieved from http://people.brandeis.edu/~unsworth/Kings.5-00/primitives.html.

———. 2001. “A Masters Degree in Digital Humanities at the University of Virginia.” Presented at 2001 Congress of the Social Sciences and Humanities. Université Laval, Québec, Canada. Retrieved from http://www3.isrl.illinois.edu/~unsworth/laval.html.

Unsworth, John, and Terry Butler. 2001. “A Masters Degree in Digital Humanities at the University of Virginia.” Presented at ACH-ALLC 2001, New York University, June 13–16, 2001.

Varner, Stewart. 2016. “Library Instruction for Digital Humanities Pedagogy in Undergraduate Classes.” In Laying the Foundation: Digital Humanities in Academic Libraries, edited by John W. White and Heather Gilbert, 205–22. Purdue University Press.

Vedantham, Anu and Dot Porter. 2016. “Spaces, Skills, and Synthesis.” In Digital Humanities in the Library: Challenges and Opportunities for Subject Specialists, edited by Arianne Hartsell-Gundy, Laura Braunstein, and Liorah Golomb, 177–98. Association of College and Research Libraries.

Waltzer, Luke. 2012. “Digital Humanities and the ‘Ugly Stepchildren’ of American Higher Education.” In Debates in the Digital Humanities, edited by Matthew K. Gold, 335–49. Minneapolis: University of Minnesota Press. Retrieved from http://dhdebates.gc.cuny.edu/debates/text/33.

Weingart, Scott. 2016. “dhconf.” the scottbot irregular. Accessed March 1, 2016. Retrieved from http://www.scottbot.net/HIAL/?tag=dhconf.

Zorich, Diane M. 2008. A Survey of Digital Humanities Centers in the United States. Council on Library and Information Resources.

Appendix A

List of Digital Humanities Programs in our Collected Data

  • Minor (undergraduate) in Digital Humanities, Australian National University
  • Minor (undergraduate) in Digital Humanities & Technology, Brigham Young University
  • Minor (undergraduate) in Interactive Arts and Science, Brock University
  • BA in Interactive Arts and Science, Brock University
  • MA in Digital Humanities (Collaborative Master’s), Carleton University
  • MA (program track) in Digital Humanities, CUNY Graduate Center
  • Minor (undergraduate) in Digital Humanities, Fairleigh Dickinson University
  • BS in Digital Humanities, Illinois Institute of Technology
  • MPhil/PhD in Digital Humanities Research, King’s College London
  • MA in Digital Humanities, King’s College London
  • BA in Digital Culture, King’s College London
  • MA in Digital Humanities, Loyola University Chicago
  • Certificate (graduate) in Digital Humanities, Michigan State University
  • Specialization (undergraduate) in Digital Humanities, Michigan State University
  • MA in Digital Humanities, National University of Ireland Maynooth
  • PhD in Digital Arts and Humanities, National University of Ireland Maynooth
  • Certificate (graduate) in Digital Humanities, North Carolina State University
  • Certificate (graduate) in Digital Humanities, Pratt Institute
  • Certificate in Digital Humanities, Rutgers University
  • Certificate (graduate) in Digital Humanities, Stanford University
  • Certificate (graduate) in Digital Humanities, Texas A&M University
  • Certificate (graduate) in Book History and Digital Humanities, Texas Tech University
  • MPhil in Digital Humanities and Culture, Trinity College Dublin
  • Certificate (graduate) in Digital Humanities, UCLA
  • Minor (undergraduate) in Digital Humanities, UCLA
  • MA/MSc in Digital Humanities, University College London
  • PhD in Digital Humanities, University College London
  • MA in Humanities Computing, University of Alberta
  • Specialization (undergraduate) in Literature & the Culture of Information, University of California, Santa Barbara
  • Concentration (graduate) in Humanities Computing, University of Georgia
  • Concentration (undergraduate) in Humanities Computing, University of Georgia
  • Certificate (graduate) in Public Digital Humanities, University of Iowa
  • Certificate (graduate) in Digital Humanities, University of Nebraska-Lincoln
  • Certificate (graduate) in Digital Humanities, University of North Carolina at Chapel Hill
  • Certificate (graduate) in Digital Humanities, University of Victoria
  • Certificate (graduate) in Public Scholarship, University of Washington
  • Minor (undergraduate) in Digital Humanities, Western University Canada

Appendix B

List of Programs in DARIAH/EADH Data

A table of European institutions and DH programs. For each program, the type (e.g., Bachelor’s, Master’s) is listed, as well as whether the program was listed by DARIAH, EADH, or both.
Figure 15: European institutions and DH programs

Appendix C

Data

In addition to creating a GitHub repository at https://github.com/dhprograms/data, we include the program data we collected and our term codings below. Since the GitHub data may be updated over time, these files serve as the version of record for the data and analysis presented in this article.

Data for “A Survey of Digital Humanities Programs”

About the Authors

Chris Alen Sula is Associate Professor and Coordinator of Digital Humanities and the MS in Data Analytics & Visualization at Pratt Institute School of Information. His research applies visualization to humanities datasets, as well as exploring the ethics of data and visualization. He received his PhD in Philosophy from the City University of New York with a doctoral certificate in Interactive Technology and Pedagogy.

S.E. Hackney is a PhD student in Library and Information Science at the University of Pittsburgh. Their research looks at the documentation practices of online communities, and how identity, ideology, and the body get represented through the governance of digital spaces. They received their MSLIS with an Advanced Certificate in Digital Humanities from Pratt Institute School of Information in 2016.

Phillip Cunningham has been a reference assistant and cataloger with the Amistad Research Center since 2015. He received a BA in History from Kansas State University and MSLIS from Pratt Institute. He has interned at the Schomburg Center’s Jean Blackwell Hutson Research and Reference Division, the Gilder-Lehrman Institute for American History, and the Riley County (KS) Genealogical Society. His research has focused on local history, Kansas African-American history, and the use of digital humanities in public history.
