Issue Seventeen

Screen capture from virtual reality software showing the user's virtual hand reaching for controls in a simulated space. In the middle of the screen are multi-colored, three-dimensional models of spiraling biochemical proteins and floating controls with various labels: "uploader, ON, Sun Position, Model, Position, Rotation, Skybox."

Barriers to Supporting Accessible VR in Academic Libraries


Virtual reality (VR) shows great promise for enhancing the learning experience of students in higher education, and academic libraries are at the forefront of efforts to bring VR into the curriculum as an innovative learning tool. This paper reviews some of the growing applications and benefits of VR technologies for supporting pedagogy in academic libraries and outlines the challenges of making VR accessible for disabled students. It defines existing regulations and guidelines for designing accessible digital technologies and offers two case studies drawn from the authors’ own academic libraries, at Temple University and at the University of Oklahoma, in order to provide insight into the challenges and benefits of making VR more accessible for students. The paper argues that to continue to serve their mission of equitable access to information for the entire student population, academic libraries that implement VR programs need to balance innovation with inclusion by allocating sufficient staff time and technical resources and bringing accessibility thinking into VR projects from the beginning. To accomplish this, libraries will need the assistance of software developers and accessibility experts, and librarians will need to act as strong advocates for better support from commercial software and hardware vendors and to promote change in their institutions.


Virtual reality (VR) and other extended reality (XR) technologies show great promise for supporting pedagogy in higher education. VR gives students the chance to immerse themselves in virtual worlds and engage with rich three-dimensional (3D) models of learning content, ranging from biochemical models of complex protein structures to cultural heritage sites and artifacts. Research shows that VR can increase student engagement, support the development of spatial cognitive skills, and enhance the outcomes of design-based activities in fields such as architecture and engineering. With these benefits, however, come the risks that VR will exacerbate inequality and exclusions for disabled students.[1] Disability is typically defined as a combination of physical barriers (e.g., not having use of one’s legs) and participation barriers (e.g., not having a ramp so that a wheelchair user can access services). According to the Centers for Disease Control and Prevention, 26% of adults in the United States have a disability. These include cognitive, mobility, hearing, visual, and other types of disability.

As a class of technologies that engage multiple senses, VR has the capacity to engage users’ bodies and senses in a holistic, immersive experience. This suggests that VR holds great potential for supporting users with a diverse range of sensory, motor, or cognitive capabilities; however, there is no guarantee that the affordances of VR will be deployed in accessible ways. In fact, the cultural tendency to ignore disability, coupled with the rapid pace of technological innovation, has led to VR programs that exclude a variety of users. Within higher education, excluding disabled students from the benefits of these new technologies risks leaving behind a significant portion of the student population. The U.S. Department of Education, National Center for Education Statistics (2019) has found that 19.4% of undergraduates and 11.9% of graduate students have some form of disability. Libraries have long been leaders in supporting accessibility (Jaeger 2018), and the rise of immersive technologies presents an opportunity for them to continue to be leaders in making information available to all users. Academic libraries, the focus of this paper, are particularly well positioned to address the challenges of VR accessibility given their leadership in innovative information services and existing close relationships with the research and pedagogy communities at their institutions.

In what follows, we present a brief outline of the recent emergence of VR technologies in academic libraries, introduce recent research on VR accessibility, and conclude with a discussion of two brief case studies drawn from the authors’ institutions that illustrate the benefits and barriers associated with implementing accessibility programs for VR in academic libraries.

VR in Higher Education

“Virtual reality” or “VR” refers to a class of technologies that enable interactive and immersive experiences of computer-generated worlds, produced through a mixture of visual, auditory, haptic, and/or olfactory stimuli that engage the human sensory system and give the user an experience of being present in a virtual world. Most VR systems primarily engage the visual and auditory senses, with increasing research being done on integrating haptics and other stimuli. Different levels of immersion and interaction are possible depending on the specific configuration of devices, from the relatively low immersion and low interaction provided by inexpensive 3D cardboard viewers for use with mobile devices (e.g., Google Cardboard) to expensive head-mounted displays (HMDs) such as the HTC Vive and Oculus Rift systems, which use headsets and head and body tracking sensors to capture users’ movements along “six degrees of freedom” (three dimensions of translational movement along the x, y, and z axes, plus three dimensions of rotational movement: roll, pitch, and yaw). At present, HMDs are more commonly used than CAVEs (“Cave Automatic Virtual Environments”), room-sized VR installations that use 3D video projectors, head and body tracking, and 3D glasses to provide multi-user VR experiences and have been used in academic contexts since the 1990s (Cruz-Neira et al. 1992). This interest in new information technologies that provide library users with access to computer-generated worlds is not new for librarians. The current interest in VR follows experimentation conducted in libraries beginning in the early 2000s on “virtual worlds,” 3D computer-generated social spaces, such as Second Life, that users interacted with through a typical configuration of 2D computer monitor, mouse, and keyboard.
Libraries envisioned these technologies as potential tools for expanding library services and enhancing support for student learning, and researchers evaluated the pedagogical efficacy of these new tools (e.g., Bronack et al. 2008; Carr, Oliver, and Burn 2010; Deutschmann, Panichi, and Molka-Danielsen 2009; Holmberg and Huvila 2008; Praslova, Sourin, and Sourina 2006).
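The "six degrees of freedom" tracked by HMDs can be made concrete with a minimal data structure. The following is an illustrative sketch only; the class and field names are hypothetical and are not drawn from any particular VR SDK:

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """One tracked head or controller pose: three translational
    and three rotational degrees of freedom."""
    x: float      # left/right translation (meters)
    y: float      # up/down translation (meters)
    z: float      # forward/back translation (meters)
    roll: float   # rotation about the forward axis (degrees)
    pitch: float  # rotation about the side-to-side axis (degrees)
    yaw: float    # rotation about the vertical axis (degrees)

# A standing user turning their head 30 degrees to the left:
pose = Pose6DoF(x=0.0, y=1.7, z=0.0, roll=0.0, pitch=0.0, yaw=-30.0)
```

Lower-immersion devices such as 3D cardboard viewers typically track only the three rotational values, which is why they cannot respond when a user leans or walks.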

Since the commercial release of affordable VR systems such as the HTC Vive and Oculus Rift in 2016 (and now cheaper, lower-resolution variants such as Oculus Go and Oculus Quest), academic libraries have started seriously exploring the potential of VR to support research and pedagogy. They have begun to conceptualize VR as a platform for immersive user engagement with high-resolution 3D models that support existing curricular activities, such as the use of archaeological, architectural, or scientific models in classroom exercises. Cook and Lischer-Katz (2019) argue

the realistic nature of immersive virtual reality learning environments supports scholarship in new ways that are impossible with traditional two-dimensional displays (e.g., textbook illustrations, computer screens, etc.). … Virtual reality succeeds (or fails), then, insofar as it places the user in a learning environment within which the object of study can be analyzed as if that object were physically present and fully interactive in the user’s near visual field. (70)

VR has been used to support student learning in a variety of fields, such as anthropology and biochemistry (Lischer-Katz, Cook, and Boulden 2018), architecture (Milovanovic 2017; Pober and Cook, 2016; Schneider et al. 2013), and anatomy (Jang et al. 2017). Patterson et al. (2019) describe how the librarians at the University of Utah have been incorporating VR technologies into a wide variety of classes, supporting architecture students, geography students, dental students, fine arts students, and nursing students. From this perspective, VR is envisioned as a tool for accessing digital proxies of physical artifacts or locations that students would ordinarily engage with as physical models (for instance, casts of hominid skull specimens), artifacts, or locations, but which are often too expensive or difficult to access directly.

In addition to providing enhanced modes of access to learning materials, using VR can also enhance student engagement and self-efficacy if implemented in close consultation with faculty (Lischer-Katz, Cook, and Boulden 2018). The technical affordances of VR, when deployed with care, are able to support a range of pedagogical objectives. Dalgarno and Lee (2010) identified representational fidelity (i.e., realistic display of objects, realistic motion, etc.) and learner interaction (i.e., student interaction with educational content) as key affordances of VR technologies, which they suggest can support a range of learning benefits, including spatial knowledge representation, experiential learning, engagement, contextual learning, and collaborative learning. Chavez and Bayona (2018) surveyed the research literature on VR and identified interaction and immersion as the two aspects of VR that should be considered when designing VR learning applications. Similarly, Johnson-Glenberg (2018) identified a set of design principles for using VR in education based on related affordances of VR—“the sense of presence and the embodied affordances of gesture and manipulation in the third dimension” (1) and found that “active and embodied learning in mediated educational environments results in significantly higher learning gains” (9). Research also suggests that the special visual aspects of VR, such as depth perception and motion cues (Ware and Mitchell 2005), head tracking (Ragan et al. 2013), and immersive displays (Ni, Bowman, and Chen 2006), are able to enhance the analytic capabilities of human perception. VR has been shown to enhance human abilities of visual pattern recognition and decision-making, particularly when working with big data (Donalek et al. 2014), prototyping (Abhishek, Vance, and Oliver 2011), or understanding complex spatial relationships and structures in data sets (Prabhat et al. 2008; Kersten-Oertel, Chen, and Collins 2014; Laha, Bowman, and Socha 2014).

Immersion is often identified by researchers as a key characteristic of VR technologies that is applicable to enhancing the learning experiences of students. Fowler (2015) identified three types of VR immersion relevant to pedagogy: conceptual immersion, which supports development of abstract knowledge through students’ self-directed exploration of learning materials, for instance, molecular models; task immersion, in which students begin to engage with and manipulate learning materials; and social immersion, in which students engage in dialogue with others to test and expand upon their understanding. One critique of the applications of VR-based pedagogy is that instructional designers and instructors rarely indicate their underlying learning models or theories (Johnston et al. 2018). For instance, Lund and Wang (2019) found that VR can improve student engagement in library instruction, but do not specify which pedagogical models are effective, instead comparing a particular classroom activity with traditional classroom methods versus the same activity using VR, measuring impact on academic performance and motivation. Radianti et al. (2020), in their review of 38 recent empirical studies on VR pedagogy, acknowledge that while immersion is a critical component of the pedagogical affordances of VR, different studies define the term differently. They also found that only 32% of the studies reviewed indicated which learning theories or models underpin the research, which makes it difficult to generalize approaches and apply them to other contexts. Radianti et al. (2020) point out that “in some domains such as engineering and computer science, certain VR applications have been used on a regular basis to teach certain skills, especially those that require declarative knowledge and procedural–practical knowledge. However, in most domains, VR is still experimental and its usage is not systematic or based on best practices” (26).

What these trends suggest is that VR shows great potential for use in supporting classroom instruction in higher education institutions, even though pedagogical models and methods of evaluation are still being developed and most projects are in the experimental phase of development. Some fields have already been adopting VR into their departments, such as computer science, engineering, and health science programs, but academic libraries are leading the way in promoting VR for their wider campus communities (Cook and Lischer-Katz 2019). Since many libraries are emerging as leaders in supporting VR, it is essential for them to have policies and support services in place to ensure that these new technologies are usable by all potential users at their institution.

As librarians consider adopting these innovative technologies, discourses of innovation can sometimes lead to oversights that may exclude some users. VR technologies enter libraries alongside other emerging technologies and innovative library services. The current discourse of transformational change promoted by the corporate information technology sector is often at odds with critical approaches to librarianship that stress inclusion and social justice (Nicholson 2015). These conceptions of radical innovation and disruption construct institutions, their policies, and regulations as structures that only function to slow down and constrain innovation. The assumption is that innovative technology is inherently neutral in terms of its ethics and politics, and that it does not require institutional processes to constrain or limit its negative effects; however, by decoupling technological change from institutionalized processes that protect the rights of historically marginalized groups of library patrons, technological change inevitably reinscribes exclusion into the infrastructures of learning. As Mirza and Seale (2017) argue

technocratic visions of the future of libraries aspire to a world outside of politics and ideology, to the unmarked space of white masculinity, but such visions are embedded in multiple layers and axes of privilege. They elide the fact that technology is not benevolently impartial but is subject to the same inequities inherent to the social world. (187)

The idea that technologies embed biases and cultural assumptions is not new—scholars in the field of Science and Technology Studies have argued for decades that technologies are never neutral (e.g., Winner 1986)—but librarians, library administrators, and library science researchers often forget to examine their own “tunnel vision and blind spots” (Wiegand 1999), or more precisely, the unexamined implicit biases that shape decision making about which technologies to adopt and how to deploy them in libraries. On the other hand, this also means that it is possible to balance innovation with inclusivity by foregrounding library values at the start of the process of innovation, rather than by retrofitting designs, which can yield results that are less equitable and more costly (Wentz, Jaeger, and Lazar 2011). Clearly, the learning affordances of VR (Dalgarno and Lee 2010), as they are currently designed, need to be reimagined for disabled users.

VR and Accessibility

Aside from these ethical considerations, as VR becomes increasingly common in education, business, and other sectors, it becomes answerable to legal guidelines. Federal guidelines for more established information and communication technology can be found in Section 508 of the Rehabilitation Act (see U.S. General Services Administration n.d.), which utilizes Web Content Accessibility Guidelines (WCAG) 2.0 as a standard for web technology (W3C Web Accessibility Initiative 2019). WCAG provide guidance on how to make web content accessible to disabled people and are overseen by the Web Accessibility Initiative (WAI), part of the World Wide Web Consortium (W3C) (see W3C Web Accessibility Initiative 2019). While they provide a valuable framework, WCAG do not directly apply to immersive technologies, and there are currently no accessibility guidelines that do. Work has been done to develop individual accessibility extensions, hardware, and features, but measurable guidelines that would aid in accessible design are still needed. Only in the last few years have accessibility specialists started adapting existing guidelines by examining existing initiatives and mapping them to the success criteria in WCAG. This includes the XR Access Symposium held in the summer of 2019 (see Azenkot, Goldberg, Taft, and Soloway 2019), as well as W3C’s Inclusive Design for Immersive Web Standards Workshop held in the fall of 2019 (see W3C 2019). There are also more specific guidelines that can contribute to design considerations, such as the Game Accessibility Guidelines, which focus on game design (see Ellis et al. n.d.).
Adding urgency to this matter, as of December 31, 2018, any video game communication functionality released in 2019 or later must be accessible to disabled people under the 21st Century Communications and Video Accessibility Act (Enamorado 2019), which expands the group of industries mandated to meet accessibility guidelines to include the video game industry.
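To illustrate what mapping immersive features to WCAG success criteria might look like in practice, here is a hypothetical sketch pairing common VR accessibility features with the WCAG 2.0 success criteria they most closely parallel. The pairings are illustrative only and do not represent an established standard:

```python
# Hypothetical mapping of VR accessibility features to the WCAG 2.0
# success criteria they most closely parallel. The criterion numbers and
# names are real WCAG 2.0 criteria; the feature names and pairings are
# illustrative, not an adopted standard.
WCAG_PARALLELS = {
    "captioned_audio":       "1.2.2 Captions (Prerecorded)",
    "high_contrast_ui":      "1.4.3 Contrast (Minimum)",
    "controller_free_input": "2.1.1 Keyboard",  # input-agnostic operation
    "flash_limits":          "2.3.1 Three Flashes or Below Threshold",
}

def wcag_parallel(feature):
    """Return the closest WCAG 2.0 criterion for a VR feature, if mapped."""
    return WCAG_PARALLELS.get(feature, "no direct WCAG parallel")

print(wcag_parallel("captioned_audio"))  # 1.2.2 Captions (Prerecorded)
```

A mapping of this kind makes the gap visible: features such as room-scale locomotion or haptic feedback have no obvious WCAG parallel, which is precisely why XR-specific guidelines are needed.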

Those interested in learning more about the accessible design of VR and other immersive technologies should consider reading “Accessible by Design: An Opportunity for Virtual Reality” (Mott et al. 2019), which provides general guidelines for designing accessible VR. For an example of designing accessible tools for a specific user group, see Zhao et al. (2019), which details the development of a VR toolkit for supporting low-vision users.

Before going any further, it is important to distinguish between VR in its current, popularized form versus the affordances of VR as a medium. The initiatives, guidelines, and research projects referred to in this section are still largely focused on analyzing the design of the former. However, in order for the technology to become truly accessible, critical inquiry must continue to progress in its understanding of the broader capabilities, limitations, and levels of interaction that construct the latter. The design practices and recommendations that have been developed to support the accessibility of VR are largely individualized and prototypical, which means that each institution’s particular experiences tackling the challenges of accessible VR will vary based on a number of factors. These factors include their individual histories supporting VR, staffing levels and development support, resources, and institutional commitments to accessibility. As librarians at Temple University and the University of Oklahoma, we are now in the process of developing guidelines and tools to meet these challenges.

VR at Temple University’s Loretta C. Duckworth Scholars Studio

Temple University’s Loretta C. Duckworth Scholars Studio (LCDSS) “serves as a space for student and faculty consultations, workshops, and collaborative research in digital humanities, digital arts, cultural analytics, and critical making” (Temple University Libraries n.d.). Before the main library’s relocation to its new building, the LCDSS, formerly known as the Digital Scholarship Center (DSC), was located in the basement of Paley Library. Upon its 2015 opening, the DSC had two Oculus Rift DK2 headsets available for interested users. Its space in the new Charles Library includes an Immersive Visualization Studio designed for up to 10 people to simultaneously participate in immersive experiences and, as of 2019, holds twelve headsets from a variety of manufacturers in addition to mobile-based headsets, with an eye toward the continuous acquisition of newer technologies. There are six full-time staff members, one of whom is responsible for the upkeep and management of the Immersive Visualization Studio among their other duties.

In August of 2017, I (Jasmine Clark) began researching the accessibility of VR as part of a project I was developing during my library residency.[2] Upon reviewing existing literature, it was apparent that research on the usability of VR for disabled users was in its early stages. Most notable was a report, “VR Accessibility: Survey for People with Disabilities,” based on a survey of disabled VR users produced through a partnership between ILMxLab and the Disability Visibility Project (see Wong, Gillis, and Peck 2018). However, the majority of research and resources exploring the applications of VR for disabled people consisted of one-off solutions and extensions. This included cases of VR being used as an assistive technology (e.g., spatial training for blind individuals), unique hardware solutions (e.g., the haptic cane), and known issues for specific types of users (e.g., the assumed standing position in games being disorienting for wheelchair users). These developments, while valuable, were not design standards or solutions broadly adopted by the game industry. Another concern was the fact that, in the context of the DSC, VR was not just a technology, but also a service that included training and assistance in its use for library patrons. This added an additional layer of complexity because, while there have been discussions of disability in the context of making and makerspaces, there was no literature on accessible service policies, best practices, and documentation for digital scholarship as a whole. In response to these challenges, I began examining existing guidelines and assessing their applicability to emerging technologies. Because WCAG is the federal standard, I joined a working group that guided me through reading the supporting documents and success criteria of WCAG, as well as examining the major legislative changes that were happening around accessibility at that time.
I also began working with Jordan Hample, the DSC’s (now LCDSS’s) main technical support staff member, to understand whether or not these guidelines were applicable to immersive technologies.

Because we also needed to address service practices and policies, I decided that user testing would be necessary. User testing would consist of three phases that would take place during a single visit: a pre-interview (to ensure safety and gain an understanding of a user’s disability and previous technical experience), a use test (where users would use VR headsets), and a post-interview (to solicit feedback). I coordinated with Temple’s Disability Resources and Services (DRS) and DSC staff to bring in disabled stakeholders (students, alumni, and other members of the Temple community) in an attempt to 1) determine whether or not they would be able to utilize the equipment, and 2) determine if there were barriers to providing them with the same level of service as other patrons. As Wong, Gillis, and Peck (2018) point out in their report, “people with disabilities are not a monolith—accessibility and inclusion is different for everyone” (1). In order to scope the research to a manageable scale, I decided we would begin with visually impaired, deaf/Hard-of-Hearing (HOH), and hearing-impaired users (hearing impairment would include individuals with tinnitus or other auditory conditions not included under the umbrella of deaf/HOH). Working with Jordan, as well as Alex Wermer-Colan, a Council on Library and Information Resources (CLIR) postdoctoral fellow, I proceeded to draft a research protocol that consisted of interview questions and an explanation for participants of what VR is and the purpose of the research being conducted. These were all sent out via DRS listservs to solicit participants. VR services in the DSC involved a lot of hands-on onboarding and orientation from staff. Often, patrons would drop in and simply want to get acquainted with the technology.
As a result, the goal of the research project was for disabled participants in our user testing to be able to navigate to our space and successfully work with the staff members responsible for providing VR assistance to identify experiences that would be as usable as possible for them. There was also a need to better understand staff preparedness in providing assistance to disabled patrons. In the months leading up to the testing, I had preliminary discussions with staff, and also inquired into staff training on accessibility and disability more generally at the library and university level. I found that training was not formalized, so I gathered and shared resources with my colleagues to ensure the safety and dignity of participants. This included referring to the Gallaudet University’s guide on working with American Sign Language (ASL) interpreters (see Laurent Clerc National Deaf Education Center 2015) and various video tutorials on acting as a sighted guide for blind/low-vision people, and maintaining active discussions and explanations around ableism and disability. The discussions also allowed for better understanding of gaps in training and norms.

Once staff were sufficiently prepared, user testing commenced in the summer of 2018. Four participants were invited to the center, three of whom had various visual impairments and one of whom was deaf. On the days of their visits, I would go to the library entrance to greet and guide anyone who needed assistance. Upon arrival, they were brought into a meeting room for a pre-interview that would reintroduce the purpose of user testing, gauge any previous experience with the technology, and identify safety concerns by asking if they had other sensitivities that they felt would be a problem in VR (e.g., sensory sensitivities, sensitivity to flashing lights, etc.). We also asked about level of hearing/vision to get a better idea of which types of experiences worked for different types of hearing/vision. Some immediate questions brought up by participants were around accuracy of sound, depth perception, and similarity to real-world visual experience. Once the initial interview was completed, they were guided out to work with Jordan to identify potential experiences, similar to the way he typically worked with students. I took notes on the interactions, and Alex assisted as needed. Alex’s presence became particularly important when it came to the deaf user. It was brought to our attention that 1) due to variations in inner ear formation, those who were deaf/HOH were at higher risk for vertigo, and 2) a user reliant upon an ASL interpreter would not be able to see the interpreter while in the headset, complicating human assistance. In response, Alex took on the role of surrogate for this participant while they watched his activity on a monitor and gave instructions and feedback. Jordan took on the role of listening to the participants’ verbal feedback on each experience and, utilizing his knowledge of the DSC’s licenses for different VR programs, selected experiences that would be more accommodating to their specific hearing/visual needs.

Upon completion of this phase, participants were then brought back into the meeting room for a post-interview. Responses to both interviews, as well as observations made during the interactions, were compiled and summarized into an internal report for our team. We had initially planned to have more users come in, but found that feedback on the limitations of the technology was consistent and addressable enough for us to make adjustments that would allow us to improve services and collect more nuanced data moving forward. For example, it was clear that the software varied so drastically that, in order to provide safe and effective services, it would be necessary to index the features and capabilities of various VR experiences.

The timing of this work was crucial, as we were a year away from the move to our new space, and the findings from the study helped us plan for it. The LCDSS is significantly larger than the DSC, and much more visible. However, while the move has required that we re-envision our service policies and programming, it has also given us the opportunity to integrate accessibility into our work from the beginning. One way we are doing this is by developing an auditing workflow that would allow any staff member or student worker to examine newly licensed VR experiences and produce an accessibility report, as there is a glaring lack of Voluntary Product Accessibility Templates (VPATs) for VR products. These reports would detail accessibility concerns and limitations up front, allowing us to better serve disabled patrons. We are also working with the university’s central Information Technology Services to look at how this can be incorporated into broader LCDSS purchasing practices and documentation workflows.
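An auditing workflow of this kind could capture its findings as structured data so that any staff member or student worker produces a comparable report. The following is a hypothetical sketch only; the checklist items and field names are illustrative and are not drawn from the LCDSS's actual workflow:

```python
# Hypothetical VR accessibility audit checklist; items are illustrative,
# not the LCDSS's internal criteria.
AUDIT_CHECKLIST = [
    "subtitles_or_captions",      # spoken audio has a text alternative
    "seated_mode",                # playable from a seated position
    "single_controller_mode",     # usable with one controller or none
    "adjustable_audio_channels",  # cues not dependent on stereo hearing
    "no_flashing_content",        # avoids photosensitivity triggers
]

def audit_report(title, findings):
    """Summarize which checklist items a VR title supports, flagging gaps."""
    missing = [item for item in AUDIT_CHECKLIST
               if not findings.get(item, False)]
    return {"title": title, "missing": missing, "passes": not missing}

# A title that supports captions and seated play, but nothing else verified:
report = audit_report("Example VR Title",
                      {"subtitles_or_captions": True, "seated_mode": True})
print(report["missing"])
```

Because each report lists what is missing rather than issuing a simple pass/fail, staff can match a patron's stated needs against a title's known gaps before recommending it.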

Once this workflow is finalized, it will be used to support LCDSS staff in aiding faculty and researchers in the development of Equally Effective Alternative Access Plans (EEAAP) for their research and teaching. An EEAAP documents how a technology will be used in a class or program, its accessibility barriers, the plan to ensure equitable participation for disabled people, and the parties responsible for ensuring the plan is carried out. LCDSS staff frequently consult with faculty who wish to integrate LCDSS resources into their pedagogical practices. This can include feedback on assignment structure and design, recommended technologies, and other vital information required for pedagogical efficacy. By generating accessibility reports that identify technical limitations, LCDSS staff can aid faculty in developing multimodal approaches to integrating these technologies into their teaching. This means that, not only are we bringing accessibility to their attention early, but that we are also able to guide them and reduce intimidation, making buy-in more successful. Moving forward, Jordan Hample and I will be making all materials involved in this workflow publicly available, as well as continuing and expanding user testing to include other disabilities.

VR at the University of Oklahoma Libraries, Emerging Technologies Program

Accessibility initiatives for VR at the University of Oklahoma have followed a slightly different trajectory than the one outlined by Jasmine in the previous section. The VR program at OU Libraries was officially launched in 2016 in the Innovation @ the EDGE Makerspace, which began hosting classes and integrating VR content into the course curriculum, including initial integrations within biology, architecture, and fine arts courses (Cook and Lischer-Katz 2019). We use custom-built VR software that enables users “to manipulate their 3D content, modify environmental conditions (such as lighting), annotate 3D models, and take accurate measurements, side-by-side with other students or instructors” and supports networked, multiuser VR sessions, forming “a distributed virtual classroom in which faculty and students in different campus locations [are able to] teach and collaborate” (Cook and Lischer-Katz 2019, 73). Librarians provide VR learning opportunities in three main ways: 1) deployment in the library-managed makerspace; 2) facilitated course integrations; and 3) special VR events. Each approach requires different levels of support and planning from librarians. In the case of deployment in our makerspace, students are able to learn about the technology in a self-directed manner, with guidance from trained student workers who staff the space. Workshops and orientation sessions are available, and students, faculty, and community members typically drop in when they want and engage with the technology in a self-directed manner. Since the focus of this space is on self-directed learning and experimentation, the training of student support staff is essential for ensuring that the space feels welcoming and inclusive to visitors and that staff are able to adjust the level of support they provide based on the needs of the visitors to the space.

In the case of course integrations, students are typically brought to our makerspace during regularly scheduled class time. We have portable VR kits that use high-powered gaming laptops and Oculus Rift headsets, which makes it possible to bring the learning experiences directly into the classroom if the faculty member prefers. Examples of VR-based classroom activities include interacting with 3D models that simulate learning objects, such as examining the morphology of different hominid skull casts in an anthropology class or analyzing complex protein structures and processes in a biochemistry class. VR is also used in other classes as a creative tool, such as in a sculpture course in which the students created sculptures in VR and then printed them using the 3D printers in the makerspace. In planning VR course integrations, librarians work directly with faculty members to design activities that will support their course learning objectives.

VR is also used frequently at OU Libraries for special events in which experts lead participants on guided tours through scholarly, high-resolution 3D models. Participants can join the VR tour on campus or from other institutions, since our custom-built VR software supports networked, multi-user sessions. Examples include inviting an archaeologist to lead a group through a 3D scan of a cave filled with ancient rock carvings that is located in the Southwestern United States (Schaffhauser 2017), as well as a tour led by a professor of Middle Eastern History through a 3D model of the Arches of Palmyra, located in Syria.

From the start of the emerging technologies initiative at OU Libraries, rapid innovation was a guiding principle, with the hope that the benefits of emerging technologies could be demonstrated to the broader campus community and that the library could become a hub for supporting emerging technologies across campus. It was important to quickly develop a base of VR technologies and librarian skills in order to promote the potential benefits of the technologies to faculty and students across campus. Starting in January 2016, students and faculty began using our VR spaces for research, learning, experimentation, and entertainment, and by 2018 we had faculty from over 15 different academic departments across campus using VR as a component in their classes (Cook and Lischer-Katz 2019), along with over 2000 individual uses of our VR workstations. By 2019, the emerging technology librarians (ETL) unit had grown to five full-time staff members who worked together to “rapidly prototype and deploy educational technology for the benefit of a range of University stakeholders” (Cook and Van der Veer Martens 2019, 614). At this time, concerns were raised by one of our ETLs about the accessibility of existing VR services and the ETL team brought in an accessibility specialist to advise them. One of the key challenges the team identified through the process of reviewing their existing VR capabilities was the fact that most commercially produced VR software lacks accessibility options, particularly in terms of compatibility with assistive devices. In reviewing users’ experiences in our makerspace, ETLs found that users with dexterity, coordination, or mobility disabilities often request passive VR experiences that provide immersive experiences without the need for use of the VR controller inputs. 
For applications such as the popular Google Earth VR, it is not currently possible to provide users with passive experiences; rather, the user must actively operate the two VR controllers to engage with the experience. To the team’s surprise, some of the lower-resolution, untethered VR systems, such as the Oculus Go, have proven more capable of providing passive experiences that rely only on head tracking and the use of target circles for movement through the VR space. Making narrated and guided tours available for a VR experience is essential for providing access to some groups of disabled users. Ensuring that VR controllers are accessible has also been a challenge, and ETLs have begun experimenting with 3D-printed add-on components to make the VR controllers more usable for users with limited hand function. In response to the lack of accessibility options in commercial software releases, modifications were made to OU’s custom-built VR software to provide accessibility capabilities, including: 1) controls for changing the sensitivity of VR interface controls; and 2) options for resizing user interface text. These modest modifications were made in consultation with VR users. Technical solutions alone are not sufficient, of course, and the ETL team has also found it important to continue improving training for student staff so that they are prepared to assist disabled users in a sensitive and respectful way. Communicating clearly to the wider university community about which accessible software and hardware capabilities are available is another challenge the team is tackling. These activities remain ad hoc in many ways, and we have found that additional work is needed to develop procedures for addressing VR accessibility more systematically in the library and across campus.
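To make the shape of these two modifications concrete, they can be sketched as a thin settings layer sitting between raw controller input and the rest of the application. This is an illustrative sketch only; the class and field names are our assumptions, not the actual implementation of OU’s custom software:

```python
from dataclasses import dataclass

@dataclass
class AccessibilitySettings:
    """Hypothetical user-adjustable accessibility options for a VR application."""
    controller_sensitivity: float = 1.0  # multiplier applied to raw controller input
    ui_text_scale: float = 1.0           # multiplier applied to the base UI font size
    passive_mode: bool = False           # narrated/guided experience, no controller input

    def scaled_input(self, raw_axis_value: float) -> float:
        # Lower sensitivity damps small, unintended movements;
        # higher sensitivity reduces the range of motion a user must perform.
        return raw_axis_value * self.controller_sensitivity

    def font_size(self, base_points: int) -> int:
        # Resize interface text for low-vision users.
        return round(base_points * self.ui_text_scale)

# Example: a user who needs damped controls and larger text
settings = AccessibilitySettings(controller_sensitivity=0.5, ui_text_scale=1.5)
print(settings.scaled_input(0.8))  # 0.4
print(settings.font_size(12))      # 18
```

Keeping these options in one user-editable object, rather than scattering them through the rendering and input code, is what makes it practical to adjust them in consultation with individual VR users.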

The ETL team is taking several approaches to improving our support for accessible VR, looking outward to resources beyond the walls of OU Libraries and inward to resources at the university that can support improvements to accessibility. ETLs are expanding their knowledge base through involvement in accessibility conferences and working groups, and by looking to colleagues at other institutions, such as Temple University Libraries, for guidance on policies and procedures for evaluating and implementing VR software and hardware. The ETL team plans to conduct usability testing and focus groups with a range of disabled users from the OU community in order to further refine the feature set of our custom software, which we plan to package and distribute for other institutions to use and build upon.

The experiences of ETLs at OU Libraries point to the importance of working with accessibility experts and bringing disabled users into the design process to develop technologies and policies. Librarians should not be expected to take on accessible design by themselves; rather, they should look to experts in this field for assistance. Working with our university’s disability coordinator has been essential for helping us identify areas where we need to improve our accessibility capabilities, as well as for providing us with a network of disabled users on campus who can give us feedback on our technologies. The issues we are looking into include techniques for auditing VR software for accessibility problems; clearer signage and website information so that students and faculty understand which emerging technology tools are accessible and what accommodations are possible; and ways to continue improving staff training so that the student workers who staff our makerspace can better support disabled users. The process of developing policies and establishing the processes and documentation to support them does take time; however, this work has been essential for training staff and establishing best practices at our makerspace in order to address the challenges of VR accessibility. Additional work is necessary to codify this ongoing and still-experimental work into institutional policy documents and to continue seeking out adaptive tools that make VR accessible to a greater range of library patrons.


The current wave of immersive technologies was not initially designed for users with varying levels of visual, auditory, mobility, and neurological capabilities. Even for libraries and centers that do have development support, there is no way to remediate the inaccessibility of every experience used and, even if there were, there would be no way to keep up with the regular updates of hardware and software. One-off, localized solutions cannot replace structural change. In order for VR to become an accessible medium, developers, hardware manufacturers, distribution platforms, and other stakeholders involved in its creation and distribution need to ensure accessibility within their respective roles. The current lack of support from these stakeholders makes it crucial that library staff and the educators they support understand disability and accessibility, develop appropriate documentation, and advocate for software and hardware vendors to provide better accessibility support in their products. In the meantime, libraries supporting different tiers of VR use and investment will have to consider different approaches to accessibility.

The preceding examples drawn from our experiences at Temple University and the University of Oklahoma (OU) show the range of issues facing accessible VR, but also show the differences in approach for different service models and pedagogical objectives. Temple University includes VR in a very broad suite of technical offerings and its faculty are not currently at the phase of “buy-in” where regular VR development is a priority. As a result, Temple’s focus is on indexing experiences and integrating alternative access plans, with accessible development occurring on a smaller scale. In comparison, OU has much more of a demand for custom-developed software solutions. This demand is due to the fact that one of the main VR applications that OU promotes for course integrations is its own flexible, custom software, which supports a variety of disciplines, including courses in biochemistry, anthropology, architecture, and English. OU is beginning to investigate the accessibility challenges of working with commercial software and is looking to Temple for guidance on how to properly evaluate different software titles and provide adequate documentation. For libraries without developer support, we can expect that the focus will more likely follow Temple’s approach. For libraries with regular development efforts, supporting home-grown accessible design practices, such as those at OU, will be more of a central activity. Some libraries will be a mixture of the two, working to blend commercial and homegrown solutions. Regardless of a library’s approach, the major takeaways for other institutions to consider as they bring accessibility thinking into their VR programs include:

  • Plan for Accessibility from the Beginning: Libraries can save time and resources by thinking about accessibility issues at the start of a program or project.
  • Lack of Standards: As of 2020, there are no standards for accessible VR design, but there are related standards that could lay the groundwork for their development.
  • Developer Support is Essential: Libraries that intend to develop VR experiences need to have sufficient developer support with accessibility expertise.
  • Importance of Auditing and Reporting: Out-of-the-box VR experiences will pose different accessibility challenges from one person to the next and should be audited to better understand these barriers to access. If a library lacks a developer to modify software or create new software, at the very least, available software needs to be audited and have a corresponding accessibility report produced.
  • VR is Not the Pedagogy: VR should be another tool in an educator’s arsenal, not the sole focus of a class (unless VR is the course subject). As Fabris et al. (2019) suggest “Having VR for the sake of having VR won’t fly; the VR learning resources need to be built with learning outcomes in mind and the appropriate scaffolds in place to support the learning experience” (74).
  • Acknowledge the Limits of VR Accessibility: There are limits to making VR accessible. The reality is that there will be students who are unable to use VR for a variety of reasons. Therefore, there should always be an alternative access plan developed so that students have access to non-VR learning methods as well.
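The auditing and reporting practice above can be made concrete with a minimal sketch of an audit record. The checklist items and field names here are illustrative assumptions, not Temple’s or OU’s actual instruments; a real audit would draw on sources such as the game accessibility guidelines discussed earlier:

```python
from dataclasses import dataclass, field

# Hypothetical checklist categories for a single software title.
CHECKS = [
    "subtitles/captions available",
    "usable with one controller",
    "passive (no-controller) mode available",
    "UI text resizable",
    "seated play supported",
]

@dataclass
class AccessibilityReport:
    """One report per audited VR title, recording pass/fail/n-a per check."""
    title: str
    results: dict = field(default_factory=dict)  # check -> "pass" | "fail" | "n/a"
    notes: list = field(default_factory=list)

    def barriers(self) -> list:
        # Failed checks become the documented barriers to access.
        return [check for check, result in self.results.items() if result == "fail"]

report = AccessibilityReport("Example VR Title")
report.results = {check: "pass" for check in CHECKS}
report.results["passive (no-controller) mode available"] = "fail"
print(report.barriers())  # ['passive (no-controller) mode available']
```

In this sketch, the failed checks map directly onto the accessibility-barriers section of an EEAAP, keeping audit output and accommodation planning in step.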

Considering these best practices will better enable libraries to approach the challenges of making VR accessible. Putting them into action will directly benefit disabled users, improve librarians’ abilities to make their innovative technology spaces more inclusive, and will help administrators to better plan and allocate resources for supporting the missions of their institutions. While these guidelines are focused on supporting academic libraries, they will likely benefit higher education applications outside of the library, too.

Additionally, while it is true that there is extensive work to be done, there are existing inclusive instructional approaches that individuals can integrate into VR-based coursework. Multimodal course design and Universal Design for Learning (UDL) are frameworks that can be applied to VR coursework through approaches like collaborative assignments and activities. It is also worth reviewing a 2015 special issue of the Journal of Interactive Technology and Pedagogy that considers the benefits of introducing perspectives from disability studies into the design of innovative pedagogies. One of the important takeaways from this collection is that embracing disability, and the alternative perspectives it can provide, presents the potential for new learning opportunities (Lucchesi 2015).

Whichever pedagogical approach educators adopt, it is imperative that, unless VR is the subject of the course, they remember it is not the pedagogy. Instead, faculty should keep a diverse array of tools in their pedagogical toolkit that will support an equally diverse set of learners. As librarians, faculty, and instructional designers become familiar with inclusive learning frameworks, they are better positioned for more targeted, meaningful advocacy within their institutions. While it is true that there is a lot of work to be done, it is equally true that it can only be done together: through active involvement in institutional committees and task forces, and by ensuring that discussions about accessibility occur in strategic planning and budgeting meetings with administrators. Accessibility awareness needs to be raised throughout libraries and other academic institutions so that the accessibility challenges of emerging technologies are addressed at the design stage and built into pedagogical implementations from the beginning. This will help ensure that pedagogies founded on emerging technologies are “born accessible,” for the benefit of learners and educators throughout the academic world.


[1] The use of identity-first (“disabled person”) vs. person-first (“person with disabilities”) language is debated. Disability is a complex set of identities and the language used should take into account the preferences of disabled people and other contextual factors. Our choice to use identity-first language is a conscious one.

[2] A library residency is a term position during which residents may rotate through different functional areas of the library or focus on one subject area, and often contribute to projects and initiatives at their host library to gain professional (vs. paraprofessional) experience.

References

Abhishek, Seth, Judy M. Vance, and James H. Oliver. 2011. “Virtual Reality for Assembly Methods Prototyping: A Review.” Virtual Reality 15, no. 1: 5–20.

Azenkot, Shiri, Larry Goldberg, Jessie Taft, and Sam Soloway. 2019. XR Symposium Report.

Bronack, Stephen, Amy L. Cheney, Richard Reidl, and Johan Tashner. 2008. “Designing Virtual Worlds to Facilitate Meaningful Communication: Issues, Considerations, and Lessons Learned.” Technical Communication 55, no. 3: 261–69.

Carr, Diane, Martin Oliver, and Andrew Burn. 2010. “Learning, Teaching and Ambiguity in Virtual Worlds.” In Researching Learning in Virtual Worlds, edited by Anna Peachey, Julia Gillen, and Daniel Livingstone, 17–31. London: Springer.

Chavez, Bayron and Sussy Bayona. 2018. “Virtual Reality in the Learning Process.” In Trends and Advances in Information Systems and Technologies, edited by Álvaro Rocha, Hojjat Adeli, Luís Paulo Reis and Sandra Costanzo, 1345–56. Cham, Switzerland: Springer International Publishing.

Cook, Matt and Betsy Van der Veer Martens. 2019. “Managing Exploratory Units in Academic Libraries.” Journal of Library Administration 59, no. 6: 606–28.

Cook, Matt and Zack Lischer-Katz. 2019. “Integrating 3D and VR into Research and Pedagogy in Higher Education.” In Beyond Reality: Augmented, Virtual, and Mixed Reality in the Library, edited by Kenneth Varnum, 69–85. Chicago: ALA Editions.

Cruz-Neira, Carolina, Daniel J. Sandin, Thomas A. DeFanti, Robert V. Kenyon, and John C. Hart. 1992. “The CAVE: Audio Visual Experience Automatic Virtual Environment.” Communications of the ACM 35, no. 6: 64–73.

Deutschmann, Mats, Luisa Panichi, and Judith Molka-Danielsen. 2009. “Designing Oral Participation in Second Life: A Comparative Study of Two Language Proficiency Courses.” ReCALL 21, no. 2 (May): 206–26.

Donalek, Ciro, George Djorgovski, A. Cioc, A. Wang, J. Zhang, E. Lawler, S. Yeh, et al. 2014. “Immersive and Collaborative Data Visualization Using Virtual Reality Platforms.” In Proceedings of 2014 IEEE International Conference on Big Data, Washington, DC, Oct. 27–30, 609–14.

Ellis, Barrie, Gareth Ford-Williams, Lynsey Graham, Dimitris Grammenos, Ian Hamilton, Headstrong Games, Ed Lee, Jake Manion, and Thomas Westin. n.d. “Game accessibility guidelines.” Game accessibility guidelines. Accessed Dec. 13, 2019.

Enamorado, Sofia. 2019. “The CVAA & Video Game Accessibility.” 3Play Media.

Fabris, Christian, Joseph Rathner, Angelina Fong, and Charles Sevigny. 2019. “Virtual Reality in Higher Education.” International Journal of Innovation in Science and Mathematics Education 27: 69–80.

Holmberg, Kim and Isto Huvila. 2008. “Learning Together Apart: Distance Education in a Virtual World.” First Monday 13, no. 10 (October).

Jaeger, Paul T. 2018. “Designing for Diversity and Designing for Disability: New Opportunities for Libraries to Expand Their Support and Advocacy for People with Disabilities.” The International Journal of Information, Diversity, & Inclusion 2, no. 1–2: 52–66.

Jang, Susan, Jonathan M. Vitale, Robert W. Jyung, and John B. Black. 2017. “Direct Manipulation is Better than Passive Viewing for Learning Anatomy in a Three-dimensional Virtual Reality Environment.” Computers & Education 106: 150–65.

Johnson-Glenberg, Mina C. 2018. “Immersive VR and Education: Embodied Design Principles that Include Gesture and Hand Controls.” Frontiers in Robotics and AI 5, art. 81 (July): 1–19.

Johnston, Elizabeth, Gerald Olivas, Patricia Steele, Cassandra Smith and Liston Bailey. 2018. “Exploring Pedagogical Foundations of Existing Virtual Reality Educational Applications: A Content Analysis Study.” Journal of Educational Technology Systems 46, no. 4: 414–39.

Kersten-Oertel, Marta, Sean Jy-Shyang Chen, and D. Louis Collins. 2014. “An Evaluation of Depth Enhancing Perceptual Cues for Vascular Volume Visualization in Neurosurgery.” IEEE Transactions on Visualization and Computer Graphics 20, no. 3: 391–403.

Laha, Bireswar, Doug A. Bowman, and John J. Socha. 2014. “Effects of VR System Fidelity on Analyzing Isosurface Visualization of Volume Datasets.” IEEE Transactions on Visualization & Computer Graphics 4: 513–22.

Laurent Clerc National Deaf Education Center. 2015. “Working with Interpreters.” Gallaudet University.

Lischer-Katz, Zack, Matt Cook, and Kristal Boulden. 2018. “Evaluating the Impact of a Virtual Reality Workstation in an Academic Library: Methodology and Preliminary Findings.” In Proceedings of the Association for Information Science and Technology Annual Conference, Vancouver, Canada, Nov. 9–14, 300–8.

Lucchesi, Andres. 2015. “Introduction to Special Issue: Disability Studies Approaches to Pedagogy, Research, and Design.” Journal of Interactive Technology & Pedagogy 8.

Lund, Brady D. and Ting Wang. 2019. “Effect of Virtual Reality on Learning Motivation and Academic Performance: What Value May VR Have for Library Instruction?” Kansas Library Association College and University Libraries Section Proceedings 9, no. 1: 1–7.

Milovanovic, J. 2017. “Virtual and Augmented Reality in Architectural Design and Education.” In Proceedings of the 17th International Conference, CAAD Futures, Istanbul, Turkey, July.

Mirza, Rafia and Maura Seale. 2017. “Who Killed the World? White Masculinity and the Technocratic Library of the Future.” In Topographies of Whiteness: Mapping Whiteness in Library and Information Science, edited by Gina Schlesselman-Tarango, 171–97. Sacramento, CA: Library Juice Press.

Mott, Martez, Ed Cutrell, Mar Gonzalez Franco, Christian Holz, Eyal Ofek, Richard Stoakley, and Meredith Ringel Morris. 2019. “Accessible by Design: An Opportunity for Virtual Reality.” ISMAR 2019 Workshop on Mixed Reality and Accessibility.

Ni, Tao, Doug A. Bowman, and Jian Chen. 2006. “Increased Display Size and Resolution Improve Task Performance in Information-rich Virtual Environments.” In Proceedings of Graphics Interface, Quebec City, Canada, June 7–9, 139–46.

Nicholson, Karen P. 2015. “The McDonaldization of Academic Libraries and the Values of Transformational Change.” College & Research Libraries 76, no. 3: 328–38.

Patterson, Brandon, Tallie Casucci, Thomas Ferrill and Greg Hatch. 2019. “Play, Education, and Research: Exploring Virtual Reality through Libraries.” In Beyond Reality: Augmented, Virtual, and Mixed Reality in the Library, edited by Kenneth J. Varnum, 47–56. Chicago: ALA Editions.

Pober, Elizabeth E. and Matt Cook. 2016. “The Design and Development of an Immersive Learning System for Spatial Analysis and Visual Cognition.” In Proceedings of 2016 Conference of the Design Communication Association, Bozeman, MT.

Prabhat, Andrew Forsberg, Michael Katzourin, Kristi Wharton, and Mel Slater. 2008. “A Comparative Study of Desktop, Fishtank, and CAVE Systems for the Exploration of Volume Rendered Confocal Data Sets.” IEEE Transactions on Visualization and Computer Graphics 14, no. 3 (May–June): 551–63.

Prasolova-Førland, Ekaterina, Alexei Sourin, and Olga Sourina. 2006. “Cybercampuses: Design Issues and Future Directions.” Visual Computer 22, no. 12: 1015–28.

Radianti, Jaziar, Tim A. Majchrzak, Jennifer Fromm, and Isabell Wohlgenannt. 2020. “A Systematic Review of Immersive Virtual Reality Applications for Higher Education: Design Elements, Lessons Learned, and Research Agenda.” Computers & Education 147: 1–29.

Ragan, Eric D., Regis Kopper, Philip Schuchardt, and Doug A. Bowman. 2013. “Studying the Effects of Stereo, Head Tracking, and Field of Regard on a Small-scale Spatial Judgment Task.” IEEE Transactions on Visualization and Computer Graphics 19, no. 5: 886–96.

Schaffhauser, Dian. 2017. “Multi-campus VR Session Tours Remote Cave Art.” Campus Technology, October 9.

Schneider, Sven, Saskia Kuliga, Christoph Hölscher, Ruth Conroy-Dalton, André Kunert, Alexander Kulik, and Dirk Donath. 2013. “Educating Architecture Students to Design Buildings from the Inside Out.” In Proceedings of the 9th International Space Syntax Symposium, edited by Y.O. Kim, H.T. Park and K.W. Seo, Seoul, Korea.

Temple University Libraries. n.d. “Loretta C. Duckworth Scholars Studio.” Temple University Libraries. Accessed Dec. 13, 2019.

U.S. Department of Education, National Center for Education Statistics. 2019. Digest of Education Statistics, 2017 (2018–070).

U.S. General Services Administration. n.d. “About Us.” GSA Government-wide IT Accessibility Program. Accessed Dec. 13, 2019.

Ware, Colin and Peter Mitchell. 2005. “Reevaluating Stereo and Motion Cues for Visualizing Graphs in Three Dimensions.” In Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization, 51–8.

W3C. 2019. “Inclusive Design for Immersive Web standards.” W3C.

W3C Web Accessibility Initiative. 2019. “Making the Web Accessible.” Web Accessibility Initiative.

W3C Web Accessibility Initiative. 2019. “Web Content Accessibility Guidelines (WCAG) Overview.” Web Accessibility Initiative.

Wentz, Brian, Paul T. Jaeger, and Jonathan Lazar. 2011. “Retrofitting Accessibility: The Legal Inequality of After–the–fact Online Access for Persons with Disabilities in the United States.” First Monday 16, no. 11 (November).

Wiegand, Wayne A. 1999. “Tunnel Vision and Blind Spots: What the Past Tells Us about the Present; Reflections on the Twentieth-Century History of American Librarianship.” The Library Quarterly 69, no. 1: 1–32.

Winner, Langdon. 1986. The Whale and the Reactor: A Search for Limits in an Age of Technology. Chicago: University of Chicago Press.

Wong, Alice, Hannah Gillis, and Ben Peck. 2018. “VR Accessibility: Survey for People with Disabilities.” Disability Visibility Project & ILMxLAB.

Zhao, Yuhang, Edward Cutrell, Christian Holz, Meredith Ringel Morris, Eyal Ofek, and Andrew D. Wilson. 2019. “SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision.” In Proceedings of CHI 2019, Glasgow, Scotland, May 4–9.

About the Authors

Jasmine Clark is the Digital Scholarship Librarian at Temple University. Her primary areas of research are accessibility and metadata in emerging technology and emerging technology centers. Currently, she is co-leading The Virtual Blockson, a project to recreate the Charles L. Blockson Afro-American Collection in virtual reality, while also doing research on 3D metadata and the development of Section 508 compliant guidelines for virtual reality experiences. Jasmine has experience in a variety of functional areas and departments, including metadata, archives, digital scholarship, and communications and development. She is interested in the ways information organizations can integrate accessible, inclusive practices into their services, hiring, and management practices.

Zack Lischer-Katz is a postdoctoral research fellow at University of Oklahoma Libraries. From 2016 to 2018 he was a Council on Library and Information Resources (CLIR) Postdoctoral Fellow. He employs qualitative-interpretive methodologies to examine visual information preservation and curation in information institutions, with a focus on complex data types, such as virtual reality, 3D, and audiovisual formats. His research has appeared in Library Trends, International Journal of Digital Curation, Information Technology and Libraries, and First Monday. He received his PhD in Communication, Information, & Library Studies from Rutgers University and his MA in Cinema Studies from New York University.

360° photograph of Junipero Serra statue and campus lawn, displayed in Google Tour Creator interface with digital annotation icons.

Representing Indigenous Histories Using XR Technologies in the Classroom


In this article, we describe the major assignments from our team-taught course, Virtual Santa Clara, which drew on the affordances of extended reality (XR) technologies and public memory scholarship from the fields of rhetoric and anthropology to represent Native Ohlone history and culture on our campus. Based on our experience, we argue for the affordances of producing small-scale XR projects—using technologies such as 360° images and 3D models—to complement and contribute to larger-scale XR digital projects that are founded on deep community collaboration. In a landscape where exciting technological work so often tends to entail thoroughly developed, large-scale projects, we argue for the value of more modest contributions, both as scaffolded pathways into technology work for teachers and students and as a means of slowing down the process of technology adoption in order to better respond to ethical, humanistic, and decolonial considerations. Our own incremental process enabled us to proceed with more care, more caution, and, ultimately, a more collaborative framework going forward.

New technologies offer exciting possibilities for the intersections of public memory and pedagogy in post-secondary education. Heritage professionals in many parts of the world have used new media, including extended reality (XR), to create alternative ways of viewing, interacting with, and ultimately experiencing the heritage of particular places (e.g., Green and Jones 2019; Malpas 2008; Michon and Antably 2013). The appeal of these approaches, which in many instances can challenge what Smith (2006) refers to as the “authorized heritage discourse,” translates easily to the classroom, where students and professionals alike are eager to move beyond traditional coursework and make meaningful contributions through their research and composition (Watrall 2019). Yet the realm of digital cultural heritage opens new ethical considerations and in many cases requires deep collaboration with affected communities (Csoba DeHass and Taitt 2018; Haas 2005; Haukaas and Hodgetts 2016; Townsend et al. 2020). Accordingly, a slower pace of development may better serve our students and our community collaborators. In this article, we examine these issues as they relate to our attempts to engage students in collaborative digital projects at Santa Clara University in California.[1]

Hailed as the state’s oldest institution of higher education and the only university established at one of California’s 21 colonial-era missions, Santa Clara University (SCU) celebrates its history as central to its identity. Images of Mission Santa Clara are featured on the school’s official logo, and the reconstructed mission church serves as the visual centerpiece of the institution’s built environment. The palm-lined entrance to campus and the ubiquity of mission revival architecture serve to extend the central imagery of the mission seamlessly into the surrounding neighborhoods. The effect is a beautiful and unified campus space, suggesting a unitary and uncomplicated sense of history. That is, the structures of “authorized heritage discourse” (Smith 2006) or “official memory” (Bodnar 1993) are firmly, if not exclusively, dedicated to celebrating the Mission and the Western perspectives and values it represents.

The historicity of the contemporary campus, however, masks a more complicated colonial history (Trouillot 1995). Particularly absent is any meaningful public acknowledgment of the thousands of Native Americans who lived at Mission Santa Clara during the colonial period (ca. 1777–1840s) or the Indigenous groups, today known collectively as the Ohlone, who lived in the region for millennia prior to the arrival of Europeans. Indeed, this Native history has been erased by the construction of the SCU campus, and what Native recognition exists is confined to the margins: modest plaques at the edges of campus and small exhibits tucked away into basements. In these ways, Native experiences and histories are contained, rhetorically and materially isolated from the broader history and living memory at SCU. The unified aesthetic of the campus memoryscape is accomplished at the expense of both historical and ethical opportunities for learning and reflection among students, faculty, staff, and visitors alike.

Our recent team-taught course, Virtual Santa Clara, sought to use immersive technologies to address this omission of Native history and memory at SCU. Applying rhetorical and anthropological research methods and digital technologies, we sought ways of using undergraduate coursework to contribute to the work of reframing campus as a polysemous site of Indigenous history and culture. In this article, we describe our course design and implementation process to these ends, exploring the affordances and limitations of using immersive technologies in a public history course such as our own. Specifically, we recognize the ways the small-scale immersive projects we implemented complement and contribute to larger-scale XR digital projects founded on community collaboration.

We use terms like immersive projects or XR projects to designate those projects that utilize VR or AR functionality (like 3D imaging and manipulability), while not being fully fleshed out VR or AR experiences. In a landscape where exciting technological work so often tends to entail thoroughly developed, large-scale projects, we argue for the value of more modest contributions, both as scaffolded pathways into technology work for teachers and students and as a means of slowing down the process of technology adoption in order to better respond to ethical, humanistic, and decolonial considerations. Our own incremental process enabled us to proceed with more care, more caution, and, ultimately, a more collaborative framework going forward.

We begin by theorizing digital and immersive technologies as a means of engaging Native history and public memory in our course. We then discuss the three major projects students produced to experiment with this work: 360° immersive tours analyzing the campus as commemorative space, annotated 3D models of Ohlone artifacts, and proposals for large-scale projects using immersive technologies to represent Native history and culture on our campus. We close by sharing our reflections on how to use digital technologies to engage campus public memory work collaboratively and responsibly. While arguing for the affordances of immersive technologies for supplementing and speaking back to more formal, top-down commemorative features of the campus space as a “place of public memory” (Blair, Dickinson, and Ott 2010, 2), we explore the challenges of implementing technology projects in courses and share our initial insights and strategies for others interested in engaging this kind of work.

Course Background and the Role of Immersive Technologies

Virtual Santa Clara was collaboratively designed and taught by faculty in English and Anthropology in Spring 2019. The faculty members came together to teach this course after each having taught similar courses in their own departments. Amy had taught archival research and writing courses exploring the gendered and racialized histories of Santa Clara University, but was increasingly dissatisfied by the limited conception of campus stakeholders and histories communicated by that course design, and vexed by her inability to effectively account for Native histories and experiences in teaching it. Meanwhile, Lee had taught a course called Virtual Santa Clara from solely within Anthropology, but was interested in putting rhetorical perspectives and a more explicit attention to student writing development in service of historical content knowledge. Drawing on work by public memory scholars in both writing studies and heritage studies, the instructors hoped this new course would push students to consider the ways their own writing could contribute to the public memory work of the campus and enhance recognition of Native history and culture of that space.

As described in the syllabus, this new course explored what we called the “difficult history” of Mission Santa Clara, with a particular emphasis on archival and archaeological materials associated with the Indigenous people, particularly the Ohlone, whose lands and livelihoods were upended by Euro-American colonialism. Despite an ongoing lack of federal tribal recognition, the Ohlone trace their connection to this land, which they call Thamien, across millennia. During the colonial period, Franciscan missionaries working for the Spanish Crown sought to convert local Ohlone people not just to Catholicism but also to European lifeways. Labor was a cornerstone of the missionary project, and it was Ohlone people who built the original structures that comprised Mission Santa Clara on what is today our campus. The mission’s baptismal records hold the names of more than 11,000 individuals, the vast majority of whom were from Ohlone communities or other neighboring tribes. Despite the severe constraints of colonialism, these people outlasted the mission system and today comprise several interrelated tribal communities in the San Francisco Bay area (Leventhal et al. 1994; Panich 2020).

Students learned about this history through consideration of the primary documentary and archaeological record, its associated secondary literature, and through conversations with Andrew Galvan, a representative of one Ohlone group that traces its ancestry through Mission Santa Clara (and a person with decades of professional experience in the public interpretation of the California Missions [Galvan and Medina 2018]). By researching existing histories and representations of our university, students critically reflected on how we tell “our history”—who is included or excluded? What kinds of evidence are marshaled (or disregarded), and what social and material forces are accounted for in the production and preservation of that evidence? What social/political/material conditions in the present shape our conceptions of our past? As a result of these considerations, the question that this course ultimately raised was, what technologies and genres are available to us for re-writing these histories toward more just and equitable ends? Our assumption here was that, with the increased potential for access and circulation of student-authored work afforded by the internet and mobile technologies, we could leverage the labor and resources of the classroom to contribute to public education, helping to reshape the landscape of public commemoration (and thus public memory) on our campus. Aligned with similar efforts like the Georgetown Memory Project, we see this course as examining and redressing silences and violence in our historical narratives through engaged student research and writing.

The new version of the Virtual Santa Clara course was designed specifically with the “virtual” possibilities of public memory and historical representation at its core. While the class had always involved online composing for the public (specifically, the composition of websites), the new course emphasized specific rhetorical considerations of writing in online spaces and for public audiences (including considerations of style and arrangement), and sought to expand the effectiveness and interactive potential of student projects and, hence, their potential to shape public knowledge through immersive experiences.

Recognizing the inflexible and conservative nature of the campus built environment, we also chose to use immersive digital technologies for this course as a direct challenge to the limits of official, material installations, extending the “commemorative landscape” of the campus (Aden 2018) and empowering students to compose public remembrance, to “author the built environment” (Tinnell 2017, xii). As John Tinnell observes, “The discourse conventions that have regulated print texts and sculptural interventions in public space…hold little sway in contemporary digital cultures” (xviii).

Originally, this plan entailed the creation of a full AR walking tour using a platform such as BlippAR or LayAR. We were interested in AR technology in particular because, as Jacob Greene and Madison Jones argue, “By integrating digital counter-discourses within spaces where information is often tightly controlled and highly regulated (such [as] iconic city streets or busy urban intersections), location-based AR projects work to re-articulate dominant narratives about a given space” (Greene and Jones 2019, np). Working in collaboration with Andrew Galvan to guide our interpretations, we sought to enlist students in the production of such counter-discourses that would disrupt the unified Eurocentric memoryscape of our campus. However, we faced two limitations in this assignment design.

The primary limitation was simply time. Confined to a ten-week academic term, we were unable to design a course outline that did justice to our historical and rhetorical learning outcomes while also teaching the technical skills needed for the creation and curation of digital assets for such a project (cf. Allred 2017). The second concern we had was what technologists refer to as extensibility, or what public memory scholars might call durability (Blair 1999). Having taught courses in the past in which students produced digital projects that were either technologically unsupported over time or simply languished in isolation on the web, we were committed to creating projects that would have both real audiences and a future. We understood that existing proprietary platforms available for AR did not yet have a very long shelf life (see, for example, Greene and Jones’s use of Aurasma, which was purchased by HP, rebranded as HP Reveal, and then discontinued—an incident that they argue is “emblematic of the ongoing corporatization of augmented space” [np]). This is significant because, as Blair rightly observes, the durability and longevity of commemorative installations contributes to an audience’s sense of its importance (1999, 37)—a point that Hess expands to include digital commemorations as well (2007, 821). Unable to identify a reliable open-source platform for our AR project creation at the time, we altered the assignment scope to engage students in smaller-scale digital projects that would both function independently and also constitute a body of digital assets on which we could draw for large-scale immersive projects in the future. As we will argue, this incremental process served a valuable role in enabling a less colonizing approach to the production of digital public memory work in our class.[2]

Further, the assignments that students ultimately produced represented valuable Extended Reality (XR) projects in their own right, as they allowed students to immerse themselves and their users in digital locations and interact with digital objects as a means of engaging Native history. The three separate but interrelated immersive projects we developed—360° video tours analyzing the campus as commemorative space, annotated 3D models of Native artifacts from Mission Santa Clara, and large-scale project proposals for using immersive technologies to represent Native history and culture at SCU—allowed us to experiment with and analyze the potential of immersive technology for Native public memory work, engaging students in critical/analytical, productive, and imaginative postures, respectively, all while building a repository of digital assets to be leveraged in a more ambitious and comprehensive Ohlone-designed digital project in the future (to be discussed in more detail below). Of course, these projects had limitations of their own. In what follows, we discuss these three projects and share samples of the resulting student work in order to consider the affordances and limitations of these nascent XR assignments for digital public memory work. By outlining these assignments, we hope to provide insights into the potentials of more modest XR projects for those in the early stages of adopting these technologies in the classroom.

360° tours

The first major project students undertook was a set of 360° tours. Based on their knowledge of Native history and archaeology at SCU, students selected a site (whether publicly marked or not), captured 360° images of it, and conducted a critical analysis of the history it represented, including attention to spatial arrangement and to which evidence, figures, and experiences were emphasized and which were excluded. The learning outcomes of this assignment included evaluating the rhetorical effect of specific features of a commemorative site, applying course terms and concepts to the analysis of a local site of public memory, and using 360° technology to thoughtfully represent a physical site. Students were also tasked with composing with a consideration of the audience and of their own role in contributing to public memory (see Figure 1).

A vista of the university quad is visible, with the statue of Junipero Serra visible on the left.
Figure 1. Screenshot from 360° Virtual Tour composed by Raymond Hartjen and Aiden Rupert, 2019.

To do so, students first captured images of their selected site using Insta360 cameras.[4] These cameras capture 360° images that are viewable in Google Cardboard viewers or head-mounted displays (HMDs), as well as interactively on a PC or mobile device. Students uploaded the image files to Google Tour Creator, where they annotated them to point out specific features of the site they were analyzing that contributed to or complicated their interpretation. Using these digital annotations, students composed an evidence-based argument interpreting the location as a site of public memory. Some guiding questions they considered in their analyses included:

  • What is the explicit and implicit argument this site makes, and what specific features lend themselves to (or complicate) that argument?
  • Who is the audience or “public” imagined by the site?
  • How does this site represent or engage a sense of history, the present, and/or the future?
  • What are the roles of the body, movement, and space in the experience of this site?
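A note on the mechanics, for readers building similar assignments: in most 360° tooling, an annotation is pinned to a viewing direction (a yaw and a pitch), and the underlying equirectangular image maps those angles to pixel coordinates linearly. A minimal sketch of that mapping (the function name and angle conventions here are ours, not those of any particular platform):

```python
def direction_to_pixel(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction to pixel coordinates in an equirectangular image.

    yaw_deg:   horizontal angle in degrees, -180 (behind-left) to 180 (behind-right),
               with 0 at the horizontal center of the image
    pitch_deg: vertical angle in degrees, -90 (straight down) to 90 (straight up),
               with 0 at the horizon line (vertical center)
    """
    # Equirectangular projection is a linear mapping in both axes.
    x = (yaw_deg / 360.0 + 0.5) * width
    y = (0.5 - pitch_deg / 180.0) * height
    return (x, y)
```

One consequence of this linear mapping is that the top and bottom pixel rows each collapse to a single viewing direction, which is why annotations placed near the zenith or nadir of a 360° capture tend to drift and distort.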

While students were tasked with presenting their critical interpretations for a public reader, the main thrust of this assignment was critical/analytical—focused on understanding the rhetorical work of the campus rather than producing their own historical representations. Conversations with Andrew Galvan, the Ohlone representative, pushed the students to consider the relevance of historical monuments (or their absence) to descendant communities, for whom the colonial period remains vital to their ongoing struggle for autonomy and recognition.

Usually, this kind of spatial analysis assignment would entail students producing an extended description of the place to preface the analysis, translating material and spatial features of the environment into academic prose. While this process of translation has its own benefits, one effect of it is that the rhetorical-spatial features and their functions are dislodged from their physical and material context. In this process, an immediacy and relevance are often lost, as the resulting academic arguments are similarly dislodged and dissociated from the real and semiotically abundant physical site itself. But using immersive technology allowed students to digitally mark up their physical surroundings in (what they experienced as) a more immediate way. While still working with a representation, the ability to comment directly on features of their environment via digital annotation provided students (and their readers) with a less mediated experience of the environment than an alphabetic representation allowed. The texts they produced sought to capture the feeling of “being through there” (Dickinson and Aiello 2016) that they experienced, and encouraged them to attend to the rhetorical effect of embodied presence at the site. At the same time, the ability to consider the campus space while not present enabled a particular kind of critical-analytical work by defamiliarizing the place and, thus, generating critical distance and space for reflection among students.

Further, the technology provided an additional representational layer allowing students not only to analyze what is present in the commemorative landscape but also to reveal the histories that have been effectively erased from our campus, such as unmarked mission cemeteries. While this analysis and historical augmentation could be accomplished discursively, students and their readers benefited from the ability to map other commemorative possibilities directly onto the existing physical landscape. Just as mobile technologies allow users to access the “embodied knowledge of [a] city” by extending the affordances of digital mapping software into physical spaces of the everyday, so can immersive representations capture the physical spaces of daily life and subject them to the critical gaze of digital markup and manipulation (Kalin and Frith 2016). While it is true, as Jason Kalin and Jordan Frith argue, that these platforms privilege “engagement with a spatial representation over engagement with physical space,” losing out on the “optical knowledge” gathered from traversing a real, material environment (224), we also argue that capturing that unfolding experience of being in place and freezing it in time is a powerful tool for deepening students’ analysis as well as sharing their findings with those not present on site.

Annotated 3D models

Moving from a more analytical posture to a productive one, the next major assignment was the creation of interactive 3D models. For this assignment, students used a mid-tier 3D scanner (HP Pro S3) to produce 3D models of archaeological artifacts and annotate them with interpretive information for a public audience. The annotations described and contextualized the artifact, and provided an interpretive frame for what they thought the audience should notice or understand about the meaning of this object. Thus, students were asked to consider not only what the “factual history” of the object is, but also what narratives the object helps contribute to the public memory of our campus. Here, too, students were asked to compose with a consideration of audience and the student’s own role in contributing to public memory. A critical difference between this project and the 360° scans was the shift from a focus on SCU’s physical environs to the more intimate domain of objects that were made and/or used by Native people who lived at Mission Santa Clara, a difference that brings to the fore a host of practical and ethical concerns (e.g., Csoba DeHass and Taitt 2018; Haukaas and Hodgetts 2016), many of which we discussed with Andrew Galvan in a class visit prior to the beginning of the assignment (see Figure 2).

Figure 2. 3D Scan by Raymond Hartjen and Aiden Rupert 2019, housed on Sketchfab. Used with permission.

This assignment began with an exercise on writing descriptions that attend to the rhetorical work of detail selection and emphasis, helping students to disrupt the assumption of an objective scientific stance and recognize the rhetorical nature of all writing. Students then explored their ideas of the significance of the artifact, tracing their attributions of significance to their own personal experiences and biases, or what Burke calls their “terministic screens” (1966, 44). By comparing their descriptions with those of their classmates and examining the effects of those decisions, students came to understand even this “simple” act of composing as highly rhetorical memory work. This issue was further illustrated by their conversations with Andrew Galvan, who pushed them to consider still other ideas about the significance of the objects they had chosen.

With this in mind, students selected features of their artifacts to highlight through digital annotation and composed a brief interpretive description of their chosen artifact. This task required the students to grapple with the materiality of the objects they had chosen, building competency in the visual analysis of objects (Macaulay-Lewis 2015). This was manifested both in terms of the technology (the software we used had trouble creating models of flat objects such as buttons or coins) and in questions about which of the object’s attributes would benefit from textual annotations. The annotated models were uploaded to the campus’s public-facing account on Sketchfab, a popular site for sharing 3D objects and models. Here, too, the students made rhetorical choices about how the objects are displayed, including lighting and initial orientation. By using Sketchfab, as opposed to non-public storage solutions, the 3D models contribute to a growing repository of digital assets that can be accessed by researchers and public users today, and also be leveraged for future digital projects, once a critical mass of cultural materials has been successfully created—a goal of our ongoing work with the Ohlone community.
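For instructors scripting batch uploads to a shared account rather than using the web interface, Sketchfab also exposes a data API. The sketch below only assembles the model metadata and the authorization header; the actual upload is a token-authenticated multipart POST of the model archive, whose exact endpoint and field layout should be taken from Sketchfab’s current API documentation (the constant and field names here reflect our reading of it, not a guaranteed contract):

```python
# Assumed endpoint for Sketchfab's Data API v3; verify against current docs.
SKETCHFAB_MODELS_ENDPOINT = "https://api.sketchfab.com/v3/models"

def build_upload_fields(name, description, tags, private=True):
    """Assemble metadata fields to send alongside the scanned model file.

    Defaulting to private lets instructors and community partners review an
    annotated model before it joins the public-facing account.
    """
    return {
        "name": name,
        "description": description,
        # Assumption: tags submitted as a space-separated string.
        "tags": " ".join(tags),
        "private": private,
    }

def auth_header(api_token):
    """Sketchfab authenticates simple uploads with a per-account API token."""
    return {"Authorization": f"Token {api_token}"}
```

Keeping the metadata step explicit also makes a useful teaching moment: the name, description, and tags are themselves rhetorical choices that shape how a public audience encounters the object.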

Final proposals

Following these two initial assignments, students pushed the question of public memory by further researching and revising their 360° and/or 3D projects into digital exhibits. Using one of the previously used platforms or Google Sites to incorporate archival, historical, critical/theoretical, and/or archaeological research materials, they produced thoroughly researched and polished compositions that were to be suitable for a public audience (either on a traditional web browser or mobile VR).

However, recognizing the limits of the academic term, we wanted an opportunity to harness some of the students’ creative insights and technological ideas to inform future project possibilities as well. So, for their final assignment, students created project proposals (addressed either to the university or to an outside granting agency) that would extend the work and thinking we had done in class beyond what we were able to accomplish in ten weeks. The resulting documents proposed changes to the way Native history is presented on campus, whether through alterations to the physical landscape or through virtual representations. In doing so, students demonstrated their overall understanding of Ohlone history and historical representation on our campus, articulated the significance of this kind of memory work, and applied our thinking about historical memory production beyond the limited projects and technologies with which we were able to work during the quarter.

To prepare for this project, students analyzed sample digital projects from other campuses and visited the Imaginarium VR lab on our campus to experiment with immersive games and experiences related to public history and Native culture, including Boulevard, Native American App, and Ward & Cartouches. These experiences were meant to inspire them to consider ways technology could be further utilized to engage commemorative work. We imagined that we could thus sidestep the challenges posed by technological expertise and harness the creative energies of students to seed future digital memory projects. In these projects, students showcased a wide range of creative approaches that far exceeded those we had imagined ourselves, from relatively modest suggestions relating to relocating existing statuary to ambitious interdisciplinary projects utilizing VR headsets. In all cases, the traces of previous analytical and compositional experiences were evident in these proposals, which almost uniformly attended to the significance of spatial rhetorics, presence, and interaction in thinking about the ways the public interacts with the history and memory of a place. We believe it was both their own presence on campus and their use of immersive technologies to analyze that experience of presence that led to the most exciting insights in those projects, as students drew on their own deep knowledge of the campus space to inform their plans for digitally altering it (see Figure 3).

Map of Santa Clara University campus marked with red tour route.
Figure 3. Image from Raymond Hartjen’s proposal, 2019. Used with permission.

To take just one example, student Raymond Hartjen proposed an Augmented Reality tour, which he called the Augmented Native Santa Clara Experience (ANSCE). He explains the proposal:

Using AI/GPS tracking, AR digital reconstructions, and historical annotations, visitors will be able to experience aspects of the Native American past that are not easily accessible or understood today. Digital representations of historic Native settlements or Mission-era structures of Native occupation will be layered over the existing campus structures through smartphone camera functionalities, therefore immersing visitors in a world that has influenced as well as been impacted by the corresponding modern space. Ultimately, this will benefit both local and distant communities alike by creating a more inclusive representation of the Mission past that will be crucial in constructing future notions of public memory (Hartjen 2).

Hartjen’s discussion throughout the proposal merged his own deep knowledge of the campus space, his growing knowledge of Native histories and experiences, and his understanding of public commemoration as an unfolding and ever-shifting process. And, perhaps most importantly, Hartjen and other students explicitly acknowledged the importance of ongoing Ohlone consultation in any project development efforts. In these ways, students developed critical technological literacies alongside attention to ongoing colonial violence and the need for decolonial methodologies in approaching this work.
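At a technical level, the GPS-triggered layering Hartjen envisions rests on a simple proximity check: compare the phone’s current position against each tour stop and activate an overlay when the user is within some radius. A minimal sketch, with stop names and coordinates that are purely illustrative rather than drawn from the proposal:

```python
import math

EARTH_RADIUS_M = 6371000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes (haversine formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def nearby_overlays(user_lat, user_lon, sites, radius_m=25):
    """Return the tour stops whose AR overlay should activate for this GPS fix."""
    return [s for s in sites
            if haversine_m(user_lat, user_lon, s["lat"], s["lon"]) <= radius_m]

# Hypothetical tour stops; coordinates are illustrative, not surveyed locations.
stops = [
    {"name": "Mission church reconstruction", "lat": 37.3496, "lon": -121.9390},
    {"name": "Unmarked mission cemetery", "lat": 37.3502, "lon": -121.9405},
]
```

In practice the activation radius has to account for consumer GPS error of several meters, which is one reason campus-scale AR tours often pair GPS with visual markers or on-screen prompts.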

Cautions and Future Directions

“[B]y what measures shall we gauge the value or harm of various digital initiatives to author the built environment?” (Tinnell 2017, xviii)

The projects students produced and imagined in Virtual Santa Clara have begun to fill a gap in public commemoration on our campus, building a repository of immersive and interactive digital assets that will be drawn on in future courses and public memory efforts, as well as a pool of ideas to inspire our Native and non-Native collaborators in their designs. Students engaged XR critically, attentive both to the political work of representing stories that were not their own and to the affordances and limitations of the technologies they used to do so.

Particularly given the legacy of colonialism that shaped both the history of the SCU campus and our own positions as teachers, students, and researchers, we were mindful to cultivate a “critical digital literacy”—one that went beyond the goal of inclusion to attend to colonizing, essentializing, fetishizing, or otherwise limiting potentials of the digital work we analyzed and produced.[5] Immersive technology contributes to the development of such critical literacies because it “encourages citizens to see their everyday environment as a networked phenomenon emerging from a series of rhetorically contingent relationships between material and immaterial (and human and non-human) entities” (Greene and Jones). That is, XR positions students as analysts as well as creators of commemorative landscapes, alert to the relations of power and influence that shape these constructions, both virtual and physical.

At the same time, engaging these digital projects has its own risks as well. As Tinnell cautions, “We are racing to adopt new information spaces, new archives, without giving much thought to the (unique) forms of expression they might enable and constrain. The lauded technical feats of digital-physical convergence do not come preinstalled with literary, artistic, or rhetorical innovations” (2017, 11). The risk of this headlong rush may be particularly pronounced in relation to cultural heritage projects such as ours, with the potential to re-colonize Native stories and experiences. A challenge in this regard has been facilitating student research and writing that could support what Angela Haas (building on the work of Scott Lyons) has termed “digital rhetorical sovereignty, where American Indians can share their own stories in their own words” (Haas 2005, np). We are still seeking more ethical and effective ways to work on digital public memory pedagogies that are guided by Native stakeholders and their priorities.

Using this course as a first step towards a more decolonial approach, we have continued to build relationships with Ohlone stakeholders in order to engage (and also study) this commemorative process. Based on the success of this initial effort, we have secured new grants to continue this conversation, including one grant to work with Ohlone tribal members to develop college-level curricular materials that are directly shaped by Ohlone priorities, values, and perspectives. Another grant will allow us to work in deep consultation with Ohlone members to design a large-scale digital public memory project that uses VR or AR technologies to engage the public with Ohlone history and culture related to this landbase.

A key part of this process is slowing down and making the challenges and opportunities of digital rhetorical sovereignty part of the process. As Jacob Greene and Madison Jones caution, “It is important that scholars of computers and writing continue to interrogate the rhetorical potential of this emerging computing paradigm by detailing the design choices made throughout the creation of a mobile media project” (Greene and Jones 2019, np). Our project serves as a reminder that, because a significant aspect of the design process in Native public memory projects must be consultation with the affected tribe(s), the pedagogical plans must allow sufficiently for that consultation in an iterative process of design and feedback—which may make larger and more formalized projects more challenging within the confines of an academic quarter or semester. Because navigating new technologies is a significant task, on top of the necessary work of building relationships with Native stakeholders, our experience underscores the risk of what Katrine Barber calls “soft technologies of violence,” such as the creation of deadlines that don’t permit sufficient reflection or thorough consultation within tribes (2013, 31). Thus, we argue that small-scale immersive projects can move the needle on more inclusive historical representation of our campuses while allowing the time for broader consultation and collaboration that is necessary for more fully decolonial practice. Sharing our process is our attempt to support such public memory work within XR and other digital media projects and pedagogies in the future.

As our work takes place on the campus of Santa Clara University, students are an integral component of the public memory projects we create. At the practical level, we hope to provide ethical, collaborative frameworks to our students, who may become part of the next generation of digital heritage practitioners. This means paying careful attention to community concerns and also, as we learned ourselves, choosing projects that are both scalable and, perhaps more importantly, achievable within the constraints of a particular academic term. But we also see students’ digital composition, in collaboration with local communities, as a way to realize the promise suggested by Malpas (2008) to use new media as a way to instill a deeper sense of place, and to actively use their rhetorical skills to shape public memory in response. Though left out of the official memoryscape dominated by physical monuments and markers, students at SCU are deeply concerned about the (lack of) representation of Native history related to Mission Santa Clara and the deeper Indigenous heritage of our campus. By offering digital projects that engage those histories, we hope to include our students in bringing about the changes that they and the local Native community collectively wish to see.


[1] Both authors are non-Natives who have come to the study of Indigenous history and representation through their respective disciplines, English and Anthropology, and their relation to the site of acute colonial activity that is Mission Santa Clara.

[2] At the same time, we take caution from la paperson that “only the bad guys build things that last forever” (2017, 70). In aspiring towards a more decolonial university, we want to remain alert to the ways colonial relations are continually (re)produced within institutions, and continue to critically reflect on the form and function of our pedagogical, technological, and commemorative goals.

[3] Following the model of Pamela VanHaitsma, who herself draws on Stacey Waite’s work on queer pedagogy, we approached students as fellow critics making meaning of our shared space together with us. Thus, this essay quotes and cites their work, but only with written permission, and identifies them or maintains anonymity based on their preferences (VanHaitsma 2019, 277; Waite 2017).

[4] Insta360 cameras are affordable and compact, and many schools would be able to acquire one for a class. However, students can also use the cameras on their cell phones to even more easily capture 360° images, which can be uploaded to a free platform like Google Tour Creator or ThingLink to annotate, augment, or link multiple sites together, or students could simply upload images to Google Earth, depending on the goals of the course and assignment.

[5] Following Karma Chávez (2015) and Barbara Biesecker (1992), we were suspicious of “inclusion” as a goal, given its ability to perpetuate rather than dismantle existing, oppressive structures of power and privilege. That is, we were cautious of absorbing Ohlone history seamlessly into a narrative of university-building—of “including” Ohlone in the existing story, which is, after all, one of ongoing colonial domination.


Aden, Roger C. 2018. “Haunting, Public Memories, and the National Mall.” In Rhetorics Haunting the National Mall: Displaced and Ephemeral Public Memories, edited by Roger C. Aden, 3–14. Lanham, MD: Lexington Books.

Allred, Jeffrey. 2017. “A Professor Goes Overboard with Omeka and DH Box.” Journal of Interactive Technology and Pedagogy, Teaching Fails.

Barber, Katrine. 2013. “Shared Authority in the Context of Tribal Sovereignty.” The Public Historian 35, no. 4: 20–39.

Biesecker, Barbara. 1992. “Coming to Terms with Recent Attempts to Write Women Into the History of Rhetoric.” Philosophy and Rhetoric 25: 140–161.

Blair, Carole. 1999. “Contemporary US Memorial Sites as Exemplars of Rhetoric’s Materiality.” In Rhetorical Bodies, edited by Jack Selzer and Sharon Crowley, 16–57. Madison: University of Wisconsin Press.

Blair, Carole, Greg Dickinson, and Brian L. Ott. 2010. “Introduction: Rhetoric/Memory/Place.” In Places of Public Memory: The Rhetoric of Museums and Memorials, edited by Greg Dickinson et al., 1–56. Tuscaloosa: The University of Alabama Press.

Bodnar, John. 1993. Remaking America: Public Memory, Commemoration, and Patriotism in the Twentieth Century. Princeton: Princeton University Press.

Boulevard 23, developed by WoofbertVR. 2016. Boulevard.

Burke, Kenneth. 1966. Language as Symbolic Action: Essays on Life, Literature, and Method. Berkeley: University of California Press.

Chávez, Karma R. 2015. “Beyond Inclusion: Rethinking Rhetoric’s Historical Narrative.” Quarterly Journal of Speech 101, no. 1: 162–172.

Csoba DeHass, Medeia, and Alexandra Taitt. 2018. “3D Technology in Collaborative Heritage Preservation.” Museum Anthropology 12, no. 2: 120–153.

Dickinson, Greg and Georgia Aiello. 2016. “Being Through There Matters: Materiality, Bodies, and Movement in Urban Communication Research.” International Journal of Communication 10: 1294–1308.

Galvan, Andrew and Vincent Medina. 2018. “Indian Memorials at California Missions.” In Franciscans and American Indians in Pan-Borderlands Perspective: Adaptation, Negotiation, and Resistance, edited by Jeffrey M. Burns and Timothy J. Johnson, 323–31. Oceanside, CA: American Academy of Franciscan History.

Greene, Jacob and Madison Jones. 2019. “Articulate Detroit: Visualizing Environments with Augmented Reality: An AR Walking Tour of Woodward Avenue.” Computers and Composition (Spring).

Haas, Angela M. 2005. “Making online spaces more native to American Indians: A digital diversity recommendation.” Computers and Composition Online. Retrieved from

Hartjen, Raymond. 2019. “Proposal for the Augmented Native Santa Clara Experience (ANSCE).” Assignment submission for ANTH149/ENGL100.

Hartjen, Raymond and Aiden Rupert. 2019. “Anth 149 Site Analysis (Junipero Serra Statue).” Google Tour Creator.

———. 2019. “Phoenix Button.” Sketchfab.

Haukaas, Colleen, and Lisa M. Hodgetts. 2016. “The Untapped Potential of Low-Cost Photogrammetry in Community-Based Archaeology: A Case Study from Banks Island, Arctic Canada.” Journal of Community Archaeology and Heritage 3, no. 1: 40–56.

Hess, Aaron. 2007. “In Digital Remembrance: Vernacular Memory and the Rhetorical Construction of Web Memorials.” Media, Culture, and Society 29, no. 5: 812–830.

Kalin, Jason and Jordan Frith. 2016. “Wearing the City: Memory P(a)laces, Smartphones, and the Rhetorical Invention of Embodied Space.” Rhetoric Society Quarterly 46, no. 3: 222–235.

Leventhal, Alan, Les Field, Hank Alvarez, and Rosemary Cambra. 1994. “The Ohlone: Back from Extinction.” In The Ohlone Past and Present: Native Americans of the San Francisco Bay Region, edited by Lowell J. Bean, 297–336. Menlo Park, CA: Ballena Press.

Lyons, Scott. 2000. “Rhetorical Sovereignty: What Do American Indians Want from Writing?” College Composition and Communication 51, no. 3: 447–68.

Macaulay-Lewis, Elizabeth. 2015. “Transforming the Site and Object Reports for a Digital Age: Mentoring Students to Use Digital Technologies in Archaeology and Art History.” Journal of Interactive Technology and Pedagogy 7.

Malpas, J. 2008. “New Media, Cultural Heritage and the Sense of Place: Mapping the Conceptual Ground.” International Journal of Heritage Studies 14, no. 3: 197–209.

Michon, D., and A. E. Antably. 2013. “It’s Hard to be Down When You’re Up: Interpreting Cultural Heritage Through Alternative Media.” International Journal of Heritage Studies 19, no. 1: 16–40.

Native American App 1.0.2, developed by Ogoki Learning Inc. 2017. Ogoki Learning Inc.

Panich, Lee M. 2020. Narratives of Persistence: Indigenous Negotiations of Colonialism in Alta and Baja California. Tucson: University of Arizona Press.

paperson, la. 2017. A Third University Is Possible. Minneapolis: University of Minnesota Press.

Smith, Laurajane. 2006. Uses of Heritage. London: Routledge.

Tinnell, John. 2017. Actionable Media: Digital Communication Beyond the Desktop. Oxford and New York: Oxford University Press.

Townsend, Russell, Kathryn Sampeck, Ethan Watrall, and Johi D. Griffin. 2020. “Digital Archaeology and the Living Cherokee Landscape.” International Journal of Historical Archaeology. DOI

Trouillot, Michel-Rolph. 1995. Silencing the Past: Power and the Production of History. Boston: Beacon Press.

VanHaitsma, Pamela. 2019. “Digital LGBTQ Archives as Sites of Public Memory and Pedagogy.” Rhetoric & Public Affairs 22, no. 2: 253–280.

Waite, Stacey. 2017. Teaching Queer: Radical Possibilities for Writing and Knowing. Pittsburgh: University of Pittsburgh Press.

Ward & Cartouches 1.0, developed by SaPhiR Productions. 2018. ShiVa Games.

Watrall, Ethan. 2019. “Building Scholars and Communities of Practice in Digital Heritage and Archaeology.” Advances in Archaeological Practice 7, no. 2: 140–151.

About the Authors

Amy J. Lueck is Assistant Professor of English at Santa Clara University, where she researches and teaches histories of rhetorical instruction and practice, women’s rhetorics, feminist historiography, and public memory. Her book, A Shared History: Writing in the High School, College, and University, 1856–1886 (SIU Press, 2020), brings together several of these research threads, interrogating the ostensible high school-college divide and the role it has played in shaping writing instruction in the U.S. Her work has previously appeared in journals such as College English, Rhetoric Review, Composition Studies, and Kairos.

Lee M. Panich is Associate Professor of Anthropology at Santa Clara University. His research employs a combination of archaeological, ethnographic, and archival data to examine the long-term entanglements between California’s Indigenous societies and colonial institutions, particularly the Spanish mission system. His scholarship has appeared in American Antiquity, Ethnohistory, and Historical Archaeology, among other venues. He is the author of Narratives of Persistence: Indigenous Negotiations of Colonialism in Alta and Baja California (University of Arizona Press, 2020).

Left, User testing the system with HoloLens headset in the historic home. Right, What the user sees through the HoloLens.

Blending Disciplines for a Blended Reality: Virtual Guides for a Living History Museum


This article describes the early stages of a virtual guide for onsite museum experiences, a project undertaken at Rochester Institute of Technology (RIT) involving students and faculty in computer science, museum studies, art and design, and theatre in conjunction with Genesee Country Village & Museum, the third-largest living history museum in the US and the largest in New York State. Our work focuses on the use of augmented reality, where technology and devices are used to superimpose digital assets over real elements in physical spaces, to demonstrate potential for enhancing storytelling within a historic village context. We outline our process—involving students and faculty from three colleges within our university, and staff from the museum partner—from exploration, research, and design to capture, delivery, and testing. With four faculty leading a cross-disciplinary collaboration among more than eighty students, three additional faculty from RIT (theatre and music), and six museum staff members thus far, our interest lies in facilitating opportunities for incidental learning (Crawford and Machemer 2008). We are keenly interested in pushing the boundaries of Pomerantz’s “spirit of experimentation” (2019), in which students learn about the technology and subject matter while faculty forfeit prescriptive outcomes in an effort to foster experimentation within the context of the courses and assignments where this project is facilitated. Ultimately, we see this application of XR as a mode for the conception, creation, and dissemination of storytelling within the classroom experience that simultaneously shares attributes of constructivist learning proffered in education and museums.


This project emerged from an existing partnership between Rochester Institute of Technology (RIT) and Genesee Country Village & Museum (GCV&M). This perhaps unlikely pairing between a research university with more than 19,000 students and the largest living history museum in New York provides opportunities for faculty, staff, and students from virtually every college within RIT to foster collaborations from a variety of disciplines. Many projects and research areas have multi-disciplinary or cross-disciplinary foci. This project, “Blending Disciplines for a Blended Reality: Virtual Guides for a Living History Museum,” is one such example where an interdisciplinary, research-inspired question forges connection among multiple constituencies within the university with the museum as the site for developing tangible skills and undertaking projects that have scholarly reach and long-term, mutual benefit. Because of this partnership, and the trust and history of association between the two organizations, we, as faculty researchers, have the freedom and flexibility to foster interdisciplinary collaboration in a meaningful way and to engage our students in developing skills in storytelling, digital composition, and multimodal literacy. The museum contributes to, and benefits from, the research and output of this collaboration, thereby serving as a site where our research can thrive.


While this concept at present involves faculty and students from several disciplines, with production geared toward realizing work around one historical person that we have developed and provided with historically accurate contextual narrative, our project began with a much broader framing that our students helped to refine. The collaboration began in 2018 between computer science and museum studies faculty who wanted to set a research problem at the museum, employ technology as a possible solution, and engage our students in the research and scholarship around this project. Inspired by ongoing research with intelligent virtual agents (IVA) (Norouzi et al. 2018), we pondered what role an IVA might play in the context of a living museum. We posed the question: “Could a stylized avatar, serving as a historical guide, be used to augment visitors’ physical experiences at Genesee Country Village & Museum?” Over two semesters, the faculty and students from museum studies and computer science, with the help of faculty and students from theatre, developed 38 historical narratives which were recorded via audio only or audio and motion capture. This earliest phase of exploration was evaluated by team members Decker and Geigel in November 2018 and in April 2019, which in turn enabled them to pivot the project in five ways over the past several months. The project team expanded to include collaborators from among art and design faculty. In turn, we began to focus on researching and developing one character initially; to create historically accurate clothing, props, and environment for the character; and to refine our workflow—all with the end goal of stacking historical narratives into a six-minute story, delivered in monologue form as a vignette to engage with the visitor.
This article outlines the project over this entire span of 18 months, with primary focus on the past several months, which is the period of robust project development and testing by students and faculty.

Timeline between spring 2018 and spring 2021 showing collaboration among four university programs and the museum
Figure 1. Roles and tasks of AR storytelling team. Informal feedback will continue, and formalized user testing will be developed, through spring of 2021.


The earliest iteration of this project was exploratory, intended to assess the viability of launching a long-term project. Led by two of our four-person faculty team (Decker and Geigel) from spring 2018 through spring 2019, we tested the technology and the research and script-writing as well as recording. In terms of technology, we chose a Microsoft HoloLens as a delivery platform as it provides an intuitive, hands-free interface as well as built-in voice recognition. Furthermore, the HoloLens has been shown to be an effective platform in other museum contexts (Hammady et al. 2019). Development was done using Unity, a real-time 3D engine well suited to rapid prototyping of VR and AR applications. Over three short sprints, we used pre-loaded character types in Unity and wrote an application that allowed placement of the virtual storyteller at specific, appropriate spots on our campus (in lieu of the museum). With the app running on a HoloLens, users had the opportunity to interact with the application by asking pointed questions of the avatar, to which the storyteller would respond, making users feel as if they were having a conversation with a real person.


Initially, neither the text nor the visuals used in this exploratory phase were keyed to our historical site. However, simultaneous to the technology testing, we asked museum studies students to research the buildings situated at the museum and to develop “character types” who might be viable suggestions for developing an AR character for this project. Over two semesters of increasingly focused, exploratory research, the students created 38 one-to-three-minute monologues situated at 12 of the 68 historic structures at the museum. Each of these monologues was historically based and researched using resources from the museum as well as contextual sources (Bolger 1985). While only a small portion of this overall work was, in turn, used as part of our refined prototype (explored fully in this article), the initial research phase informed our workflow, as well as the decision to develop one character more fully to focus our team’s concept development and execution.

Over the summer of 2019, in consultation with the Genesee Country Village & Museum staff,[1] the team selected Dr. Frederick F. Backus (1794–1858) as the inspiration for our first fully developed character. First and foremost, Backus had myriad interests and connections to Rochester history, making his story rich with intersections that could, in turn, be amplified through research-informed narrative writing. Second, we chose this individual in order to tether our virtual character to his actual home, one of the first grand mansions in Rochester, which Backus purchased in 1838. Third, and perhaps most interestingly in terms of creative output, no images exist of his appearance, thereby offering an opportunity to blend historical reality and interpretation.

After deciding upon a character, primary and secondary research guided the script-writing, with the immediate need to develop one narrative for this phase of testing. We decided early on to situate the character somewhat later in his life, so as to draw upon the wealth of experiences documented by Backus in letters. As the museum interprets the home to the year 1850, writing a script situated at around the same time as the museum’s interpretation bolstered our ability to render a seamless integration between the AR experience and the museum environment.

Historically accurate assets were gathered as part of the research. These span professional and domestic contexts, including newspaper accounts from the years that Backus served in the New York State Senate and background information on Backus’s neighborhood gleaned from property records and maps of Rochester’s wealthy Third Ward. This portfolio of research was passed on to the museum studies students in the spring of 2020 to guide their development of academically and historically rigorous narratives for five characters (Backus and four additional characters).

The current student cohort (spring 2020) developed 15 monologues focused on individuals who lived in the region over the years that the museum interprets (Pioneer Settlement era of 1780s through the 1920s), with particular attention to the 1820s–1860s. These individuals included the aforementioned Frederick Fanning Backus (1794–1858); Candace Beach (1790–1850), a teacher at a one-room schoolhouse who lived through the historic “year without summer” that occurred in 1816, over a three-year period of climate change and uncertainty as a result of the eruption of Indonesia’s Mt. Tambora in the spring of 1815; John Carlin (1813–1891), a poet and painter who graduated in 1825 from Pennsylvania Institute for the Deaf and Dumb before traveling to England and France for a Grand Tour and returning to New York and picking up clients across the state; Austin Steward (1793–1869), who was born to enslaved parents in Virginia before moving to New York and becoming engaged in antislavery and temperance as well as the black convention movement, all the while being engaged as a merchant, publisher, and orator years before Frederick Douglass settled in this region; and Lavinia Fanning Watson (1818–1900), a Philadelphia socialite, with ties to the region, who was the first woman to commission a naval ship, the USS Germantown (1846). The monologues were sited at three of the buildings on the museum campus.[2]

Design: character, model, and rigging

Throughout the research phase, the faculty team had discussed how to proceed with the digital design phase. The decision process for creating the first 3D character (described below) would also inform projects and workflow for continued production, including the development of additional characters in spring 2020 and beyond.

The design process began with the choice to build a stylized avatar, rather than a realistic 3D animation, so as to avoid the “uncanny valley”: a feeling of unease and disconnect experienced when humans encounter robotic or audio/visual simulations that are too realistic. This key decision was informed by the work of Masahiro Mori, who presented the theory of the uncanny valley five decades ago (Mori 2012). Mori posited that an individual’s feeling about a human-like robot would go from empathy to revulsion the closer the representation grew to reality, because the representation would naturally not achieve true realism. Mori’s premise has been applied to the development of digital characters as well, as the uncanny valley is often referenced vis-à-vis the film The Polar Express (Noe 2012) and CGI characters that fail to achieve true realism and therefore alienate the viewer (Weschler 2011). For Weschler in particular, the “vacant” quality of the eyes and unrealistic movement are cited as features that foster the eeriness associated with the uncanny valley.

While some scholars are now exploring the ability of digital artists to create avatars realistic enough to foster trust and empathy, such production demands a level of digital artistry that requires mastery and extensive experience. Our students would not have the expertise to bridge this valley, and we therefore chose to pursue a stylized character. The choice meant the final agent would be distinctly unrealistic in an authentic historical environment. We accepted this anachronism as a way of attuning to the museum’s approach to onsite interpretation. GCV&M does not presume visitors are transported to 1850; it sets out to interpret and demonstrate the era authentically while acknowledging that the museum staff, chiefly the costumed interpreters, and the guests are inhabitants of the present. Additionally, we intended to utilize modern technology (the HoloLens) to immerse the viewer in the experience of interacting with the character, further removing them from the idea of being transported to the past. Our digital agent, viewed through the HoloLens, would clearly be an AR animation and not an actual human interpreter, so the decision to opt for a stylized avatar meant students could design all aspects of the character with the burden of bridging the uncanny valley relieved.

The avatar needed to be approachable and warm in order to appeal to older adults and children alike. To avoid a sense of unease, certain attributes are exaggerated in digital human representations—most often the size of the head, hands, and feet. For continuity of design, the style developed would be carried through into additional avatars, to be executed by 3D digital design students.

As a character, Backus presented the unusual but fortunate position of an actual historical figure for whom there is no visual record, only written references. With no extant images, the team was left to interpret his appearance through his father’s portrait from Hamilton College and his own writings about his life experiences. The character design incorporates the physical features of Azel Backus, the subject’s father, along with historic, social, and economic aspects of the 1851 time period. As we move forward with students to develop further avatars and agents for the museum, classes will follow the same pattern of character analysis in the design, regardless of any visual references we may have of subjects. Thorough research of the fashion of the period was balanced with the knowledge that Rochester, NY in 1851 was both rural and remote and therefore not on trend with the latest styles. It was also clear from the writings of our historic subject that he had traveled the area and experienced the hardships of practicing medicine in such a time and place.

Left, Artist engraving of Azel Backus from 1813. Right, Costume Design Sketch of Azel Backus
Figure 2. Portrait of Azel Backus and preliminary costume design.

It was also important to have knowledge of the character’s setting in the actual house and take color scheme into account. The AR device through which the character will be viewed will superimpose the image on the surroundings, so it was important to make sure the agent would stand out from the environment. The buildings at the Genesee Country Village & Museum are from several different decades and span a wide range of architectural styles. Everything from the number of windows in a building to the color trends and financial status of its residents will impact how well the avatar is seen in the setting. For Backus, this meant opting for cool, darker colors so he would be better distinguished amongst the tans, browns, and reds of the well-lit entryway.

We began with a rough sketch to outline the physical properties of the character before moving into 3D development. The 3D digital design program utilizes software from a variety of companies so that students experience the full range of programs in use throughout the professional industry. For this project we chose software from well-established and reliable companies, with the idea that we will be able to upgrade and improve the designs as the software advances. In order to achieve the stylized character we had determined would best suit our needs, we utilized Character Creator by Reallusion, 3D software that allowed us to morph realistic human proportions. This software utilizes an interface and keystrokes that are common to several 3D programs, making it approachable and intuitive for students of 3D art. Facial and body features were exaggerated; the nasal, cheek, and chin areas were expanded to match historic drawings of Azel Backus, along with digitally sculpted hair and sideburns matching historic styles. The head and eyes were enlarged, as seen in many animated characters, to make them less realistic and more childlike. The avatar’s physique and appearance were also altered to better reflect those of an older gentleman of 1851.

Left, 3D model of exaggerated body proportions. Right, 3D model next to 2D patterns of costume garments.
Figure 3. Body and garment modeling.

We then used Marvelous Designer 8, a digital patterning and simulation software, to build period-appropriate clothing for Backus. This software in particular is not only widely embraced by the 3D industry, but is advancing rapidly in its effectiveness and efficiency. As we develop further historic digital avatars for the museum, students will be utilizing this software to create historically accurate garments that are uncommon in the 3D world.

In selecting garments for Backus, as well as any future characters for the museum, it was important to keep in mind that clothing production was not yet industrialized, meaning it was neither mass produced nor readily available (Holkeboer 1993; Gorsline 1994; Armstrong 1995; Tortora and Marcketti 2015). Most, if not all, of the garments worn by Backus would have been home or locally produced. Men’s shirts in particular were traditionally made by a wife or mother, but a man’s tailored waistcoat and frock coat would have been made by a skilled, male tailor. Additionally, the materials used would have been relatively expensive, so tailored menswear tended to be an investment that was worn for several years. Considering the remoteness of Rochester from any major metropolitan hub in 1851, his garments could easily have been 5–10 years old at the time. To that end, we opted to dress Backus in a slightly dated frock coat with the soft, sloping shoulders and high back collar of the mid-1840s, and a waistcoat with a wide lapel and only slightly rounded hem of the 1840s. Here, we opted for a deep navy blue melton wool that would be a strong contrast to the wood staircase and tan wallpaper of the home’s entry. His trousers also bear the marks of the 1840s, with the relatively new center front fly closure, as opposed to the earlier fall front. Men’s trousers of this early Victorian era were tapered and narrow at the hem and tended towards large-scale patterns, especially plaids. We opted for a somewhat subdued gray wool plaid flannel, as Backus was more an elder statesman than fashionable dandy. These decisions, coupled with the students’ detailed character analysis and the research prepared by museum studies students in their development of monologues, inform the 3D students’ design choices, right down to the type of fabric used in a waistcoat and whether or not a collar is top stitched, and in turn inform how the patterning software is used.


Transitioning from research and design to capture and render meant involving actors from the performing arts faculty who, based on their vocal style, could offer a viable presentation of Dr. Frederick F. Backus. To preserve the legibility of the narrative in performance, the actor’s voice-over track was recorded in advance in a sound-isolated recording booth, which allowed the actor to adjust the inflection and vocal emphasis of segments of the script. The performer then recreated the character’s movements in front of a motion capture system, using the audio playback as reference. Using Character Creator, the motions were then transferred to the avatar and any jitter was removed. Additionally, a digital face rig was created and lip-synched to the narrated audio track recorded earlier, and additional gestures were added as required.

Left, Actor and audio technician reviewing script outside audio recording booth. Right, Actor performing for motion capture system
Figure 4. Audio and motion capture recording.


The rigged and animated model was exported as an FBX file from Character Creator into the Unity game engine for use in the HoloLens, which would simultaneously display the character in the museum space and create the user experience of interaction with the virtual guide. (A user interacting with Backus in the museum setting and the view seen through the HoloLens are shown in Figures 8 and 9.) Using the speech-recognition capabilities of the device, the application can recognize key spoken phrases to which the character will respond with a predefined and prerecorded monologue.
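The interaction loop described above pairs a fixed set of key spoken phrases with predefined, prerecorded monologues. The actual application is built in Unity for the HoloLens; the short Python sketch below, with hypothetical phrase strings and clip names, only illustrates the dispatch logic, not the team's code:

```python
# Illustrative sketch only: the deployed guide is a Unity/HoloLens app.
# All phrase strings and monologue identifiers here are hypothetical.

# Each recognized voice command is keyed to one prerecorded monologue
# (an audio track plus the matching animation clip for the avatar).
MONOLOGUES = {
    "tell me about yourself": "backus_introduction",
    "tell me about this house": "backus_house_history",
    "what was rochester like": "backus_rochester",
}

def respond_to_phrase(phrase: str):
    """Return the monologue cued by a recognized phrase, or None when
    the phrase is not one of the predefined key phrases."""
    return MONOLOGUES.get(phrase.strip().lower())
```

In the deployed application, the device's built-in speech recognition supplies the recognized phrase, and the returned identifier would cue the corresponding audio and animation on the avatar; unrecognized phrases simply produce no response.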

Left, User testing the system with HoloLens headset in the historic home. Right, What the user sees through the HoloLens.
Figure 5. User interacting with the character in the museum setting.

In this way, the narrative is designed as contextually rich, narrative-driven storytelling delivered by a historical character, set in the home that the historical Backus did, at one time, inhabit. The AR character, house, and site are woven together in a storytelling construct that conveys historical information, situates a conversation between an agent and a visitor at a site that the character may have once visited, and presents an opportunity for incidental learning—the learning along the edges that Falk and Dierking proclaim as critical to the museum visitor experience (Falk and Dierking 2012). In short, the monologue (as written, and as performed in audio and motion capture) and the digital asset (the character model together with its historical treatment) are tethered and presented through the HoloLens. These facets come together to create an experience, possible only through this medium, that allows visitors to engage with a person from the past.

Informal feedback

Our ultimate goal is to deploy the system at GCV&M with visitors wearing the HoloLens as they cross the threshold of the Livingston-Backus house at the museum. While we have not yet deployed at the museum, as of November 2019 our team has reached a significant milestone: a Backus character who can deliver a monologue and respond to voice commands from the user. As of this writing (March 2020), our research and design have continued with four new characters that will be captured in the coming months.
Our progress thus far has been informed by preliminary informal feedback gathered in two phases in 2019, both working toward the goal of user testing this system and content onsite at Genesee Country Village & Museum.[3] These two iterations of informal feedback, while very different in design and nature, have allowed us to observe increases in ease of use, interest, and fulfillment. These facets will be measured again as we move into our third iteration of informal feedback, involving students from across the collaboration team as well as museum staff. We plan to develop and conduct formalized user testing at the museum in the summer of 2020. Each of these feedback scenarios has enabled us to reflect on our work as faculty and, in coordination with our students, to assess our pedagogical goals and structure our next advancement.

Left, Student learning navigation on HoloLens. Right, Student adjusting HoloLens head straps.
Figure 6. Students Lizzy (left) and Brie (right) learned how to use the HoloLens in order to facilitate informal feedback as part of our university’s annual AR/VR/XR symposium. November 22, 2019.

Authenticity and living history museums

The creation of a virtual museum guide may seem at odds with the history and context of our museum partner and our intended location for delivering the XR experience, Genesee Country Village & Museum, which belongs to the classification of living history museums. This genre grew out of world’s fairs and international displays in the 19th century that offered exhibitions arranged in village-like settings to provide viewers with an engaging sense of culture and history simultaneously (Alexander, Alexander, and Decker 2017, 118). Founded in 1966 and open to the public a decade later, GCV&M has sought, from its earliest days, “an endeavor to visualize and interpret this bygone era…[and] has assembled authentic examples—functional buildings and artifacts of the period—from a score of area towns. It has not endeavored to recreate any specific village but to recapture and portray the character and atmosphere of the village era” (McKelvey in Bolger 1985, 2).

Walking through the gates of the toll house and entering into the historic village, visitors are treated to a vision of the past before their eyes. Such an approach affirms folklorist Jay Anderson’s (1982) definition of living history as “the simulation of life in another time.” Museum interpretation at living history museums is often mediated through costumed interpreters who may take on a particular role, often with the premise that they are conveying what it was like to live in the past and that the modern visitor has encountered them in their daily life (Reid 2001; Roth 2005; Thierer 2010). Because they are the primary communicators with museum visitors, costumed interpreters are essential to the interpretive function of living history museums, which are entirely re-contextualized environments. Interpreters serve as the key factor of onsite engagement for visitors, communicating with them through demonstration and conversation. As theorists Handler and Saxton (1988) argue, living history practitioners are keenly concerned with authenticity, and the role of the interpreter is to bridge past and present.

This connection between past and present, pursued while simultaneously seeking authenticity, is key to our project, which utilizes extended reality as a medium for the dissemination of a first-person narrative keyed to the identity of a known historical person. The project team made these choices in order to distinguish the digital work and its presentation from the onsite, face-to-face, interpreter-to-visitor experience. In addition, we wanted to push the limits of the medium to see the extent to which our virtual tour guide can convey authenticity while avoiding the aforementioned uncanny valley.

Traditionally, onsite at the museum, visitors come into contact with costumed interpreters who staff approximately a dozen buildings and engage in third-person dialogue: they are dressed in historically accurate costume yet use contemporary language and are fully aware of the present. Our virtual tour guide, by contrast, offers the opportunity to hear from a character speaking in the first person, performing a role for visitors, and speaking in paraphrases or direct quotes from diaries, notes, and other primary sources. Both methods of interpretation—the third-person, interpreter-based and the first-person, avatar-based—seek to serve as relevant, authentic, and historically accurate bridges between past and present for visitors.[4]


Pedagogy
Our project design has been informed by pedagogy, as this project was conceived from the outset as a collaboration among faculty and student researchers across several disciplines. Over the eighteen months of this project, students and faculty have been involved at every phase (see figure 1, Roles and Tasks). Some aspects have been developed within the framework of a course assignment for museum studies students, including research, monologue development, participation in audio and motion capture, and collection of feedback. The early iteration of the virtual museum guide was developed by computer science students enrolled in Applications in Virtual Reality, a course focusing on the use of VR/AR technologies to create unique mixed reality experiences, and enhancements of the application have been taken on by several master’s students in computer science as part of their capstone projects. Other facets took place outside of a classroom assignment or context; students self-selected to become involved in those phases of the work. For instance, theatre students served as actors for motion and audio capture, a 3D design student facilitated the motion capture as part of advanced study toward her thesis project, and museum studies students facilitated informal feedback in November 2019.

The application of XR as a mode for the conception, creation, and dissemination of storytelling within the classroom shares attributes of constructivist learning in educational systems generally (Dewey 1998). Specifically, our project—involving students and faculty from three colleges at a research university, along with a museum partner—encourages discourse during knowledge construction. For instance, the collaboration necessary for the success of this project provides a unique learning opportunity for computer science students. Though the focus of their work may be technical in nature, the design, implementation, and approach of the application development are shaped by continual interaction with the creative team. Back-and-forth communication regarding the visual and aural assets guides the development of the application and, at the same time, directs the work of the design team creating those assets, who must assure proper formatting, timing, and synchronization of the models and animations to work on the HoloLens device.

Faculty have served as mentors to one another and to students, but they have also let go of prescriptive outcomes for classroom assignments and project milestones in an effort to foster experimentation within the context of our collaboration. We have embraced key facets of Pomerantz’s “spirit of experimentation” (2019), which contends that success can be measured by experimentation itself rather than by meeting criteria on a traditional rubric. As Pomerantz notes, “Sometimes experimentation is the point.” As faculty, our own learning experiences as collaborators and facilitators guiding our students’ work throughout this project have embraced this spirit of experimentation.

Our blending of disciplines to create a blended-reality experience realizes constructivist pedagogy and further mirrors attributes of visitor experiences at museums, where knowledge is actively produced by the learner. For instance, throughout this project, students engage in incidental learning, which may be defined as “unplanned or tacit learning, stemming from the learner’s actions,” and which is “an often hidden aspect of higher education” (Crawford and Machemer 2008, 106, 109). These attributes are hallmarks of a “learner-centered environment” (104) and are key to understanding the pedagogical outcomes of our project.

Our conception of a virtual museum guide developed among a cohort of interdisciplinary researchers and their students intended, from the outset, for incidental learning to occur alongside individual student work (whether as an assignment or through another framework for involvement in this project). In fact, reviewing Crawford and Machemer’s characterization of 19 incidental learning skills associated with project-based learning, we found that students across the project were developing (and continue to develop) each of these skills at various points throughout the project.[5] In addition to the particular skills gained by particular cohorts of students involved in our project, all students and faculty gained “teamwork skills,” “time management skills,” and the “potential to apply what is learned here to other situations” (variables 2, 4, and 19 of Crawford and Machemer). Each of the attributes described above enabled students to develop skills that were not part of the initial project requirements, and they also fostered a sense of real-world experience. That is, the workflow and processes described above—with collaborators having domain knowledge and expertise entering into a project for a particular purpose and then exiting until called upon again—mirror the work world of industry, where various aspects of a large-scale project are completed independently in contribution to a larger whole. Importantly, the undergraduate students across all disciplines expressed an interest in continuing to be updated on the project’s progress long after their semester or other engagement had come to an end, affirming the pedagogical impact of this project.

While much of our decision-making was informed by pedagogical aims and aspirations for cross-disciplinary learning, we were collectively interested in how XR can inform storytelling practices. Our conception of storytelling based at a living history museum was informed by Bedford’s proclamation of storytelling as a key attribute of museum work (2001) and by Lowe, who defines stories as the “interpretive tales we craft” and narrative as “the way that we consciously and unconsciously shape those stories” (Lowe 2015, 45). Such a framing of the past impacts the process of meaning making. As David Allison notes, “The way that museums present the history and the prejudices and biases they bring to the design process [of living history interpretation] will affect the meaning that individuals construct for themselves” (2016, 29). Allison thus affirms Lowe’s assertion that particular institutions do a “much better job explaining the complexity of history making—the craft, the methods, and the narrative construction” and sees such places as sites of innovation where leveraging “the old, bad history” (Lowe 2015, 47, 52) can—through storytelling—foster multivocality and inclusive interpretations of the past. Such museum-focused outcomes cross over to our pedagogical aims of storytelling and our project’s framing, affirming the value, relevance, and importance of storytelling as a form of historical communication that bridges past and present and offers opportunities for authenticity, empathy, and inclusion.


Notes
[1] Museum staff involved in this discussion included the museum director and curator of collections. In developing further characters, we also consulted the senior director of interpretation and interpretation office manager. Two costumed interpreters will be involved in motion capture in the spring of 2020.

[2] The buildings include: Livingston-Backus House, the Land Office, and the Schoolhouse. The Livingston-Backus House is a plausible location for the Backus monologues as well as those involving his niece, Lavinia Fanning Watson, and the painter John Carlin, who befriended the Backuses. The Land Office is a reasonable location for Steward, who worked for Henry Towar when the structure was onsite in Lyons, New York. The Schoolhouse (built in 1822) is a reasonable site for the monologues by Candace Beach who, although employed as a teacher in the region before this structure was built, is positioned much later in life, as she reminisces on her years of teaching. Our process of monologue creation has involved the expertise of the museum staff, taking cues from Maria Roussou et al.’s understanding and assessment of the importance of collaborative participatory creation (2015) while also being mindful of the developments, research, and outcomes of storytelling on mobile devices in the cultural heritage sector (Lombardo 2012).

[3] Our first testing took place in April 2019 within the context of a museum studies course where students were familiar with the project because each had contributed to it by writing monologues for characters. The feedback at that time indicated that only 31.6% of the users felt that the experience fulfilled their desire for a museum experience (Decker 2019). Because the results were nevertheless promising in terms of desirability of use and potential for engagement, we decided to expand the team to include collaborators from among the art and design faculty; to focus on researching and developing one character initially; to create historically accurate clothing, props, and an environment for the character; and to refine our workflow. Each of these facets was accomplished in the intervening months, leading to a second phase of informal feedback in November 2019, when we deployed the HoloLens with the Backus content as part of a demonstration at our university’s annual AR/VR/XR symposium (Carr and Johnson-Morris 2019).

[4] Such a bridging of past and present is part of the living history tradition, as defined by Scott Magelssen who reads living history interpretation through the lens of performance practices and argues that living history has fallen into a comfort zone of merely “undoing history” and tracing time back to a past moment directly, and effortlessly, from today. Such homogeneity, Magelssen argues, is native to the work of museum professionals who may aspire to a linear format rather than addressing the ebbs and flows of history on the margins (2007, xiii, 59). Beyond the scope of this study is David Allison’s examination of museum staff who use costumed interpretation in museums that are not entirely living history museums, such as the Children’s Museum of Indianapolis which employed live, first-person accounts in the gallery for the program The Power of Children to tell the stories of Anne Frank, Ruby Bridges, and Ryan White. See David B. Allison, Living History: Effective Costumed Interpretation and Enactment at Museums and Historic Sites, Lanham: AASLH, 2016, 41–61.

[5] For instance, “communication skills” and “leadership skills” (variables 1 and 3 of Crawford and Machemer) were developed in particular by 3D student Hannah Chase, who guided the theatre actors in fall 2019 and communicated what the motion-capture software required of them in order to acquire usable data. Her directives, such as not crossing hands over the body and how to gesture properly, placed her in a position of domain knowledge (motion capture) that superseded the domain knowledge of theatre, asking actors to act unlike actors in order to yield the results that we needed for the motion capture.

In addition, “understanding through social interaction” and “flexibility in day-to-day project management skills” (variables 13 and 5 of Crawford and Machemer) were gained by computer science student Kunal Shitut as he received the rigged and animated model from the design team and used it to create the application for the HoloLens. The back-and-forth between computer science and 3D digital design guided the way the application was created, achieving an outcome that would not have been possible working alone, without conversation and input from the art and design faculty.

Further, museum studies students gained the “ability to direct [their] own learning” and “ability to identify needs and tasks” (variables 15 and 16 of Crawford and Machemer), particularly through their research and writing of monologues. Finally, we anticipate that 3D digital design costume students will, in spring 2020, also gain incidental learning skills as they develop costumes for our virtual museum guide and for the additional characters we will develop over the next several months.


References
Alexander, Edward P., Mary Alexander, and Juilee Decker. 2017. Museums in Motion: An Introduction to the History and Functions of Museums. Lanham, MD: Rowman & Littlefield for the AASLH.

Allison, David B. 2016. Living History: Effective Costumed Interpretation and Enactment at Museums and Historic Sites. Lanham: AASLH.

Anderson, Jay. 1982. “Living History: Simulating Everyday Life in Living Museums.” American Quarterly 34, no. 3: 290–306.

Armstrong, Helen Joseph. 1995. Patternmaking for Fashion Design. New York: HarperCollins Publishers.

Bedford, Leslie. 2001. “Storytelling: The Real Work of Museums.” Curator: The Museum Journal 44, no. 1: 27–34.

Bolger, Stuart. 1985. Genesee Country Museum: Scenes of Town & Country in the Nineteenth Century. Rochester, New York: Flower City Printing.

Carr, Lizzy, and Brienna Johnson-Morris. 2019. “User Testing at Frameless Labs Symposium, November 22.” Unpublished, anecdotal evidence.

Crawford, Pat, and Patricia Machemer. 2008. “Measuring Incidental Learning in a PBL Environment.” Journal of Faculty Development 22, no. 2 (May): 104–111.

Decker, Juilee. 2019. “MUSE 360 User Testing #4: Virtual Museum Assistant.” 19 respondents, April 9. Unpublished data set.

Dewey, John. 1998. The Essential Dewey, edited by Larry Hickman and Thomas M. Alexander. Bloomington, Indiana: Indiana University Press.

Falk, John H., and Lynn D. Dierking. 2012. The Museum Experience Revisited. London: Routledge.

Gorsline, Douglas. 1994. What People Wore: 1,800 Illustrations from Ancient Times to the Early Twentieth Century. New York: Dover Publications.

Hammady, Ramy, Minhua Ma, and Carl Strathearn. 2019. “User Experience Design for Mixed Reality: A Case Study of HoloLens in Museum.” International Journal of Technology Marketing 13, no. 3/4.

Handler, Richard, and William Saxton. 1988. “Dyssimulation: Reflexivity, Narrative, and the Quest for Authenticity in ‘Living History.’” Cultural Anthropology 3, no. 3: 242–260.

Holkeboer, Katherine Strand. 1993. Patterns for Theatrical Costumes: Garments, Trims, and Accessories from Ancient Egypt to 1915. New York: Drama Book Publishers.

Lombardo, Vincenzo, and Rossana Damiano. 2012. “Storytelling on Mobile Devices for Cultural Heritage.” New Review of Hypermedia and Multimedia 18, no. 1/2: 11–35. doi:10.1080/13614568.2012.617846.

Lowe, Hilary Iris. 2015. “Dwelling in Possibility: Revisiting Narrative in the Historic House Museum.” The Public Historian 37, no. 2 (May): 42–60.

Magelssen, Scott. 2007. Living History Museums: Undoing History through Performance. Lanham, MD: Scarecrow Press.

Mori, Masahiro. 2012. “The Uncanny Valley: The Original Essay by Masahiro Mori,” translated by Karl F. MacDorman and Norri Kageki. IEEE Spectrum. Previously published as Masahiro Mori, “The Uncanny Valley,” Energy 7, no. 4 (1970): 33–35.

Noë, Alva. 2012. “Storytelling and the Uncanny Valley.” NPR, January 20.

Norouzi, Nahal, Kangsoo Kim, Jason Hochreiter, Myungho Lee, Salam Daher, Gerd Bruder, and Greg Welch. 2018. “A Systematic Survey of 15 Years of User Studies Published in the Intelligent Virtual Agents Conference.” In Proceedings of the 18th International Conference on Intelligent Virtual Agents IVA ’18. Association for Computing Machinery, Sydney, NSW, Australia, 17–22.

Pomerantz, Jeff. 2019. “XR for Teaching and Learning: Year 2 of the EDUCAUSE/HP Campus of the Future Project.” Educause, October 10.

Reid, Debra Ann. 2001. Living History-Social History or Post-Modernism: Toward a Historiography of Open-Air Museum Interpretation in the United States. Charleston, IL: Eastern Illinois University.

Roth, Stacy Flora. 2005. Past into Present: Effective Techniques for First-Person Historical Interpretation. Chapel Hill: University of North Carolina Press.

Roussou, Maria, Laia Pujol, Akrivi Katifori, Angeliki Chrysanthi, Sara Perry, and Maria Vayanou. 2015. “The Museum as Digital Storyteller: Collaborative Participatory Creation of Interactive Digital Experiences.” MW2015: Museums and the Web 2015. Published January 31.

Thierer, Joyce M. 2010. Telling History: A Manual for Performers and Presenters of First-Person Narratives. Lanham, MD: AltaMira for AASLH.

Tortora, Phyllis G., and Sara B. Marcketti. 2015. Survey of Historic Costume. New York: Fairchild Books.

Weschler, Lawrence. 2011. Uncanny Valley and Other Adventures in the Narrative. Berkeley: Counterpoint.


Acknowledgments
The authors are grateful to the staff of Genesee Country Village & Museum, in particular Becky Wehle and Peter Wisbey. At RIT, we thank students Hao Su, Kunal Shitut, Hannah Chase, Lizzy Carr, and Brienna Johnson-Morris, as well as a number of students from the following courses: MUSE 360/Visitor Engagement and Museum Technologies; MUSE 225/Museums and the Digital Age; FNRT 231/Fundamentals of Acting; DDDD 517/Costume Hair and Makeup; DDDD 521/Character Design and Rigging; and CSCI 715/Applications in Virtual Reality. In addition, we are grateful to faculty colleagues in theatre at RIT, Andy Head and David Munnell, and to Katherine Collett, Archivist, Hamilton College Archives, for providing historical images.

About the Authors

Juilee Decker is an Associate Professor of Museum Studies at Rochester Institute of Technology (RIT). She has served as Editor of Collections: A Journal for Museums and Archives Professionals since 2008. She earned her PhD in 2003 from the joint program in Art History and Museum Studies at Case Western Reserve University and the Cleveland Museum of Art.

Amanda Doherty is an Adjunct Professor in the department of 3D Digital Design at Rochester Institute of Technology (RIT). She is a costume designer and historian who has been working principally in the entertainment industry and is now teaching character development and costume design for digital characters. She received her MFA in Design from Penn State University.

Joe Geigel is a Professor of Computer Science at Rochester Institute of Technology (RIT) and co-director of the CS Graphics and Applied Perception Lab there. He earned his DSc. in Computer Science from George Washington University in 2000. His research interests focus on mixed reality multimedia projects that combine computer science, real-time graphics, art, music, and theatre to create interactive, live experiences.

Gary D. Jacobs is an Assistant Professor of 3D Digital Design at Rochester Institute of Technology (RIT). He has designed public spaces, stage productions, and themed environments for over 15 years. He is a certified LEGO® Serious Play facilitator and leads Design Thinking workshops for creative teams. Gary received his MFA in Entertainment Design from Pennsylvania State University.

Grinnell College students examine a double-pen slave cabin in Vacherie, Louisiana.

Using Virtual Reality to Expand Teaching and Research in the Liberal Arts


Grinnell College has established a lab for teaching undergraduate liberal arts students the hard and soft skills necessary to develop extended reality (XR) experiences. This lab helps the College respond to external social and economic pressures while retaining its core liberal arts values. Within the lab, students develop the metacognitive skills, technical training, and problem-solving strategies that will make them competitive candidates in a global twenty-first–century marketplace. For other institutions interested in implementing an XR lab on their campuses, we provide key takeaways in the following areas: how we launched our lab, the funding instruments that support lab activities, the hardware and software used to develop XR experiences, the development team structure and member responsibilities, lessons learned from the pilot project, and projects currently in development.


Grinnell College, like many small liberal arts colleges, has questioned how to remain robust and relevant in a digital age (Selingo 2013; 2017). We value knowledge for its own sake, social justice, and critical thinking; yet we accept responsibility for equipping our students with the skills that allow them to adapt to a world of rapidly changing professional opportunities. We refuse to sacrifice the former for the latter. Instead, we created a learning environment to promote both our traditional values and practical job skills. In our lab, when students research, create, and evaluate extended reality (XR) experiences, they develop the technical, social-awareness, and problem-solving skills that make them attractive candidates for twenty-first–century jobs while exhibiting liberal arts sensibilities. By developing marketable skills within the framework of core liberal arts questions and experiences, the College moves toward a future in which our educational offerings are both highly relevant and eminently sustainable.

Various characteristics of and cultures within the institution have influenced how the College has responded to the pressures of a changing academic and digital landscape. Grinnell College is a small, residential, undergraduate-only liberal arts college in rural central Iowa. The College was established in 1846 on a basis of individual intellectual pursuit for the betterment of humankind, a foundation that has remained strong to this day and is evident in the individually advised curriculum. The teaching culture is centered around small, face-to-face, discussion-based classes that explore topics according to professor interests. The College includes disciplines in the arts, social sciences, and natural sciences, but we do not have professional programs such as journalism, business, and nursing, perhaps because corporate or practical pursuits are viewed as less intellectually rigorous. The College also functions with a conservative curriculum and traditional views of faculty, who are the College employees and experts primarily responsible for helping students grow in their own knowledge. Challenges arise when new developments conflict with these traditional conditions. For example, we have seen the professionalization of College staff, with highly educated, non-faculty employees taking on more significant roles in students’ educational experiences. Additionally, we have seen changes in what students need and want from their college experience to help them succeed beyond school. Like other institutions and labs developing projects in XR, the College wrestles with how to remain true to its essential values while accommodating emerging needs (Szabo 2019).

The Grinnell College Immersive Experiences Lab (GCIEL) emerged from discussions at the administrative level, which identified a need to synthesize a twenty-first–century liberal arts education using emerging digital visualization technologies. GCIEL is an interdisciplinary community of inquiry and practice that allows students, faculty, and staff at the College to explore the liberal arts through XR technologies (Brown, Collins, and Duguid 1989; Wenger 1998; Wenger, McDermott, and Snyder 2002). XR is an umbrella term encapsulating immersive technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Of these technologies, lab activity started with a focus on developing VR experiences that completely immerse the user in a simulated three-dimensional (3D) environment (Bailenson 2019; Greengard 2019; Rubin 2018; Jerald 2015); we plan on expanding into AR and MR in the future.

Participating in the hands-on process of developing VR experiences has resulted in educational benefits for students. First, students gain critical-thinking and technical skills. When working in project teams to create immersive digital content, students experience an authentic development environment using industry-standard hardware and software, which prepares them to succeed in a rapidly changing job market. From a liberal arts perspective, the development process challenges students to explore deep questions and make interdisciplinary connections. The research required for developing culturally sensitive, ethical, and historically accurate immersive digital content is both demanding and comprehensive. Compared to research methodologies privileging linear subject matter presentations, such as a term paper or a video, research for VR projects compels students to consider how elements of their chosen topic function together as an interconnected, object-oriented activity system (Engeström, Miettinen, and Punamäki 1999; Jonassen and Rohrer-Murphy 1999). To do this, students must consider multiple context-specific variables for the system they investigate, how these variables interact within historical, spatial, and social contexts, and how end users will ultimately interact with the variables in a VR environment. Second, students develop soft skills, including communication and collaboration. Interdisciplinary teamwork among students, faculty, and staff is a key feature of the problem-solving experience and establishes a collaborative knowledge-generation framework. The faculty role shifts from a lecturer focused on content coverage to a coach who guides students as they navigate the “real world” challenges they encounter. Staff member roles shift from assistants to technical advisors and mentors. Student roles shift from being passive recipients of knowledge to co-creators in the learning experience. These shifts allow team members to learn from each other as they integrate their own discipline knowledge and methods into the project.


Pedagogical approaches

Inspired by Jonassen’s concepts about teaching for solving ill-structured problems and active learning (Jonassen 2000), GCIEL’s pedagogical practices guide students through a problem-solving process in which they integrate several content domains and negotiate the unpredictable paths that emerge along the way. Jonassen, Carr, and Yueh (1998) conceptualize technology as knowledge construction “Mindtools” that students learn with, not from. Using this framework, GCIEL allows learners to function as designers using VR technologies to explore their subject matter, critically evaluate the content they are studying, and represent their knowledge in a meaningful way. This approach challenges certain traditional liberal arts attitudes about what kinds of learning are valued. While the liberal arts shy away from anything that resembles “vocational” training, GCIEL fully embraces training in practical hard and soft skills as an integrated part of content knowledge acquisition and critical thinking. We recognize skills such as software and hardware competence, digital file management, project and time management, troubleshooting, and team communication as foundations for the higher-order thinking skills that liberal arts college graduates will need throughout their lives. Thus, we intentionally teach these competencies alongside more traditional humanities topics rather than hope that learners acquire them incidentally through trial and error. In this way, GCIEL builds effective learning experiences that result in students thinking critically about VR technologies and using these technologies to examine, interrogate, and represent core liberal arts topics.

GCIEL seeks to optimize learning by maintaining a flexible, inclusive, and student-centered educational environment in which instructors “pay close attention to the knowledge, skills, and attitudes that learners bring” (National Research Council 2000, 23) to the research and development experience. By treating learners “as cocreators in the teaching and learning process, as individuals with ideas and issues that deserve attention and consideration” (McCombs and Whistler 1997, 11), GCIEL allows students to take an active role in reinventing their liberal arts experience. Heeding advice that “supplementing or replacing lectures with active learning strategies and engaging students in discovery and scientific process improves learning and knowledge retention” (Handelsman et al. 2004, 521), GCIEL emphasizes hands-on, authentic learning. Students develop products aligned with their interests and wield digital technologies in socially conscious ways within widely-ranging content domains. Students, in a focus group interview, viewed the experience as highly beneficial to their overall education. One student team member particularly valued the opportunity to learn “interdisciplinary communication on a long-term project” of a scale and duration that far exceeded what could be done within just one semester of a class (GCIEL Focus Group 2018). Another student observed that one of the most important parts of the project was how, “It feels like we’re on a team with our bosses…instead of it being very much top down” (GCIEL Focus Group 2018).

When developing VR experiences in GCIEL, Grinnell College students cultivate skills that help them adapt to rapidly changing professional opportunities and contribute to others’ learning. Because the student-developed VR products are released as open educational resources (OER) under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, students anywhere in the world can augment their education by using and contributing to the custom-built immersive experiences. As an educational tool, VR is particularly useful for enhancing spatial knowledge representation, promoting experiential learning opportunities, increasing motivation and engagement, and contextualizing the learning experience (Dalgarno and Lee 2010; Steffen et al. 2019). The embodied experiences in VR have been found to promote empathy (Herrera et al. 2018; van Loon et al. 2018) and perspective taking (Ahn, Le, and Bailenson 2013; Yee and Bailenson 2006), both of which are particularly important within liberal education contexts that focus on preparing students to deal with complexity, diversity, and change and to promote social responsibility (“What Is Liberal Education” n.d.). The VR projects developed in GCIEL (detailed below) offer new ways to engage students in learning experiences across widely ranging domains, including history (Ijaz, Bogdanoych, and Trescak 2017; Taranilla et al. 2019; Wood, William, and Copeland 2019; Yildirim, Elban, and Yildirim 2018), second language and culture acquisition (Blyth 2018; Dolgunsoz, Yildirim, and Yildirim 2018; Legault et al. 2019), and mathematics (Sundaram et al. 2019; Nathal et al. 2018; Putman and Id-Deen 2019).

Funding instruments

Dr. David Neville, a Digital Liberal Arts Specialist at Grinnell College, spearheaded the GCIEL initiative. Dr. Neville’s background in instructional technology and design, digital game-based learning, 3D modeling, and Unity development gives him the expertise to serve as the director of the lab and act as the technical advisor on all GCIEL projects. In Fall 2016, Dr. Neville received a $10,000 planning grant from Grinnell College’s Innovation Fund (IF) to investigate the feasibility of implementing a VR lab at the College. He used the grant funds to educate faculty and staff, bring in external experts, purchase equipment, and hire students, with the following financial breakdown: First, about 45% of the IF monies supported participant stipends for a summer workshop led by Dr. David Neville and Dr. Damian Kelty-Stephen. This workshop helped 10 faculty and staff members at Grinnell College learn how to use VR technologies in a curricular setting. Tweets about the workshop are archived under the #gcielsw17 hashtag. Because more people showed interest in the topic than originally anticipated, the Center for Teaching, Learning, and Assessment provided an additional $1,920 to support the extra participants who registered for the workshop. Second, about 4% of the funds paid for VR experts to present their research at the workshop. Dr. Joel Beeson, Associate Professor in West Virginia University’s Reed College of Media, presented his work on the Bridging Selma Project and the Fractured Tour app. Dr. Glenn Gunhouse, Senior Lecturer of Art History in the School of Art and Design at Georgia State University, presented a general introduction to his cultural heritage projects in virtual reality, with observations about how the technology can provide access to otherwise inaccessible objects of study (Sinclair and Gunhouse 2016).
Third, roughly 15% went towards purchasing new VR hardware and software (e.g., Dell Precision 5810 with NVIDIA Quadro M5000 GPU, Oculus Rift, and Wacom tablet). Finally, about 37% of IF monies paid wages for students working on the development team for the lab’s first VR project. Supporting student development work on this project, the Institute for Global Engagement at Grinnell College contributed $6,200 to fund a one-week visit to Louisiana for site-based research.

In Fall 2018, GCIEL received a $144,000 three-year pilot project IF grant. These funds allowed the lab to expand its influence on campus and widen its project portfolio. First, about 10% of the IF award supported a new XR speaker series, which brought academics and industry representatives to Grinnell College. These experts presented on the current state of XR in their fields, shared their vision for how XR will grow in the future, and demonstrated how a liberal arts education can prepare students for a career in XR. Students gained networking opportunities with these influential thought and industry leaders. Second, about 78% of the award paid personnel costs for the development teams, including student wages (72%) for four development teams and site-based research costs (6%). Finally, GCIEL used the remaining 12% of IF monies to purchase software and hardware necessary for developing VR experiences. These included software licenses, online training, digital assets, an additional VR-capable workstation with associated hardware, and an HTC Vive. This IF support ends in Summer 2021, at which time the College will consider whether to provide permanent institutional funding for GCIEL.
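For other labs planning similar budgets, the percentage allocations above translate into approximate dollar figures with simple arithmetic. The sketch below, in Python, uses the rounded percentages reported for the 2018 grant; the resulting figures are illustrative approximations, not GCIEL’s actual line items.

```python
# Approximate dollar allocations for the 2018 three-year IF grant,
# computed from the rounded percentages reported above.
TOTAL = 144_000  # total award in USD

allocations = {
    "XR speaker series": 0.10,
    "personnel (student wages and site-based research)": 0.78,
    "software and hardware": 0.12,
}

# The rounded shares account for the full award.
assert abs(sum(allocations.values()) - 1.0) < 1e-9

dollars = {item: round(TOTAL * share) for item, share in allocations.items()}

for item, amount in dollars.items():
    print(f"{item}: ${amount:,}")
```

Because the reported percentages are rounded ("about 10%," "about 78%"), the computed amounts should be read as ballpark figures for planning purposes only.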

Team structure and technology pipeline

After confirming faculty interest in VR at the summer workshop, we began to assemble a VR development team for a pilot project. Forming the team demonstrated that our small liberal arts college had sufficient resources and talent on site to shoulder an ambitious digital project. This was a notable achievement, considering that larger game design studios typically have development teams with hundreds of members, each contributing deep subject-matter knowledge, software and programming expertise, visual and 3D design capabilities, technical support, and project guidance. Echoing the development team experience that students might encounter in the XR industry, we envisioned our scaled-down team as including a faculty adviser, a technical staff member, and students functioning as Subject-Matter Experts (SMEs), 3D Artists, and Unity Developers. The faculty adviser would come from a field related to the project’s topic and focus on helping students learn the subject matter. The technical staff member would help students manage the project and learn essential technological skills. Each student role had unique requirements.

Typically, we recruited the project SME through the faculty member, who invited an advanced undergraduate majoring in their discipline. This student may have demonstrated relevant skills while working with the faculty member on prior academic projects. Unlike the 3D Artist and Unity Developer, who were recruited through an open application and interview process in the student employment portal, the SME received a personal invitation to join the VR development project. The SME was responsible for (a) finding, evaluating, and utilizing resources to guide project development; (b) disseminating research findings to other team members in an understandable manner; and (c) leading the team’s process and progress. We considered giving SMEs more responsibilities in directing and managing a project in order to offset the marginalization that SMEs from humanities fields may feel during the coding-heavy portions of the project, when they lack technical experience compared to their teammates. Taking on these responsibilities may require the SME to learn and apply instructional design theories and models, Agile software development methods (e.g., Scrum), and the Unified Modeling Language (UML) to the project.

We selected a 3D Artist based on this individual’s technical experience or interests. The Artist needed to be able to use software such as Autodesk 3ds Max, Substance Painter, and Adobe Creative Cloud platforms (e.g., Illustrator and Photoshop), and also be willing to engage in 3D modeling and texturing, UV mapping and unwrapping, model rigging and animation, developing concept art, and storyboarding. We chose 3ds Max because it is an industry-recognized tool, and familiarity with this system should better prepare students for internships and employment opportunities. The Artist was primarily responsible for 3D asset development and animation in 3ds Max and texture creation in Substance Painter. Artists could also contribute to other aspects of the project, such as writing entries for a project blog, creating turntable animations of project assets for the GCIEL YouTube channel, or presenting to students and faculty about the lessons learned during project development. The Artist’s workflow included (a) evaluating primary and secondary resources identified by the Subject-Matter Expert, along with any data collected through site-based research; (b) utilizing these resources and data to create 3D models and animations in 3ds Max for the VR experience; and (c) importing the FBX file of the models into Substance Painter and Unity. Within Substance Painter, the Artist used the physically based rendering and shading (PBR) capabilities of the platform to create albedo transparency, specular smoothness, normal, occlusion, and emission texture maps. Within Unity, the Artist created materials with a standard specular shader and then applied the texture maps to the 3D models. The Artist could also create lighting and particle effects for the VR experience inside Unity.

We selected Unity Developers based on their technical experience or interests in the Unity integrated development environment (IDE), object-oriented design and programming principles, Unity script writing in the C# programming language, and version control and collaboration with Git and GitHub. The Unity Developers were primarily responsible for writing the code that drives the VR experience; the information provided by the SME and the team’s site-based research informed how the Unity Developers programmed the functionality of the experience. The Unity Developers also needed to be familiar with, or willing to learn, the SteamVR Unity plugin, which allows Unity to interact with and receive input from attached VR hardware (e.g., Oculus Rift S and HTC Vive). The workflow for the Unity Developers entailed (a) brainstorming the interactivity in the VR experience; (b) bodystorming the experience with the team to flesh out what the user experience (UX) should look and feel like and how users would potentially interact with the experience; (c) utilizing whiteboxing and method stubbing to quickly make experience prototypes; (d) running prototype tests of the VR experience to elicit user feedback; and (e) producing a minimum viable product (MVP) that could be used to secure external grant funding or to gather data in research experiments. The MVP is a version of the VR experience with just enough features to demonstrate proof of concept and provide feedback for future product development. We uploaded major versions of the VR experiences and their MVPs to the lab’s GitHub repositories to serve as backups, to contribute to students’ portfolios, and to share open-source resources with other educational institutions interested in developing VR experiences.
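The whiteboxing and method-stubbing steps in this workflow follow a common prototyping pattern: block out every interaction with placeholder methods first, so the team can playtest the flow of an experience before any real implementation exists. The Unity Developers worked in C#, but the pattern is language-agnostic; the sketch below illustrates it in Python with hypothetical interaction names that are not drawn from the GCIEL codebase.

```python
# Illustrative sketch of method stubbing for an interaction prototype.
# Class and method names are hypothetical, not from the GCIEL project.

class InteractionPrototype:
    """Whitebox prototype: every interaction exists, but most are stubs."""

    def __init__(self):
        self.log = []

    def grab_object(self, name):
        # Implemented just enough to exercise the flow in a playtest.
        self.log.append(f"grabbed {name}")
        return True

    def open_door(self):
        # Stub: records that the interaction was reached, nothing more.
        self.log.append("open_door (stub)")

    def read_placard(self):
        # Stub: to be fleshed out once the SME supplies the text.
        self.log.append("read_placard (stub)")


def run_walkthrough(proto):
    """Drive the prototype end to end, the way a tester would."""
    proto.grab_object("lantern")
    proto.open_door()
    proto.read_placard()
    return proto.log

print(run_walkthrough(InteractionPrototype()))
```

Because every interaction is at least reachable, testers can walk through the whole experience and give feedback on its structure while the stubs are filled in incrementally during later sprints.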

Pilot project

Dr. Sarah Purcell, the L. F. Parker Professor of History at Grinnell College, and Dr. David Neville, Director of GCIEL, launched the pilot project in late Spring 2017. They hired four students for the project development team: history student Sam Nakahira as the SME, studio art student Rachel Swoap as the 3D Artist, and computer science students Zachary Segall and Eli Most as the Unity Developers. The project began as an ambitious attempt to build a VR experience of the Uncle Sam Plantation, a nineteenth-century Louisiana sugar plantation. Unfortunately, the project got off to a slow, rolling start when two team members left to study in Europe for a semester. In Summer 2017, Sam Nakahira worked with Dr. Sarah Purcell to research and write about the Uncle Sam Plantation and its inhabitants, 19th-century sugar production methods, and the historical context that would guide the team’s development process. During Fall 2017, Zachary Segall began prototyping the VR experience, deepening his proficiency in the Unity IDE, and choosing a VR interaction system for the project. Because of development problems at the time with the Virtual Reality Toolkit (VRTK), he chose SteamVR v. 1.2.3 as the VR interaction system. With all the team members back on campus by early 2018, the full development team visited Louisiana in January 2018 for site-based research (see Figure 1). They met immediately afterwards to begin building the VR experience. At this point, we encountered a new series of challenges.

Grinnell College students examine a double-pen slave cabin in Vacherie, Louisiana.
Figure 1. Site-based research. Members of the GCIEL student development team (from left to right: Sam Nakahira and Zachary Segall) conduct site-based research of a double-pen slave cabin at Laura Plantation in Vacherie, Louisiana (January 2018). Photo by David Neville.

Initially, the team intended to simulate the 19th-century sugarhouse and steam-powered sugar mill that had operated on the Uncle Sam Plantation. The team could access the plot plan and survey data of the plantation mansion and larger outbuildings (see Figure 2); however, we had difficulty locating documentation for the sugarhouse and sugar mill. Additionally, modeling and animating the sugar mill exceeded the skill level of our 3D Artist, who was new to the 3ds Max modeling software. We soon realized the project’s scale was far beyond what we could reasonably handle with our current resources and timeframe, so we opted to start small and then iterate toward the larger-scale goal.

Plot plan of the Uncle Sam Plantation made by the Historic American Buildings Survey (HABS) in 1940.
Figure 2. Plot plan of the Uncle Sam Plantation. Plot plan of the Uncle Sam Plantation (Leimkuehler 1940) made by the Historic American Buildings Survey (HABS) in 1940 and one of the historical documents utilized by the GCIEL student development team for developing the VR experience.

To provide common ground for historical understanding, all team members participated in Dr. Sarah Purcell’s two-credit-hour guided-reading course on the history of American slavery, with a focus on Louisiana, museum curation, and public history theory. Course readings inspired the new direction for our project. To honor the humanity of the enslaved people who lived on the plantation, the team decided to refocus the project on teaching users how to interpret the home life of the enslaved. Having agreed on a new approach, the team began recreating a double-pen slave cabin, for which our site-based research had provided sufficient data to build a digital model (see Figures 3 and 4), and designing plans for structuring the VR experience itself (see Figures 5 and 6).

The 3ds Max interface showing a high-quality render of a double-pen slave cabin.
Figure 3. The 3ds Max interface. This screenshot shows a high-quality render of the double-pen slave cabin currently in development. The render uses the NVIDIA Mentalray Renderer with Sunlight and Daylight Systems set to 7 AM on 21 October 1868 in Baton Rouge, Louisiana. A turntable render of this 3D model is available on the GCIEL YouTube channel. Screenshot and model by David Neville.
The Unity interface showing models of the double-pen slave cabin and the plantation mansion.
Figure 4. Importing models into the Unity game engine. GCIEL student development team members import the models they developed in 3ds Max into the Unity game engine for programming user interactivity. The HABS plot plan is used as a reference image to ensure proper scale of the VR experience and approximate distances between its features. Screenshot by David Neville.
Students on the GCIEL development team discuss the Uncle Sam Plantation VR project.
Figure 5. Development team discussion. GCIEL student development team members (from left to right: Sam Nakahira, Zachary Segall, and Rachel Swoap) reflect on how to reconstruct the lived spaces of the plantation complex as authentically and sensitively as possible, and brainstorm possible directions that a VR experience could take. Photo by David Neville.
Experience flowchart for the Uncle Sam Plantation VR project.
Figure 6. VR experience flowchart for the proposed structure of the prototype Uncle Sam Plantation VR experience. Image by David Neville.

We came to four critical insights as we found ourselves frequently adjusting our development pipeline. First, we needed to design the curricular content around the problems arising in the project. We initially held the course meetings separate from project-development meetings to prevent talk about the project’s technical details from overshadowing discussion about the historical topics. However, we discovered that the course topics could easily become divorced from and less relevant to the specific historical challenges that emerged naturally from the project work. We actually needed to let the project work and the historical topics inform one another in real time. Second, working together closely as an interdisciplinary team to identify problems and brainstorm solutions was essential. At first, everyone worked on their own and within their own disciplinary perspective in a disconnected divide-and-conquer approach. This left little overlap for noticing how the separate parts were not quite fitting together as a whole. Had the team been working together more closely, we could have saved time by realizing sooner that researching the sugar production was a dead end. Third, we needed alignment between the project goals and the team members’ skills, especially for technology-heavy projects. If the team members did not already have the skills when they started, the team needed to re-think the goal or to devote time and resources to help the team members acquire the necessary skills.

Fourth, and perhaps most crucially, we discovered that team members must adapt themselves to different disciplinary expectations and research styles. In particular, the approaches used in computer science and history were quite different and led to some tension. Computer science professionals reduce a design problem into small, manageable components and then rapidly iterate through prototypes to find the most effective and efficient solution. In contrast, history professionals start with library and archival research to shape the research questions, and then produce a polished document with conclusions about the subject of inquiry. At the risk of oversimplification, the computer science approach builds a complex whole from smaller, simpler parts, while the history approach contemplates a complex whole to extract a few smaller, concrete understandings. Puzzling over how to merge these distinctly different problem-solving approaches, we began implementing a new project workflow based loosely on Scrum with two-week sprints (Ashmore and Runyam 2015; Deemer et al. 2009; Rubin 2013). This process provided a common framework for approaching the problem by breaking the whole project into smaller chunks, so the SME had a narrower issue to explore and the Unity Developers had more tangible components to start building.

Scrum is a software development framework that embraces iterative and incremental practices, collaborative teamwork, and self-organization. A Scrum sprint is a fixed period of time in which the team creates a product of the highest possible value. Each sprint began with team members meeting in the GCIEL space to brainstorm and assign project tasks (see Figure 7). Members tracked their progress on these tasks using Trello, a web-based project management platform, and a whiteboard located in the team space, and they collaboratively addressed questions as they arose (see Figures 8 and 9). At the end of the sprint, team members met to debrief, identify new areas that needed to be developed, and reflect on what they had learned with regard to both the historical subject matter and technical project skills. At appropriate stages in developing the VR experience, the development team included prototype testing in their workflow to ensure that end-users would have a favorable experience (see Figure 10). By involving all team members in this process, we improved interdisciplinary communication and problem solving.

Students on the GCIEL development team launch a Scrum sprint for the Uncle Sam Plantation VR project.
Figure 7. Two-week Scrum sprint. The start of a two-week Scrum sprint utilized the community-building spaces of the Digital Liberal Arts Lab (DLAB) at Grinnell College, as well as the Media:Scape technology available there. GCIEL student development team members (clockwise around the table): Rachel Swoap, Sam Nakahira, Zachary Segall, and Eli Most. Photo by David Neville.
The Trello interface showing lists and cards used for managing the Uncle Sam Plantation VR project.
Figure 8. High-tech project management. Trello, a web-based project management platform, was critical for implementing a Scrum framework: the team used it to brainstorm new ideas for the project and to track who was in charge of completing assigned tasks. Screenshot by David Neville.
The whiteboard in the GCIEL workspace functions as a Scrum board.
Figure 9. Low-tech project management. In addition to Trello, a Scrum board located in the GCIEL space helped student development team members keep track of project-related tasks, who they were assigned to, and their status. Photo by David Neville.
Prototype testing a VR experience in the GCIEL workspace.
Figure 10. Prototype testing. Zachary Segall tests a prototype VR experience with an unidentified Grinnell College computer science student. User testing allows GCIEL development teams to think critically about their own work. Photo by David Neville.

Second-generation projects

Having learned valuable lessons about the VR design process through the pilot project, GCIEL moved forward with three new VR projects spanning the liberal arts disciplines at Grinnell College: a recreation of a Viking meadhall, an environment that helps students visualize mathematical ideas, and an immersive experience for teaching German language and culture.

Dr. Tim D. Arner, Associate Dean and Associate Professor of English, and Dr. David Neville lead the Envisioning Heorot Project, which is building a VR experience of Heorot, the meadhall from the Old English poem Beowulf where much of the narrative takes place. This immersive experience is modeled on archeological excavations of meadhalls in Denmark, England, and Iceland (see Figure 11) and on accounts from historical and poetic records of the early Middle Ages. Grinnell College students involved in the project include Ethan Huelskamp, Joseph Robertson, Maddy Smith, Anna Brew, Brenna Hanlon, Zoe Cui, Tal Rastopchin, and Michael Andrzejewski. The team plans to fill the VR meadhall with people and objects from the poem in order to help participants exploring the space sense how the room’s layout contributes to its function as a political and social arena. The Envisioning Heorot Project will help student researchers and readers of Anglo-Saxon poetry, especially Beowulf, to understand how such civic spaces functioned in Anglo-Saxon and medieval Scandinavian culture and helped shape Anglo-Saxon social structures. While building or exploring this virtual space, students will learn to analyze how the meadhall functions in Beowulf and its analogues, to locate northern European cultures within a global network of trade and cultural influence, and to consider how movement through physical space is defined by and reinforces social roles in a particular cultural context.

Grinnell College students conducting site-based research at the Reykjavik City Museum, Iceland.
Figure 11. Site-based research in Iceland. Site-based research in Iceland and Denmark has been invaluable for students working on the Envisioning Heorot Project: development work in 3ds Max and Substance Painter has been strongly influenced by findings and impressions from these trips. Here, students (from left to right) Ethan Huelskamp, Joseph Robertson, Maddy Smith, and Megan Gardner examine a Viking hearth with a representative from The Settlement Exhibition at the Reykjavik City Museum, Iceland. Photo by Tim Arner.

Dr. Chris French, Professor of Mathematics, and Dr. David Neville lead the Math Museum Project, which allows participants to explore and interact with mathematical ideas in VR. Grinnell College students involved in this project are Nikunj Agrawal, Ziwen Chen, Alexander Hiser, Yuya Kawakami, HaoYang Li, Robert Lorch, Tal Rastopchin, Lang Song, Charun Upara, and Hongyuan Zhang. The project is inspired by the mathematical models of the late 19th century, when mathematicians partnered with industrialists to model new kinds of surfaces out of plaster, cardboard, or wire. These models accompanied new developments in algebraic geometry and new notions of non-Euclidean geometry. Immersed within the virtual Math Museum, students can interact with visualized mathematical concepts, which we expect will increase their enjoyment and comprehension of mathematical ideas.

In one room of the virtual museum, players walk around on a large ellipsoid surface, experiencing the shape in much the same way as an insect might move around on a plaster model. The player can find the umbilic points of the shape by using a tool that measures the curvature of the ellipsoid at the current location whenever the player triggers the measuring device. Another room is inspired by models created by the German mathematician Kummer. In this space, the player can manipulate a surface by adjusting certain parameters and then watch how the surface evolves. The player’s task is to find the parameter values for the surfaces that Kummer built. In a third room, the player must assign colors to the vertices of a graph so that adjacent vertices receive different colors. The goal is to use the minimum number of colors. This activity teaches the notion of the chromatic number of a graph. Students are currently developing a fourth room in which the player learns about graph isomorphisms by manipulating the vertices of a graph to make it look like another graph.
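The coloring room’s puzzle corresponds to the classic graph-coloring problem: the smallest number of colors that works is the graph’s chromatic number. A minimal greedy coloring, sketched below in Python, shows the underlying idea; it always produces a valid coloring but not necessarily a minimum one, which is part of what makes the puzzle interesting for players. (The example graph is hypothetical, not taken from the museum.)

```python
# Greedy graph coloring: give each vertex the smallest color index not
# already used by a neighbor. This always yields a valid coloring, but
# not necessarily one achieving the chromatic number.

def greedy_coloring(adjacency):
    """adjacency: dict mapping each vertex to a set of its neighbors."""
    colors = {}
    for vertex in adjacency:
        taken = {colors[n] for n in adjacency[vertex] if n in colors}
        color = 0
        while color in taken:
            color += 1
        colors[vertex] = color
    return colors

# A 4-cycle has chromatic number 2: alternate two colors around the cycle.
cycle4 = {
    "a": {"b", "d"},
    "b": {"a", "c"},
    "c": {"b", "d"},
    "d": {"c", "a"},
}

coloring = greedy_coloring(cycle4)
# Validity check: no edge joins two vertices of the same color.
assert all(coloring[u] != coloring[v] for u in cycle4 for v in cycle4[u])
print(coloring)
```

Greedy coloring depends on vertex order and can overshoot the chromatic number on some graphs, which mirrors the player’s experience of discovering that a first attempt uses more colors than necessary.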

Dr. David Neville leads the German VR Project, a game for teaching environmentalism in authentic German linguistic and sociocultural contexts. Originally developed as a flat-screen 3D game focusing on glass recycling and waste management systems in German public spaces, the game was ported by Zachary Segall and Eli Most in 2018 to create an alpha-level VR prototype (see Figure 12). Grinnell College students involved in the project include Savannah Crenshaw, Martin Pollack, Yinan Hui, Bojia Hu, Jin Hwi, Tal Rastopchin, and Michael Andrzejewski. Research on the 3D game found that goal-directed player activity provided learners of a second language and culture with a more nuanced view of the activity systems that constitute a target culture, and also appeared to influence how learners invoked and structured language in order to describe these systems (Neville 2014). The VR version will expand the scope of the 3D game by including more narrative to situate the user in an authentic German cultural situation and more in-game tasks related to recycling and waste management practices. We hope that increased immersion and sense of presence in a completely virtual environment will yield greater learning outcomes in second language and culture acquisition, and perhaps even realize outcomes that did not emerge in the 3D version of the game.

Screen capture from the German VR project showing a German public space, a beer bottle, and a VR hand controller with directions in German.
Figure 12. Screen capture from the German VR project. The German VR project situates second language and culture acquisition within authentic sociocultural contexts and activities. Screenshot by David Neville.

Next steps

We are currently refining the teams’ workflows to use Scrum methods for project management and incorporating problem-based learning theory to intentionally teach metacognitive skills (Barrows 1996; Edens 2000; Hmelo-Silver 2004; Dunlop 2005; Yew and Schmidt 2011). A victim of our own success, we face a number of challenges in scaling up the lab to support multiple VR projects simultaneously. It has been difficult to find a dedicated physical space on campus that can support a growing community of practice; as a result, GCIEL’s work remains somewhat decentralized. It also remains to be seen how much these discrete cross-curricular VR projects will transform Grinnell College’s core curricula. GCIEL’s future projects will likely rely on external grant support, and it may be difficult for small liberal arts teams to compete with large R1 research and development labs for funding. While we are excited to see established team members graduate and move on to high-powered tech jobs and graduate schools, their departures leave recurring gaps in our project teams, so we must constantly train new students to join them. Successful project teams need consistent faculty and staff time and attention, yet College employees find themselves increasingly burdened with competing responsibilities. Overcoming these challenges depends on our ability to convince the College to change some traditional structures and to provide sufficient time and resources for experimentation. Success is not guaranteed, but we believe the effort is worthwhile.

The future of GCIEL beyond our grant funding is still under discussion. As a well-resourced institution with an individually advised curriculum, Grinnell College offers a few options that we can harness to secure GCIEL’s future. For example, the Writing Lab pays student writing mentors out of its general operations budget; these students do not receive academic credit, though they do take an introductory writing course to ensure they have the necessary skills. GCIEL could adopt a similar model and teach a VR basics course to develop a pool of potential student employees as VR mentors. Another possibility is integrating the lab into existing or emerging curricular structures. VR project development would fit most seamlessly into the Mentored Advanced Project (MAP) structure as a group research project supervised by a faculty member. MAP experiences allow students to register for 2- or 4-credit MAP research credits and work closely with faculty advisers on independent research projects. We might also be able to utilize the “Plus 2” option, which allows professors and individual students enrolled in a regularly scheduled course to plan work that goes beyond the standard syllabus. GCIEL and student VR projects may also find a place within the emerging Digital Studies Concentration or the new Film and Media Studies Program. Grinnell College’s concentrations typically involve a cross-departmental listing of courses that meet the concentration’s themes and goals, but GCIEL could provide the seed for a concentration-specific seminar listed as a requirement or as an additional way for students to complete credits towards the concentration.
Ultimately, we want to find ways to leverage the benefits of housing GCIEL within the curriculum (e.g., rewarding students with class credits and guaranteed team members) along with the benefits of being independent from the curriculum (e.g., freedom from semester limits and ability to form multidisciplinary collaborations with skilled students, staff, and faculty). Fortunately, Grinnell College has a history of offering student learning opportunities that take many forms, including those that exist outside of traditional classroom environments.

We think all these efforts will pay off in the long run. Opening the traditional classroom format to integrate technological expertise and domain-specific content across disciplinary divides will expand student assessments beyond term papers to include scholarly products that will excite and engage a new generation of scholars in the twenty-first century. We will also have to ask: what is the best way to assess learning outcome achievement for interdisciplinary projects related to creating VR experiences? Can we identify meaningful learning outcomes we should expect of all students, such as project management and effective communication? Do we need to assess students on their domain-specific skills and knowledge, such as software troubleshooting, graphical design, or archival research? Who would be responsible for designing and evaluating these assessments? How do we more closely integrate staff and faculty roles in collaborative curriculum design, which breaks down the traditional barriers between faculty and staff roles? How do we challenge College organizational structures to harness staff expertise alongside faculty domain knowledge?

Learning from the successes of vocational and professional schools, we can reinvigorate liberal arts education with hands-on cooperative training, yet retain the focus on our traditional values that makes us unique. This new model could help to transform liberal arts institutions into laboratories for innovation in solving twenty-first–century problems. In the end we believe liberal arts graduates can—and should—have the best of both worlds: knowledge and the skills to apply it. 

Key Takeaways

  • Complex projects, especially ones using technology, require teams consisting of people with different technical and subject-matter competence. These projects provide excellent opportunities for interdisciplinary collaboration and teaching.
  • To develop transferable skills and knowledge, model the project experience on “real-world” structures. This includes treating student collaborators as equals who participate in decision-making and receive compensation (e.g., stipends or academic credit).
  • Time-intensive projects will require focused, concentrated effort by team members. These projects may require institutional support for faculty involvement (e.g., reassigned time) and a commitment from students of at least 10 hours a week to project development.
  • Long-term, complex projects benefit from a permanent physical space that is equipped to support the technology, comfortably hold team meetings, and accommodate team members’ work styles, including access outside of business hours.
  • The project curriculum must provide team members with the necessary prerequisite technical and subject-matter knowledge to start the project, and it must also be flexible enough in time and resources to adapt to questions that emerge during project development. As VR projects require new ways of configuring faculty-staff-student interaction and budgets to support developments, they provide excellent opportunities for institutional growth and external funding.
  • When properly configured teams work on developing well-designed VR experiences, students learn valuable skills related to communication, self-directed learning, attention to detail, problem solving, negotiation, and time management.
  • Development team members need to be well-versed in the ethical, psychological, and pedagogical affordances of VR and how these impact the project.
  • Start small with complex projects and iterate towards larger goals.
  • Open lines of communication between all team members—staff, faculty, and students—are essential to project success. Avoid isolation by encouraging teammates to pair up, even when working on components that traditionally involve many hours of individual work, such as archival research or programming. In this way, teammates can learn from each other’s processes. This supports cross-training and allows cross-pollination across diverse backgrounds and areas of expertise. Web-based project management platforms, when used appropriately, help to facilitate this communication.
  • To truly transform, institutions will have to examine deep structures: curricula, staff/faculty time, majors, and funding.


Ahn, Sun Joo (Grace), Amanda Minh Tran Le, and Jeremy Bailenson. 2013. “The Effect of Embodied Experiences on Self-other Merging, Attitude, and Helping Behavior.” Media Psychology 16, no. 1: 7–38.

Ashmore, Sondra, and Kristin Runyan. 2015. Introduction to Agile Methods. New Jersey: Pearson Education.

Bailenson, Jeremy. 2019. Experience on Demand: What Virtual Reality Is, How It Works, and What It Can Do. New York: W. W. Norton & Company.

Barrows, Howard. 1996. “Problem-Based Learning in Medicine and Beyond: A Brief Overview.” New Directions for Teaching and Learning 68 (Winter): 3–12.

Blyth, Carl. 2018. “Immersive Technologies and Language Learning.” Foreign Language Annals 51: 225–232.

Brown, John Seely, Allan Collins, and Paul Duguid. 1989. “Situated Cognition and the Culture of Learning.” Educational Researcher 18, no. 1 (January–February): 32–42.

Dalgarno, Barney, and Mark J. W. Lee. 2010. “What are the Learning Affordances of 3-D Virtual Environments?” British Journal of Educational Technology 41, no. 1 (January): 10–32.

Deemer, Pete, Gabrielle Benefield, Craig Larman, and Bas Vodde. 2012. The Scrum Primer: A Lightweight Guide to the Theory and Practice of Scrum. Version 2.0.

Dolgunsoz, Emrah, Gurkan Yildirim, and Serkan Yildirim. 2018. “The Effect of Virtual Reality on EFL Writing Performance.” Journal of Language and Linguistic Studies 14, no. 1: 278–292.

Dunlop, Joanna C. 2005. “Problem-Based Learning and Self-Efficacy: How a Capstone Experience Prepares Students for a Profession.” Educational Technology Research and Development 53, no. 1 (March): 65–85.

Edens, Kellah M. 2000. “Preparing Problem Solvers for the 21st Century through Problem-Based Learning.” College Teaching 48, no. 2: 55–60.

Engeström, Yrjö, Reijo Miettinen, and Raija-Leena Punamäki. 1999. Perspectives on Activity Theory. New York: Cambridge University Press.

GCIEL Focus Group. 2018. Interview by Vanessa Preast. Report of Interviews with Student Team Members. Grinnell College, April 2.

Greengard, Samuel. 2019. Virtual Reality. Cambridge, Massachusetts: MIT Press Essential Knowledge Series.

Handelsman, Jo, Diane Ebert-May, Robert Beichner, Peter Bruns, Amy Chang, Robert DeHaan, Jim Gentile, Sarah Lauffer, James Stewart, Shirley M. Tilghman, and William B. Wood. 2004. “Scientific Teaching.” Science 304, no. 5670 (April): 521–22.

Herrera, Fernanda, Jeremy Bailenson, Erika Weisz, Elise Ogle, and Jamil Zaki. 2018. “Building Long-term Empathy: A Large-scale Comparison of Traditional and Virtual Reality Perspective-taking.” PLoS ONE 13, no. 10:

Hmelo-Silver, Cindy E. 2004. “Problem-Based Learning: What and How Do Students Learn?” Educational Psychology Review 16, no. 3 (September): 235–66.

Ijaz, Kiran, Anton Bogdanovych, and Tomas Trescak. 2017. “Virtual Worlds vs Books and Videos in History Education.” Interactive Learning Environments 25, no. 7: 904–929.

Jerald, Jason. 2015. The VR Book: Human-Centered Design for Virtual Reality. Williston, VT: Morgan & Claypool Publishers.

Jonassen, David H. 2000. “Toward a Design Theory of Problem Solving.” Educational Technology Research and Development 48, no. 4 (December): 63–85.

Jonassen, David H., Chad Carr, and Hsiu-Ping Yueh. 1998. “Computers as Mindtools for Engaging Learners in Critical Thinking.” TechTrends 43, no. 2 (March): 24–32.

Jonassen, David H., and Lucia Rohrer-Murphy. 1999. “Activity Theory as a Framework for Designing Constructivist Learning Environments.” Educational Technology Research and Development 47, no. 1 (March): 61–79.

Legault, Jennifer, Jiayan Zhao, Ying-An Chi, Weitao Chen, Alexander Klippel, and Ping Li. 2019. “Immersive Virtual Reality as an Effective Tool for Second Language Vocabulary Learning.” Languages 4, no. 13:

Leimkuehler, F. Ray, field team supervisor. 1940. Uncle Sam Plantation. From the Library of Congress, Historic American Building Survey.

McCombs, Barbara L., and Jo Sue Whistler. 1997. The Learner-Centered Classroom and School: Strategies for Increasing Student Motivation and Achievement. San Francisco: Jossey-Bass Publishers.

Nathal, Karla Liliana Puga, María Eugenia Puga Nathal, Humberto Bracamontes del Toro, Marco Antonio Guzmán Solano, and Juan Carlos Martínez Sandoval. 2018. “The Immersive Virtual Reality: A Study in Three-dimensional Euclidean Space.” American Journal of Educational Research 6, no. 3: 170–174.

National Research Council. 2000. How People Learn: Brain, Mind, Experience, and School: Expanded Edition. Washington, DC: The National Academies Press.

Neville, David. 2014. “The Story in the Mind: The Effect of 3D Gameplay on the Structuring of Written L2 Narratives.” ReCALL: The Journal of the European Association for Computer Assisted Language Learning 27, no. 1: 1–17.

Putman, Shannon, and Lateefah Id-Deen. 2019. “I Can See It! Math Understanding through Virtual Reality.” Educational Leadership 76, no. 5 (February): 36–40.

Rubin, Kenneth. 2013. Essential Scrum: A Practical Guide to the Most Popular Agile Process. New Jersey: Pearson Education.

Rubin, Peter. 2018. Future Presence: How Virtual Reality Is Changing Human Connection, Intimacy, and the Limits of Ordinary Life. New York: Harper Collins.

Selingo, Jeffrey. 2013. College (Un)bound: The Future of Higher Education and What It Means for Students. New York: Houghton Mifflin Harcourt.

———. 2017. There Is Life after College. New York: William Morrow.

Sinclair, Bryan, and Glenn Gunhouse. 2016. “The Promise of Virtual Reality in Higher Education.” EDUCAUSE Review:

Steffen, Jacob, James E. Gaskin, Thomas O. Meservy, Jeffrey L. Jenkins, and Iopa Wolman. 2019. “Framework of Affordances for Virtual Reality and Augmented Reality.” Journal of Management Information Systems 36, no. 3: 683–729.

Sundaram, Shirsh, Ashish Khanna, Deepak Gupta, and Ruby Mann. 2019. “Assisting Students to Understand Mathematical Graphs Using Virtual Reality Application.” In Advanced Computational Intelligence Techniques for Virtual Reality in Healthcare, edited by Deepak Gupta, Aboul Ella Hassanien, and Ashish Khanna, 49–62. Studies in Computational Intelligence, vol 875. Cham, Switzerland: Springer.

Szabo, Victoria. 2019. “Collaborative and Lab-Based Approaches to 3D and VR/AR in the Humanities.” In 3D/VR in the Academic Library: Emerging Practices and Trends, edited by Jennifer Grayburn, Zack Lisher-Katz, Kristina Golubiewski-Davis, and Veronica Ikeshoji-Orlati, 12–23. Council on Library and Information Resources Report 176.

van Loon, Austin, Jeremy Bailenson, Jamil Zaki, Joshua Bostick, and Robb Willer. 2018. “Virtual Reality Perspective-taking Increases Cognitive Empathy for Specific Others.” PLoS ONE 13, no. 8:

Villena Taranilla, Rafael, Ramón Cózar-Gutiérrez, José Antonio González-Calero, and Isabel López Cirugeda. 2019. “Strolling through a City of the Roman Empire: An Analysis of the Potential of Virtual Reality to Teach History in Primary Education.” Interactive Learning Environments.

Wenger, Etienne. 1998. Communities of Practice: Learning, Meaning, and Identity. Cambridge: Cambridge University Press.

Wenger, Etienne, Richard McDermott, and William M. Snyder. 2002. Cultivating Communities of Practice. Cambridge, MA: Harvard Business Press.

“What Is Liberal Education.” n.d. Association of American Colleges & Universities, accessed March 03, 2020,

Wood, Zebulon M., Albert William, and Andrea Copeland. 2019. “Virtual Reality for Preservation: Production of Virtual Reality Heritage Spaces in the Classroom.” In 3D/VR in the Academic Library: Emerging Practices and Trends, edited by Jennifer Grayburn, Zack Lisher-Katz, Kristina Golubiewski-Davis, and Veronica Ikeshoji-Orlati, 39–53. Council on Library and Information Resources Report 176.

Yee, Nick, and Jeremy Bailenson. 2006. “Walk a Mile in Digital Shoes: The Impact of Embodied Perspective-taking on the Reduction of Negative Stereotyping in Immersive Virtual Environments.” Proceedings of PRESENCE 2006: The 9th Annual International Workshop on Presence. Cleveland, Ohio, August 24–26.

Yew, Elaine H. J., and Henk G. Schmidt. 2011. “What Students Learn in Problem-Based Learning: A Process Analysis.” Instructional Science 40, no. 2 (March): 371–95.

Yildirim, Gürkan, Mehmet Elban, and Serkan Yildirim. 2018. “Analysis of Use of Virtual Reality Technologies in History Education: A Case Study.” Asian Journal of Education and Training 4: 62–69.

About the Authors

David O. Neville (PhD, Washington University in St. Louis; MS, Utah State University) is a Digital Liberal Arts Specialist and Director of the Immersive Experiences Lab at Grinnell College.

Vanessa Preast (PhD, Iowa State University; DVM, University of Florida) is Associate Director of the Center for Teaching, Learning, and Assessment at Grinnell College.

Sarah J. Purcell (PhD, Brown University) is the L.F. Parker Professor of History at Grinnell College.

Damian Kelty-Stephen (PhD, University of Connecticut-Storrs) is Assistant Professor of Psychology at Grinnell College.

Timothy D. Arner (PhD, Pennsylvania State University) is Associate Dean of Curriculum and Academic Programs and Associate Professor of English at Grinnell College.

Justin Thomas (MFA, University of Maryland) is Associate Professor of Scenic and Lighting Design and Chair of the Theatre and Dance Department at Grinnell College.

Christopher P. French (PhD, University of Chicago) is Professor of Mathematics at Grinnell College.

A workshop filled with the tools of a silversmith. In the left half of the frame, a man in colonial attire sits with his back to the viewer. In the center-right of the frame, the player’s pointer rests on a colorful print and declares “Landing Print.”

Mission US TimeSnap: Developing Historical Thinking Skills through Virtual Reality


Mission US: TimeSnap is a blended learning experience, marrying the capacity of a virtual reality mission with consolidation, support, and deeper exploration in the classroom. This article investigates the affordances of virtual reality as a teaching tool and the challenges of designing for today’s classroom. The game developers of Electric Funstuff were drawn to virtual reality by research that suggests it has great potential to support the kind of inquiry-based learning that many high school history classrooms struggle to provide. The result is Mission 1: King Street, 1770, the first in a series of history-based virtual reality missions that model and scaffold the use of critical historical thinking skills. After several rounds of testing and iteration, Mission 1 is poised for a final classroom evaluation, and this paper shares the developers’ insights and best practices for other classroom-VR creators.


There’s an argument brewing in the Royal Exchange Tavern on King Street. Two men cluster at the end of a sturdy wooden table, deep in conversation and visibly agitated. The tavern keeper ignores their quarrel, distracted by an advertisement in the Gazette. Across the room, a man slouches over his tankard and re-reads, astonished, a letter the author never meant to share with the people of Boston. Hundreds of miles away, hundreds of years into the future, yet, impossibly, also present in this moment, in Boston, in April, in 1770, a high school student considers her options: “Hm, do I really want to do that? No, don’t go there…”

This student is playing Mission US: TimeSnap, a game-based virtual reality experience designed to critically engage high school learners in US History. Before her mission is through, this student will explore three richly detailed and interactive locations in 1770 Boston, on the way gathering evidence that will help her explain why, only weeks earlier, five civilians were gunned down in the middle of the street by British soldiers. And, because TimeSnap is a blended learning experience, the journey won’t end when she removes her headset. Outside of the virtual world, this student will collaborate with her classmates and receive support from her teacher in order to understand and articulate not only the causes of the Boston Massacre but also the different ways this event was interpreted and why this matters to America’s Revolutionary history. In short, she will be “doing history” by grappling with contextualization, causation, and other essential historical thinking skills.

This paper describes the design and implementation of TimeSnap from the perspective of both its developers and researchers and offers lessons learned for would-be practitioners.[1] These lessons include (a) how to allow for the time and technological constraints of today’s classroom, (b) how to manage cognitive load in virtual learning environments, and (c) how to use design to support active learning.

Educational Affordances of Virtual Reality

Since the computer arrived in the classroom, history educators have sought to harness digital technologies to innovate instruction. Advocates saw exciting opportunities to digitize primary sources, scaffold learning with hypermedia, and build two- and three-dimensional virtual spaces for exploration and engagement (Dede 1992; Evans and Brown 1998; Cornell and Dagefoerde 1996). The use of technology in the classroom arose side-by-side with a shift in pedagogical practice in the social sciences. Over the past few decades, professional organizations like the Stanford History Education Group, National Center for History in the Schools, American Social History Project, and Roy Rosenzweig Center for History and New Media have developed strategies and resources to help each learner to “read like a historian,” or practice disciplinary literacy, by grappling with historical evidence. Inquiry-based learning, where teachers guide students through the process of evidence-gathering, source evaluation, and argumentation, has emerged as the most promising instructional mode for building these historical thinking skills (Voet and De Wever 2017). Assessment tools have also evolved: the document-based question (DBQ)––in which students analyze primary and secondary sources to explain past events and make arguments––has been widely adopted as the most reliable measure of student learning. Technology, particularly digital media, has been singled out for its significant potential for scaffolding learning (Dede 1992; Saye and Brush 2006). Hypermedia and other digital supports foster inquiry into the “ill-structured” problems of history by providing hard scaffolds and promoting independent exploration and problem-solving (Saye and Brush 2006).
However, despite calls for an inquiry-based classroom and even after the wide adoption of digital tools in many classrooms, according to one survey, half of high school history teachers still regularly lecture for three-quarters of the class period––and some for the entire period (Wiggins 2015). Implementing these methods poses a challenge for teachers trained in conventional practices as well as for students who struggle to analyze complex texts.

Reflecting on the need for both effectively modeled historical thinking skills and more compelling practice environments, we saw an opportunity for innovation. After ten years’ experience using the affordances of games and interactives to deepen middle school social studies through Mission US, our game developers wanted to harness the unique capacities of virtual reality (VR) to build historical literacy. We drew on the insights of the Stanford History Education Group (SHEG) and the National Center for History in the Schools (NCHS), selecting essential historical thinking skills like contextualization, causation, and sourcing to model and develop in high school history classrooms through a blended VR experience.

Virtual reality has strong potential for teaching history. Like living history museums, VR assembles a three-dimensional historical world to explore––putting students “inside” the past and, through embodied learning, making historical investigations more memorable and motivational. Theorists of embodied learning assert that learning is a product of sensorimotor interaction with the world rather than the result solely of mental activities that occur within the brain’s physical confines (Lakoff and Johnson 1999; Osgood-Campbell 2015). Proponents of experiential learning argue that the most powerful learning experiences are those that allow people to experiment (or “take action”) physically as well as mentally through hands-on activities, reflect on the outcomes, and make changes as required to advance toward goals (Kolb 2014; Kontra, Goldin-Meadow, and Beilock 2012). In this frame, learning activities should be designed to allow students to interact in meaningful ways with their environments to facilitate deeper encoding of knowledge.

Researchers speculate that VR can promote embodied and experiential learning by facilitating presence, or the illusory perception of physically “being there” in a non-physical space (Schubert, Friedmann, and Regenbrecht 2001). Accordingly, students can interact with content in ways not possible with books, video, or even games. They can, for example, pick up and rotate objects and, in in-room VR, move toward and away from sounds, giving them an intimate sense of the distinctive material culture of a historical era. Students may also be more likely to practice the skills of historical thinking after having them modeled by characters in the VR space and then trying those skills themselves. Similarly, VR may promote embodied learning by enhancing episodic memory (memory of autobiographical events) and visuospatial processing (the ability to identify objects and the spatial relationships among them) (Parsons et al. 2013; Repetto, Serino, Macedonia, and Riva 2016). Some researchers have proposed that the formation of memories is closely tied to the ability to take action on the information being encoded by the brain. According to Glenberg (1997), “conceptualization is the encoding of patterns of possible physical interaction with a three-dimensional world. These patterns are constrained by the structure of the environment, the structure of our bodies, and memory” (see Osgood-Campbell 2015). If this is the case, a learner in a VR setting, who must perform actions (albeit limited gestures, not fine motor movement) to navigate their virtual environment and unlock knowledge, would form more meaningful memories than a student reading the same information.

VR has educational potential beyond the affordances of embodied learning. Research suggests, for example, that the novelty and interactive possibility of VR improves student motivation and increases student recall (Chiang, Yang, and Hwang 2014; Ijaz, Bogdanovych, and Trescak 2017). Furthermore, learning in a realistic virtual space aligns with the methodologies of anchored instruction. Anchored instruction theory posits that more meaningful learning takes place when students are placed in a realistic context, for example by solving a problem presented in a case study (Yilmaz 2011). Often, anchored instruction is supported by technology like video or VR, which supplies the realism of an otherwise unfamiliar situation. In a virtual scenario, students activate “inert” knowledge when they encounter situations to which that knowledge can be applied (Love 2005).

TimeSnap is designed to bring these advantages to the US History classroom. With its immersive and interactive historical spaces, TimeSnap aims to model the work of history as it builds knowledge of historical people, places, events, and ideas. Working under the assumption that inquiry-based learning experiences are the most powerful, our theory of change posits that a brief (fifteen-to-thirty minute) VR experience that models historical thinking skills, followed by a lesson plan that helps students to apply their new knowledge and skills, will be demonstrably more effective at helping students retain and apply historical knowledge and skills than a traditional, paper-based lesson.

Virtual Tour of TimeSnap Mission 1: King Street, 1770

Mission US: TimeSnap is a blended learning experience that marries the capacity of the virtual reality mission with consolidation, support, and deeper exploration in the classroom. It was developed in Unity and optimized for the Oculus Go. The production and research process has been funded by a Small Business Innovation Research grant through the US Department of Education. In each TimeSnap mission, students “travel” back in time to investigate a pivotal period in American history. The core of each lesson is a VR mission in which students explore historical locations, encounter local people, collect and analyze artifacts, and bring back evidence to construct an interpretation of what happened and why. The following case study is focused on the development and testing of Mission 1: King Street, 1770, an investigation of the Boston Massacre. Later missions will build upon this research to explore the Fugitive Slave Law, westward expansion, and turn-of-the-century immigrant communities.

To encourage the critical inquiry and problem-solving skills at the heart of inquiry-based learning, TimeSnap is animated by “missions,” questions that form the basis for the VR task and the lesson to follow. A simplified in-game mission (Find the causes of the Boston Massacre) keeps students focused on a single task during their time in the VR. The classroom lesson poses a more complex question (How did the Patriots and the British each explain the causes of the Boston Massacre?), to be answered using evidence collected in the VR in collaboration with classmates and with teacher support and guidance. For more advanced students, an optional DBQ challenges them to apply what they have learned to a new set of documents and interpret the larger significance of the Boston Massacre in American history.

Entering a three-dimensional virtual space allows players to feel physically immersed in a new world, but TimeSnap extends this opportunity for immersion by including worldbuilding. Students do not simply put on the VR headset and immediately see 1770: they enter a future society with its own fractured history before embarking on their mission. Students are deputized as agents of the Chronological Advanced Research Projects Agency (C.A.R.P.A.), a future government department. C.A.R.P.A. was founded to rebuild the world’s archives using the agency’s signature technology, a virtual form of time travel that replicates historic environments, artifacts, and organisms. C.A.R.P.A. created this technology to repopulate their digital collections and expand their understanding of the past. Agents search for objects and information to fill gaps in the historical record that have puzzled the agency’s scholars. In the King Street Mission, for example, C.A.R.P.A. is aware of the Boston Massacre but does not have the evidence necessary to explain why five civilians were shot by soldiers of their own government. Room by room, students uncover the clues necessary to explain the many factors contributing to the Massacre.

Overview of User Experience

In a three-minute tutorial, agents meet C.A.R.P.A.’s Director Wells, who will be their guide and model for historical thinking. Director Wells outfits agents with a TimeSnap device (the handheld VR controller) that enables time travel and other helpful powers. Wells gives agents a mission: to go back in time and verify or collect historical accounts in order to respond to the mission question (e.g., What caused the Boston Massacre?). Their mission begins with a key piece of evidence––a challenging text or visual primary source. Wells poses focus questions about the evidence and prompts the players to learn all they can by investigating historical figures, locations, and other documents and artifacts. In King Street, 1770, Wells leads agents through three rooms, which are carefully researched recreations of colonial historical settings. Players explore rooms to gather sources and contextual information and to collect and study additional primary documents.

VR Features


Navigation
Players use the Oculus pointer (or an equivalent) to navigate to, and through, rooms in the VR environment.

Audio Guide

Voiceover (VO) support, in the form of C.A.R.P.A.’s Director Wells, guides players through the space, assigns tasks connected to the lesson question, and models historical thinking skills, including sourcing and contextualization.

Scan, Mind Meld, and Analyze

Players use the pointer to click on people and objects in the VR environment, view hot spots providing background information, “hear” thoughts (Mind Melds), and zoom in for a closer look. This feature is the primary way that players interact with the VR rooms and items.

Close-up of Paul Revere’s “Landing of the Troops” print. In the center of the frame is a transcription of the cursive letters from a corner of the print. Text supports are highlighted in aqua. At the bottom of the frame, the text supports explain that His Majesty’s Secretary of State for America was “the official responsible for overseeing the American colonies.”
Figure 1. Transcription and text supports for Paul Revere’s “Landing of the Troops.”
In the center of the frame, a textbox contains the transcription of a Mind Meld with the tavern keeper. Below the text are two prompts labeled with an ear, inviting the student to listen further to one of the options.
Figure 2. Transcription of the tavern keeper’s branching Mind Meld.


Tableaux
Each room is divided into discrete scenes, known as tableaux. Each tableau is a collection of objects and Mind Melds, typically providing interrelated information. Players must complete a minimum number of interactions with the items in a single tableau before they can move on.

Field Notes

Players automatically collect field notes during their interactions with certain objects and people. Notes are sorted into pre-set categories as they are found. Players can track their progress unlocking categories and collecting notes when they return to the C.A.R.P.A. Lab. At the end of their mission, players are emailed copies of their notes.

In the left half of the frame, a text box indicates how many Field Notes the player has collected. The Field Notes are grouped by category (e.g., “The Aftermath,” “Taxation”).
Figure 3. Collected field notes displayed in the C.A.R.P.A. Lab.

Evidence Locker, Room, and Exit Questions

After they complete each room, players are asked to select one of three objects to return with them to the C.A.R.P.A. Lab. These objects are held in the Lab’s evidence locker for the duration of the mission. When they return to the C.A.R.P.A. Lab, players answer questions about the items they have selected from each room and about the conclusions they are drawing about the mission question.

In figure 4, holograms of three objects are projected on top of the final Revere Workshop tableau. The closed caption prompts players to choose the object that “best helps you understand the causes of the massacre.”
Figure 4. A room question in Revere’s Workshop.
In figure 5, a follow-up question about the Revere Workshop artifact is projected over the C.A.R.P.A. Lab scene.
Figure 5. The follow-up question from Revere’s Workshop in the C.A.R.P.A. Lab.

Virtual rooms

C.A.R.P.A. Lab

A large space with irregular grey and white walls. In the center of the field of view, a fragment of Paul Revere’s “Bloody Massacre” spins on a small holographic pedestal. The closed caption reads, “It’s a primary source from 1770 that another agent retrieved.”
Figure 6. A fragmentary source is analyzed in the C.A.R.P.A. Lab.

Players begin and end their mission in the C.A.R.P.A. Lab, a cavernous industrial space with an evidence locker for the artifacts students collect from the historic spaces.

  • Room Objective: Acclimate players to the VR environment, introduce them to the mission and Director Wells, and help students reflect on and consolidate information between VR rooms.
  • Number of Objects: 1
  • Number of Mind Melds: 0

Paul Revere’s Workshop

A workshop filled with the tools of a silversmith. In the left half of the frame, a man in colonial attire sits with his back to the viewer. In the center-right of the frame, the player’s pointer rests on a colorful print and declares “Landing Print.”
Figure 7. Paul Revere in his Workshop.
  • Room Objective: Discover the complete “Bloody Massacre” print and explore Paul Revere’s perspective on the Boston Massacre.
  • Tableaux: Revere’s Workbench, Revere in 1770, Revere in 1768
  • Number of Objects: 5
  • Number of Mind Melds: 2

Royal Exchange Tavern

A dim tavern interior with wooden tables and a large fireplace. Two men stand in the center-right foreground. A third sits in the left background. There is a wooden stick on the table behind one of the standing men.
Figure 8. Customers at the Royal Exchange Tavern.
  • Room Objective: Encounter conflicting perspectives and evidence on the Boston Massacre.
  • Tableaux: An Argument, The Tavernkeeper, An Editorial
  • Number of Objects: 4
  • Number of Mind Melds: 4

Boston Gaol

A narrow jail cell. In the center of the frame, a man sits in his shirtsleeves with his back to the viewer. To his left, the iconic red coat of a British soldier lies on a cramped metal bed. To his right, a player’s pointer lands on a piece of paper with the question “What is he writing?”
Figure 9. Captain Preston in Boston Gaol.
  • Room Objective: Hear Captain Preston’s account of the Massacre.
  • Tableaux: Preston Asleep, Preston Awake
  • Number of Objects: 4
  • Number of Mind Melds: 1

TimeSnap Lesson

The King Street, 1770 VR mission is followed by a classroom lesson that helps students apply the knowledge and skills presented in the VR mission. Teachers are asked to lead their students in a mission debrief discussion that helps students review and consolidate the information they were exposed to in the VR. Students are provided a copy of their field notes, pre-sorted into relevant categories to support their inquiry into the causes of the Boston Massacre. Classroom activities and teacher-led discussions lead students to expand their inquiry into the Massacre, from naming and explaining the causes of the Boston Massacre to a critical evaluation of the sources of their evidence. Ultimately, students are expected to use the historical thinking skills modeled by Wells and practiced in the lesson to analyze a new set of documents pertaining to the Boston Massacre and the American Revolution.

Testing and Evaluation

Over the course of its ongoing development, the usability, feasibility, and promise of efficacy of TimeSnap have been evaluated in numerous settings. The final version of TimeSnap, described above, has been substantially revised based on recommendations from two pilot studies (conducted in December 2017 and January 2019), but final testing is still in progress. The results of this summative study, including the extent to which learning was positively impacted by the VR experience, will be shared via the project website.

Initial Phase I pilot study

In December of 2017, a pilot study of the initial Mission 1 VR and lesson activities was conducted with two US history teachers and fifty-nine students in two public high school classrooms (a ninth-grade class in Queens, New York, and an eleventh-grade class in suburban New Jersey) to determine the project’s feasibility. Students in each class were randomly assigned to a treatment group or control group by their teachers. Prior to the beginning of the pilot, participating teachers and students in the treatment group were asked to complete a Student Immersive Tendencies Questionnaire. During the two-day classroom pilot, all students had the opportunity to engage in the TimeSnap VR experience and were also asked to read, annotate, and respond to questions about four primary source documents related to the Boston Massacre, though students in the control group were asked to complete their analysis of documents before engaging in the VR experience. Many students in the treatment group reported that the immersive nature of the VR experience heightened their engagement and focus during the lesson, in addition to aiding their ability to visualize and recall important information about the historical context. Students also reported that they enjoyed having a personal, distraction-free learning space in which to explore and progress at their own pace. Both teachers were able to successfully incorporate TimeSnap into their regular instructional approaches and noted an interest in using VR with their students in the future. To ensure that this novel instructional experience was not going to adversely affect learning, the pilot study also included preliminary measures of efficacy. The treatment group actually showed slight, but not statistically significant, improvements in retention of historical facts. More importantly for the goals of the study, students and educators affirmed the potential for the game to impact students’ sourcing and contextualization skills.

Phase II formative research

In January 2019, a newer iteration of the TimeSnap: Mission 1 VR prototype and accompanying curriculum materials were tested with a group of five eleventh-grade students and one facilitating teacher at a public high school in lower Manhattan in New York City. The instructional session took place in an after-school setting over ninety minutes (designed to simulate a condensed version of two individual instructional periods) and was immediately followed by a thirty-minute group interview with all participating students and a forty-minute interview with the facilitating teacher later that week (see Appendix for additional information). The small sample of student participants (n=5) allowed for in-depth analysis of students’ written responses to open-ended questions, which provided some insight into the nuances of their misconceptions and gaps in understanding.

Key Findings and Implications

All participating students exhibited a high degree of engagement in the VR and subsequent class discussions and collaborative writing activities. The two features of the VR experience students found most compelling were picking up and manipulating objects and Mind Melding with different historical figures. Ironically, though students were able to vividly recall objects they had “touched” in VR, they ultimately struggled to articulate how these interactions informed their understanding of the relevant 2D primary source documents. This suggested that future iterations of the prototype might benefit from attaching deeper, more meaningful content to these popular mechanics, in an effort to better engage and support students in making sense of difficult language and more relevant contextual details. At the same time, it remained important to consider what could get lost through such enhancements to the Mind Meld mechanic, insofar as this feature was intended to function as a support—not a replacement—for the heavy lifting work of document analysis.

All students demonstrated an appropriate degree of intuition about how to interact with key VR features, though most expressed a desire for more opportunities to “click around to figure it out yourself.” Nevertheless, and regardless of the degree to which they chose to engage with in-game scaffolds, all students exhibited difficulty recalling and articulating specific mission goals following the experience, and there were only minor differences in their performance on a six-question multiple-choice pre-/post-VR assessment. When asked to recall important information associated with each VR room, student responses primarily focused on people and objects, with a tendency to describe these elements broadly rather than explicitly referencing their significance to the mission question or historical context (e.g., “three men,” “tools,” “a bowl with writing on it”). Though students were able to work together to answer sourcing and contextualization questions about “The Case of Capt. Preston of the 29th Regiment,” they were less successful in building and supporting independent arguments related to the “Bloody Massacre” print, where they either interpreted the print as a photographic representation of historical events or failed to acknowledge the broader historical events that informed the creation of the document. Students’ failure to fully meet the lesson’s learning objective, coupled with their professed desire for additional agency and freedom to choose their own level of scaffolding, suggested a need to incorporate additional prompts and moments that inspire students to pause, reflect, and revise their initial impressions as the VR experience unfolds, rather than postponing such activities until students’ return to the “real world.”

Phase II Full Study

TimeSnap is currently undergoing final testing. In December 2019 and January 2020, the revised build of Mission 1 and accompanying instructional material were piloted in three “treatment” classrooms, while three “business-as-usual” classrooms completed a paper-based lesson on the same content and skills. The first build of Mission 2 will be tested at the same three sites at a date to be determined. Our research partners are evaluating TimeSnap on the following criteria:

  • Usability: Are students able to navigate the VR setting successfully and accomplish the goals of the lesson?
  • Feasibility: Is the teacher able to integrate the students’ experiences in VR with the associated classroom activities to achieve the learning objectives?
  • Fidelity of Implementation: What modifications does the teacher make to the lesson activities or curriculum materials and why?
  • Student Impact: As compared to peers in business-as-usual classes, do high school students who participate in TimeSnap lessons demonstrate greater gains in history content knowledge about topics in American history and in historical thinking skills? How do students relate to and experience history content in a VR-supplemented lesson?

Lessons, Revisions, and Conclusions

Testing has repeatedly shown that students find TimeSnap to be appealing and immersive, a welcome change in the way they approach course material. However, measurable change in students’ approach to historical thinking remains elusive. Since January 2019, our team has drawn on research findings and other insights from our partners at the Education Development Center, the American Social History Project, and other expert advisors in history pedagogy to revise and strengthen TimeSnap: Mission 1. We have taken steps to clarify the mission goal, expand the role of the in-game audio guide, and create space for reflection and synthesis. The Virtual Tour of Mission 1 included earlier in this article reflects those revisions to the design of the game. We believe that the simplified mission, enhanced support from Wells, and deliberately reflective room questions will produce meaningful learning opportunities. We launched Phase II testing in December 2019 in three New Jersey high schools. As of the submission of this article, those tests were ongoing. While we wait for the data and results, we have reflected on our process and identified three critical best practices for would-be developers. As you embark on your own VR production process, here are lessons to keep in mind.

Lesson #1: Plan for Classroom Realities

Bringing interactive technology into the classroom means designing for conditions of scarcity. Even in school districts that value technology in the classroom or experimental instructional design, there are limits on the amount of time and money departments can dedicate to VR. We knew it was essential to design a teaching tool that teachers would actually have the resources to implement. To keep TimeSnap teacher friendly, we have adhered to three core principles:

  • Short: Here, VR best practices align with classroom needs. Industry guidelines suggest that users should not exceed 30 minutes of continuous play, as they then become more likely to report symptoms of simulator sickness, such as nausea, disorientation, and eyestrain (Smith and Burd 2019). In our experience, most students reported little to no discomfort when adhering to these limits. Classroom time available for novel learning experiences is also limited, making the time constraints on VR a compatible limitation. The VR portion of the TimeSnap mission takes twenty to thirty minutes to complete, less than a standard class period.
  • Mobile: Mobile headsets, like the Oculus Go, are less expensive than room-scale VR systems and require far less setup. While mobile VR headsets do not offer the ability to walk or grasp objects in virtual space, they still provide an immersive experience without breaking a district’s technology budget. While newer mobile headsets like the Oculus Quest provide six degrees of freedom, consider that it is not very practical to have twenty-five students trying to walk around the actual classroom!
  • Flexible: Some teachers may have a week to explore the nuances of a single historical event, but most must move speedily through their curriculum. We have created lesson materials that teachers may select from or adapt to their own purposes, from a simple worksheet to guide students through field note analysis to a full DBQ. We also believe that our focus on teaching historical thinking skills (beyond the specific historical context) helps justify additional time spent.

Lesson #2: Less is Still More

Educators have looked to digital technologies to support student learning, including tools that help shoulder a student’s cognitive load while they wrestle with new or complex information. In a VR experience like TimeSnap, however, there is a risk that the very supports meant to be helpful will instead inundate students with new information without any time or mechanism to process it. Earlier iterations of TimeSnap provided more detail and allowed students more freedom to explore each room, at the cost of their comprehension. This is why, despite the fact that some players have requested more interactions and greater freedom of movement, we embarked on a program of simplification ahead of our Phase II testing.

  • Clear Mission: It is critical for students to understand their primary purpose while in VR. We found that having a secondary mission drained students’ cognitive resources without adding to their interest or learning. We pared down the in-game mission question, saving the subtleties of sourcing for the classroom exercise.
  • Audio Guidance: Use a narrator or guide to support students as they navigate virtual space. A guide can do more than just give orders or answer basic questions: she can shape the way students think about the information they encounter. We expanded the role of our in-game audio guide, Director Wells; in addition to her existing function answering hotspot questions, Wells will prime students for a focus task within a room (“I wonder if these people would agree with Revere’s version of events…”) and act as an external memory (“This must be the same print we saw…”).
  • Structured Discovery: Be wary of calls for free exploration of the VR landscape. Some creators understandably believe that unstructured experiences that allow students to move and explore at will must generate high user engagement. More freedom, however, often creates instructional and logistical problems. Students who are free to explore are also free to miss essential information, and the ability to transition back and forth between rooms makes the VR experience longer and increasingly uncomfortable. We introduced the tableaux system, curtailing players’ ability to move between sections of a room and complete them in their preferred order. This allows us to control the flow of information to students; we feed them information in an order that makes sense. This has the added benefit of reducing the need to script complex conditional answers based on what a student has or has not encountered yet.
  • Repetition: The primary affordance of VR—immersion in a new and exciting virtual space—can be distracting. With so much to look at and absorb, students can easily miss key details unless they are exposed to them multiple times. We threw off our fear of repetition and began recycling key phrases and ideas. The language of the mission question and the various potential causes resurface again and again in the script. Repeated exposure to these ideas, some of them quite unfamiliar, gives students a chance to recognize that this information might be worth hanging on to.
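The Structured Discovery principle above can be sketched as a simple gated sequence. The sketch below is illustrative only (the class and tableau names are ours, not from the TimeSnap codebase): each tableau unlocks only when the previous one is complete, so the script never needs conditional branches for what a player has or has not yet seen.

```python
# Hypothetical sketch of the tableaux gating described above: players
# advance through a room's tableaux in a fixed order, so the dialogue
# script can assume exactly which content the player has already seen.

class Room:
    def __init__(self, name, tableaux):
        self.name = name
        self.tableaux = tableaux      # ordered list of tableau names
        self.current = 0              # index of the active tableau

    def active_tableau(self):
        return self.tableaux[self.current]

    def complete_current(self):
        """Unlock the next tableau; returns False once the room is done."""
        if self.current < len(self.tableaux) - 1:
            self.current += 1
            return True
        return False

# Tableau names taken from the Paul Revere's Workshop room description.
workshop = Room("Paul Revere's Workshop",
                ["Revere's Workbench", "Revere in 1770", "Revere in 1768"])
assert workshop.active_tableau() == "Revere's Workbench"
workshop.complete_current()
assert workshop.active_tableau() == "Revere in 1770"
```

The fixed ordering is what makes the scripting tractable: narration for "Revere in 1770" can safely reference the workbench, because the player has necessarily seen it.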

Lesson #3: Cultivating Curiosity

Historical VR experiences are often framed as time travel, where learners can visit the past in the same way they might visit Paris. But what kind of tourists will they be? Sometimes, exploring a nominally “interactive” virtual environment is downright passive. To ensure that students are pursuing and synthesizing information, not just hitting “next,” our production team deliberately constructed a mission that students could get excited to complete. Our developer team designed game mechanics that motivate students to explore widely and make meaningful choices, even within the constraints set by cognitive load. These Phase II revisions reflect recommendations from Phase I and interim testing to encourage more student reflection and synthesis within the VR.

  • Tools for Problem-Solving: Make gameplay captivating by presenting a problem and equipping students with the tools to solve it. By setting a mission goal and creating opportunities for interaction and meaningful choice, TimeSnap presents a compelling problem space for students to navigate. In the C.A.R.P.A. Lab, Director Wells assigns the student a task and models the “hotspot” method they will use to extract information from the objects they encounter on their mission.
  • Meaningful Choice: Prevent students from passively clicking through interactives by prompting them to make decisions. Mind Melds and Room Questions provide TimeSnap’s primary opportunities for meaningful choice. Unlike documents, Mind Melds offer branching choices. When a student selects a follow-up option in a Mind Meld, they cannot return to listen to the other option later. This encourages students to select the most interesting or relevant information and can lead to variation in student experience and field note collection. At the end of each room, students are prompted to select a significant artifact to return to the C.A.R.P.A. Lab. While this ultimately does not affect the outcome of the mission or the information in their field notes, students must use their judgment in choosing the artifact they believe is most relevant to the mission.
  • Rewards: Use in-game reward systems to encourage learning behaviors and help students monitor their improvement or progress. We developed specific game mechanics meant to motivate users to actively explore their environment. For example, students can see how many field notes they have collected on return to the C.A.R.P.A. Lab. This in-game feedback informs students that they are making progress toward their goal. Equally motivating, however, is the thrill and wonder of “hands”-on discovery. Phase I research and interim testing indicated that students were most excited to “touch” virtual objects (rather than to read virtual documents) and enjoyed discovering “hidden” items. These encounters drive them to keep interacting with the virtual environment in search of new secrets to uncover.
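The one-shot branching described under Meaningful Choice can be modeled as a choice node that locks after a single selection. This is a hypothetical sketch, not TimeSnap’s actual implementation; the option labels and note texts are invented for illustration.

```python
# Illustrative sketch (not actual TimeSnap code) of the one-shot
# Mind Meld choice: picking one follow-up permanently locks out the
# other, so different playthroughs collect different field notes.

class MindMeld:
    def __init__(self, options):
        self.options = dict(options)  # follow-up label -> field note text
        self.chosen = None

    def choose(self, label):
        if self.chosen is not None:
            raise RuntimeError("Mind Meld already resolved; no going back")
        self.chosen = label
        return self.options[label]    # the field note the player collects

meld = MindMeld({"Ask about the crowd": "Note A",
                 "Ask about the soldiers": "Note B"})
note = meld.choose("Ask about the crowd")
# Choosing again now raises RuntimeError, mirroring the in-game lockout.
```

The irreversibility is the design point: because the unchosen branch is gone for good, the selection carries weight and students must commit to what they judge most relevant.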

Mission 1: King Street, 1770 received a final round of classroom testing in December and January 2019–2020; Mission 2 is currently in production and will begin testing, conditions permitting, in Fall 2020. We are eager to see how this current iteration of Mission 1 can produce measurable improvements in student knowledge-acquisition[2] and historical thinking, and we will carry forward any new insights we glean from these tests into Mission 2 and beyond. Grappling with the technical and cognitive challenges of VR in the classroom has been a productive process; each frustration forced us to adapt and innovate and ultimately create a better product.

Though it is hardly still “early days” for VR, this technology remains underutilized in education because of significant logistical impediments, and our work to mitigate these obstacles is one part of a long process to make VR a practical and effective pedagogical tool. Including educator voices is an essential component of that long-term mission and one that developers would do well to prioritize. Our developer team is perhaps uniquely well-positioned to partner with educators: after a decade of interdisciplinary collaboration on the Mission US game series, Electric Funstuff has built a robust network of educational researchers, curriculum specialists, and classroom instructors. Even with our considerable experience designing and developing educational games, we actively solicited insight and guidance from these partners. Developers best understand the technical possibilities afforded by new and evolving technologies, but only educators can point us to the areas of greatest need in their classrooms. Seeking a balance between freedom and structure, and between depth of content and the limits of cognitive load, we will continue to iterate toward a compelling educational instrument in which even the laws of physics are no barrier to historical learning.


[1] Mission US: TimeSnap was developed by Electric Funstuff in partnership with the Education Development Center and the American Social History Project/Center for Media and Learning at the CUNY Graduate Center. The authors would like to further acknowledge Dr. William Tally, who kindly reviewed drafts of this article and provided invaluable feedback, along with Dr. James Diamond, James Hung, Valentine Burr, Pennee Bender, Donna Thompson, Joshua Brown, Michelle Chen, Jill Peters, Dale Gordon, Benjamin Galynker, Robert Duncan, Caitlin Burns, and Peter Wood, each of whom has contributed their talents and expertise to this project.

[2] While pandemic control measures have indefinitely delayed our test of the second TimeSnap mission, we are excited to share preliminary data from the Mission 1 tests conducted from December through January. Independent-samples t-tests were conducted to compare treatment and comparison students’ change from pre- to post-assessment on a historical thinking subscale (possible range 1 to 6) and a historical knowledge subscale (possible range 0 to 1). Treatment students showed significantly greater pre-post change (M=.19) on the historical knowledge subscale than the comparison students (M=.04) (t=-2.7, p=.007, Cohen’s d=.54, indicating a medium effect size). Further analysis is ongoing. A full report will be made available on the project website when it is complete.
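For readers who wish to replicate this kind of analysis, the comparison reported above (an independent-samples t-test with Cohen’s d computed from a pooled standard deviation) can be sketched in a few lines. The score lists below are invented placeholders, chosen only so the group means match the reported M values; they are not the study’s data, and the resulting statistics will differ from those reported.

```python
# Pooled-SD Cohen's d and the independent-samples t statistic, as in the
# preliminary analysis above. The score lists are illustrative
# placeholders, NOT the study's pre-post change scores.
from statistics import mean, variance
from math import sqrt

treatment = [0.2, 0.1, 0.3, 0.25, 0.1]     # placeholder change scores (M=.19)
comparison = [0.05, 0.0, 0.1, 0.02, 0.03]  # placeholder change scores (M=.04)

n1, n2 = len(treatment), len(comparison)
pooled_var = ((n1 - 1) * variance(treatment) +
              (n2 - 1) * variance(comparison)) / (n1 + n2 - 2)
pooled_sd = sqrt(pooled_var)

# Cohen's d: standardized difference of group means
d = (mean(treatment) - mean(comparison)) / pooled_sd

# Independent-samples t statistic; compare against a t distribution
# with n1 + n2 - 2 degrees of freedom to obtain the p-value
t = (mean(treatment) - mean(comparison)) / (pooled_sd * sqrt(1/n1 + 1/n2))
```

Statistical packages such as SciPy compute the same t statistic (and its p-value) directly; the manual form is shown here to make the pooled-variance definition of Cohen’s d explicit.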


American Social History Project. 2019. “Home.” Accessed December 12, 2019.

Chiang, Tosti, Stephen J. H. Yang, and Gwo-Jen Hwang. 2014. “An Augmented Reality-based Mobile Learning System to Improve Students’ Learning Achievements and Motivations in Natural Science Inquiry Activities.” Educational Technology & Society 17, no. 4: 352–365.

Cornell, Saul, and Diane Dagefoerde. 1996. “Multimedia Presentations: Lecturing in the Age of MTV.” Perspectives on History (January).

Dede, Christopher J. 1992. “The Future of Multimedia: Bridging to Virtual Worlds.” Educational Technology 32, no. 5: 54–60.

Evans, Charles T., and Robert Brown. 1998. “Teaching the History Survey Course Using Multimedia Techniques.” Perspectives on History (February).

Glenberg, Arthur M. 1997. “What Memory Is For.” Behavioral and Brain Sciences 20, no. 1 (March): 1–19.

Ijaz, Kiran, Anton Bogdanovych, and Tomas Trescak. 2017. “Virtual worlds vs Books and Videos in History Education.” Interactive Learning Environments 25, no. 7: 904–929.

Kolb, David A. 2014. Experiential Learning: Experience as the Source of Learning and Development, 2nd ed. Upper Saddle River, New Jersey: Pearson Education, Inc.

Kontra, Carly, Susan Goldin-Meadow, and Sian L. Beilock. 2012. “Embodied Learning Across the Life Span.” Topics in Cognitive Science 4, no. 4: 731–739.

Lakoff, George, and Mark Johnson. 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books.

Love, Mary Susan. 2004. “Multimodality of Learning Through Anchored Instruction.” Journal of Adolescent & Adult Literacy 48, no. 4: 300–10.

National Center for History in the Schools. 2019. “Introduction to Standards in Historical Thinking.” Accessed December 12, 2019.

Osgood-Campbell, E. 2015. “Investigating the Educational Implications of Embodied Cognition: A Model Interdisciplinary Inquiry in Mind, Brain, and Education Curricula.” Mind, Brain, and Education 9, no. 1: 3–9.

Parsons, Thomas D., Christopher G. Courtney, Michael E. Dawson, Albert A. Rizzo, and Brian J. Arizmendi. 2013. “Visuospatial processing and learning effects in virtual reality based mental rotation and navigational tasks.” International Conference on Engineering Psychology and Cognitive Ergonomics: Understanding Human Cognition. EPCE 2013. Lecture Notes in Computer Science, vol. 8019.

Repetto, Claudia, Silvia Serino, Manuela Macedonia, and Giuseppe Riva. 2016. “Virtual Reality as an Embodied Tool to Enhance Episodic Memory in Elderly.” Frontiers in Psychology 7: 1839.

Roy Rosenzweig Center for History and New Media. n.d. “Mission.” Accessed December 12, 2019.

Smith, Shamus P., and Elizabeth L. Burd. 2019. “Response Activation and Inhibition After Exposure to Virtual Reality.” Array 3–4.

Saye, John W. and Thomas Brush. 2006. “Comparing Teachers’ Strategies for Supporting Student Inquiry in a Problem-Based Multimedia-Enhanced History Unit.” Theory and Research in Social Education 34, no. 2 (Spring): 183–212.

Schubert, Thomas, Frank Friedmann, and Hoiger Regenbrecht. 2001. “The Experience of Presence: Factor Analytic Insights.” Presence: Teleoperators and Virtual Environments 10, no. 3 (June): 266–81.

Stanford History Education Group. n.d. “History Assessments.” Accessed December 12, 2019.

Voet, Michiel, and Bram De Wever. 2017. “Preparing Pre-service History Teachers for Organizing Inquiry-Based Learning: The Effects of an Introductory Training Program.” Teaching and Teacher Education 63: 206–17.

Wiggins, Grant. 2015. “Why Do So Many HS History Teachers Lecture So Much?” Granted, and… (blog). April 24, 2015.

Yilmaz, Kaya. 2011. “The Cognitive Perspective on Learning: Its Theoretical Underpinnings and Implications for Classroom Practices.” The Clearing House: A Journal of Educational Strategies, Issues and Ideas 84, no. 5: 204–12.

Appendix: January 2019 Testing Session Overview

Approximate Duration and Research Activities
2 min. The research team shared a brief introduction to study activities and objectives.
5 min. Students completed an online 10-question pre-assessment, which included six multiple choice and four open-ended questions.
5 min. The teacher introduced the lesson with a provided script and by sharing a player onboarding video.
15–22 min. Students engaged with the VR experience.
10 min. Students completed an online Simulator Sickness Questionnaire, Self-Assessment Manikin survey, and a post-assessment that was identical to the pre-assessment taken earlier.
3 min. Students filled in a “memory map” graphic organizer in which they were asked to record all the relevant information they remembered from each VR room visited.
5–10 min. The teacher then facilitated a class discussion in response to the following three prompts:

  • Whose account of the massacre was more believable to you?
  • Did witnesses agree that Preston gave the order to fire? Can this fact be corroborated? Or is it contested?
  • Do you think Preston should have been found guilty?
15 min. Students were grouped according to whether or not they thought Preston was guilty, with two students in the “guilty” group, and three students in the “not guilty” group. In these groups they:

  • Discussed and responded to two sourcing questions in reference to Captain Preston.
  • Read and discussed an excerpt from “The Case of Capt. Preston of the 29th Regiment.”
  • Discussed and responded to two contextualization questions and two close reading questions.
5 min. Individually, students assessed the trustworthiness of Paul Revere’s Bloody Massacre print by circling details in the document and briefly describing their significance.
5 min. The teacher led a discussion by sharing John Adams’ role in the trial and connecting the case to the concept of “presumption of innocence.”
25 min. An Education Development Center (EDC) researcher led a debrief interview with students with support from other members of the project team.
40 min. An EDC researcher and the EFS PI interviewed the facilitating teacher at a later date.

About the Authors

Alison Burke is an instructional designer and writer at Electric Funstuff. She leads the research and writing for Mission US: TimeSnap. An educator and public historian, she creates meaningful and accessible encounters with the past for audiences of all ages. She holds an MA in Public History from New York University.

Elana Blinder is the curriculum director at The League of Young Inventors, an interdisciplinary STEAM + Social Studies program for students in grades K–5. In her previous role as a design researcher at The Center for Children and Technology | EDC, she conducted formative and summative research to support the ongoing development of Mission US: TimeSnap and a variety of other educational media products.

Leah Potter is a senior instructional designer and writer at Electric Funstuff. She is also co-founder and president of Hats & Ladders Inc., a social impact organization dedicated to helping all youth become more confident and better-informed career thinkers.

David Langendoen is the president and lead game designer of Electric Funstuff, an NYC educational game studio and maker of Mission US, the critically acclaimed series of history learning games produced by WNET, with over 2 million users.
