
Screen capture from computer-generated virtual reality software showing the user’s virtual hand reaching for controls in a simulated space. In the middle of the screen are multi-colored, three-dimensional models of spiraling biochemical proteins and floating controls with labels including “uploader,” “ON,” “Sun Position,” “Model,” “Position,” “Rotation,” and “Skybox.”

Barriers to Supporting Accessible VR in Academic Libraries

Abstract

Virtual reality (VR) shows great promise for enhancing the learning experience of students in higher education, and academic libraries are at the forefront of efforts to bring VR into the curriculum as an innovative learning tool. This paper reviews some of the growing applications and benefits of VR technologies for supporting pedagogy in academic libraries and outlines the challenges of making VR accessible for disabled students. It reviews existing regulations and guidelines for designing accessible digital technologies and offers two case studies, drawn from the authors’ own academic libraries at Temple University and the University of Oklahoma, that provide insight into the challenges and benefits of making VR more accessible for students. The paper argues that to continue serving their mission of equitable access to information for the entire student population, academic libraries that implement VR programs need to balance innovation with inclusion by allocating sufficient staff time and technical resources and by bringing accessibility thinking into VR projects from the beginning. To accomplish this, libraries will need the assistance of software developers and accessibility experts, and librarians will need to act as strong advocates for better support from commercial software and hardware vendors and to promote change in their institutions.

Introduction

Virtual reality (VR) and other extended reality (XR) technologies show great promise for supporting pedagogy in higher education. VR gives students the chance to immerse themselves in virtual worlds and engage with rich three-dimensional (3D) models of learning content, ranging from biochemical models of complex protein structures to cultural heritage sites and artifacts. Research shows that VR can increase student engagement, support the development of spatial cognitive skills, and enhance the outcomes of design-based activities in fields such as architecture and engineering. With these benefits, however, comes the risk that VR will exacerbate inequality and exclusion for disabled students.[1] Disability is typically defined as a combination of physical barriers (e.g., not having use of one’s legs) and participation barriers (e.g., the absence of a ramp preventing a wheelchair user from accessing services). According to the Centers for Disease Control and Prevention, 26% of adults in the United States have a disability, including cognitive, mobility, hearing, visual, and other types of disability.

As a class of technologies that engage multiple senses, VR has the capacity to involve users’ bodies and senses in a holistic, immersive experience. This suggests that VR holds great potential for supporting users with a diverse range of sensory, motor, or cognitive capabilities; however, there is no guarantee that the affordances of VR will be deployed in accessible ways. In fact, the cultural tendency to ignore disability, coupled with the rapid pace of technological innovation, has led to VR programs that exclude a variety of users. Within higher education, excluding disabled students from the benefits of these newly deployed technologies risks leaving behind a significant portion of the student population. The U.S. Department of Education, National Center for Education Statistics (2019) has found that 19.4% of undergraduates and 11.9% of graduate students have some form of disability. Libraries have long been leaders in supporting accessibility (Jaeger 2018), and the rise of immersive technologies presents an opportunity for them to continue to be leaders in making information available to all users. Academic libraries, the focus of this paper, are particularly well positioned to address the challenges of VR accessibility given their leadership in innovative information services and their existing close relationships with the research and pedagogy communities at their institutions.

In what follows, we present a brief outline of the recent emergence of VR technologies in academic libraries, introduce recent research on VR accessibility, and conclude with a discussion of two brief case studies drawn from the authors’ institutions that illustrate the benefits and barriers associated with implementing accessibility programs for VR in academic libraries.

VR in Higher Education

“Virtual reality” or “VR” refers to a class of technologies that enable interactive and immersive experiences of computer-generated worlds, produced through a mixture of visual, auditory, haptic, and/or olfactory stimuli that engage the human sensory system and provide the user with an experience of being present in a virtual world. Most VR systems primarily engage the visual and auditory senses, with increasing research being done on integrating haptics and other stimuli. Different levels of immersion and interaction are possible depending on the specific configuration of devices, from the relatively low immersion and low interaction provided by inexpensive 3D cardboard viewers for use with mobile devices (e.g., Google Cardboard) to expensive head-mounted displays (HMDs) such as the HTC Vive and Oculus Rift systems, which use headsets and head and body tracking sensors to capture users’ movements along “six degrees of freedom” (three dimensions of translational movement along the x, y, and z axes, plus three dimensions of rotational movement: roll, pitch, and yaw). At present, HMDs are more commonly used than CAVEs (“Cave Automatic Virtual Environments”), room-sized VR environments that use 3D video projectors, head and body tracking, and 3D glasses to provide multi-user VR experiences (Cruz-Neira et al. 1992) and that have been used in academic contexts since the 1990s. Librarians’ interest in information technologies that provide library users with access to computer-generated worlds is not new. The current interest in VR follows experimentation conducted in libraries beginning in the early 2000s with “virtual worlds,” 3D computer-generated social spaces, such as Second Life, that users interacted with through a typical configuration of 2D computer monitor, mouse, and keyboard. Libraries envisioned these technologies as potential tools for expanding library services and enhancing support for student learning, and researchers evaluated the pedagogical efficacy of these new tools (e.g., Bronack et al. 2008; Carr, Oliver, and Burn 2010; Deutschmann, Panichi, and Molka-Danielsen 2009; Holmberg and Huvila 2008; Prasolova-Førland, Sourin, and Sourina 2006).
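
To make the concept of six degrees of freedom concrete, the following minimal sketch (in Python, with illustrative field names rather than those of any particular VR SDK) represents a single tracked head pose as three translational and three rotational components:

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """One tracked pose: the six values an HMD's sensors report.
    Field names are illustrative, not drawn from any VR SDK."""
    x: float      # translation left/right (meters)
    y: float      # translation up/down (meters)
    z: float      # translation forward/back (meters)
    roll: float   # rotation about the forward axis (degrees)
    pitch: float  # rotation about the side-to-side axis (degrees)
    yaw: float    # rotation about the vertical axis (degrees)

# Each tracking update yields a new pose; the renderer redraws the
# virtual scene from that vantage point.
head = Pose6DoF(x=0.0, y=1.6, z=0.0, roll=0.0, pitch=-10.0, yaw=45.0)
```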

Since the commercial release of affordable VR systems such as the HTC Vive and Oculus Rift in 2016 (and now cheaper, lower-resolution variants such as the Oculus Go and Oculus Quest), academic libraries have started seriously exploring the potential of VR to support research and pedagogy. They have begun to conceptualize VR as a platform for immersive user engagement with high-resolution 3D models that support existing curricular activities, such as the use of archaeological, architectural, or scientific models in classroom exercises. Cook and Lischer-Katz (2019) argue

the realistic nature of immersive virtual reality learning environments supports scholarship in new ways that are impossible with traditional two-dimensional displays (e.g., textbook illustrations, computer screens, etc.). … Virtual reality succeeds (or fails), then, insofar as it places the user in a learning environment within which the object of study can be analyzed as if that object were physically present and fully interactive in the user’s near visual field. (70)

VR has been used to support student learning in a variety of fields, such as anthropology and biochemistry (Lischer-Katz, Cook, and Boulden 2018), architecture (Milovanovic 2017; Pober and Cook 2016; Schneider et al. 2013), and anatomy (Jang et al. 2017). Patterson et al. (2019) describe how librarians at the University of Utah have been incorporating VR technologies into a wide variety of classes, supporting students in architecture, geography, dentistry, fine arts, and nursing. From this perspective, VR is envisioned as a tool for accessing digital proxies of the physical models (for instance, casts of hominid skull specimens), artifacts, or locations that students would ordinarily engage with directly, but which are often too expensive or difficult to access.

In addition to providing enhanced modes of access to learning materials, VR can also enhance student engagement and self-efficacy if implemented in close consultation with faculty (Lischer-Katz, Cook, and Boulden 2018). The technical affordances of VR, when deployed with care, are able to support a range of pedagogical objectives. Dalgarno and Lee (2010) identified representational fidelity (i.e., realistic display of objects, realistic motion, etc.) and learner interaction (i.e., student interaction with educational content) as key affordances of VR technologies, which they suggest can support a range of learning benefits, including spatial knowledge representation, experiential learning, engagement, contextual learning, and collaborative learning. Chavez and Bayona (2018) surveyed the research literature on VR and identified interaction and immersion as the two aspects of VR that should be considered when designing VR learning applications. Similarly, Johnson-Glenberg (2018) identified a set of design principles for using VR in education based on two related affordances of VR, “the sense of presence and the embodied affordances of gesture and manipulation in the third dimension” (1), and found that “active and embodied learning in mediated educational environments results in significantly higher learning gains” (9). Research also suggests that the special visual aspects of VR, such as depth perception and motion cues (Ware and Mitchell 2005), head tracking (Ragan et al. 2013), and immersive displays (Ni, Bowman, and Chen 2006), are able to enhance the analytic capabilities of human perception. VR has been shown to enhance human abilities of visual pattern recognition and decision-making, particularly when working with big data (Donalek et al. 2014), prototyping (Seth, Vance, and Oliver 2011), or understanding complex spatial relationships and structures in data sets (Prabhat et al. 2008; Kersten-Oertel, Chen, and Collins 2014; Laha, Bowman, and Socha 2014).

Immersion is often identified by researchers as a key characteristic of VR technologies that is applicable to enhancing the learning experiences of students. Fowler (2015) identified three types of VR immersion relevant to pedagogy: conceptual immersion, which supports the development of abstract knowledge through students’ self-directed exploration of learning materials, for instance, molecular models; task immersion, in which students begin to engage with and manipulate learning materials; and social immersion, in which students engage in dialogue with others to test and expand upon their understanding. One critique of applications of VR-based pedagogy is that instructional designers and instructors rarely indicate their underlying learning models or theories (Johnston et al. 2018). For instance, Lund and Wang (2019) found that VR can improve student engagement in library instruction, but they do not specify which pedagogical models are effective, instead comparing a particular classroom activity taught with traditional methods against the same activity taught using VR and measuring the impact on academic performance and motivation. Radianti et al. (2020), in their review of 38 recent empirical studies on VR pedagogy, acknowledge that while immersion is a critical component of the pedagogical affordances of VR, different studies define the term differently. They also found that only 32% of the studies reviewed indicated which learning theories or models underpinned them, which makes it difficult to generalize approaches and apply them to other contexts. Radianti et al. (2020) point out that “in some domains such as engineering and computer science, certain VR applications have been used on a regular basis to teach certain skills, especially those that require declarative knowledge and procedural–practical knowledge. However, in most domains, VR is still experimental and its usage is not systematic or based on best practices” (26).

What these trends suggest is that VR shows great potential for supporting classroom instruction in higher education institutions, even though pedagogical models and methods of evaluation are still being developed and most projects are in the experimental phase of development. Some fields, such as computer science, engineering, and the health sciences, have already been adopting VR in their departments, but academic libraries are leading the way in promoting VR for their wider campus communities (Cook and Lischer-Katz 2019). Since many libraries are emerging as leaders in supporting VR, it is essential for them to have policies and support services in place to ensure that these new technologies are usable by all potential users at their institutions.

As librarians consider adopting these innovative technologies, discourses of innovation can sometimes lead to oversights that exclude some users. VR technologies enter libraries alongside other emerging technologies and innovative library services. The current discourse of transformational change promoted by the corporate information technology sector is often at odds with critical approaches to librarianship that stress inclusion and social justice (Nicholson 2015). These conceptions of radical innovation and disruption construct institutions, their policies, and their regulations as structures that function only to slow down and constrain innovation. The assumption is that innovative technology is inherently neutral in terms of its ethics and politics, and that it does not require institutional processes to constrain or limit its negative effects; however, when technological change is decoupled from the institutionalized processes that protect the rights of historically marginalized groups of library patrons, it inevitably reinscribes exclusion into the infrastructures of learning. As Mirza and Seale (2017) argue

technocratic visions of the future of libraries aspire to a world outside of politics and ideology, to the unmarked space of white masculinity, but such visions are embedded in multiple layers and axes of privilege. They elide the fact that technology is not benevolently impartial but is subject to the same inequities inherent to the social world. (187)

The idea that technologies embed biases and cultural assumptions is not new—scholars in the field of Science and Technology Studies have argued for decades that technologies are never neutral (e.g., Winner 1986)—but librarians, library administrators, and library science researchers often forget to examine their own “tunnel vision and blind spots” (Wiegand 1999), or more precisely, the unexamined implicit biases that shape decision making about which technologies to adopt and how to deploy them in libraries. On the other hand, this also means that it is possible to balance innovation with inclusivity by foregrounding library values at the start of the process of innovation, rather than by retrofitting designs, which can yield results that are less equitable and more costly (Wentz, Jaeger, and Lazar 2011). Clearly, the learning affordances of VR (Dalgarno and Lee 2010), as they are currently designed, need to be reimagined for disabled users.

VR and Accessibility

Aside from these ethical considerations, as VR becomes increasingly common in education, business, and other domains, it becomes answerable to legal guidelines. Federal guidelines for more established information and communication technology can be found in Section 508 of the Rehabilitation Act (see U.S. General Services Administration n.d.), which utilizes the Web Content Accessibility Guidelines (WCAG) 2.0 as a standard for web technology (W3C Web Accessibility Initiative 2019). WCAG provide guidance on how to make web content accessible to disabled people and are overseen by the Web Accessibility Initiative (WAI), part of the World Wide Web Consortium (W3C) (see W3C Web Accessibility Initiative 2019). While they provide a valuable framework, WCAG do not directly apply to immersive technologies, and there are currently no accessibility guidelines that do. Work has been done to develop individual accessibility extensions, hardware, and features, but measurable guidelines that would aid in accessible design are still needed. Only in the last few years have accessibility specialists started adapting existing guidelines by examining existing initiatives and mapping them to the success criteria in WCAG. This includes the XR Access Symposium held in the summer of 2019 (see Azenkot, Goldberg, Taft, and Soloway 2019), as well as W3C’s Inclusive Design for Immersive Web Standards Workshop held in the fall of 2019 (see W3C 2019). There are also more specific guidelines that can contribute to design considerations, such as the Game Accessibility Guidelines, which focus on game design (see Ellis et al. n.d.). Increasing the urgency of this matter, as of December 31, 2018, the communication functionality of any video game released in 2019 or later must be accessible to disabled people under the 21st Century Communications and Video Accessibility Act (Enamorado 2019); this expands the group of industries mandated to meet accessibility guidelines to include the video game industry.

Those interested in learning more about the accessible design of VR and other immersive technologies should consider reading “Accessible by Design: An Opportunity for Virtual Reality” (Mott et al. 2019), which provides general guidelines for designing accessible VR. For an example of designing accessible tools for a specific user group, see Zhao et al. (2019), which details the development of a VR toolkit for supporting low-vision users.

Before going any further, it is important to distinguish between VR in its current, popularized form and the affordances of VR as a medium. The initiatives, guidelines, and research projects referred to in this section are still largely focused on analyzing the design of the former. However, for the technology to become truly accessible, critical inquiry must continue to deepen its understanding of the broader capabilities, limitations, and levels of interaction that constitute the latter. The design practices and recommendations that have been developed to support the accessibility of VR are largely individualized and prototypical, which means that each institution’s particular experience tackling the challenges of accessible VR will vary based on a number of factors, including its history of supporting VR, staffing levels and development support, resources, and institutional commitments to accessibility. As librarians at Temple University and the University of Oklahoma, we are now in the process of developing guidelines and tools to meet these challenges.

VR at Temple University’s Loretta C. Duckworth Scholars Studio

Temple University’s Loretta C. Duckworth Scholars Studio (LCDSS) “serves as a space for student and faculty consultations, workshops, and collaborative research in digital humanities, digital arts, cultural analytics, and critical making” (Temple University Libraries n.d.). Before the main library’s relocation to its new building, the LCDSS, formerly known as the Digital Scholarship Center (DSC), was located in the basement of Paley Library. Upon its 2015 opening, the DSC had two Oculus Rift DK2 headsets available for interested users. Its space in the new Charles Library includes an Immersive Visualization Studio designed for up to 10 people to participate simultaneously in immersive experiences and, as of 2019, houses twelve headsets from a variety of manufacturers, in addition to mobile-based headsets, with an eye toward continuous acquisition of newer technologies. There are six full-time staff members, one of whom is responsible for the upkeep and management of the Immersive Visualization Studio among their other duties.

In August of 2017, I (Jasmine Clark) began researching the accessibility of VR as part of a project I was developing during my library residency.[2] Upon reviewing existing literature, it was apparent that research on the usability of VR for disabled users was in its early stages. Most notable was a report, “VR Accessibility: Survey for People with Disabilities,” resulting from a survey of disabled VR users produced in partnership between ILMxLAB and the Disability Visibility Project (see Wong, Gillis, and Peck 2018). However, the majority of research and resources exploring the applications of VR for disabled people consisted of one-off solutions and extensions. These included cases of VR being used as an assistive technology (e.g., spatial training for blind individuals), unique hardware solutions (e.g., the haptic cane), and known issues for specific types of users (e.g., the assumed standing position in games being disorienting for wheelchair users). These developments, while valuable, were not design standards or solutions broadly adopted by the game industry. Another concern was the fact that, in the context of the DSC, VR was not just a technology but also a service that included training and assistance in its use for library patrons. This added an additional layer of complexity because, while there have been discussions of disability in the context of making and makerspaces, there was no literature on accessible service policies, best practices, and documentation for digital scholarship as a whole. In response to these challenges, I began examining existing guidelines and assessing their applicability to emerging technologies. Because WCAG is the federal standard, I joined a working group that guided me through reading the supporting documents and success criteria of WCAG, as well as examining the major legislative changes that were happening around accessibility at that time. I also began working with Jordan Hample, the DSC’s (now LCDSS’s) main technical support staff member, to understand whether or not these guidelines were applicable to immersive technologies.

Because we also needed to address service practices and policies, I decided that user testing would be necessary. User testing would consist of three phases taking place during a single visit: a pre-interview (to ensure safety and gain an understanding of a user’s disability and previous technical experience), a use test (in which users would use VR headsets), and a post-interview (to solicit feedback). I coordinated with Temple’s Disability Resources and Services (DRS) and DSC staff to bring in disabled stakeholders (students, alumni, and other members of the Temple community) in an attempt to 1) determine whether or not they would be able to utilize the equipment, and 2) determine whether there were barriers to providing them with the same level of service as other patrons. As Wong, Gillis, and Peck (2018) point out in their report, “people with disabilities are not a monolith—accessibility and inclusion is different for everyone” (1). In order to scope the research to a manageable scale, I decided we would begin with visually impaired, deaf/Hard-of-Hearing (HOH), and hearing impaired users (hearing impairment including individuals with tinnitus or other auditory conditions not covered under the umbrella of deaf/HOH). Working with Jordan, as well as Alex Wermer-Colan, a Council on Library and Information Resources (CLIR) postdoctoral fellow, I drafted a research protocol that consisted of interview questions and an explanation for participants of what VR is and the purpose of the research being conducted. These materials were sent out via DRS listservs to solicit participants. VR services in the DSC involved a great deal of hands-on onboarding and orientation from staff; often, patrons would drop in and simply want to get acquainted with the technology. As a result, the goal of the research project was for disabled participants in our user testing to be able to navigate to our space and successfully work with the staff members responsible for providing VR assistance to identify experiences that would be as usable as possible for them.

There was also a need to better understand staff preparedness in providing assistance to disabled patrons. In the months leading up to the testing, I had preliminary discussions with staff and inquired into staff training on accessibility and disability more generally at the library and university level. I found that training was not formalized, so I gathered and shared resources with my colleagues to ensure the safety and dignity of participants. This included referring to Gallaudet University’s guide on working with American Sign Language (ASL) interpreters (see Laurent Clerc National Deaf Education Center 2015), sharing video tutorials on acting as a sighted guide for blind/low-vision people, and maintaining active discussions and explanations around ableism and disability. These discussions also allowed for a better understanding of gaps in training and norms.

Once staff were sufficiently prepared, user testing commenced in the summer of 2018. Four participants were invited to the center, three of whom had various visual impairments and one of whom was deaf. On the days of their visits, I would go to the library entrance to greet and guide anyone who needed assistance. Upon arrival, participants were brought into a meeting room for a pre-interview that would reintroduce the purpose of user testing, gauge any previous experience with the technology, and identify safety concerns by asking if they had other sensitivities that they felt would be a problem in VR (e.g., sensory sensitivities, sensitivity to flashing lights, etc.). We also asked about level of hearing/vision to get a better idea of which types of experiences worked for different types of hearing/vision. Some immediate questions brought up by participants concerned accuracy of sound, depth perception, and similarity to real-world visual experience. Once the initial interview was completed, participants were guided out to work with Jordan to identify potential experiences, similar to the way he typically worked with students. I took notes on the interactions, and Alex assisted as needed. Alex’s presence became particularly important when it came to the deaf user. It was brought to our attention that 1) due to variations in inner ear formation, those who are deaf/HOH are at higher risk for vertigo, and 2) a user reliant upon an ASL interpreter would not be able to see the interpreter while in the headset, complicating human assistance. In response, Alex took on the role of surrogate for this participant, who watched his activity on a monitor and gave instructions and feedback. Jordan took on the role of listening to the participants’ verbal feedback on each experience and, utilizing his knowledge of the DSC’s licenses for different VR programs, selected experiences that would be more accommodating to their specific hearing/visual needs.

Upon completion of this phase, participants were brought back into the meeting room for a post-interview. Responses to both interviews, as well as observations made during the interactions, were compiled and summarized into an internal report for our team. We had initially planned to have more users come in, but found that feedback on the limitations of the technology was consistent and addressable enough for us to make adjustments that would allow us to improve services and collect more nuanced data moving forward. For example, it was clear that the software varied so drastically that, in order to provide safe and effective services, it would be necessary to index the features and capabilities of various VR experiences.

The timing of this work was crucial, as we were a year away from the move to our new space, and the findings from the study helped us plan for it. The LCDSS is significantly larger than the DSC, and much more visible. While this has required us to re-envision our service policies and programming, it has also given us the opportunity to integrate accessibility into our work from the beginning. One way we are doing this is by developing an auditing workflow that would allow any staff member or student worker to examine newly licensed VR experiences and produce an accessibility report, as there is a glaring lack of Voluntary Product Accessibility Templates (VPATs) for VR products. These reports would detail accessibility concerns and limitations up front, allowing us to better serve disabled patrons. We are also working with the university’s central Information Technology Services to look at how this can be incorporated into broader LCDSS purchasing practices and documentation workflows.
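
As a rough illustration of the kind of structured record such an audit might produce, the following Python sketch captures a handful of the concerns discussed in this article; the fields shown are hypothetical examples, not the actual LCDSS template:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VRAccessibilityAudit:
    """One record per licensed VR experience, standing in for the
    missing VPAT. All field names here are hypothetical examples."""
    title: str
    captions_available: bool      # visual equivalents for spoken audio
    seated_mode: bool             # playable from a fixed or seated position
    one_handed_play: bool         # usable with a single controller
    adjustable_text_size: bool
    known_barriers: List[str] = field(default_factory=list)

audit = VRAccessibilityAudit(
    title="Example Experience",
    captions_available=False,
    seated_mode=True,
    one_handed_play=False,
    adjustable_text_size=False,
    known_barriers=["critical audio cues have no visual equivalent"],
)
```

Even a simple index like this would let staff match a patron’s stated needs against licensed experiences before the patron puts on a headset.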

Once this workflow is finalized, it will be used to support LCDSS staff in aiding faculty and researchers in the development of Equally Effective Alternative Access Plans (EEAAPs) for their research and teaching. An EEAAP documents how a technology will be used in a class or program, its accessibility barriers, the plan to ensure equitable participation for disabled people, and the parties responsible for ensuring the plan is carried out. LCDSS staff frequently consult with faculty who wish to integrate LCDSS resources into their pedagogical practices. This can include feedback on assignment structure and design, recommended technologies, and other vital information required for pedagogical efficacy. By generating accessibility reports that identify technical limitations, LCDSS staff can aid faculty in developing multimodal approaches to integrating these technologies into their teaching. This means that not only are we bringing accessibility to faculty members’ attention early, but we are also able to guide them and reduce intimidation, making buy-in more successful. Moving forward, Jordan Hample and I will be making all materials involved in this workflow publicly available, as well as continuing and expanding user testing to include other disabilities.
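
For illustration, the four elements an EEAAP documents could be captured in a simple structured record like the following hypothetical sketch (the keys and sample values are ours, not an official template):

```python
# A hypothetical EEAAP record covering the four elements named above;
# keys and sample values are illustrative only.
eeaap = {
    "technology_and_use": "VR walkthrough of a 3D architectural model",
    "accessibility_barriers": [
        "controller-only input",
        "no captions for narrated audio",
    ],
    "equitable_participation_plan": (
        "annotated desktop 3D viewer and narrated video tour covering "
        "the same learning objectives"
    ),
    "responsible_parties": ["course instructor", "LCDSS staff"],
}
```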

VR at the University of Oklahoma Libraries, Emerging Technologies Program

Accessibility initiatives for VR at the University of Oklahoma (OU) have followed a slightly different trajectory than the one outlined by Jasmine in the previous section. The VR program at OU Libraries was officially launched in 2016 in the Innovation @ the EDGE Makerspace, which began hosting classes and integrating VR content into course curricula, including initial integrations within biology, architecture, and fine arts courses (Cook and Lischer-Katz 2019). We use custom-built VR software that enables users “to manipulate their 3D content, modify environmental conditions (such as lighting), annotate 3D models, and take accurate measurements, side-by-side with other students or instructors” and that supports networked, multiuser VR sessions, forming “a distributed virtual classroom in which faculty and students in different campus locations [are able to] teach and collaborate” (Cook and Lischer-Katz 2019, 73). Librarians provide VR learning opportunities in three main ways: 1) deployment in the library-managed makerspace; 2) facilitated course integrations; and 3) special VR events. Each approach requires different levels of support and planning from librarians. In the case of deployment in our makerspace, students are able to learn about the technology in a self-directed manner, with guidance from the trained student workers who staff the space. Workshops and orientation sessions are available, and students, faculty, and community members typically drop in when they want and explore the technology at their own pace. Since the focus of this space is on self-directed learning and experimentation, the training of student support staff is essential for ensuring that the space feels welcoming and inclusive to visitors and that staff are able to adjust the level of support they provide based on visitors’ needs.

In the case of course integrations, students are typically brought to our makerspace during regularly scheduled class time. We have portable VR kits that use high-powered gaming laptops and Oculus Rift headsets, which makes it possible to bring the learning experiences directly into the classroom if the faculty member prefers. Examples of VR-based classroom activities include interacting with 3D models that simulate learning objects, such as examining the morphology of different hominid skull casts in an anthropology class or analyzing complex protein structures and processes in a biochemistry class. VR is also used in other classes as a creative tool, such as in a sculpture course in which the students created sculptures in VR and then printed them using the 3D printers in the makerspace. In planning VR course integrations, librarians work directly with faculty members to design activities that will support their course learning objectives.

VR is also used frequently at OU Libraries for special events in which experts lead participants on guided tours through scholarly, high-resolution 3D models. Participants can join the VR tour on campus or from other institutions, since our custom-built VR software supports networked, multi-user sessions. Examples include inviting an archaeologist to lead a group through a 3D scan of a cave in the southwestern United States filled with ancient rock carvings (Schaffhauser 2017), as well as a tour led by a professor of Middle Eastern history through a 3D model of the Arches of Palmyra, located in Syria.

From the start of the emerging technologies initiative at OU Libraries, rapid innovation was a guiding principle, with the hope that the benefits of emerging technologies could be demonstrated to the broader campus community and that the library could become a hub for supporting emerging technologies across campus. It was important to quickly develop a base of VR technologies and librarian skills in order to promote the potential benefits of the technologies to faculty and students across campus. Starting in January 2016, students and faculty began using our VR spaces for research, learning, experimentation, and entertainment, and by 2018 we had faculty from over 15 different academic departments across campus using VR as a component in their classes (Cook and Lischer-Katz 2019), along with over 2,000 individual uses of our VR workstations. By 2019, the emerging technology librarians (ETL) unit had grown to five full-time staff members who worked together to “rapidly prototype and deploy educational technology for the benefit of a range of University stakeholders” (Cook and Van der Veer Martens 2019, 614).

At this time, one of our ETLs raised concerns about the accessibility of existing VR services, and the ETL team brought in an accessibility specialist to advise them. One of the key challenges the team identified through the process of reviewing its existing VR capabilities was the fact that most commercially produced VR software lacks accessibility options, particularly in terms of compatibility with assistive devices. In reviewing users’ experiences in our makerspace, ETLs found that users with dexterity, coordination, or mobility disabilities often request passive VR experiences, which provide immersion without requiring use of the VR controllers. For programs such as the popular Google Earth VR, it is not currently possible to provide users with passive experiences; rather, the user needs to be able to actively operate the two VR controllers to engage with the experience. To the team’s surprise, some of the lower-resolution, untethered VR systems, such as the Oculus Go, have shown more capability for providing passive experiences that rely only on head tracking and the use of target circles for movement through the VR space. Making narrated and guided tours available for a VR experience is essential for providing access to some groups of disabled users. Ensuring that VR controllers are accessible has also been a challenge, and ETLs have begun experimenting with 3D-printed add-on components to make the VR controllers more usable for users with limited hand function. In response to the lack of accessibility options in commercial software releases, modifications were made to OU’s custom-built VR software to provide accessibility capabilities, including 1) controls for changing the sensitivity of VR interface controls, and 2) options for resizing user interface text. These modest modifications were made in consultation with VR users. Technical solutions alone are not sufficient, of course, and the ETL team has also found it very important to continue to improve training for student staff so that they are prepared to assist disabled users in a sensitive and respectful way. Communicating clearly to the wider university community about which accessible software and hardware capabilities are available is also a challenge that the team is tackling. These activities are still ad hoc in many ways, and we have found that additional work is needed to develop procedures for addressing VR accessibility in a more systematic way in the library and across campus.
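
As a minimal sketch of how those two modifications might work in principle (the names below are hypothetical and do not reproduce OU’s actual engine code), both amount to simple scaling factors applied to raw controller input and base font sizes:

```python
from dataclasses import dataclass

@dataclass
class AccessibilitySettings:
    """Hypothetical user-adjustable options paralleling the two
    modifications described above; not OU's actual implementation."""
    controller_sensitivity: float = 1.0  # scales raw controller input
    ui_text_scale: float = 1.0           # multiplies the base font size

    def scaled_input(self, raw: float) -> float:
        # Lowering sensitivity damps small unintended movements; raising
        # it reduces the range of motion a user must produce.
        return raw * self.controller_sensitivity

    def font_size(self, base_pt: float) -> float:
        return base_pt * self.ui_text_scale

# A user with limited fine motor control might damp input and enlarge text:
settings = AccessibilitySettings(controller_sensitivity=0.5, ui_text_scale=1.5)
print(settings.scaled_input(0.8), settings.font_size(12.0))
```

One advantage of building such options into the custom software itself is that they can apply across every experience it loads, rather than requiring per-title patches.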

The ETL team is taking several approaches to improving our support for accessible VR, looking outward to resources beyond the walls of OU Libraries and inward to resources within the university. ETLs are expanding their knowledge base through involvement in accessibility conferences and working groups and are looking to colleagues at other institutions, such as Temple University Libraries, for guidance on policies and procedures for evaluating and implementing VR software and hardware. The ETL team plans to conduct future usability testing and focus groups with a range of disabled users from the OU community in order to further refine the feature set of our custom software, which we plan to package and distribute for other institutions to use and build upon.

The experiences of ETLs at OU Libraries point to the importance of working with accessibility experts and bringing disabled users into the design process to develop technologies and policies. Librarians should not be expected to take on accessible design by themselves; rather, they should look to experts in this field for assistance. Working with our university’s disability coordinator has been essential for helping us identify areas where we need to improve our accessibility capabilities, as well as providing us with a network of disabled users on campus who can give us feedback on our technologies. The types of issues we are looking into include techniques for auditing VR software for accessibility issues; clearer signage and website information that gives students and faculty a clear understanding of which emerging technology tools are accessible and what accommodations are possible; and ways to continue improving staff training so that the student workers who staff our makerspace can better support disabled users. The process of developing policies and establishing processes and documentation to support those policies does take time; however, this work has been essential for training staff and establishing best practices at our makerspace in order to address the challenges of VR accessibility. Additional work is necessary to codify this ongoing and still experimental work into institutional policy documents and to continue to seek out adaptive tools that make VR accessible to a greater range of library patrons.

Conclusion

The current wave of immersive technologies was not initially designed for users with varying levels of visual, auditory, mobility, and neurological capabilities. Even for libraries and centers that do have development support, there is no way to remediate the inaccessibility of every experience used and, even if there were, there would be no way to keep up with the regular updates of hardware and software. One-off, localized solutions cannot replace structural change. In order for VR to become an accessible medium, developers, hardware manufacturers, distribution platforms, and other stakeholders involved in its creation and distribution need to ensure accessibility within their respective roles. The current lack of support from these stakeholders makes it crucial that library staff and the educators they support understand disability and accessibility, develop appropriate documentation, and advocate for software and hardware vendors to provide better accessibility support in their products. In the meantime, libraries supporting different tiers of VR use and investment will have to consider different approaches to accessibility.

The preceding examples drawn from our experiences at Temple University and the University of Oklahoma show the range of issues facing accessible VR, but they also show how approaches differ across service models and pedagogical objectives. Temple University includes VR in a very broad suite of technical offerings, and its faculty are not currently at the phase of “buy-in” where regular VR development is a priority. As a result, Temple’s focus is on indexing experiences and integrating alternative access plans, with accessible development occurring on a smaller scale. In comparison, OU has much more demand for custom-developed software solutions. This demand is due to the fact that one of the main VR applications that OU promotes for course integrations is its own flexible, custom software, which supports a variety of disciplines, including courses in biochemistry, anthropology, architecture, and English. OU is beginning to investigate the accessibility challenges of working with commercial software and is looking to Temple for guidance on how to properly evaluate different software titles and provide adequate documentation. For libraries without developer support, we can expect that their focus will likely follow Temple’s approach. For libraries with regular development efforts, supporting home-grown accessible design practices, such as those at OU, will be more central. Some libraries will be a mixture of the two, working to blend commercial and homegrown solutions. Regardless of a library’s approach, the major takeaways for other institutions to consider as they bring accessibility thinking into their VR programs include:

  • Plan for Accessibility from the Beginning: Libraries can save time and resources by thinking about accessibility issues at the start of a program or project.
  • Lack of Standards: As of 2020, there are no standards for accessible VR design, but there are related standards that could lay the groundwork for their development.
  • Developer Support is Essential: Libraries that intend to develop VR experiences need to have sufficient developer support with accessibility expertise.
  • Importance of Auditing and Reporting: Out-of-the-box VR experiences will pose different accessibility challenges from one person to the next and should be audited to better understand these barriers to access. If a library lacks a developer to modify software or create new software, at the very least, available software needs to be audited and have a corresponding accessibility report produced.
  • VR is Not the Pedagogy: VR should be another tool in an educator’s arsenal, not the sole focus of a class (unless VR is the course subject). As Fabris et al. (2019) suggest, “Having VR for the sake of having VR won’t fly; the VR learning resources need to be built with learning outcomes in mind and the appropriate scaffolds in place to support the learning experience” (74).
  • Acknowledge the Limits of VR Accessibility: There are limits to making VR accessible. The reality is that there will be students who are unable to use VR for a variety of reasons. Therefore, there should always be an alternative access plan developed so that students have access to non-VR learning methods as well.

Considering these best practices will better enable libraries to approach the challenges of making VR accessible. Putting them into action will directly benefit disabled users, improve librarians’ abilities to make their innovative technology spaces more inclusive, and help administrators to better plan and allocate resources for supporting the missions of their institutions. While these guidelines are focused on supporting academic libraries, they will likely benefit higher education applications outside of the library, too.

Additionally, while it is true that there is extensive work to be done, there are existing inclusive instructional approaches that individuals can integrate into VR-based coursework. Multimodal course design and Universal Design for Learning (http://udloncampus.cast.org/page/udl_about) are frameworks that can be applied to VR coursework through approaches like collaborative assignments and activities. It is also worth reviewing a 2015 special issue of the Journal of Interactive Technology and Pedagogy that considers the benefits of introducing perspectives from disability studies into the design of innovative pedagogies. One of the important takeaways from this collection is that embracing disability, and the alternative perspectives it can provide, presents the potential for new learning opportunities (Lucchesi 2015).

Regardless of which pedagogical approach educators adopt, it is imperative that, unless VR is the subject of the course, they remember it is not the pedagogy. Instead, faculty should keep a diverse array of tools in their pedagogical toolkit to support an equally diverse set of learners. As librarians, faculty, and instructional designers become familiar with inclusive learning frameworks, they are better positioned for more targeted, meaningful advocacy within their institutions. While it is true that there is a lot of work to be done, it is equally true that it can only be done together: through active involvement in institutional committees and task forces, and by ensuring that discussions about accessibility occur in strategic planning and budgeting meetings with administrators. Accessibility awareness needs to be raised throughout libraries and other academic institutions so that the accessibility challenges of emerging technologies are addressed at the design stage and built into pedagogical implementations from the beginning. This will help to ensure that pedagogies founded on emerging technologies are “born accessible,” to the benefit of learners and educators throughout the academic world.

Notes

[1] The use of identity-first (“disabled person”) vs. person-first (“person with disabilities”) language is debated. Disability is a complex set of identities and the language used should take into account the preferences of disabled people and other contextual factors. Our choice to use identity-first language is a conscious one.

[2] A library residency is a term position during which residents may rotate through different functional areas of the library or focus on one subject area, and often contribute to projects and initiatives at their host library to gain professional (vs. paraprofessional) experience.

Bibliography

Seth, Abhishek, Judy M. Vance, and James H. Oliver. 2011. “Virtual Reality for Assembly Methods Prototyping: A Review.” Virtual Reality 15, no. 1: 5–20.

Azenkot, Shiri, Larry Goldberg, Jessie Taft, and Sam Soloway. 2019. XR Symposium Report. https://docs.google.com/document/d/131eLNGES3_2M5_roJacWlLhX-nHZqghNhwUgBF5lJaE/edit?usp=sharing.

Bronack, Stephen, Amy L. Cheney, Richard Reidl, and Johan Tashner. 2008. “Designing Virtual Worlds to Facilitate Meaningful Communication: Issues, Considerations, and Lessons Learned.” Technical Communication 55, no. 3: 261–69.

Carr, Diane, Martin Oliver, and Andrew Burn. 2010. “Learning, Teaching and Ambiguity in Virtual Worlds.” In Researching Learning in Virtual Worlds, edited by Anna Peachey, Julia Gillen, and Daniel Livingstone, 17–31. London: Springer.

Chavez, Bayron and Sussy Bayona. 2018. “Virtual Reality in the Learning Process.” In Trends and Advances in Information Systems and Technologies, edited by Álvaro Rocha, Hojjat Adeli, Luís Paulo Reis, and Sandra Costanzo, 1345–56. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-77712-2_129.

Cook, Matt and Betsy Van der Veer Martens. 2019. “Managing Exploratory Units in Academic Libraries.” Journal of Library Administration 59, no. 6: 606–28. http://doi.org/10.1080/01930826.2019.1626647.

Cook, Matt and Zack Lischer-Katz. 2019. “Integrating 3D and VR into Research and Pedagogy in Higher Education.” In Beyond Reality: Augmented, Virtual, and Mixed Reality in the Library, edited by Kenneth Varnum, 69–85. Chicago: ALA Editions.

Cruz-Neira, Carolina, Daniel J. Sandin, Thomas A. DeFanti, Robert V. Kenyon, and John C. Hart. 1992. “The CAVE: Audio Visual Experience Automatic Virtual Environment.” Communications of the ACM 35, no. 6: 64–73.

Deutschmann, Mats, Luisa Panichi, and Judith Molka-Danielsen. 2009. “Designing Oral Participation in Second Life: A Comparative Study of Two Language Proficiency Courses.” ReCALL 21, no. 2 (May): 206–26.

Donalek, Ciro, George Djorgovski, A. Cioc, A. Wang, J. Zhang, E. Lawler, S. Yeh, et al. 2014. “Immersive and Collaborative Data Visualization Using Virtual Reality Platforms.” In Proceedings of 2014 IEEE International Conference on Big Data, Washington, DC, Oct. 27–30, 609–14.

Ellis, Barrie, Gareth Ford-Williams, Lynsey Graham, Dimitris Grammenos, Ian Hamilton, Headstrong Games, Ed Lee, Jake Manion, and Thomas Westin. n.d. “Game accessibility guidelines.” Game accessibility guidelines. Accessed Dec. 13, 2019. http://gameaccessibilityguidelines.com/.

Enamorado, Sofia. 2019. “The CVAA & Video Game Accessibility.” 3Play Media. https://www.3playmedia.com/2019/03/18/the-cvaa-video-game-accessibility/.

Fabris, Christian, Joseph Rathner, Angelina Fong, and Charles Sevigny. 2019. “Virtual Reality in Higher Education.” International Journal of Innovation in Science and Mathematics Education 27: 69–80.

Holmberg, Kim and Isto Huvila. 2008. “Learning Together Apart: Distance Education in a Virtual World.” First Monday 13, no. 10 (October). https://firstmonday.org/article/view/2178/2033.

Jaeger, Paul T. 2018. “Designing for Diversity and Designing for Disability: New Opportunities for Libraries to Expand Their Support and Advocacy for People with Disabilities.” The International Journal of Information, Diversity, & Inclusion 2, no. 1–2: 52–66.

Jang, Susan, Jonathan M. Vitale, Robert W. Jyung, and John B. Black. 2017. “Direct Manipulation is Better than Passive Viewing for Learning Anatomy in a Three-dimensional Virtual Reality Environment.” Computers & Education 106: 150–65.

Johnson-Glenberg, Mina C. 2018. “Immersive VR and Education: Embodied Design Principles that Include Gesture and Hand Controls.” Frontiers in Robotics and AI 5, art. 81 (July): 1–19. http://doi.org/10.3389/frobt.2018.00081.

Johnston, Elizabeth, Gerald Olivas, Patricia Steele, Cassandra Smith and Liston Bailey. 2018. “Exploring Pedagogical Foundations of Existing Virtual Reality Educational Applications: A Content Analysis Study.” Journal of Educational Technology Systems 46, no. 4: 414–39. https://doi.org/10.1177/0047239517745560.

Kersten-Oertel, Marta, Sean Jy-Shyang Chen, and D. Louis Collins. 2014. “An Evaluation of Depth Enhancing Perceptual Cues for Vascular Volume Visualization in Neurosurgery.” IEEE Transactions on Visualization and Computer Graphics 20, no. 3: 391–403.

Laha, Bireswar, Doug A. Bowman, and John J. Socha. 2014. “Effects of VR System Fidelity on Analyzing Isosurface Visualization of Volume Datasets.” IEEE Transactions on Visualization and Computer Graphics 20, no. 4: 513–22.

Laurent Clerc National Deaf Education Center. 2015. “Working with Interpreters.” Gallaudet University. https://www3.gallaudet.edu/clerc-center/info-to-go/interpreting/working-with-interpreters.html.

Lischer-Katz, Zack, Matt Cook, and Kristal Boulden. 2018. “Evaluating the Impact of a Virtual Reality Workstation in an Academic Library: Methodology and Preliminary Findings.” In Proceedings of the Association for Information Science and Technology Annual Conference, Vancouver, Canada, Nov. 9–14, 300–8. https://hdl.handle.net/11244/317112.

Lucchesi, Andres. 2015. “Introduction to Special Issue: Disability Studies Approaches to Pedagogy, Research, and Design.” Journal of Interactive Technology & Pedagogy 8. https://jitp.commons.gc.cuny.edu/category/issues/issue-eight/.

Lund, Brady D. and Ting Wang. 2019. “Effect of Virtual Reality on Learning Motivation and Academic Performance: What Value May VR Have for Library Instruction?” Kansas Library Association College and University Libraries Section Proceedings 9, no. 1: 1–7. https://doi.org/10.4148/2160-942X.1073.

Milovanovic, J. 2017. “Virtual and Augmented Reality in Architectural Design and Education.” In Proceedings of the 17th International Conference, CAAD Futures, Istanbul, Turkey, July. https://www.researchgate.net/publication/319665970_Virtual_and_Augmented_Reality_in_Architectural_Design_and_Education_An_Immersive_Multimodal_Platform_to_Support_Architectural_Pedagogy.

Mirza, Rafia and Maura Seale. 2017. “Who Killed the World? White Masculinity and the Technocratic Library of the Future.” In Topographies of Whiteness: Mapping Whiteness in Library and Information Science, edited by Gina Schlesselman-Tarango, 171–97. Sacramento, CA: Library Juice Press. http://mauraseale.org/wp-content/uploads/2016/03/Mirza-Seale-Technocratic-Library.pdf.

Mott, Martez, Ed Cutrell, Mar Gonzalez Franco, Christian Holz, Eyal Ofek, Richard Stoakley, and Meredith Ringel Morris. 2019. “Accessible by Design: An Opportunity for Virtual Reality.” ISMAR 2019 Workshop on Mixed Reality and Accessibility. https://www.microsoft.com/en-us/research/publication/accessible-by-design-an-opportunity-for-virtual-reality/.

Ni, Tao, Doug A. Bowman, and Jian Chen. 2006. “Increased Display Size and Resolution Improve Task Performance in Information-rich Virtual Environments.” In Proceedings of Graphics Interface, Quebec City, Canada, June 7–9, 139–46.

Nicholson, Karen P. 2015. “The McDonaldization of Academic Libraries and the Values of Transformational Change.” College & Research Libraries 76, no. 3: 328–338. http://doi.org/10.5860/crl.76.3.328.

Patterson, Brandon, Tallie Casucci, Thomas Ferrill and Greg Hatch. 2019. “Play, Education, and Research: Exploring Virtual Reality through Libraries.” In Beyond Reality: Augmented, Virtual, and Mixed Reality in the Library, edited by Kenneth J. Varnum, 47–56. Chicago: ALA Editions.

Pober, Elizabeth E. and Matt Cook. 2016. “The Design and Development of an Immersive Learning System for Spatial Analysis and Visual Cognition.” In Proceedings of 2016 Conference of the Design Communication Association, Bozeman, MT, https://static1.squarespace.com/static/532b70b6e4b0dca092974dbe/t/5755e2df20c647f04c95598a/1465246433366/pobercook_text+%281%29.pdf.

Prabhat, Andrew Forsberg, Michael Katzourin, Kristi Wharton, and Mel Slater. 2008. “A Comparative Study of Desktop, Fishtank, and CAVE Systems for the Exploration of Volume Rendered Confocal Data Sets.” IEEE Transactions on Visualization and Computer Graphics 14, no. 3 (May–June): 551–63.

Prasolova-Førland, Ekaterina, Alexei Sourin, and Olga Sourina. 2006. “Cybercampuses: Design Issues and Future Directions.” Visual Computer 22, no. 12: 1015–28.

Radianti, Jaziar, Tim A. Majchrzak, Jennifer Fromm, and Isabell Wohlgenannt. 2020. “A Systematic Review of Immersive Virtual Reality Applications for Higher Education: Design Elements, Lessons Learned, and Research Agenda.” Computers & Education 147: 1–29. https://doi.org/10.1016/j.compedu.2019.103778.

Ragan, Eric D., Regis Kopper, Philip Schuchardt, and Doug A. Bowman. 2013. “Studying the Effects of Stereo, Head Tracking, and Field of Regard on a Small-scale Spatial Judgment Task.” IEEE Transactions on Visualization and Computer Graphics 19, no. 5: 886–96.

Schaffhauser, Dian. 2017. “Multi-campus VR Session Tours Remote Cave Art.” Campus Technology, October 9. https://campustechnology.com/articles/2017/10/09/multi-campus-vr-session-tours-remote-cave-art.aspx.

Schneider, Sven, Saskia Kuliga, Christoph Hölscher, Ruth Conroy-Dalton, André Kunert, Alexander Kulik, and Dirk Donath. 2013. “Educating Architecture Students to Design Buildings from the Inside Out.” In Proceedings of the 9th International Space Syntax Symposium, edited by Y.O. Kim, H.T. Park, and K.W. Seo, Seoul, Korea. https://www.researchgate.net/profile/Saskia_Kuliga/publication/281839210_Educating_Architecture_Students_to_Design_Buildings_from_the_Inside_Out_Experiences_from_a_Research-based_Design_Studio/links/55faa73c08aeba1d9f36bb64/Educating-Architecture-Students-to-Design-Buildings-from-the-Inside-Out-Experiences-from-a-Research-based-Design-Studio.pdf.

Temple University Libraries. n.d. “Loretta C. Duckworth Scholars Studio.” Temple University Libraries. Accessed Dec. 13, 2019. https://library.temple.edu/lcdss.

U.S. Department of Education, National Center for Education Statistics. 2019. Digest of Education Statistics, 2017 (2018–070). https://nces.ed.gov/fastfacts/display.asp?id=60.

U.S. General Services Administration. n.d. “About Us.” Section508.gov, GSA Government-wide IT Accessibility Program. Accessed Dec. 13, 2019. https://www.section508.gov/about-us.

Ware, Colin and Peter Mitchell. 2005. “Reevaluating Stereo and Motion Cues for Visualizing Graphs in Three Dimensions.” In Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization, 51–8. http://doi.org/10.1145/1080402.1080411.

W3C. 2019. “Inclusive Design for Immersive Web Standards.” W3C. https://www.w3.org/2019/08/inclusive-xr-workshop/.

W3C Web Accessibility Initiative. 2019. “Making the Web Accessible.” Web Accessibility Initiative. https://www.w3.org/WAI/.

W3C Web Accessibility Initiative. 2019. “Web Content Accessibility Guidelines (WCAG) Overview.” Web Accessibility Initiative. https://www.w3.org/WAI/standards-guidelines/wcag/.

Wentz, Brian, Paul T. Jaeger, and Jonathan Lazar. 2011. “Retrofitting Accessibility: The Legal Inequality of After-the-fact Online Access for Persons with Disabilities in the United States.” First Monday 16, no. 11 (November). https://firstmonday.org/ojs/index.php/fm/article/view/3666/3077.

Wiegand, Wayne A. 1999. “Tunnel Vision and Blind Spots: What the Past Tells Us about the Present; Reflections on the Twentieth-Century History of American Librarianship.” The Library Quarterly 69, no. 1: 1–32.

Winner, Langdon. 1986. The Whale and the Reactor: A Search for Limits in an Age of Technology. Chicago: University of Chicago Press.

Wong, Alice, Hannah Gillis, and Ben Peck. 2018. “VR Accessibility: Survey for People with Disabilities.” Disability Visibility Project & ILMxLAB. https://drive.google.com/file/d/0B0VwTVwReMqLMFIzdzVVaVdaTFk/view.

Zhao, Yuhang, Edward Cutrell, Christian Holz, Meredith Ringel Morris, Eyal Ofek, and Andrew D. Wilson. 2019. “SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision.” In Proceedings of CHI 2019, Glasgow, Scotland, May 4–9. http://doi.org/10.1145/3290605.3300341.

About the Authors

Jasmine Clark is the Digital Scholarship Librarian at Temple University. Her primary areas of research are accessibility and metadata in emerging technology and emerging technology centers. Currently, she is co-leading The Virtual Blockson, a project to recreate the Charles L. Blockson Afro-American Collection in virtual reality, while also doing research on 3D metadata and the development of Section 508 compliant guidelines for virtual reality experiences. Jasmine has experience in a variety of functional areas and departments, including metadata, archives, digital scholarship, and communications and development. She is interested in the ways information organizations can integrate accessible, inclusive practices into their services, hiring, and management practices.

Zack Lischer-Katz is a postdoctoral research fellow at the University of Oklahoma Libraries. From 2016 to 2018 he was a Council on Library and Information Resources (CLIR) Postdoctoral Fellow. He employs qualitative-interpretive methodologies to examine visual information preservation and curation in information institutions, with a focus on complex data types, such as virtual reality, 3D, and audiovisual formats. His research has appeared in Library Trends, International Journal of Digital Curation, Information Technology and Libraries, and First Monday. He received his PhD in Communication, Information, & Library Studies from Rutgers University and his MA in Cinema Studies from New York University.

Grinnell College students examine a double-pen slave cabin in Vacherie, Louisiana.
1

Using Virtual Reality to Expand Teaching and Research in the Liberal Arts

Abstract

Grinnell College has established a lab for teaching undergraduate liberal arts students the hard and soft skills necessary to develop extended reality (XR) experiences. This lab helps the College respond to external social and economic pressures while retaining its core liberal arts values. Within the lab, students develop the metacognitive skills, technical training, and problem-solving strategies that will make them competitive candidates in a global twenty-first–century marketplace. For other institutions interested in implementing an XR lab on their campuses, we provide key takeaways in the following areas: how we launched our lab, the funding instruments that support lab activities, the hardware and software used to develop XR experiences, the development team structure and member responsibilities, lessons learned from the pilot project, and projects currently in development.

Background

Grinnell College, like many small liberal arts colleges, has questioned how to remain robust and relevant in a digital age (Selingo 2013; 2017). We value knowledge for its own sake, social justice, and critical thinking; yet, we accept responsibility for equipping our students with the skills that allow them to adapt to a world of rapidly changing professional opportunities. We refuse to sacrifice the former for the latter. Instead, we have created a learning environment that promotes both our traditional values and practical job skills. In our lab, when students research, create, and evaluate extended reality (XR) experiences, they develop the technical, social-awareness, and problem-solving skills that make them attractive candidates for twenty-first–century jobs while exhibiting liberal arts sensibilities. By developing marketable skills within the framework of core liberal arts questions and experiences, the College moves toward a future in which our educational offerings are both highly relevant and eminently sustainable.

Various characteristics of and cultures within the institution have influenced how the College has responded to the pressures of a changing academic and digital landscape. Grinnell College is a small, residential, undergraduate-only liberal arts college in rural central Iowa. The College was established in 1846 on a foundation of individual intellectual pursuit for the betterment of humankind, a commitment that remains strong today and is evident in the individually advised curriculum. The teaching culture is centered on small, face-to-face, discussion-based classes that explore topics according to professor interests. The College includes disciplines in the arts, social sciences, and natural sciences, but we do not have professional programs such as journalism, business, and nursing, perhaps because corporate or practical pursuits are viewed as less intellectually rigorous. The College also functions with a conservative curriculum and traditional views of faculty, who are the College employees and experts primarily responsible for helping students grow in their own knowledge. Challenges arise when new developments conflict with traditional conditions. For example, we have seen the professionalization of College staff, with highly educated, non-faculty employees taking on more significant roles in students’ educational experiences. Additionally, we have seen changes in what students need and want from their college experience to help them succeed beyond school. Like other institutions and labs developing projects in XR, the College wrestles with how to remain true to our essential values while accommodating emerging needs (Szabo 2019).

The Grinnell College Immersive Experiences Lab (GCIEL) emerged from discussions at the administrative level, which identified a need to synthesize a twenty-first–century liberal arts education using emerging digital visualization technologies. GCIEL is an interdisciplinary community of inquiry and practice that allows students, faculty, and staff at the College to explore the liberal arts through XR technologies (Brown, Collins, and Duguid 1989; Wenger 1998; Wenger, McDermott, and Snyder 2002). XR is an umbrella term encompassing immersive technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Of these technologies, lab activity started with a focus on developing VR experiences, which completely immerse the user in a simulated three-dimensional (3D) environment (Bailenson 2019; Greengard 2019; Rubin 2018; Jerald 2015); we plan to expand into AR and MR in the future.

Participating in the hands-on process of developing VR experiences has resulted in educational benefits for students. First, students gain critical-thinking and technical skills. When working in project teams to create immersive digital content, students experience an authentic development environment using industry-standard hardware and software, which prepares them to succeed in a rapidly changing job market. From a liberal arts perspective, the development process challenges students to explore deep questions and make interdisciplinary connections. The research required for developing culturally sensitive, ethical, and historically accurate immersive digital content is both demanding and comprehensive. Compared to research methodologies privileging linear presentations of subject matter, such as a term paper or a video, research for VR projects compels students to consider how elements of their chosen topic function together as an interconnected, object-oriented activity system (Engeström, Miettinen, and Punamäki 1999; Jonassen and Rohrer-Murphy 1999). To do this, students must consider multiple context-specific variables for the system they investigate, how these variables interact within historical, spatial, and social contexts, and how end users will ultimately interact with the variables in a VR environment. Second, students develop soft skills, including communication and collaboration. Interdisciplinary teamwork between students, faculty, and staff is a key feature of the problem-solving experience and establishes a collaborative knowledge-generation framework. The faculty role shifts from a lecturer focused on content coverage to a coach who guides students as they navigate the “real world” challenges they encounter. Staff member roles shift from assistants to technical advisors and mentors. Student roles shift from passive recipients of knowledge to co-creators in the learning experience. These shifts allow team members to learn from each other as they integrate their own disciplinary knowledge and methods into the project.

Narrative

Pedagogical approaches

Inspired by Jonassen’s concepts about teaching for solving ill-structured problems and active learning (Jonassen 2000), GCIEL’s pedagogical practices guide students through a problem-solving process in which they integrate several content domains and negotiate the unpredictable paths that emerge along the way. Jonassen, Carr, and Yueh (1998) conceptualize technology as knowledge-construction “Mindtools” that students learn with, not from. Using this framework, GCIEL allows learners to function as designers who use VR technologies to explore their subject matter, critically evaluate the content they are studying, and represent their knowledge in a meaningful way. This approach challenges certain traditional liberal arts attitudes about what kinds of learning are valued. While the liberal arts tradition shies away from anything that resembles “vocational” training, GCIEL fully embraces training in practical hard and soft skills as an integrated part of content knowledge acquisition and critical thinking. We recognize skills such as software and hardware competence, digital file management, project and time management, troubleshooting, and team communication as foundations for the higher-order thinking skills that liberal arts college graduates will need throughout their lives. Thus, we intentionally teach these competencies alongside more traditional humanities topics rather than hope that learners acquire them incidentally through trial and error. In this way, GCIEL builds effective learning experiences that result in students thinking critically about VR technologies and using these technologies to examine, interrogate, and represent core liberal arts topics.

GCIEL seeks to optimize learning by maintaining a flexible, inclusive, and student-centered educational environment in which instructors “pay close attention to the knowledge, skills, and attitudes that learners bring” (National Research Council 2000, 23) to the research and development experience. By treating learners “as cocreators in the teaching and learning process, as individuals with ideas and issues that deserve attention and consideration” (McCombs and Whisler 1997, 11), GCIEL allows students to take an active role in reinventing their liberal arts experience. Heeding advice that “supplementing or replacing lectures with active learning strategies and engaging students in discovery and scientific process improves learning and knowledge retention” (Handelsman et al. 2004, 521), GCIEL emphasizes hands-on, authentic learning. Students develop products aligned with their interests and wield digital technologies in socially conscious ways within wide-ranging content domains. In a focus group interview, students described the experience as highly beneficial to their overall education. One student team member particularly valued the opportunity to learn “interdisciplinary communication on a long-term project” of a scale and duration that far exceeded what could be done within a single semester of a class (GCIEL Focus Group 2018). Another student observed that one of the most important parts of the project was how “It feels like we’re on a team with our bosses…instead of it being very much top down” (GCIEL Focus Group 2018).

When developing VR experiences in GCIEL, Grinnell College students cultivate skills that help them adapt to rapidly changing professional opportunities and contribute to others’ learning. Because the student-developed VR products are released as open educational resources (OER) under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, students anywhere in the world can augment their education by using and contributing to the custom-built immersive experiences. As an educational tool, VR is particularly useful for enhancing spatial knowledge representation, promoting experiential learning opportunities, increasing motivation and engagement, and contextualizing the learning experience (Dalgarno and Lee 2010; Steffen et al. 2019). The embodied experiences possible in VR have been found to promote empathy (Herrera et al. 2018; van Loon et al. 2018) and perspective taking (Ahn, Le, and Bailenson 2013; Yee and Bailenson 2006), both of which are particularly important within liberal education contexts that focus on preparing students to deal with complexity, diversity, and change and to promote social responsibility (“What Is Liberal Education” n.d.). The VR projects developed in GCIEL (detailed below) offer new ways to engage students in learning experiences across domains ranging from history (Ijaz, Bogdanovych, and Trescak 2017; Villena Taranilla et al. 2019; Wood, William, and Copeland 2019; Yildirim, Elban, and Yildirim 2018) and second language and culture acquisition (Blyth 2018; Dolgunsoz, Yildirim, and Yildirim 2018; Legault et al. 2019) to mathematics (Sundaram et al. 2019; Nathal et al. 2018; Putman and Id-Deen 2019).

Funding instruments

Dr. David Neville, a Digital Liberal Arts Specialist at Grinnell College, spearheaded the GCIEL initiative. Dr. Neville’s background in instructional technology and design, digital game-based learning, 3D modeling, and Unity development gives him the expertise to serve as the director of the lab and act as the technical advisor on all GCIEL projects. In Fall 2016, Dr. Neville received a $10,000 planning grant from Grinnell College’s Innovation Fund (IF) to investigate the feasibility of implementing a VR lab at the College. He used the grant funds to educate faculty and staff, bring in external experts, purchase equipment, and hire students, with the following financial breakdown: First, about 45% of the IF monies supported participant stipends for a summer workshop led by Dr. David Neville and Dr. Damian Kelty-Stephen. This workshop helped 10 faculty and staff members at Grinnell College learn how to use VR technologies in a curricular setting. Tweets about the workshop are archived under the #gcielsw17 hashtag. Because more people showed interest in the topic than originally anticipated, the Center for Teaching, Learning, and Assessment provided an additional $1,920 to support the extra participants who registered for the workshop. Second, about 4% of the funds paid for VR experts to present their research at the workshop. Dr. Joel Beeson, Associate Professor in West Virginia University’s Reed College of Media, presented his work on the Bridging Selma Project and the Fractured Tour app. Dr. Glenn Gunhouse, Senior Lecturer of Art History in the School of Art and Design at Georgia State University, presented a general introduction to his cultural heritage projects in virtual reality, with observations about how the technology can provide access to otherwise inaccessible objects of study (Sinclair and Gunhouse 2016). Third, roughly 15% went towards purchasing new VR hardware and software (e.g., Dell Precision 5810 with NVIDIA Quadro M5000 GPU, Oculus Rift, and Wacom tablet). Finally, about 37% of IF monies paid wages for students working on the development team for the lab’s first VR project. Supporting student development work on this project, the Institute for Global Engagement at Grinnell College contributed $6,200 to fund a one-week visit to Louisiana for site-based research.

In Fall 2018, GCIEL received $144,000 through a three-year pilot project IF grant. These funds allowed the lab to expand its influence on campus and widen its project portfolio. First, about 10% of the IF award supported a new XR speaker series, which brought academics and industry representatives to Grinnell College. These experts presented on the current state of XR in their fields, shared their visions for how XR will grow in the future, and demonstrated how a liberal arts education can prepare students for a career in XR. Students gained networking opportunities with these influential thought and industry leaders. Second, about 78% of the award paid personnel costs for the development teams, including student wages (72%) for four development teams and site-based research costs (6%). Finally, GCIEL used the remaining 12% of IF monies to purchase software and hardware necessary for developing VR experiences, including software licenses, online training, digital assets, an additional VR-capable workstation with associated hardware, and an HTC Vive. This IF support ends in Summer 2021, at which time the College will consider whether to provide permanent institutional funding for GCIEL.

Team structure and technology pipeline

After confirming faculty interest in VR at the summer workshop, we began to assemble a VR development team for a pilot project. Forming the team proved that our small liberal arts college had sufficient resources and talent on site to shoulder an ambitious digital project. This was a notable achievement, given that larger game design studios typically have development teams with hundreds of members, each contributing deep subject-matter knowledge, software and programming expertise, visual and 3D design capabilities, technical support, and project guidance. Echoing the development team experience that students might encounter in the XR industry, we envisioned a scaled-down team that included a faculty adviser, a technical staff member, and students functioning as Subject-Matter Experts (SMEs), 3D Artists, and Unity Developers. The faculty adviser would come from a field related to the project’s topic and focus on helping students learn the subject matter. The technical staff member would help students manage the project and learn essential technological skills. Each student role had unique requirements.

Typically, we recruited the project SME through the faculty member, who invited an advanced undergraduate major from their discipline. This student may have demonstrated relevant skills while working with the faculty member on prior academic projects. Only the SME received a personal invitation to join the VR development project; the 3D Artist and Unity Developer were recruited through an open application and interview process on the student employment portal. The SME was responsible for (a) finding, evaluating, and utilizing resources to guide project development; (b) disseminating research findings to other team members in an understandable manner; and (c) leading the team’s process and progress. We considered giving SMEs more responsibility for directing and managing a project in order to offset the marginalization that SMEs from humanities fields may feel during the coding-heavy portions of the project, when they lack technical experience compared to their teammates. This may require the SME to learn and apply instructional design theories and models, Agile software development methods (e.g., Scrum), and the Unified Modeling Language (UML) to the project.

We selected a 3D Artist based on this individual’s technical experience or interests. The Artist needed to be able to use software such as Autodesk 3ds Max, Substance Painter, and the Adobe Creative Cloud platforms (e.g., Illustrator and Photoshop), and also be willing to engage in 3D modeling and texturing, UV mapping and unwrapping, model rigging and animation, developing concept art, and storyboarding. We chose 3ds Max because it is an industry-recognized tool, and familiarity with this system should better prepare students for internships and employment opportunities. The Artist is primarily responsible for 3D asset development and animation in 3ds Max and texture creation in Substance Painter. Artists may also contribute to other aspects of the project, such as writing entries for a project blog, creating turntable animations of project assets for the GCIEL YouTube channel, or presenting to students and faculty about the lessons learned during project development. The Artist’s workflow includes (a) evaluating primary and secondary resources identified by the Subject-Matter Expert, along with any data collected through site-based research; (b) using these resources and data to create 3D models and animations in 3ds Max for the VR experience; and (c) importing an FBX file of the models into Substance Painter and Unity. Within Substance Painter, the Artist uses the physically based rendering and shading (PBR) capabilities of the platform to create albedo transparency, specular smoothness, normal, occlusion, and emission texture maps. Within Unity, the Artist creates materials with a standard specular shader and then applies the texture maps to the 3D models. The Artist may also create lighting and particle effects for the VR experience inside Unity.
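
To make the final hand-off concrete, here is a minimal C# sketch of the Unity step just described: building a material with Unity’s built-in Standard (Specular setup) shader and applying the texture maps exported from Substance Painter. This is not GCIEL’s production code; the class and field names are hypothetical, though the shader and texture property names are Unity’s own.

    using UnityEngine;

    // Hypothetical example: assign Substance Painter texture maps to a
    // Standard (Specular setup) material at runtime. In practice the Artist
    // would usually make these assignments in the Unity editor instead.
    public class CabinMaterialSetup : MonoBehaviour
    {
        public Texture2D albedoTransparency; // albedo (RGB) + transparency (A)
        public Texture2D specularSmoothness; // specular (RGB) + smoothness (A)
        public Texture2D normalMap;
        public Texture2D occlusionMap;
        public Texture2D emissionMap;

        void Start()
        {
            var material = new Material(Shader.Find("Standard (Specular setup)"));
            material.SetTexture("_MainTex", albedoTransparency);
            material.SetTexture("_SpecGlossMap", specularSmoothness);
            material.SetTexture("_BumpMap", normalMap);
            material.SetTexture("_OcclusionMap", occlusionMap);
            material.SetTexture("_EmissionMap", emissionMap);

            // Maps assigned from script only take effect once the matching
            // shader keywords are enabled.
            material.EnableKeyword("_SPECGLOSSMAP");
            material.EnableKeyword("_NORMALMAP");
            material.EnableKeyword("_EMISSION");

            GetComponent<Renderer>().material = material;
        }
    }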

We selected Unity Developers based on their technical experience or interests in the Unity integrated development environment (IDE), object-oriented design and programming principles, Unity script writing in the C# programming language, and version control and collaboration with Git and GitHub. The Unity Developers were primarily responsible for writing the code that drives the VR experience; the information provided by the SME and the team’s site-based research informs how the Unity Developer programs the functionality of the experience. The Unity Developer also needed to be familiar with, or willing to learn, the SteamVR Unity plugin, which allows Unity to interact with and receive input from attached VR hardware (e.g., Oculus Rift S and HTC Vive). The workflow for the Unity Developers entailed (a) brainstorming the interactivity in the VR experience; (b) bodystorming the experience with the team to flesh out what the user experience (UX) should look and feel like and how users would potentially interact with the experience; (c) using whiteboxing and method stubbing to quickly build experience prototypes; (d) running prototype tests of the VR experience to elicit user feedback; and (e) producing a minimum viable product (MVP) that could be used to secure external grant funding or to gather data in research experiments. The MVP is a version of the VR experience with just enough features to demonstrate proof of concept and provide feedback for future product development. We uploaded major versions of the VR experiences and their MVPs to the lab’s GitHub repositories to serve as backups, to include in students’ portfolios, and to share open-source resources with other educational institutions interested in developing VR experiences.
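
As an illustration of the whiteboxing and method-stubbing step, the following hypothetical C# sketch (not the team’s actual code) polls trigger input through the SteamVR 1.x controller API that the pilot project adopted and routes it to stubbed methods, so the flow of an interaction can be play-tested before any real behavior exists. The class name and the artifact scenario are our inventions.

    using UnityEngine;

    // Hypothetical whiteboxing sketch: the interaction methods are stubs that
    // only log their intent, letting the team test the shape of the experience
    // before implementing real behavior. Uses the SteamVR 1.x controller API.
    public class ArtifactInteraction : MonoBehaviour
    {
        private SteamVR_TrackedObject trackedObj;

        void Awake()
        {
            trackedObj = GetComponent<SteamVR_TrackedObject>();
        }

        void Update()
        {
            var device = SteamVR_Controller.Input((int)trackedObj.index);
            if (device.GetPressDown(SteamVR_Controller.ButtonMask.Trigger))
            {
                PickUpArtifact();
            }
            if (device.GetPressUp(SteamVR_Controller.ButtonMask.Trigger))
            {
                PutDownArtifact();
            }
        }

        // Method stubs: placeholders that keep the prototype runnable end to end.
        void PickUpArtifact() { Debug.Log("TODO: attach artifact to controller"); }
        void PutDownArtifact() { Debug.Log("TODO: release artifact and show caption"); }
    }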

Pilot project

Dr. Sarah Purcell, the L. F. Parker Professor of History at Grinnell College, and Dr. David Neville, Director of GCIEL, launched the pilot project in late Spring 2017. They hired four students for the project development team: history student Sam Nakahira as the SME, studio art student Rachel Swoap as the 3D Artist, and computer science students Zachary Segall and Eli Most as the Unity Developers. The project began as an ambitious attempt to build a VR experience of the Uncle Sam Plantation, a nineteenth-century Louisiana sugar plantation. Unfortunately, the project had an unintentionally slow, rolling start because two team members went to study in Europe for a semester. In Summer 2017, Sam Nakahira worked with Dr. Sarah Purcell to research and write about the Uncle Sam Plantation and its inhabitants, nineteenth-century sugar production methods, and the historical context that would guide the team’s development process. During Fall 2017, Zachary Segall began prototyping the VR experience, deepening his proficiency in the Unity IDE, and choosing a VR interaction system for the project. Because of development problems at the time with the Virtual Reality Toolkit (VRTK), he chose SteamVR v. 1.2.3 as the VR interaction system. With all the team members back on campus by early 2018, the full development team visited Louisiana in January 2018 for site-based research (see Figure 1). They met immediately afterwards to begin building the VR experience. At this point, we encountered a new series of challenges.

Grinnell College students examine a double-pen slave cabin in Vacherie, Louisiana.
Figure 1. Site-based research. Members of the GCIEL student development team (from left to right: Sam Nakahira and Zachary Segall) conduct site-based research of a double-pen slave cabin at Laura Plantation in Vacherie, Louisiana (January 2018). Photo by David Neville.

Initially, the team intended to simulate the nineteenth-century sugarhouse and steam-powered sugar mill that had operated on the Uncle Sam Plantation. The team could access the plot plan and survey data for the plantation mansion and larger outbuildings (see Figure 2); however, we had difficulty locating documentation for the sugarhouse and sugar mill. Additionally, modeling and animating the sugar mill exceeded the skill level of our 3D Artist, who was new to the 3ds Max modeling software. We soon realized the project’s scale was far beyond what we could reasonably handle with our current resources and timeframe, so we opted to start small and then iterate toward the larger-scale goal.

Plot plan of the Uncle Sam Plantation made by the Historic American Buildings Survey (HABS) in 1940.
Figure 2. Plot plan of the Uncle Sam Plantation. Plot plan of the Uncle Sam Plantation (Leimkuehler 1940) made by the Historic American Buildings Survey (HABS) in 1940 and one of the historical documents utilized by the GCIEL student development team for developing the VR experience.

To provide a common ground for historical understanding across team members, all participated in Dr. Sarah Purcell’s two-credit-hour guided-reading course on the history of American slavery, which focused on Louisiana, museum curation, and public history theory. Course readings inspired the new direction for our project. To honor the humanity of the enslaved people who lived on the plantation, the team decided to refocus the project on teaching users how to interpret the home life of the enslaved. Having agreed on a new approach, the team began recreating a double-pen slave cabin, for which our site-based research had provided sufficient data to build a digital model (see Figures 3 and 4), and designing plans for structuring the VR experience itself (see Figures 5 and 6).

The 3ds Max interface showing a high-quality render of a double-pen slave cabin.
Figure 3. The 3ds Max interface. This screenshot shows a high-quality render of the double-pen slave cabin currently in development. The render uses the NVIDIA mental ray renderer with Sunlight and Daylight Systems set to 7 AM on 21 October 1868 in Baton Rouge, Louisiana. A turntable render of this 3D model is available on the GCIEL YouTube channel. Screenshot and model by David Neville.
The Unity interface showing models of the double-pen slave cabin and the plantation mansion.
Figure 4. Importing models into the Unity game engine. GCIEL student development team members import the models they developed in 3ds Max into the Unity game engine for programming user interactivity. The HABS plot plan is used as a reference image to ensure proper scale of the VR experience and approximate distances between its features. Screenshot by David Neville.
Students on the GCIEL development team discuss the Uncle Sam Plantation VR project.
Figure 5. Development team discussion. GCIEL student development team members (from left to right: Sam Nakahira, Zachary Segall, and Rachel Swoap) reflect on how to reconstruct the lived spaces of the plantation complex as authentically and sensitively as possible, and brainstorm possible directions that a VR experience could take. Photo by David Neville.
Experience flowchart for the Uncle Sam Plantation VR project.
Figure 6. VR experience flowchart for the proposed structure of the prototype Uncle Sam Plantation VR experience. Image by David Neville.

We came to four critical insights as we found ourselves frequently adjusting our development pipeline. First, we needed to design the curricular content around the problems arising in the project. We initially held the course meetings separate from project-development meetings to prevent talk about the project’s technical details from overshadowing discussion of the historical topics. However, we discovered that the course topics could easily become divorced from, and less relevant to, the specific historical challenges that emerged naturally from the project work. We needed to let the project work and the historical topics inform one another in real time. Second, working together closely as an interdisciplinary team to identify problems and brainstorm solutions was essential. At first, everyone worked alone and within their own disciplinary perspective in a disconnected divide-and-conquer approach. This left little overlap for noticing how the separate parts were not fitting together as a whole. Had the team been working together more closely, we could have saved time by realizing sooner that researching sugar production was a dead end. Third, we needed alignment between the project goals and the team members’ skills, especially for technology-heavy projects. If team members did not already have the skills when they started, the team needed to rethink the goal or devote time and resources to helping them acquire the necessary skills.

Fourth, and perhaps most crucially, we discovered that team members must adapt themselves to different disciplinary expectations and research styles. In particular, the approaches used in computer science and history were quite different and led to some tension. Computer science professionals reduce a design problem into small, manageable components and then rapidly iterate through prototypes to find the most effective and efficient solution. In contrast, history professionals start with library and archival research to shape the research questions, then produce a polished document presenting their conclusions about the subject of inquiry. At the risk of oversimplification, it was as if the computer science approach tried to build a complex whole from smaller, simpler parts, while the history approach tried to contemplate a complex whole in order to extract a few smaller, concrete understandings. Puzzling over how to merge these distinctly different problem-solving approaches, we began implementing a new project workflow based loosely on Scrum with two-week sprints (Ashmore and Runyan 2015; Deemer et al. 2012; Rubin 2013). This process provided a common framework for approaching the problem by breaking the whole project into smaller chunks, so the SME had a narrower issue to explore and the Unity Developers had more tangible components to start building.

Scrum is a software development framework that embraces iterative and incremental practices, collaborative teamwork, and self-organization. A Scrum sprint is a fixed period of time in which the team creates a product increment of the highest possible value. Each sprint began with team members meeting in the GCIEL space to brainstorm and assign project tasks (see Figure 7). Members tracked their progress on these tasks using Trello, a web-based project management platform, and a whiteboard located in the team space, and they collaboratively addressed questions as they arose (see Figures 8 and 9). At the end of the sprint, team members met to debrief, identify new areas that needed development, and reflect on what they had learned with regard to both the historical subject matter and technical project skills. At appropriate stages in developing the VR experience, the development team included prototype testing in their workflow to ensure that end users would have a favorable experience (see Figure 10). By involving all team members in this process, we improved interdisciplinary communication and problem solving.

Students on the GCIEL development team launch a Scrum sprint for the Uncle Sam Plantation VR project.
Figure 7. Two-week Scrum sprint. The start of a two-week Scrum sprint utilized the community-building spaces of the Digital Liberal Arts Lab (DLAB) at Grinnell College, as well as the Media:Scape technology available there. GCIEL student development team members (clockwise around the table): Rachel Swoap, Sam Nakahira, Zachary Segall, and Eli Most. Photo by David Neville.
The Trello interface showing lists and cards used for managing the Uncle Sam Plantation VR project.
Figure 8. High-tech project management. Trello, a web-based project management platform, was critical for implementing a Scrum framework that included brainstorming new ideas for the project and tracking who was in charge of completing assigned tasks. Screenshot by David Neville.
The whiteboard in the GCIEL workspace functions as a Scrum board.
Figure 9. Low-tech project management. In addition to Trello, a Scrum board located in the GCIEL space helped student development team members keep track of project-related tasks, who they were assigned to, and their status. Photo by David Neville.
Prototype testing a VR experience in the GCIEL workspace.
Figure 10. Prototype testing. Zachary Segall tests a prototype VR experience with an unidentified Grinnell College computer science student. User testing allows GCIEL development teams to think critically about their own work. Photo by David Neville.

Second-generation projects

Having learned valuable lessons about the VR design process through the pilot project, GCIEL moved forward with three new VR projects spanning the liberal arts disciplines at Grinnell College, including recreating a Viking meadhall, creating an environment to help students visualize mathematical ideas, and creating an immersive experience to teach German language and culture.

Dr. Tim D. Arner, Associate Dean and Associate Professor of English, and Dr. David Neville lead the Envisioning Heorot Project, which is building a VR experience of Heorot, the meadhall from the Old English poem Beowulf where much of the narrative takes place. This immersive experience is modeled on archeological excavations of meadhalls in Denmark, England, and Iceland (see Figure 11) and on accounts from historical and poetic records of the early Middle Ages. Grinnell College students involved in the project include Ethan Huelskamp, Joseph Robertson, Maddy Smith, Anna Brew, Brenna Hanlon, Zoe Cui, Tal Rastopchin, and Michael Andrzejewski. The team plans to fill the VR meadhall with people and objects from the poem in order to help participants exploring the space sense how the room’s layout contributes to its function as a political and social arena. The Envisioning Heorot Project will help student researchers and readers of Anglo-Saxon poetry, especially Beowulf, understand how such civic spaces functioned in Anglo-Saxon and medieval Scandinavian culture and helped shape Anglo-Saxon social structures. While building or exploring this virtual space, students will learn to analyze how the meadhall functions in Beowulf and its analogues, to locate northern European cultures within a global network of trade and cultural influence, and to consider how movement through physical space is defined by and reinforces social roles in a particular cultural context.

Grinnell College students conducting site-based research at the Reykjavik City Museum, Iceland.
Figure 11. Site-based research in Iceland. Site-based research in Iceland and Denmark has been invaluable for students working on the Envisioning Heorot Project: development work in 3ds Max and Substance Painter has been strongly influenced by findings and impressions from these trips. Here, students (from left to right) Ethan Huelskamp, Joseph Robertson, Maddy Smith, and Megan Gardner examine a Viking hearth with a representative from The Settlement Exhibition at the Reykjavik City Museum, Iceland. Photo by Tim Arner.

Dr. Chris French, Professor of Mathematics, and Dr. David Neville lead the Math Museum Project, which allows participants to explore and interact with mathematical ideas in VR. Grinnell College students involved in this project are Nikunj Agrawal, Ziwen Chen, Alexander Hiser, Yuya Kawakami, HaoYang Li, Robert Lorch, Tal Rastopchin, Lang Song, Charun Upara, and Hongyuan Zhang. The project is inspired by the mathematical models of the late nineteenth century, when mathematicians partnered with industrialists to model new kinds of surfaces out of plaster, cardboard, or wire. These models accompanied new developments in algebraic geometry and new notions of non-Euclidean geometry. Immersed within the virtual Math Museum, students can interact with visualized mathematical concepts, thereby experiencing greater enjoyment and comprehension of mathematical ideas.

In one room of the virtual museum, players walk around on a large ellipsoid surface, experiencing the shape in much the same way an insect might while crawling across a plaster model. The player can find the umbilic points of the shape by using a tool that measures the curvature of the ellipsoid at the current location whenever the player triggers the measuring device. Another room is inspired by models created by the German mathematician Kummer. In this space, the player can manipulate a surface by adjusting certain parameters and then watch how the surface evolves. The player’s task is to find the parameter values for the surfaces that Kummer built. In a third room, the player must assign colors to the vertices of a graph consisting of edges and vertices so that adjacent vertices take different colors. The goal is to use the minimal number of colors, an activity that teaches the notion of the chromatic number of a graph. Students are also currently developing another room in which the player learns about graph isomorphisms by manipulating the vertices of a graph to make it look like another graph.
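
A worked formula makes the measuring tool concrete. For an ellipsoid x^2/a^2 + y^2/b^2 + z^2/c^2 = 1, the Gaussian curvature at a surface point (x, y, z) is K = 1 / (a^2 b^2 c^2 (x^2/a^4 + y^2/b^4 + z^2/c^4)^2). The short C# sketch below is illustrative only, not the Math Museum’s actual code; the class name and its use of Unity types are our assumptions. Because a generic triaxial ellipsoid has exactly four umbilic points, displaying K as the player moves gives them a concrete quantity to track while hunting for those points.

    using UnityEngine;

    // Hypothetical curvature probe for the ellipsoid room. Given a point on
    // the ellipsoid with semi-axes (a, b, c), returns the Gaussian curvature
    // K = 1 / (a^2 b^2 c^2 (x^2/a^4 + y^2/b^4 + z^2/c^4)^2).
    public class CurvatureProbe : MonoBehaviour
    {
        public Vector3 semiAxes = new Vector3(3f, 2f, 1f); // a, b, c

        public float GaussianCurvature(Vector3 p)
        {
            float a = semiAxes.x, b = semiAxes.y, c = semiAxes.z;
            float h = p.x * p.x / (a * a * a * a)
                    + p.y * p.y / (b * b * b * b)
                    + p.z * p.z / (c * c * c * c);
            return 1f / (a * a * b * b * c * c * h * h);
        }
    }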

Dr. David Neville leads the German VR Project, a game for teaching environmentalism in authentic German linguistic and sociocultural contexts. Originally developed as a flat-screen 3D game focusing on glass recycling and waste management systems in German public spaces, the game was ported by Zachary Segall and Eli Most in 2018 to create an alpha-level VR prototype (see Figure 12). Grinnell College students involved in the project include Savannah Crenshaw, Martin Pollack, Yinan Hui, Bojia Hu, Jin Hwi, Tal Rastopchin, and Michael Andrzejewski. Research on the 3D game found that goal-directed player activity provided learners of a second language and culture with a more nuanced view of the activity systems that constitute a target culture, and it also appeared to influence how learners invoked and structured language to describe these systems (Neville 2014). The VR version will expand the scope of the 3D game by including more narrative to situate the user in an authentic German cultural situation and more in-game tasks related to recycling and waste management practices. We hope that increased immersion and sense of presence in a completely virtual environment will yield greater learning outcomes in second language and culture acquisition, and perhaps even realize outcomes that were not observed in the 3D version of the game.

Screen capture from the German VR project showing a German public space, a beer bottle, and a VR hand controller with directions in German.
Figure 12. Screen capture from the German VR project. The German VR project situates second language and culture acquisition within authentic sociocultural contexts and activities. Screenshot by David Neville.

Next steps

We are currently refining the teams’ workflows to use Scrum methods for project management and incorporating problem-based learning theory to intentionally teach metacognitive skills (Barrows 1996; Edens 2000; Hmelo-Silver 2004; Dunlap 2005; Yew and Schmidt 2011). Victims of our own success, we face a number of challenges in scaling up the lab to support multiple VR projects simultaneously. It has been difficult to find a dedicated physical space on campus that can support a growing community of practice; as a result, GCIEL’s work remains somewhat decentralized. It also remains to be seen how much these discrete cross-curricular VR projects will transform Grinnell College’s core curricula. GCIEL’s future projects will likely rely on external grant support, and it may be difficult for small-scale liberal arts teams to compete with large R1 research and development labs for funding. While we are excited to see our established team members graduate and move on to high-powered tech jobs and graduate schools, their departures leave recurring gaps in our project teams, so we must constantly train new students to fill them. Successful project teams need consistent faculty and staff time and attention; yet, College employees find themselves increasingly burdened with competing responsibilities. Overcoming these challenges depends on our ability to convince the College to change some traditional structures and to provide sufficient time and resources for experimentation. Success is not guaranteed, but we believe the effort is worthwhile.

The future of GCIEL beyond our grant funding is still under discussion. As a well-resourced institution with an individually advised curriculum, Grinnell College has a few options that we can harness to secure GCIEL’s future. For example, the Writing Lab pays student writing mentors out of its general operations budget; these students do not receive academic credit, though they do take an introductory writing course to ensure they have the necessary skills. GCIEL could adopt a similar model and teach a VR basics course to develop a pool of potential student employees as VR mentors. Another possibility is integrating the lab into existing or emerging curricular structures. VR project development would fit most seamlessly into the Mentored Advanced Project (MAP) structure as a group research project supervised by a faculty member. These MAP experiences allow students to register for 2- or 4-credit MAP research credits and work closely with faculty advisers on independent research projects. We might also be able to use the “Plus 2” option, which allows professors and individual students enrolled in a regularly scheduled course to plan work that goes beyond the standard syllabus. GCIEL and student VR projects may also find a place within the emerging Digital Studies Concentration or the new Film and Media Studies Program. Grinnell College’s concentrations typically involve a cross-departmental listing of courses that meet the concentration’s themes and goals, but GCIEL could provide the seed for a concentration-specific seminar listed as a requirement or as an additional way for students to complete credits toward the concentration. Ultimately, we want to find ways to leverage the benefits of housing GCIEL within the curriculum (e.g., rewarding students with class credits and guaranteed team members) along with the benefits of remaining independent from the curriculum (e.g., freedom from semester limits and the ability to form multidisciplinary collaborations with skilled students, staff, and faculty). Fortunately, Grinnell College has a history of offering student learning opportunities in many forms, including those outside traditional classroom environments.

We think all these efforts will pay off in the long run. Opening the traditional classroom format to integrate technological expertise and domain-specific content across disciplinary divides will expand student assessments beyond term papers to include scholarly products that will excite and engage a new generation of scholars in the twenty-first century. We will also have to ask: what is the best way to assess learning outcome achievement for interdisciplinary projects related to creating VR experiences? Can we identify meaningful learning outcomes we should expect of all students, such as project management and effective communication? Do we need to assess students on their domain-specific skills and knowledge, such as software troubleshooting, graphical design, or archival research? Who would be responsible for designing and evaluating these assessments? How do we more closely integrate staff and faculty roles in collaborative curriculum design, which breaks down the traditional barriers between faculty and staff roles? How do we challenge College organizational structures to harness staff expertise alongside faculty domain knowledge?

Learning from the successes of vocational and professional schools, we can reinvigorate liberal arts education with hands-on cooperative training, yet retain the focus on our traditional values that makes us unique. This new model could help to transform liberal arts institutions into laboratories for innovation in solving twenty-first–century problems. In the end we believe liberal arts graduates can—and should—have the best of both worlds: knowledge and the skills to apply it. 

Key Takeaways

  • Complex projects, especially ones using technology, require teams consisting of people with different technical and subject-matter competence. These projects provide excellent opportunities for interdisciplinary collaboration and teaching.
  • To develop transferable skills and knowledge, model the project experience on “real-world” structures. This includes treating student collaborators as equals who participate in decision-making and receive compensation (e.g., stipends or academic credit).
  • Time-intensive projects require focused, concentrated effort by team members. These projects may require institutional support for faculty involvement (e.g., reassigned time) and a student commitment of at least 10 hours a week to project development.
  • Long-term, complex projects benefit from a permanent physical space that is equipped to support the technology, comfortably hold team meetings, and accommodate team members’ work styles, including access outside of business hours.
  • The project curriculum must provide team members with the necessary prerequisite technical and subject-matter knowledge to start the project, and it must also be flexible enough in time and resources to adapt to questions that emerge during project development. Because VR projects require new ways of configuring faculty-staff-student interaction and budgets to support development, they provide excellent opportunities for institutional growth and external funding.
  • When properly configured teams work on developing well-designed VR experiences, students learn valuable skills related to communication, self-directed learning, attention to detail, problem solving, negotiation, and time management.
  • Development team members need to be well-versed in the ethical, psychological, and pedagogical affordances of VR and how these impact the project.
  • Start small with complex projects and iterate towards larger goals.
  • Open lines of communication between all team members—staff, faculty, and students—are essential to project success. Avoid isolation by encouraging teammates to pair up, even when working on components that traditionally involve many hours of individual work, such as archival research or programming. In this way, teammates can learn from each other’s processes. This supports cross-training and allows cross-pollination across diverse backgrounds and areas of expertise. Web-based project management platforms, when used appropriately, help to facilitate this communication.
  • To truly transform, institutions will have to examine deep structures: curricula, staff/faculty time, majors, and funding.

Bibliography

Ahn, Sun Joo (Grace), Amanda Minh Tran Le, and Jeremy Bailenson. 2013. “The Effect of Embodied Experiences on Self-other Merging, Attitude, and Helping Behavior.” Media Psychology 16, no. 1: 7–38.

Ashmore, Sondra, and Kristin Runyan. 2015. Introduction to Agile Methods. New Jersey: Pearson Education.

Bailenson, Jeremy. 2019. Experience on Demand: What Virtual Reality Is, How It Works, and What It Can Do. New York: W. W. Norton & Company.

Barrows, Howard. 1996. “Problem-Based Learning in Medicine and Beyond: A Brief Overview.” New Directions for Teaching and Learning 68 (Winter): 3–12.

Blyth, Carl. 2018. “Immersive Technologies and Language Learning.” Foreign Language Annals 51: 225–232.

Brown, John Seely, Allan Collins, and Paul Duguid. 1989. “Situated Cognition and the Culture of Learning.” Educational Researcher 18, no. 1 (January–February): 32–42.

Dalgarno, Barney, and Mark J. W. Lee. 2010. “What are the Learning Affordances of 3-D Virtual Environments?” British Journal of Educational Technology 41, no. 1 (January): 10–32.

Deemer, Pete, Gabrielle Benefield, Craig Larman, and Bas Vodde. 2012. The Scrum Primer: A Lightweight Guide to the Theory and Practice of Scrum. Version 2.0. http://scrumprimer.org.

Dolgunsoz, Emrah, Gurkan Yildirim, and Serkan Yildirim. 2018. “The Effect of Virtual Reality on EFL Writing Performance.” Journal of Language and Linguistic Studies 14, no. 1: 278–292.

Dunlap, Joanna C. 2005. “Problem-Based Learning and Self-Efficacy: How a Capstone Experience Prepares Students for a Profession.” Educational Technology Research and Development 53, no. 1 (March): 65–85.

Edens, Kellah M. 2000. “Preparing Problem Solvers for the 21st Century through Problem-Based Learning.” College Teaching 48, no. 2: 55–60.

Engeström, Yrjö, Reijo Miettinen, and Raija-Leena Punamäki, eds. 1999. Perspectives on Activity Theory. New York: Cambridge University Press.

GCIEL Focus Group. 2018. Interview by Vanessa Preast. Report of Interviews with Student Team Members. Grinnell College, April 2.

Greengard, Samuel. 2019. Virtual Reality. The MIT Press Essential Knowledge Series. Cambridge, MA: MIT Press.

Handelsman, Jo, Diane Ebert-May, Robert Beichner, Peter Bruns, Amy Chang, Robert DeHaan, Jim Gentile, Sarah Lauffer, James Stewart, Shirley M. Tilghman, and William B. Wood. 2004. “Scientific Teaching.” Science 304, no. 5670 (April): 521–22.

Herrera, Fernanda, Jeremy Bailenson, Erika Weisz, Elise Ogle, and Jamil Zaki. 2018. “Building Long-term Empathy: A Large-scale Comparison of Traditional and Virtual Reality Perspective-taking.” PLoS ONE 13, no. 10: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0204494.

Hmelo-Silver, Cindy E. 2004. “Problem-Based Learning: What and How Do Students Learn?” Educational Psychology Review 16, no. 3 (September): 235–66.

Ijaz, Kiran, Anton Bogdanovych, and Tomas Trescak. 2017. “Virtual Worlds vs Books and Videos in History Education.” Interactive Learning Environments 25, no. 7: 904–929.

Jerald, Jason. 2015. The VR Book: Human-Centered Design for Virtual Reality. Williston, VT: Morgan & Claypool Publishers.

Jonassen, David H. 2000. “Toward a Design Theory of Problem Solving.” Educational Technology Research and Development 48, no. 4 (December): 63–85.

Jonassen, David H., Chad Carr, and Hsiu-Ping Yueh. 1998. “Computers as Mindtools for Engaging Learners in Critical Thinking.” TechTrends 43, no. 2 (March): 24–32.

Jonassen, David H., and Lucia Rohrer-Murphy. 1999. “Activity Theory as a Framework for Designing Constructivist Learning Environments.” Educational Technology Research and Development 47, no. 1 (March): 61–79.

Legault, Jennifer, Jiayan Zhao, Ying-An Chi, Weitao Chen, Alexander Klippel, and Ping Li. 2019. “Immersive Virtual Reality as an Effective Tool for Second Language Vocabulary Learning.” Languages 4, no. 1: 13. https://www.mdpi.com/2226-471X/4/1/13.

Leimkuehler, F. Ray, field team supervisor. 1940. Uncle Sam Plantation. From the Library of Congress, Historic American Building Survey. https://www.loc.gov/pictures/item/la0030.

McCombs, Barbara L. and Jo Sue Whisler. 1997. The Learner-Centered Classroom and School: Strategies for Increasing Student Motivation and Achievement. San Francisco: Jossey-Bass Publishers.

Nathal, Karla Liliana Puga, María Eugenia Puga Nathal, Humberto Bracamontes del Toro, Marco Antonio Guzmán Solano, and Juan Carlos Martínez Sandoval. 2018. “The Immersive Virtual Reality: A Study in Three-dimensional Euclidean Space.” American Journal of Educational Research 6, no. 3: 170–174.

National Research Council. 2000. How People Learn: Brain, Mind, Experience, and School: Expanded Edition. Washington, DC: The National Academies Press. https://www.nap.edu/catalog/6160/how-people-learn-brain-mind-experience-and-school.

Neville, David. 2014. “The Story in the Mind: The Effect of 3D Gameplay on the Structuring of Written L2 Narratives.” ReCALL: The Journal of the European Association for Computer Assisted Language Learning 27, no. 1: 1–17.

Putman, Shannon, and Lateefah Id-Deen. 2019. “I Can See It! Math Understanding through Virtual Reality.” Educational Leadership 76, no. 5 (February): 36–40.

Rubin, Kenneth. 2013. Essential Scrum: A Practical Guide to the Most Popular Agile Process. New Jersey: Pearson Education.

Rubin, Peter. 2018. Future Presence: How Virtual Reality Is Changing Human Connection, Intimacy, and the Limits of Ordinary Life. New York: Harper Collins.

Selingo, Jeffrey. 2013. College (Un)bound: The Future of Higher Education and What It Means for Students. New York: Houghton Mifflin Harcourt.

———. 2017. There Is Life after College. New York: William Morrow.

Sinclair, Bryan and Glenn Gunhouse. 2016. “The Promise of Virtual Reality in Higher Education.” EDUCAUSE Review. https://er.educause.edu/articles/2016/3/the-promise-of-virtual-reality-in-higher-education.

Steffen, Jacob, James E. Gaskin, Thomas O. Meservy, Jeffrey L. Jenkins, and Iopa Wolman. 2019. “Framework of Affordances for Virtual Reality and Augmented Reality.” Journal of Management Information Systems 36, no. 3: 683–729.

Sundaram, Shirsh, Ashish Khanna, Deepak Gupta, and Ruby Mann. 2019. “Assisting Students to Understand Mathematical Graphs Using Virtual Reality Application.” In Advanced Computational Intelligence Techniques for Virtual Reality in Healthcare, edited by Deepak Gupta, Aboul Ella Hassanien, and Ashish Khanna, 49–62. Studies in Computational Intelligence, vol 875. Cham, Switzerland: Springer.

Szabo, Victoria. 2019. “Collaborative and Lab-Based Approaches to 3D and VR/AR in the Humanities.” In 3D/VR in the Academic Library: Emerging Practices and Trends, edited by Jennifer Grayburn, Zack Lisher-Katz, Kristina Golubiewski-Davis, and Veronica Ikeshoji-Orlati, 12–23. Council on Library and Information Resources Report 176. https://www.clir.org/wp-content/uploads/sites/6/2019/02/Pub-176.pdf.

van Loon, Austin, Jeremy Bailenson, Jamil Zaki, Joshua Bostick, and Robb Willer. 2018. “Virtual Reality Perspective-taking Increases Cognitive Empathy for Specific Others.” PLoS ONE 13, no. 8: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0202442.

Villena Taranilla, Rafael, Ramón Cózar-Gutiérrez, José Antonio González-Calero, and Isabel López Cirugeda. 2019. “Strolling through a City of the Roman Empire: An Analysis of the Potential of Virtual Reality to Teach History in Primary Education.” Interactive Learning Environments. https://www.tandfonline.com/doi/full/10.1080/10494820.2019.1674886.

Wenger, Etienne. 1998. Communities of Practice: Learning, Meaning, and Identity. Cambridge: Cambridge University Press.

Wenger, Etienne, Richard McDermott, and William M. Snyder. 2002. Cultivating Communities of Practice. Cambridge, MA: Harvard Business Press.

“What Is Liberal Education.” n.d. Association of American Colleges & Universities. Accessed March 3, 2020. https://www.aacu.org/leap/what-is-liberal-education.

Wood, Zebulon M., Albert William, and Andrea Copeland. 2019. “Virtual Reality for Preservation: Production of Virtual Reality Heritage Spaces in the Classroom.” In 3D/VR in the Academic Library: Emerging Practices and Trends, edited by Jennifer Grayburn, Zack Lisher-Katz, Kristina Golubiewski-Davis, and Veronica Ikeshoji-Orlati, 39–53. Council on Library and Information Resources Report 176. https://www.clir.org/wp-content/uploads/sites/6/2019/02/Pub-176.pdf.

Yee, Nick, and Jeremy Bailenson. 2006. “Walk a Mile in Digital Shoes: The Impact of Embodied Perspective-taking on the Reduction of Negative Stereotyping in Immersive Virtual Environments.” Proceedings of PRESENCE 2006: The 9th Annual International Workshop on Presence. Cleveland, Ohio, August 24–26. https://vhil.stanford.edu/pubs/2006/walk-a-mile-in-digital-shoes-the-impact-of-embodied-perspective-taking-on-the-reduction-of-negative-stereotyping-in-immersive-virtual-environments.

Yew, Elaine H. J., and Henk G. Schmidt. 2011. “What Students Learn in Problem-Based Learning: A Process Analysis.” Instructional Science 40, no. 2 (March): 371–95.

Yildirim, Gürkan, Mehmet Elban, and Serkan Yildirim. 2018. “Analysis of Use of Virtual Reality Technologies in History Education: A Case Study.” Asian Journal of Education and Training 4: 62–69.

About the Authors

David O. Neville (PhD, Washington University in St. Louis; MS, Utah State University) is a Digital Liberal Arts Specialist and Director of the Immersive Experiences Lab at Grinnell College.

Vanessa Preast (PhD, Iowa State University; DVM, University of Florida) is Associate Director of the Center for Teaching, Learning, and Assessment at Grinnell College.

Sarah J. Purcell (PhD, Brown University) is the L.F. Parker Professor of History at Grinnell College.

Damian Kelty-Stephen (PhD, University of Connecticut-Storrs) is Assistant Professor of Psychology at Grinnell College.

Timothy D. Arner (PhD, Pennsylvania State University) is Associate Dean of Curriculum and Academic Programs and Associate Professor of English at Grinnell College.

Justin Thomas (MFA, University of Maryland) is Associate Professor of Scenic and Lighting Design and Chair of the Theatre and Dance Department at Grinnell College.

Christopher P. French (PhD, University of Chicago) is Professor of Mathematics at Grinnell College.


A workshop filled with the tools of a silversmith. In the left half of the frame, a man in colonial attire sits with his back to the viewer. In the center-right of the frame, the player’s pointer rests on a colorful print and declares “Landing Print.”

Mission US TimeSnap: Developing Historical Thinking Skills through Virtual Reality

Abstract

Mission US: TimeSnap is a blended learning experience, marrying the capacity of a virtual reality mission with consolidation, support, and deeper exploration in the classroom. This article investigates the affordances of virtual reality as a teaching tool and the challenges of designing for today’s classroom. The game developers of Electric Funstuff were drawn to virtual reality by research that suggests it has great potential to support the kind of inquiry-based learning that many high school history classrooms struggle to provide. The result is Mission 1: King Street, 1770, the first in a series of history-based virtual reality missions that model and scaffold the use of critical historical thinking skills. After several rounds of testing and iteration, Mission 1 is poised for a final classroom evaluation, and this paper shares the developers’ insights and best practices for other classroom-VR creators.

Introduction

There’s an argument brewing in the Royal Exchange Tavern on King Street. Two men cluster at the end of a sturdy wooden table, deep in conversation and visibly agitated. The tavern keeper ignores their quarrel, distracted by an advertisement in the Gazette. Across the room, a man slouches over his tankard and re-reads, astonished, a letter the author never meant to share with the people of Boston. Hundreds of miles away, hundreds of years into the future, yet, impossibly, also present in this moment, in Boston, in April, in 1770, a high school student considers her options: “Hm, do I really want to do that? No, don’t go there…”

This student is playing Mission US: TimeSnap, a game-based virtual reality experience designed to critically engage high school learners in US History. Before her mission is through, this student will explore three richly detailed and interactive locations in 1770 Boston, on the way gathering evidence that will help her explain why, only weeks earlier, five civilians were gunned down in the middle of the street by British soldiers. And, because TimeSnap is a blended learning experience, the journey won’t end when she removes her headset. Outside of the virtual world, this student will collaborate with her classmates and receive support from her teacher in order to understand and articulate not only the causes of the Boston Massacre but also the different ways this event was interpreted and why this matters to America’s Revolutionary history. In short, she will be “doing history” by grappling with contextualization, causation, and other essential historical thinking skills.

This paper describes the design and implementation of TimeSnap from the perspective of both its developers and researchers and offers lessons learned for would-be practitioners.[1] These lessons include (a) how to allow for the time and technological constraints of today’s classroom, (b) how to manage cognitive load in virtual learning environments, and (c) how to use design to support active learning.

Educational Affordances of Virtual Reality

Since the computer arrived in the classroom, history educators have sought to harness digital technologies to innovate instruction. Advocates saw exciting opportunities to digitize primary sources, scaffold learning with hypermedia, and build two- and three-dimensional virtual spaces for exploration and engagement (Dede 1992; Evans and Brown 1998; Cornell and Dagefoerde 1996). The use of technology in the classroom arose side-by-side with a shift in pedagogical practice in the social sciences. Over the past few decades, professional organizations like the Stanford History Education Group, National Center for History in Schools, American Social History Project, and Roy Rosenzweig Center for History News and Media have developed strategies and resources to help each learner to “read like a historian,” or practice disciplinary literacy, by grappling with historical evidence. Inquiry-based learning, where teachers guide students through the process of evidence-gathering, source evaluation, and argumentation, has emerged as the most promising instructional mode for building these historical thinking skills (Voet and De Wever 2017). Assessment tools have also evolved: the document-based question (DBQ)––in which students analyze primary and secondary sources to explain past events and make arguments––has been widely adopted as the most reliable measure of student learning. Technology, particularly digital media, has been singled out for its significant potential for scaffolding learning (Dede 1992; Saye and Brush 2006). Hypermedia and other digital supports foster inquiry into the “ill-structured” problems of history by providing hard scaffolds and promoting independent exploration and problem-solving (Saye and Brush 2006). However, despite calls for an inquiry-based classroom and even after the wide adoption of digital tools in many classrooms, according to one survey, half of high school history teachers still regularly lecture for three-quarters of the class period––and some for the entire period (Wiggins 2015). Implementing these methods poses a challenge for teachers trained in conventional practices as well as for students who struggle to analyze complex texts.

Reflecting on the need for both effectively modeled historical thinking skills and more compelling practice environments, we saw an opportunity for innovation. After ten years’ experience using the affordances of games and interactives to deepen middle school social studies through Mission US, our game developers wanted to harness the unique capacities of virtual reality (VR) to build historical literacy. We drew on the insights of the Stanford History Education Group (SHEG) and the National Center for History in the Schools (NCHS), selecting essential historical thinking skills like contextualization, causation, and sourcing to model and develop in high school history classrooms through a blended VR experience.

Virtual reality has strong potential for teaching history. Like living history museums, VR assembles a three-dimensional historical world to explore––putting students “inside” the past and, through embodied learning, making historical investigations more memorable and motivational. Theorists of embodied learning assert that learning is a product of sensorimotor interaction with the world rather than the result solely of mental activities that occur within the brain’s physical confines (Lakoff and Johnson 1999; Osgood-Campbell 2015). Proponents of experiential learning argue that the most powerful learning experiences are those that allow people to experiment (or “take action”) physically as well as mentally through hands-on activities, reflect on the outcomes, and make changes as required to advance toward goals (Kolb 2014; Kontra, Goldin-Meadow, and Beilock 2012). In this frame, learning activities should be designed to allow students to interact in meaningful ways with their environments to facilitate deeper encoding of knowledge.

Researchers speculate that VR can promote embodied and experiential learning by facilitating presence, or the illusory perception of physically “being there” in a non-physical space (Schubert, Friedmann, and Regenbrecht 2001). Accordingly, students can interact with content in ways not possible with books, video, or even games. They can, for example, pick up and rotate objects, and, in room-scale VR, move toward and away from sounds, giving them an intimate sense of the distinctive material culture of a historical era. Students may also be more likely to practice the skills of historical thinking after seeing them modeled by characters in the VR space and then trying them for themselves. Similarly, VR may promote embodied learning by enhancing episodic memory (memory of autobiographical events) and visuospatial processing (the ability to identify objects and the spatial relationships among them) (Parsons et al. 2013; Repetto, Serino, Macedonia, and Riva 2016). Some researchers have proposed that the formation of memories is closely tied to the ability to take action on the information being encoded by the brain. According to Glenberg (1997), “conceptualization is the encoding of patterns of possible physical interaction with a three-dimensional world. These patterns are constrained by the structure of the environment, the structure of our bodies, and memory” (see Osgood-Campbell 2015). If this is the case, a learner in a VR setting, who must perform actions (albeit limited gestures, not fine motor movement) to navigate their virtual environment and unlock knowledge, would form more meaningful memories than a student reading the same information.

VR has educational potential beyond the affordances of embodied learning. Research suggests, for example, that the novelty and interactivity of VR improve student motivation and increase student recall (Chiang, Yang, and Hwang 2014; Ijaz, Bogdanovych, and Trescak 2017). Furthermore, learning in a realistic virtual space aligns with the methodologies of anchored instruction. Anchored instruction theory posits that more meaningful learning takes place when students are placed in a realistic context, for example by solving a problem presented in a case study (Yilmaz 2011). Often, anchored instruction is supported by technology like video or VR, which supplies the realism of an otherwise unfamiliar situation. In a virtual scenario, students activate “inert” knowledge when they encounter situations to which that knowledge can be applied (Love 2004).

TimeSnap is designed to bring these advantages to the US History classroom. With its immersive and interactive historical spaces, TimeSnap aims to model the work of history as it builds knowledge of historical people, places, events, and ideas. Working under the assumption that inquiry-based learning experiences are the most powerful, our theory of change posits that a brief (fifteen-to-thirty minute) VR experience that models historical thinking skills, followed by a lesson plan that helps students to apply their new knowledge and skills, will be demonstrably more effective at helping students retain and apply historical knowledge and skills than a traditional, paper-based lesson.

Virtual Tour of TimeSnap Mission 1: King Street, 1770

Mission US: TimeSnap is a blended learning experience that marries the capacity of the virtual reality mission with consolidation, support, and deeper exploration in the classroom. It was developed in Unity and optimized for the Oculus Go. The production and research process has been funded by a Small Business Innovation Research grant through the US Department of Education. In each TimeSnap mission, students “travel” back in time to investigate a pivotal period in American history. The core of each lesson is a VR mission in which students explore historical locations, encounter local people, collect and analyze artifacts, and bring back evidence to construct an interpretation of what happened and why. The following case study is focused on the development and testing of Mission 1: King Street, 1770, an investigation of the Boston Massacre. Later missions will build upon this research to explore the Fugitive Slave Law, westward expansion, and turn-of-the-century immigrant communities.

To encourage the critical inquiry and problem-solving skills at the heart of inquiry-based learning, TimeSnap is animated by “missions,” questions that form the basis for the VR task and the lesson to follow. A simplified in-game mission (Find the causes of the Boston Massacre) keeps students focused on a single task during their time in the VR. The classroom lesson poses a more complex question (How did the Patriots and the British each explain the causes of the Boston Massacre?), to be answered using evidence collected in the VR in collaboration with classmates and with teacher support and guidance. For more advanced students, an optional DBQ challenges them to apply what they have learned to a new set of documents and interpret the larger significance of the Boston Massacre in American history.
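To make this two-tier structure concrete, the sketch below shows how a mission’s questions might be configured. This is illustrative Python with hypothetical names, not the production code (TimeSnap itself was built in Unity):

    from dataclasses import dataclass

    @dataclass
    class MissionConfig:
        # Simplified task that keeps students focused while in the headset.
        in_game_question: str
        # Richer question answered afterward with classmates and teacher support.
        classroom_question: str
        # Optional extension challenge for advanced students.
        dbq_prompt: str

    king_street = MissionConfig(
        in_game_question="Find the causes of the Boston Massacre.",
        classroom_question=(
            "How did the Patriots and the British each explain "
            "the causes of the Boston Massacre?"
        ),
        dbq_prompt=(
            "Apply what you have learned to a new set of documents and "
            "interpret the larger significance of the Boston Massacre."
        ),
    )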

Entering a three-dimensional virtual space allows players to feel physically immersed in a new world, but TimeSnap extends this opportunity for immersion by including worldbuilding. Students do not simply put on the VR headset and immediately see 1770: they enter a future society with its own fractured history before embarking on their mission. Students are deputized as agents of the Chronological Advanced Research Projects Agency (C.A.R.P.A.), a future government department. C.A.R.P.A. was founded to rebuild the world’s archives using the agency’s signature technology, a virtual form of time travel that replicates historic environments, artifacts, and organisms. C.A.R.P.A. created this technology to repopulate its digital collections and expand its understanding of the past. Agents search for objects and information to fill gaps in the historical record that have puzzled the agency’s scholars. In the King Street Mission, for example, C.A.R.P.A. is aware of the Boston Massacre but does not have the evidence necessary to explain why five civilians were shot by soldiers of their own government. Room by room, students uncover the clues necessary to explain the many factors contributing to the Massacre.

Overview of User Experience

In a three-minute tutorial, agents meet C.A.R.P.A.’s Director Wells, who will be their guide and model for historical thinking. Director Wells outfits agents with a TimeSnap device (the handheld VR controller) that enables time travel and other helpful powers. Wells gives agents a mission: to go back in time and verify or collect historical accounts in order to respond to the mission question (e.g., What caused the Boston Massacre?). Their mission begins with a key piece of evidence––a challenging text or visual primary source. Wells poses focus questions about the evidence and prompts the players to learn all they can by investigating historical figures, locations, and other documents and artifacts. In King Street, 1770, Wells leads agents through three rooms, which are carefully researched recreations of colonial historical settings. Players explore rooms to gather sources and contextual information and to collect and study additional primary documents.

VR Features

Teleport

Players use the Oculus pointer (or an equivalent) to navigate to, and through, rooms in the VR environment.

Audio Guide

Voiceover (VO) support, in the form of C.A.R.P.A.’s Director Wells, guides players through the space, assigns tasks connected to the lesson question, and models historical thinking skills, including sourcing and contextualization.

Scan, Mind Meld, and Analyze

Players use the pointer to click on people and objects in the VR environment, view hot spots providing background information, “hear” thoughts (Mind Melds), and zoom in for a closer look. This feature is the primary way that players interact with the VR rooms and items.

Close-up of Paul Revere’s “Landing of the Troops” print. In the center of the frame is a transcription of the cursive letters from a corner of the print. Text supports are highlighted in aqua. At the bottom of the frame, the text supports explain that His Majesty’s Secretary of State for America was “the official responsible for overseeing the American colonies.”
Figure 1. Transcription and text supports for Paul Revere’s “Landing of the Troops.”
In the center of the frame, a textbox contains the transcription of a Mind Meld with the tavern keeper. Below the text are two prompts labeled with an ear, inviting the student to listen further to one of the options.
Figure 2. Transcription of the tavern keeper’s branching Mind Meld.
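As a rough illustration of the mechanic just described (hypothetical Python with invented names, standing in for the shipped Unity code), a pointer click can be thought of as dispatching to one of three handlers:

    # Each person or object in a room is modeled here as a plain dict.
    def scan(item):
        return item.get("hotspot_info")   # background information hot spot

    def mind_meld(item):
        return item.get("thoughts")       # a character's "heard" thoughts

    def analyze(item):
        return item.get("closeup")        # zoomed-in view of a document or object

    HANDLERS = {"scan": scan, "mind_meld": mind_meld, "analyze": analyze}

    def on_pointer_click(item, mode):
        # Dispatch the player's pointer click to the selected interaction mode.
        handler = HANDLERS.get(mode)
        return handler(item) if handler else None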

Tableaux

Each room is divided into discrete scenes, known as tableaux. Each is a collection of objects and Mind Melds, typically providing interrelated information. Players must complete a minimum number of interactions with the items in a single tableau before they move on.
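The gating rule lends itself to a simple sketch. The following Python (hypothetical names; a stand-in for the actual Unity logic) captures the idea of requiring a minimum number of distinct interactions before a tableau unlocks:

    class Tableau:
        def __init__(self, name, items, min_interactions):
            self.name = name
            self.items = set(items)            # objects and Mind Melds in this scene
            self.min_interactions = min_interactions
            self.completed = set()             # items the player has interacted with

        def record_interaction(self, item):
            if item in self.items:
                self.completed.add(item)

        def is_complete(self):
            # Players may move on only after the minimum number of
            # distinct interactions within this tableau.
            return len(self.completed) >= self.min_interactions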

Field Notes

Players automatically collect field notes during their interactions with certain objects and people. Notes are sorted into pre-set categories as they are found. Players can track their progress unlocking categories and collecting notes when they return to the C.A.R.P.A. Lab. At the end of their mission, players are emailed copies of their notes.

In the left half of the frame, a text box indicates how many Field Notes the player has collected. The Field Notes are grouped by category (e.g., “The Aftermath,” “Taxation”).
Figure 3. Collected field notes displayed in the C.A.R.P.A. Lab.
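A minimal sketch of this note-collection logic, again in illustrative Python with assumed names rather than the production implementation:

    from collections import defaultdict

    class FieldNotebook:
        def __init__(self, categories):
            self.categories = set(categories)  # pre-set categories, e.g., "Taxation"
            self.notes = defaultdict(list)

        def collect(self, category, note):
            # Notes are filed automatically into their pre-set category as found.
            if category in self.categories:
                self.notes[category].append(note)

        def progress(self):
            # Displayed on return to the C.A.R.P.A. Lab: notes collected per category.
            return {c: len(self.notes[c]) for c in sorted(self.categories)}

At the end of the mission, the contents of such a notebook would simply be serialized and emailed to the player.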

Evidence Locker, Room, and Exit Questions

After they complete each room, players are asked to select one of three objects to return with them to the C.A.R.P.A. Lab. These objects are held in the Lab’s evidence locker for the duration of the mission. When they return to the C.A.R.P.A. Lab, players answer questions about the items they have selected from each room and about the conclusions they are drawing about the mission question.

In figure 4, holograms of three objects are projected on top of the final Revere Workshop tableau. The closed caption prompts players to choose the object that “best helps you understand the causes of the massacre.”
Figure 4. A room question in Revere’s Workshop.
In figure 5, a follow-up question about the Revere Workshop artifact is projected over the C.A.R.P.A. Lab scene.
Figure 5. The follow-up question from Revere’s Workshop in the C.A.R.P.A. Lab.
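In outline (a hypothetical Python sketch, not the game’s actual code), this end-of-room flow pairs a forced selection with reflection questions:

    class EvidenceLocker:
        def __init__(self):
            self.artifacts = {}                # room name -> chosen object

        def select(self, room, candidates, choice):
            # At the end of each room, the player keeps exactly one of three objects.
            if choice not in candidates:
                raise ValueError("choice must be one of the offered objects")
            self.artifacts[room] = choice

        def exit_questions(self, room):
            # Back in the C.A.R.P.A. Lab, players answer questions about their
            # selection and about the conclusions they are drawing.
            return [
                f"How does the {self.artifacts[room]} help you understand "
                "the causes of the massacre?",
                "What conclusions are you drawing about the mission question?",
            ]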

Virtual rooms

C.A.R.P.A. Lab

A large space with irregular grey and white walls. In the center of the field of view, a fragment of Paul Revere’s “Bloody Massacre” spins on a small holographic pedestal. The closed caption reads, “It’s a primary source from 1770 that another agent retrieved.”
Figure 6. A fragmentary source is analyzed in the C.A.R.P.A. Lab.

Players begin and end their mission in the C.A.R.P.A. Lab, a cavernous industrial space with an evidence locker for the artifacts students collect from the historic spaces.

  • Room Objective: Acclimate players to the VR environment, introduce them to the mission and Director Wells, and help students reflect on and consolidate information between VR rooms.
  • Number of Objects: 1
  • Number of Mind Melds: 0

Paul Revere’s Workshop

A workshop filled with the tools of a silversmith. In the left half of the frame, a man in colonial attire sits with his back to the viewer. In the center-right of the frame, the player’s pointer rests on a colorful print and declares “Landing Print.”
Figure 7. Paul Revere in his Workshop.
  • Room Objective: Discover the complete “Bloody Massacre” print and explore Paul Revere’s perspective on the Boston Massacre.
  • Tableaux: Revere’s Workbench, Revere in 1770, Revere in 1768
  • Number of Objects: 5
  • Number of Mind Melds: 2

Royal Exchange Tavern

A dim tavern interior with wooden tables and a large fireplace. Two men stand in the center-right foreground. A third sits in the left background. There is a wooden stick on the table behind one of the standing men.
Figure 8. Customers at the Royal Exchange Tavern.
  • Room Objective: Encounter conflicting perspectives and evidence on the Boston Massacre.
  • Tableaux: An Argument, The Tavernkeeper, An Editorial
  • Number of Objects: 4
  • Number of Mind Melds: 4

Boston Gaol

A narrow jail cell. In the center of the frame, a man sits in his shirtsleeves with his back to the viewer. To his left, the iconic red coat of a British soldier lies on a cramped metal bed. To his right, a player’s pointer lands on a piece of paper with the question “What is he writing?”
Figure 9. Captain Preston in Boston Gaol.
  • Room Objective: Hear Captain Preston’s account of the Massacre.
  • Tableaux: Preston Asleep, Preston Awake
  • Number of Objects: 4
  • Number of Mind Melds: 1

TimeSnap Lesson

The King Street, 1770 VR mission is followed by a classroom lesson that helps students apply the knowledge and skills presented in the VR mission. Teachers are asked to lead their students in a mission debrief discussion that helps them review and consolidate the information they encountered in the VR. Students are provided with a copy of their field notes, pre-sorted into relevant categories to support their inquiry into the causes of the Boston Massacre. Classroom activities and teacher-led discussions lead students to expand their inquiry into the Massacre, from naming and explaining its causes to a critical evaluation of the sources of their evidence. Ultimately, students are expected to use the historical thinking skills modeled by Wells and practiced in the lesson to analyze a new set of documents pertaining to the Boston Massacre and the American Revolution.

Testing and Evaluation

Over the course of its ongoing development, the usability, feasibility, and promise of efficacy of TimeSnap have been evaluated in numerous settings. The final version of TimeSnap, described above, has been substantially revised based on recommendations from two pilot studies (conducted in December 2017 and January 2019), but final testing is still in progress. The results of this summative study, including the extent to which learning was positively impacted by the VR experience, will be shared via the project website.

Initial Phase I pilot study

In December of 2017, a pilot study of the initial Mission 1 VR and lesson activities was conducted with two US history teachers and fifty-nine students in two public high school classrooms (a ninth-grade class in Queens, New York, and an eleventh-grade class in suburban New Jersey) to determine the project’s feasibility. Students in each class were randomly assigned to a treatment group or control group by their teachers. Prior to the beginning of the pilot, participating teachers and students in the treatment group were asked to complete a Student Immersive Tendencies Questionnaire. During the two-day classroom pilot, all students had the opportunity to engage in the TimeSnap VR experience and were also asked to read, annotate, and respond to questions about four primary source documents related to the Boston Massacre, though students in the control group were asked to complete their analysis of documents before engaging in the VR experience. Many students in the treatment group reported that the immersive nature of the VR experience heightened their engagement and focus during the lesson in addition to aiding their ability to visualize and recall important information about the historical context. Students also reported that they enjoyed having a personal, distraction-free learning space in which to explore and progress at their own pace. Both teachers were able to successfully incorporate TimeSnap into their regular instructional approaches and noted an interest in using VR with their students in the future. To ensure that this novel instructional experience would not adversely affect learning, the pilot study also included preliminary measures of efficacy. The treatment group showed slight, but not statistically significant, improvements in retention of historical facts. More importantly for the goals of the study, students and educators affirmed the potential for the game to impact students’ sourcing and contextualization skills.

Phase II formative research

In January 2019, a newer iteration of the TimeSnap: Mission 1 VR prototype and accompanying curriculum materials were tested with a group of five eleventh-grade students and one facilitating teacher at a public high school in lower Manhattan in New York City. The instructional session took place in an after-school setting over ninety minutes (designed to simulate a condensed version of two individual instructional periods) and was immediately followed by a thirty-minute group interview with all participating students and a forty-minute interview with the facilitating teacher later that week (see the Appendix for additional information). The small sample of student participants (n=5) allowed for in-depth analysis of students’ written responses to open-ended questions, which provided some insight into the nuances of their misconceptions and gaps in understanding.

Key Findings and Implications

All participating students exhibited a high degree of engagement in the VR and subsequent class discussions and collaborative writing activities. The two features of the VR experience students found most compelling were picking up and manipulating objects and Mind Melding with different historical figures. Ironically, though students were able to vividly recall objects they had “touched” in VR, they ultimately struggled to articulate how these interactions informed their understanding of the relevant 2D primary source documents. This suggested that future iterations of the prototype might benefit from attaching deeper, more meaningful content to these popular mechanics, in an effort to better engage and support students in making sense of difficult language and more relevant contextual details. At the same time, it remained important to consider what could get lost through such enhancements to the Mind Meld mechanic, insofar as this feature was intended to function as a support—not a replacement—for the heavy lifting work of document analysis.

All students demonstrated an appropriate degree of intuition about how to interact with key VR features, though most expressed a desire for more opportunities to “click around to figure it out yourself.” Nevertheless, and in spite of the degree to which they chose to engage with in-game scaffolds, all students exhibited difficulty recalling and articulating specific mission goals following the experience, and there were only minor differences in their performance on a six-question multiple-choice pre-/post-VR assessment. When asked to recall important information associated with each VR room, student responses primarily focused on people and objects, with a tendency to describe these elements broadly rather than explicitly referencing their significance to the mission question or historical context (e.g., “three men,” “tools,” “a bowl with writing on it”). Though students were able to work together to answer sourcing and contextualization questions about “The Case of Capt. Preston of the 29th Regiment,” they were less successful in building and supporting independent arguments related to the “Bloody Massacre” print, where they either interpreted the print as a photographic representation of historical events or failed to acknowledge the broader historical events that informed the creation of the document. Students’ failure to fully meet the lesson’s learning objective, coupled with their professed desire for additional agency and freedom to choose their own level of scaffolding, suggested a need to incorporate additional prompts and moments that inspire students to pause, reflect, and revise their initial impressions as the VR experience unfolds, rather than postponing such activities until students’ return to the “real world.”

Phase II Full Study

TimeSnap is currently undergoing final testing. In December 2019 and January 2020, the revised build of Mission 1 and accompanying instructional material were piloted in three “treatment” classrooms, while three “business-as-usual” classrooms completed a paper-based lesson on the same content and skills. The first build of Mission 2 will be tested at the same three sites at a date to be determined. Our research partners are evaluating TimeSnap on the following criteria:

  • Usability: Are students able to navigate the VR setting successfully and accomplish the goals of the lesson?
  • Feasibility: Is the teacher able to integrate the students’ experiences in VR with the associated classroom activities to achieve the learning objectives?
  • Fidelity of Implementation: What modifications does the teacher make to the lesson activities or curriculum materials and why?
  • Student Impact: As compared to peers in business-as-usual classes, do high school students who participate in TimeSnap lessons demonstrate greater gains in history content knowledge about topics in American history and in historical thinking skills? How do students relate to and experience history content in a VR-supplemented lesson?

Lessons, Revisions, and Conclusions

Testing has repeatedly shown that students find TimeSnap to be appealing and immersive, a welcome change in the way they approach course material. However, measurable change in students’ approach to historical thinking remains elusive. Since January 2019, our team has drawn on research findings and other insights from our partners at the Education Development Center, the American Social History Project, and other expert advisors in history pedagogy to revise and strengthen TimeSnap: Mission 1. We have taken steps to clarify the mission goal, expand the role of the in-game audio guide, and create space for reflection and synthesis. The Virtual Tour of Mission 1 included earlier in this article reflects those revisions to the design of the game. We believe that the simplified mission, enhanced support from Wells, and deliberately reflective room questions will produce meaningful learning opportunities. We launched Phase II testing in December 2019 in three New Jersey high schools. As of the submission of this article, those tests were ongoing. While we wait for the data and results, we have reflected on our process and identified three critical best practices for would-be developers. As you embark on your own VR production process, here are lessons to keep in mind.

Lesson #1: Plan for Classroom Realities

Bringing interactive technology into the classroom means designing for conditions of scarcity. Even in school districts that value technology in the classroom or experimental instructional design, there are limits on the amount of time and money departments can dedicate to VR. We knew it was essential to design a teaching tool that teachers would actually have the resources to implement. To keep TimeSnap teacher friendly, we have adhered to three core principles:

  • Short: Here, VR best practices align with classroom needs. Industry guidelines suggest that users should not exceed thirty minutes of continuous play, as they then become more likely to report symptoms of simulator sickness, such as nausea, disorientation, and eyestrain (Smith and Burd 2019). In our experience, most students reported little to no discomfort when adhering to these limits. Classroom time available for novel learning experiences is also limited, making the time constraints on VR a compatible limitation. The VR portion of the TimeSnap mission takes twenty to thirty minutes to complete, less than a standard class period.
  • Mobile: Mobile headsets, like the Oculus Go, are less expensive than room-scale VR systems and require far less setup. While mobile VR headsets do not offer the ability to walk or grasp objects in virtual space, they still provide an immersive experience without breaking a district’s technology budget. While newer mobile headsets like the Oculus Quest provide six degrees of freedom, consider that it is not very practical to have twenty-five students trying to walk around the actual classroom!
  • Flexible: Some teachers may have a week to explore the nuances of a single historical event, but most must move speedily through their curriculum. We have created lesson materials that teachers may select from or adapt to their own purposes, from a simple worksheet to guide students through field note analysis to a full DBQ. We also believe that our focus on teaching historical thinking skills (beyond the specific historical context) helps justify additional time spent.

Lesson #2: Less is Still More

Educators have looked to digital technologies to support student learning, including to help shoulder a student’s cognitive load while they wrestle with new or complex information. However, in a VR experience like TimeSnap, there is a risk that the very supports meant to be helpful will inundate students with new information without giving them time or a mechanism to process it. Earlier iterations of TimeSnap provided more detail and allowed students more freedom to explore each room at the cost of their comprehension. This is why, despite the fact that some players have requested more interactions and greater freedom of movement, we embarked on a program of simplification ahead of our Phase II testing.

  • Clear Mission: It is critical for students to understand their primary purpose while in VR. We found that a secondary mission taxed students’ cognitive resources without adding to their interest or learning. We pared down the in-game mission question, saving the subtleties of sourcing for the classroom exercise.
  • Audio Guidance: Use a narrator or guide to support students as they navigate virtual space. A guide can do more than just give orders or answer basic questions: she can shape the way students think about the information they encounter. We expanded the role of our in-game audio guide, Director Wells; in addition to her existing function answering hotspot questions, Wells will prime students for a focus task within a room (“I wonder if these people would agree with Revere’s version of events…”) and act as an external memory (“This must be the same print we saw…”).
  • Structured Discovery: Be wary of calls for free exploration of the VR landscape. Some creators understandably believe that unstructured experiences that allow students to move and explore at will must generate high user engagement. More freedom, however, often creates instructional and logistical problems. Students who are free to explore are also free to miss essential information, and the ability to transition back and forth between rooms makes the VR experience longer and increasingly uncomfortable. We introduced the tableaux system, curtailing players’ ability to move between sections of a room and complete them in their preferred order. This allows us to control the flow of information to students; we feed them information in an order that makes sense. This has the added benefit of reducing the need to script complex conditional answers based on what a student has or has not encountered yet.
  • Repetition: The primary affordance of VR—immersion in a new and exciting virtual space—can be distracting. With so much to look at and absorb, students can easily miss key details unless they are exposed to them multiple times. We threw off our fear of repetition and began recycling key phrases and ideas. The language of the mission question and the various potential causes resurface again and again in the script. Repeated exposure to these ideas, some of them quite unfamiliar, gives students a chance to recognize that this information might be worth hanging on to.

Lesson #3: Cultivating Curiosity

Historical VR experiences are often framed as time travel, where learners can visit the past in the same way they might visit Paris. But what kind of tourists will they be? Sometimes, exploring a nominally “interactive” virtual environment is downright passive. To ensure that students are pursuing and synthesizing information, not just hitting “next,” our production team deliberately constructed a mission that students could get excited to complete. Our developer team designed game mechanics that motivate students to explore widely and make meaningful choices, even within the constraints set by cognitive load. These Phase II revisions reflect recommendations from Phase I and interim testing to encourage more student reflection and synthesis within the VR.

  • Tools for Problem-Solving: Make gameplay captivating by presenting a problem and equipping students with the tools to solve it. By setting a mission goal and creating opportunities for interaction and meaningful choice, TimeSnap presents a compelling problem space for students to navigate. In the C.A.R.P.A. Lab, Director Wells assigns the student a task and models the “hotspot” method they will use to extract information from the objects they encounter on their mission.
  • Meaningful Choice: Prevent students from passively clicking through interactives by prompting them to make decisions. Mind Melds and Room Questions provide TimeSnap’s primary opportunities for meaningful choice. Unlike documents, Mind Melds offer branching choices. When a student selects a follow-up option in a Mind Meld, they cannot return to listen to the other option later (a minimal sketch of this foreclosure logic follows this list). This encourages students to select the most interesting or relevant information and can lead to variation in student experience and field note collection. At the end of each room, students are prompted to select a significant artifact to return to C.A.R.P.A. While this ultimately does not affect the outcome of the mission or the information in their field notes, students must use their judgment in choosing the artifact they believe is most relevant to the mission.
  • Rewards: Use in-game reward systems to encourage learning behaviors and help students monitor their improvement or progress. We developed specific game mechanics meant to motivate users to actively explore their environment. For example, students can see how many field notes they have collected on return to the C.A.R.P.A. Lab. This in-game feedback informs students that they are making progress toward their goal. However, equally motivating is the thrill and wonder of “hands”-on discovery. Phase I research and interim testing indicated that students were most excited to “touch” virtual objects (rather than to read virtual documents) and enjoyed discovering “hidden” items. These encounters drive them to keep interacting with the virtual environment in search of new secrets to uncover.
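The foreclosure rule referenced above is easy to express. Here is a minimal sketch (hypothetical Python, illustrating the design rather than reproducing the Unity implementation) of a branching Mind Meld in which choosing one follow-up locks out the other:

    class MindMeld:
        def __init__(self, opening, options):
            self.opening = opening        # the character's initial thought
            self.options = dict(options)  # follow-up label -> further thoughts
            self.chosen = None

        def choose(self, label):
            # Once a follow-up is chosen, the other branch stays locked:
            # students must judge which option is most relevant.
            if self.chosen is not None:
                raise RuntimeError("A follow-up was already chosen.")
            self.chosen = label
            return self.options[label]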

Mission 1: King Street, 1770 received a final round of classroom testing in December 2019 and January 2020; Mission 2 is currently in production and will begin testing, conditions permitting, in Fall 2020. We are eager to see how this current iteration of Mission 1 can produce measurable improvements in student knowledge acquisition[2] and historical thinking, and we will carry forward any new insights we glean from these tests into Mission 2 and beyond. Grappling with the technical and cognitive challenges of VR in the classroom has been a productive process; each frustration forced us to adapt and innovate and, ultimately, to create a better product.

Though it is hardly still “early days” for VR, this technology remains underutilized in education because of significant logistical impediments, and our work to mitigate these obstacles is one part of a long process to make VR a practical and effective pedagogical tool. Including educator voices is an essential component of that long-term mission and one that developers would do well to prioritize. Our developer team is perhaps uniquely well positioned to partner with educators: after a decade of interdisciplinary collaboration on the Mission US game series, Electric Funstuff has built a robust network of educational researchers, curriculum specialists, and classroom instructors. Even with our considerable experience designing and developing educational games, we actively solicited insight and guidance from these partners. Developers best understand the technical possibilities afforded by new and evolving technologies, but only educators can point us to the areas of greatest need in their classrooms. Seeking a balance between freedom and structure, and between depth of content and the limits of cognitive load, we will continue iterating toward a compelling educational instrument in which even the laws of physics are no barrier to historical learning.

Notes

[1] Mission US: TimeSnap was developed by Electric Funstuff in partnership with the Education Development Center and the American Social History Project/Center for Media and Learning at the CUNY Graduate Center. The authors would like to further acknowledge Dr. William Tally, who kindly reviewed drafts of this article and provided invaluable feedback, along with Dr. James Diamond, James Hung, Valentine Burr, Pennee Bender, Donna Thompson, Joshua Brown, Michelle Chen, Jill Peters, Dale Gordon, Benjamin Galynker, Robert Duncan, Caitlin Burns, and Peter Wood, each of whom has contributed their talents and expertise to this project.

[2] While pandemic control measures have indefinitely delayed our test of the second TimeSnap mission, we are excited to share preliminary data from the Mission 1 tests conducted from December through January. Independent-sample t-tests were conducted to compare treatment and comparison students’ change from pre- to post-assessment on a historical thinking subscale (possible range 1 to 6) and a historical knowledge subscale (possible range 0 to 1). Treatment students showed significantly greater pre- to post-assessment change on the historical knowledge subscale (M = .19) than the comparison students (M = .04) (t = -2.7, p = .007; Cohen’s d = .54, indicating a medium effect size). Further analysis is ongoing. A full report will be made available on the project website when it is complete.
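For reference, the reported effect size follows the conventional Cohen’s d for two independent groups (standard notation supplied here; the note above reports only the resulting values):

    d = \frac{\bar{x}_T - \bar{x}_C}{s_p},
    \qquad
    s_p = \sqrt{\frac{(n_T - 1)\, s_T^2 + (n_C - 1)\, s_C^2}{n_T + n_C - 2}}

where \bar{x}_T and \bar{x}_C are the mean pre- to post-assessment changes for the treatment and comparison groups, s_T and s_C the corresponding standard deviations, and n_T and n_C the sample sizes. Values of d near .5 are conventionally read as medium effects.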

Bibliography

American Social History Project. 2019. “Home.” Accessed December 12, 2019. https://ashp.cuny.edu/.

Chiang, Tosti, Stephen J. H. Yang, and Gwo-Jen Hwang. 2014. “An Augmented Reality-based Mobile Learning System to Improve Students’ Learning Achievements and Motivations in Natural Science Inquiry Activities.” Educational Technology & Society 17, no. 4: 352–365.

Cornell, Saul, and Diane Dagefoerde. 1996. “Multimedia Presentations: Lecturing in the Age of MTV.” Perspectives on History (January). https://www.historians.org/publications-and-directories/perspectives-on-history/january-1996/multimedia-presentations-lecturing-in-the-age-of-mtv

Dede, Christopher J. 1992. “The Future of Multimedia: Bridging to Virtual Worlds.” Educational Technology 32, no. 5: 54–60. www.jstor.org/stable/44425648.

Evans, Charles T., and Robert Brown. 1998. “Teaching the History Survey Course Using Multimedia Techniques.” Perspectives on History (February). https://www.historians.org/publications-and-directories/perspectives-on-history/february-1998/teaching-the-history-survey-course-using-multimedia-techniques

Glenberg, Arthur M. 1997. “What Memory Is For.” Behavioral and Brain Sciences 20, no. 1 (March): 1–19.

Ijaz, Kiran, Anton Bogdanovych, and Tomas Trescak. 2017. “Virtual Worlds vs Books and Videos in History Education.” Interactive Learning Environments 25, no. 7: 904–929.

Kolb, David A. 2014. Experiential Learning: Experience as the Source of Learning and Development, 2nd ed. Upper Saddle River, New Jersey: Pearson Education, Inc.

Kontra, Carly, Susan Goldin-Meadow, and Sian L. Beilock. 2012. “Embodied Learning Across the Life Span.” Topics in Cognitive Science 4, no. 4: 731–739.

Lakoff, George, and Mark Johnson. 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books.

Love, Mary Susan. 2004. “Multimodality of Learning Through Anchored Instruction.” Journal of Adolescent & Adult Literacy 48, no. 4: 300–10. www.jstor.org/stable/40016917.

National Center for History in the Schools. 2019. “Introduction to Standards in Historical Thinking.” Accessed December 12, 2019. https://phi.history.ucla.edu/nchs/historical-thinking-standards/

Osgood-Campbell, E. 2015. “Investigating the Educational Implications of Embodied Cognition: A Model Interdisciplinary Inquiry in Mind, Brain, and Education Curricula.” Mind, Brain, and Education 9, no. 1: 3–9. https://doi.org/10.1111/mbe.12063

Parsons, Thomas D., Christopher G. Courtney, Michael E. Dawson, Albert A. Rizzo, and Brian J. Arizmendi. 2013. “Visuospatial Processing and Learning Effects in Virtual Reality Based Mental Rotation and Navigational Tasks.” In International Conference on Engineering Psychology and Cognitive Ergonomics: Understanding Human Cognition, EPCE 2013, Lecture Notes in Computer Science, vol. 8019. https://doi.org/10.1007/978-3-642-39360-0_9.

Repetto, Claudia, Silvia Serino, Manuela Macedonia, and Giuseppe Riva. 2016. “Virtual Reality as an Embodied Tool to Enhance Episodic Memory in Elderly.” Frontiers in Psychology 7: 1839.

Roy Rosenzweig Center for History News and Media. n.d. “Mission.” Accessed December 12, 2019. https://rrchnm.org/our-story/

Saye, John W., and Thomas Brush. 2006. “Comparing Teachers’ Strategies for Supporting Student Inquiry in a Problem-Based Multimedia-Enhanced History Unit.” Theory and Research in Social Education 34, no. 2 (Spring): 183–212.

Schubert, Thomas, Frank Friedmann, and Holger Regenbrecht. 2001. “The Experience of Presence: Factor Analytic Insights.” Presence: Teleoperators and Virtual Environments 10, no. 3 (June): 266–81.

Smith, Shamus P., and Elizabeth L. Burd. 2019. “Response Activation and Inhibition After Exposure to Virtual Reality.” Array 3–4: 100010. https://doi.org/10.1016/j.array.2019.100010.

Stanford History Education Group. n.d. “History Assessments.” Accessed December 12, 2019. https://sheg.stanford.edu/history-assessments

Voet, Michiel, and Bram De Wever. 2017. “Preparing Pre-service History Teachers for Organizing Inquiry-Based Learning: The Effects of an Introductory Training Program.” Teaching and Teacher Education 63: 206–17.

Wiggins, Grant. 2015. “Why Do So Many HS History Teachers Lecture So Much?” Granted, and… (blog). April 24, 2015. https://grantwiggins.wordpress.com/2015/04/24/why-do-so-many-hs-history-teachers-lecture-so-much/

Yilmaz, Kaya. 2011. “The Cognitive Perspective on Learning: Its Theoretical Underpinnings and Implications for Classroom Practices.” The Clearing House: A Journal of Educational Strategies, Issues and Ideas 84, no. 5: 204–12.

Appendix: January 2019 Testing Session Overview

The testing session consisted of the following research activities (approximate durations noted):
2 min. The research team shared a brief introduction to study activities and objectives.
5 min. Students completed an online 10-question pre-assessment, which included six multiple-choice and four open-ended questions.
5 min. The teacher introduced the lesson with a provided script and by sharing a player onboarding video.
15–22 min. Students engaged with the VR experience.
10 min. Students completed an online Simulator Sickness Questionnaire, Self-Assessment Manikin survey, and a post-assessment that was identical to the pre-assessment taken earlier.
3 min. Students filled in a “memory map” graphic organizer in which they were asked to record all the relevant information they remembered from each VR room visited.
5–10 min. The teacher then facilitated a class discussion in response to the following three prompts:

  • Whose account of the massacre was more believable to you?
  • Did witnesses agree that Preston gave the order to fire? Can this fact be corroborated? Or is it contested?
  • Do you think Preston should have been found guilty?
15 min. Students were grouped according to whether or not they thought Preston was guilty, with two students in the “guilty” group, and three students in the “not guilty” group. In these groups they:

  • Discussed and responded to two sourcing questions in reference to Captain Preston.
  • Read and discussed an excerpt from “The Case of Capt. Preston of the 29th Regiment.”
  • Discussed and responded to two contextualization questions and two close reading questions.
5 min. Individually, students assessed the trustworthiness of Paul Revere’s Bloody Massacre print by circling details in the document and briefly describing their significance.
5 min. The teacher led a discussion by sharing John Adams’ role in the trial and connecting the case to the concept of “presumption of innocence.”
25 min. An Education Development Center (EDC) researcher led a debrief interview with students with support from other members of the project team.
40 min. An EDC researcher and the EFS PI interviewed the facilitating teacher at a later date.

About the Authors

Alison Burke is an instructional designer and writer at Electric Funstuff. She leads the research and writing for Mission US: TimeSnap. An educator and public historian, she creates meaningful and accessible encounters with the past for audiences of all ages. She holds an MA in Public History from New York University.

Elana Blinder is the curriculum director at The League of Young Inventors, an interdisciplinary STEAM + Social Studies program for students in grades K–5. In her previous role as a design researcher at The Center for Children and Technology | EDC, she conducted formative and summative research to support the ongoing development of Mission US: TimeSnap and a variety of other educational media products.

Leah Potter is a senior instructional designer and writer at Electric Funstuff. She is also co-founder and president of Hats & Ladders Inc., a social impact organization dedicated to helping all youth become more confident and better-informed career thinkers.

David Langendoen is the President and lead game designer of Electric Funstuff, an NYC educational game studio, makers of Mission US––the critically acclaimed history learning games, produced by WNET, with over 2 million users.

A hand holds a digital tablet over a page of text with a decorative border, while the tablet's screen displays a 3D model of a cathedral.

Truly Immersive Worlds? The Pedagogical Implications of Extended Reality

Abstract

This article provides an overview of the extended reality applications virtual reality (VR) and augmented reality (AR) and examines the affordances and constraints of each with regards to their application in the humanities. The interactive nature of these extended realities engages their audiences in new and compelling ways. VR and AR applications have moved beyond gaming and are proving particularly effective and engaging for historic recreations. However, these technologies also present new challenges, precisely because they create immersive worlds so captivating that these environments may be perceived as “real” rather than as simulacra, especially by students and the general public. Using both VR and AR projects based in medieval Europe (Bologna 3D Open Repository and 3D Paris Saga) as case histories, we discuss some of the issues that these technologies pose to their creators and to their consumers—from how they might be used to make a heritage site more meaningful, to how they pose dangers of an excess of verisimilitude. As these technologies become more ubiquitous in academic settings, these early ventures into extended realities highlight some perhaps hitherto unconsidered pitfalls as well as demonstrate the promise that these new technologies offer in terms of pedagogy and community outreach.

Introduction

In the summer of 2016, the world was introduced to the emerging technology of augmented reality (AR) in the form of Pokémon Go, a location-based, AR-enhanced game that became one of the most popular mobile apps of the year. Many people were already familiar with virtual reality (VR), “a medium composed of interactive computer simulations that sense the participant’s position and actions and replace or augment the feedback to one or more senses, giving the feeling of being mentally immersed in the simulation (a virtual world)” (Sherman and Craig 2003, 13). As a popular gaming environment, VR has four key elements: it is a virtual space for the participant; it is immersive on both a physical and mental level for the participant; it provides sensory feedback directly to the participant; and it is interactive, responding to the participant’s actions (Sherman and Craig 2003, 6–11).[1] VR, in its most effective form, requires the user to be isolated from a conscious awareness of the real world by some sort of head-mounted display, such as Oculus Rift, Microsoft HoloLens, or HTC Vive. Alternatively, the user can experience VR in an enclosed, projection-based or flat-monitor-based environment, such as a CAVE.[2] Typically, the experience must be held in a static, controlled space; otherwise, the user might collide with real-world objects in the effort to participate fully in the virtual world. And, for many individuals, the VR experience results in motion sickness, sometimes known as VR sickness or cybersickness.[3] In contrast, AR is a medium in which digital information is overlaid on the physical world, in both spatial and temporal registration (i.e., alignment) with that world, and is interactive in real time (Craig 2013, 36). Consequently, AR is much more accessible because the required equipment, usually a smart device (iPad, iPhone, Android tablet, or Android phone), is minimal. The fact that the user remains cognizant of the real world around them while using the technology reduces the possibility of motion sickness and does not typically limit the user to a static, controlled space for the experience.

Both technologies have applications beyond gaming and are proving particularly effective and engaging for historic recreations. Such recreations can have a significant impact on learning, for they engage viewers—both the general public and students—in an immersive educational experience. Many of these viewers may never visit the actual historic site in their lifetime, so accuracy is important. Consequently, we need to keep in mind that a 3D digital model is a re-creation and not the real place. And as we move forward with VR and AR, we must give serious consideration to the goals we need or wish the technologies to meet, particularly with respect to pedagogy. At present, VR and AR are very successful in engaging audiences for both entertainment and educational purposes:

The increasing development of VR technologies, interfaces, interaction techniques and devices has greatly improved the efficacy and usability of VR, providing more natural and obvious modes of interaction and motivational elements. This has helped institutions of informal education, such as museums, media research, and cultural centers to embrace virtual technologies and support their transition from the research laboratory to the public realm. (Rousso 2002, 93)

For the user visiting a virtual heritage site, the experience can be highly engaging and educational as long as expert guidance is provided. VR and AR cannot substitute for pedagogical instruction. It is not so much that the user must be reminded that the virtualization is not real; rather, supporting documentation must be easily accessible within the virtual world to help the participant understand the meaning and significance of the 3D models they encounter. And content builders must take an interdisciplinary, if not transdisciplinary, approach to the creation of the 3D models and their VR- or AR-enhanced worlds if the learning experience of the participant is to be significant and valuable.

These technologies promise not only to engage students in the history itself, but also to invite them to consider how the work of history is done. As scholars and experts, we require the 3D models and their environments to be historically accurate, but that accuracy is necessarily limited. All models are inevitably interpretations of available evidence, and making that process more transparent to the student leads to a better understanding not only of the subject matter but of the process as well. As Willard McCarty has noted,

The best model [e.g., digital humanities tool] of something, that is, comes as close as possible to what we think we know about the thing in question yet fails to duplicate perfectly that knowledge. Failure of the model in an engineering sense is its success as an epistemological instrument of research, because skillfully engineered failure shows us where we are ignorant. (McCarty 2003, 1232)

Failing to create a perfect 3D model of an object in terms of historical authenticity is to be expected, and even appreciated, for what that failure can teach us, not just about the technology but about the limits of our knowledge of the object’s historic accuracy. As teaching tools, VR and AR force the historical experts, as content creators, and their students, as content consumers, to think very carefully and intentionally about the recreation. For example, precise verisimilitude of a medieval English village could only be achieved by travelling back in time to the Middle Ages to conduct the kind of fieldwork envisioned by Connie Willis in her 1992 science fiction novel Doomsday Book—an unlikely prospect by anyone’s standards.[4] However, it is important that we think beyond what VR and AR can do today. Even if we fail to achieve what we want the technology to do, we will learn from our mistakes and, in so doing, improve both the technology and our students’ understanding of the historical method.

Historical Accuracy: A Theoretical Approach

Virtual constructions of historical objects and architecture raise very real concerns about verisimilitude. To what extent are such 3D models accurate representations of the original? In many ways, VR serves to validate Jean Baudrillard’s understanding of simulacra and concerns about the hyperreal. In Simulacra and Simulation, he argues that the loss of distinction between reality and its representation results in the hyperreal—a world “without origin or reality” (Baudrillard 1994, 1–7). It is pure simulation and, as a result, creates an anxiety of origin and authenticity. Virtual worlds, including those associated with VR, can evoke an apprehension about the hyperreal, especially if the 3D model is used to substitute for the original. The current interest among computer graphics experts and enthusiasts in the creation and redistribution of virtual historic sites illustrates the problem. “Archaeological illustration and reconstruction is not new,” as Clifford L. Ogleby notes,

but the advent of high-speed affordable computers and the associated graphics capability gives people the opportunity to create better looking imagery. The imagery, however, is often the result of the technology, not archaeological or historical research. When this imagery is distributed without the accompanying research that explains the decisions made in the reconstruction, it is open to a variety [of] interpretations. This problem is compounded when the imagery is posted on the [world wide web], as the image can be extracted from the surrounding text and interpreted as an artifact rather than as a diagram. (Ogleby 2007)

Ogleby demonstrates this issue using easily obtainable images from the web that purport to portray accurate reconstructions (some computer generated) of the mausoleum at the ancient Greek city of Halicarnassus.[5] The images are imprecise and even erroneous, yet they are accepted by the general public as real: “Many people will tend to ‘see’ a photo-like image to be more like a photograph, and therefore a record of a real place in time” (Ogleby 2007). Not surprisingly, these online images almost always fail to include provenance, authorship, and veracity—information that would help the viewer to determine the authenticity of each 3D model and would serve as a reminder that the image being viewed is just that, an image, and not the original. The problem is only exacerbated when these models are incorporated into a virtual environment such as Google Earth or Second Life (Ogleby 2007).[6] These immersive and interactive worlds can encourage the non-expert user, such as a student, to accept the computer-generated model as a faithful recreation of the original.

Nevertheless, we should not be dissuaded from using the technology for pedagogical purposes both in the classroom and the community at large. Pierre Lévy argues convincingly against viewing the virtual as simply unreal: “The virtual, strictly defined, has little relationship to that which is false, illusory, or imaginary. The virtual is by no means the opposite of the real. On the contrary, it is a fecund and powerful mode of being that expands the process of creation, opens up the future, injects a core of meaning beneath the platitude of immediate physical presence” (Lévy 1998, 16). It is an actualization rather than a realization, one that involves “the production of new qualities, a transformation of ideas, a true becoming which nourishes the virtual in a feedback process” (Lévy 1998, 15).[7] The virtual and the real are not binary opposites. Rather, they exist on a continuum that supports a complete range of realness from the fully real to the fully virtual. Such a reality-virtuality continuum was first proposed by Paul Milgram and his colleagues. They suggest that everything in between is a mixture of reality and virtuality, including AR in which the real world is augmented by virtual enhancements and AV (augmented virtuality) in which the virtual world is augmented by the real (Milgram et al. 1995, 282–92).[8] The more obviously artificial nature of AR/VR visualizations may be used in a classroom setting to illustrate the sorts of choices that historians make in any evaluation/representation of historical data. What becomes important is not the degree of artificiality but rather the transparency of the method. Just as the creator of the virtual representation must make choices about how “real” to make their visualization (what to include and exclude), so the historian makes choices regarding what data to include and how that data is represented. The artificiality of extended reality technologies thus opens the door to conversations about not only the material being studied, but also the means by which it is studied.

The appeal of VR and AR is not new. Humanity has long been fascinated with creating virtual experiences of reality. In the nineteenth and early twentieth centuries, panoramic paintings became particularly popular, including the development of 360º murals that were intended to fill the entire field of vision and make the viewer feel as if he or she were in the virtual world depicted by the paintings (Thompson 2015).[9] The nineteenth century also saw the development of the stereoscopic[10] viewer and images, precursors to the View-Master and, more recently, Google Cardboard (Virtual Reality Society 2016). Experimentation in film also contributed to the development of the technology, particularly the widescreen camera lens. French filmmaker Abel Gance introduced “polyvision,” a specialized widescreen film format that involved the simultaneous projection of three reels of film in a lateral montage, in his 1927 silent epic Napoléon (Cuff 2015, 24). Polyvision, as well as the later development of CinemaScope and Panavision using widescreen lenses, gave the audience a panoramic and, subsequently, more immersive film experience. It was not until 1929 and the development of the flight simulator (Virtual Reality Society 2016) that a virtual environment was designed for teaching rather than for entertainment purposes. This focus on the pedagogical potential of virtual environments has become even more important today as VR and AR evolve from game platforms to teaching tools.

Both technologies exemplify the concerns faced by experts building virtual heritage sites.[11] For historians, archaeologists, and other scholars, the photorealism of the 3D models is the primary goal. In general, there are ten principles of 3D photorealism: clutter and chaos; personality and expectations; believability; surface texture; specularity; aging, dirt, rust, and rot; flaws, tears, and cracks; rounded edges; object material depth; and radiosity (light reflections off diffused surfaces) (Fleming 1998, 3). To achieve photorealism, the computer-generated object should demonstrate at least seven of these ten principles (Fleming 1998, 3–4). The virtual world should not be pristine and unblemished because reality is messy and dirty. This concern for photorealism does not, however, apply in the same way to human 3D models. In fact, few virtual heritage reconstructions include human figures and for good reason. Firstly, creating realistic human models is time consuming and expensive since it requires a digital artist with considerable skill in drawing and modelling figures from life. Architectural and cultural artifacts are usually less difficult to build as 3D models. Secondly, living models, unlike objects, are expected to move in some way. Animation adds a complex layer of technology that is usually not the primary focus of the recreated physical environment. Thirdly, and most importantly from a pedagogical point of view, human 3D models can complicate the virtual experience by encouraging the user to try to interact with them rather than focus on the physical reconstruction of the heritage site. Finally, there is the consideration of how exactly “real” such human figures should be. The more realistic the 3D model of the living figure, the more likely that it will become an example of the uncanny valley phenomenon described in social robotics: that is, the 3D model will be almost too real so that the minor imperfections of the recreation become disturbing and even repulsive.[12] Thus, a caricature of a human figure may be more appealing and effective than a truly realistic and complex representation in VR or AR.

Two Historic Recreations: Modelling Challenges

Bologna 3D Open Repository is the result of a collaborative project between the municipality of the city of Bologna and CINECA Interuniversity Consortium, an academic supercomputing group that offers technological support to education, business, and the community. The project’s primary goal was to build 3D models for the creation of a virtual Bologna that the municipality could use to promote the candidacy of the city’s historic porticoes, or arcades, as a UNESCO World Heritage Site. The repository is now maintained as a site dedicated to the collection and sharing of the 3D models for didactic purposes—namely teaching students about the city and its history. Figures 1 through 3 show some of the 3D models created by the consortium:

View of 3D model of the Portico of San Luca in Bologna

Figure 1. Portico of San Luca.

Aerial view of 3D model of the hilly landscape south of Bologna

Figure 2. Hilly landscape south of the city.

3D model of the medieval character, Apa, leaning against a desk with an open book on it

Figure 3. Scene of a medieval university lecture.

Through these visualizations, students can learn about the architectural history of Bologna from the medieval period through to the eighteenth century. The computer graphics are high quality and demonstrate a number of the principles of digital photorealism. In particular, the architecture and landscapes exhibit great attention to detail and authenticity. The project includes human figures, a feature not typical of most historic recreations, and these figures are generally caricatures rather than realistic representations of people. Certainly, such a use of humor in a virtual historic re-creation emphasizes the project’s desire to appeal to a broad, public audience (Guidazzoli, Liguori, and Felicori 2013, 58–65).[13] And the less-than-realistic style of the human figures avoids the potential issue of the uncanny valley.

Like the Bologna 3D Open Repository, the 3D Paris Saga project uses AR and VR to tell the narrative of the architectural history of Paris. Its approach, however, differed considerably. Dassault Systèmes, a European software company that specializes in 3D design, built a complex virtual world that traces the history of the city through almost 2,000 years with a special focus on a 3D reconstruction and interactive experience of the fourteenth-century Palais de la Cité and the Sainte-Chapelle (“Voici” 2015). The project originally included a 90-minute television documentary, a CAVE experience of the virtual world using 3D glasses (Vitaliev 2013), a PC-compatible interactive 3D website, and an AR-enhanced print book (Dassault Systèmes 2012). The visual accuracy and detail of the 3D architecture, topography, and atmosphere enrich the photorealism of the virtual world (see Figure 4). The fact that familiar monuments are shown in various stages of construction transforms the virtual experience into a deeper educational one. Considerable attention is also given to the appearance of the skies, reflecting typical Parisian weather rather than an idealized and eternal perfect sunny day (see Figure 5). Again, the 3D human models that inhabit the virtual city are not a common feature of such historic recreations. They are merely shadowy figures, reminders that Paris was always inhabited; because the figures are so ethereal, however, they avoid the uncanny valley phenomenon and encourage the viewer to explore the historic constructions rather than try to interact with the animated models themselves.

Aerial view of 3D model of the Grande Cour and Trésor de Chartres in Paris

Figure 4. View of the Grande Cour and Trésor de Chartres with shadowed human figures in the courtyard (Dassault Systèmes).

Despite its initial success, the VR element of the project is no longer easily accessible: the CAVE environment is only available at Dassault’s Paris headquarters by appointment to select visitors.

View of 3D model of the rose window on the west façade of Notre Dame Cathedral in Paris

Figure 5. View of the rose window on the west facade (Dassault Systèmes).

Virtual reconstructions such as these help students understand cultures, histories, and artifacts that are physically, temporally, or culturally distant. While it may be difficult for American students to visit Notre Dame, extended realities can help them experience it in a way that more traditional media cannot.[14]

The AR-Enhanced Text

The most successful component of the 3D Paris Saga has been the AR-enhanced companion print book published by Flammarion. Whereas current AR technology uses a mobile application on a smart device to trigger the digital enhancements embedded in the printed page, Dassault requires the user to hold select pages from the print volume up to the web camera on a PC.[15] Like a virtual pop-up book, the 3D models appear on the page as viewed through the computer screen (see Figure 6).

AR-enhanced book opened to show the 3D model of Paris emerging from the printed page

Figure 6. AR-enhanced print text (Dassault Systèmes).

The user may turn the book in order to see all sides of the 3D model, thereby gaining a greater appreciation of Parisian architecture throughout history, including the Middle Ages. However, interacting with the book and the technology is awkward and lacks the mobility that a smart device offers. It also runs counter to the standard reading process, since the user holds the book but looks away from it at the computer screen.

AR-enhanced texts are not new. Mark Billinghurst and his team at HitlabNZ (the Human Interface Technology Lab at the University of Canterbury, New Zealand) created some of the first examples in the early 2000s. Called “MagicBooks,” the texts are designed to encourage children to read:

The computer interface has become invisible and the user can interact with graphical content as easily as reading a book. This is because the MagicBook interface metaphors are consistent with the form of the physical objects used. Turning a book page to change virtual scenes is as natural as rotating the page to see a different side of the virtual models. Holding up the AR display to the face to see an enhanced view is similar to using reading glasses or a magnifying lens. Rather than using a mouse and keyboard based interface users manipulate virtual models using real physical objects and natural motions. Although the graphical content is not real, it looks and behaves like a real object, increasing ease of use. (Billinghurst, Kato, and Poupyrev 2001, 747)

Although early forms of AR used abstract, specifically designed images (often QR codes) to trigger enhancements, the technology has advanced to the point that any complex, informationally dense image may serve as a fiducial marker. The use of mobile apps and smart devices makes interaction with the text easy and intuitive.
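
To make the trigger mechanism concrete, here is a minimal Python sketch of fiducial-marker detection using the open-source OpenCV library’s ArUco module (API as of OpenCV 4.7). It is offered only as an illustration of the general technique, not as the pipeline behind any project discussed in this article, and the image file names are hypothetical.

    import cv2

    # Load a predefined dictionary of abstract, high-contrast fiducial
    # patterns, similar in spirit to early AR trigger images.
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    frame = cv2.imread("camera_frame.png")  # hypothetical camera frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect marker corners and IDs; an AR application maps each ID to a
    # virtual enhancement and uses the corner geometry to keep the overlay
    # in spatial registration with the printed page.
    corners, ids, _rejected = detector.detectMarkers(gray)

    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        cv2.imwrite("annotated_frame.png", frame)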

A new wave of AR technology seems to be driven by the increased capability and ubiquity of our mobile devices. Jordan Frith notes that early theories about the internet hypothesized that humanity (or at least the portion of it that could afford computers) would become more isolated and private, living their lives at home and, we assume, spending their time (and money) ordering from Amazon (Frith 2012, 136). Mobile computing has diverted us from this possible future. Instead, we are bringing our private lives into public spaces, attempting to control these spaces through our AirPods or earbuds, our Google Maps, and Foursquare—all the while curating our experience of the urban environment on social media.

It is to this mobile landscape that AR brings such promise. AR’s ability to overlay the physical world with digital information offers a new kind of experience and understanding of our world. Victoria Szabo argues that AR may be used to make sites of cultural history more meaningful to their visitors through the layering of digital information over the physical space. As she explains, “Mobile AR systems have the potential to help users create situated knowledge by bringing scholarly interpretation and archival resources in dialog with the lived experience of a space or object” (Szabo 2018, 373). In so doing, she argues, visitors move from comprehension of the site, which entails historical distance and critical interpretation—in other words, the mode of traditional educational materials that might guide visitors through the site—to apprehension. Apprehension is more experiential learning and “relies on the tangible and felt qualities of the immediate experiences” (Martin 2017, 837; quoted in Szabo 2018, 374). The ability of AR to merge the “real” physical world of the historical site with digital material such as reconstructions and interpretive data facilitates both apprehension and comprehension.

When we consider an AR publication, however, we are moving away from Szabo’s paradigm to its inverse. With the book form, we are beginning not with the physical space—which already brings with it the tangible learning central to apprehension—but with the more traditional way of making meaning within education: the book. AR is still in its infancy in the publishing industry, but interest in its possibilities is growing. According to one 2017 poll, only 9% of Americans had experienced an AR application (Martin 2017, 20). Yet in that same year, five major tech companies, including Apple, launched AR frameworks or apps following the surprising success of the AR game Pokémon Go in 2016 (Tan 2018, 22). According to Digi-Capital, an investment group, AR and VR are poised to become major players in technology. The group estimates an AR/VR market of $108 billion by 2022, with AR as the primary force, accounting for predicted revenues of $90 billion (Tan 2018, 22). This market data may seem irrelevant to academia, but it means that publishers are beginning to move into AR as well, creating new opportunities for academic AR publications. Major news media such as The New York Times, The Guardian, The Wall Street Journal, BBC, CNN, Hulu, and Huffington Post have all experimented with some form of Virtual, Augmented, or Mixed Reality (VAMR) media (Martin 2017, 21). Deniz Ergurel, technology journalist and founder of the media start-up Haptical, asserts that VAMR marks the next major technological shift. According to Ergurel, “Every 10–15 years, the technology landscape is reshaped by a major new cycle. In 1980s, it was the PC. In 1994, it was the Internet. And in 2007, it was the smartphone. By 2020, the next big computing platform will be virtual reality” (Martin 2017, 20).

The AR text, because it is multisensory, can bring some of the features of experiential learning to its readers, including the visual features of the text, historical contextualization, images, audio, video, data visualizations, supplementary text, and, most importantly, 3D AR augmentations. The multimodal possibilities of AR texts make them particularly useful to teachers of literature that is culturally or historically distant because, through such reading environments, students may be more easily introduced to the material culture that surrounds and creates the texts they are studying. Furthermore, this approach allows the students to engage with the material in a multimodal fashion, appealing not only to the language centers of the brain, but to the visual and aural centers as well. The digital environment encourages the reader (and even the author) to “play” with the text in terms of design and interactive engagement (Douglas 2000, 65). Play is something we, like many animals, are hardwired to do for survival; consequently, the process of reading text, especially digital text, has neurological value precisely because it encourages the brain’s playfulness (Armstrong 2013, 26–53).

Conclusion: The Future of VR and AR

The argument can be made that neither VR nor AR offers a truly immersive experience because not all five primary senses of the participant are engaged. Certainly, computer technology can generate both visual and aural enhancements in the form of 3D models and recorded sound. However, touch, smell, and taste are more challenging. Haptic tools, such as gloves or a stylus device, are becoming more popular and offer both the VR and AR user the ability to touch and sense physical contact with virtual objects. AR has the advantage of offering much more real-world haptic information by default than VR can. With AR, the user can feel the actual book because it can be a real-world object; in VR, the technology must do something to allow the participant to feel such an object because the entire environment is computer created. Demand for smell and taste has so far been weaker, although there have been some experiments, largely unsuccessful, in adding odors to virtual worlds. Recent developments in the creation of technological tools to trigger the sensation of taste in an individual, such as the “digital lollipop” (Ranasinghe and Do 2016) and the Electronic Food Texture System (Niijima and Ogawa 2016, 48–9), show promise for the eventual incorporation of this primary sense into the VR experience.

If full sensory engagement is required for a virtual world to be completely realized, then perhaps the most immersive and interactive experience of the Middle Ages may be one that is not computer-generated at all: the Jorvik Viking Centre. Located in York, England, the museum and tourist attraction was created in 1984 and has long been famous for its appeal to the senses of its visitors, most significantly the sense of smell. A quick glance at online review sites such as Trip Advisor and Virtualtourist.com makes it clear that the intentional smells associated with the exhibit are not just memorable but also a significant factor in recommending the Jorvik Viking Centre. The exhibit’s use of scents to enhance the Viking experience has even generated scholarship exploring the effectiveness of odor in retrieving the memory of the tourist experience. Apparently, it is very effective (Aggleton and Waskett 1999, 1–7).[16] The Centre, in fact, intentionally engages all the senses of its visitors in order to make the historic re-creation a memorable and educational experience. In 2015, it actively promoted its non-digital exhibit in the language of virtual and augmented technologies, inviting guests to have a 4D Viking encounter rather than a mere 3D one. In this campaign, the Centre emphasized that all five primary senses of its visitors would be fully engaged (Jorvik Viking Centre 2015):

  • Touch: “Handling collection of Viking Age artefacts, including bone, antler and pottery, on offer to visitors in the queue—participants will be blindfolded and asked to identify the object/material.”
  • Sight: “Binoculars are available in the ‘Time Capsules’ that take visitors around the recreated Viking city. These are to be used to spot the various animals that inhabit the scenes of the ride experience. A ‘spotter’s guide’ will be issued, allowing visitors to score themselves against their finds.”
  • Taste: “A Viking Host will be on hand to explain the Viking diet and offer up tasters of unsalted, dried cod (a Norse delicacy) and for visitors over 18, Mead, a beverage made of fermented honey, will be available.”
  • Smell: “JORVIK is already famed for its re-creation of the smells of the 10th century York but this will be taken a step further with the introduction of ‘smell boxes’ in the ‘Artefacts Alive’ gallery. A new aroma will be located next to a display of object, with the smell paired to match the contents. [Four] smells will be available: Iron (for the Iron working display), Leather (next to the leather and shoemaking), Beef (for the general living display), and wood (for our wood finds).”
  • Sound: “A Viking will entertain visitors with period-specific musical instruments (including a recreation of the panpipes found at Coppergate) and retellings of some favourite Viking sagas.”

But as entertaining as the Jorvik Viking Centre clearly is, do we really want, or even need, a fully immersive and interactive experience? From the perspective of pedagogical effectiveness and student engagement, perhaps not. AR may, in fact, be the technology that has greater potential as a pedagogical tool precisely because it allows the user to learn in a digital environment while always keeping a strong foothold in the physical world—a reminder that the 3D world is not, ultimately, a real place.

Notes

[1] For further discussion of these key elements, see Søraker 2011, 44–72.

[2] Cave Automatic Virtual Environment: an immersive video theatre experience in which a participant wearing shuttering glasses views stereoscopic images as they are projected on the walls of a self-contained space in response to the participant’s position and actions.

[3] Such motion sickness may be caused by display and technology issues, sensory conflict, or postural instability; see LaViola, Jr. 2000, 47–56.

[4] Curiously, in 1935, a version of what we consider to be VR glasses was, in fact, envisioned by science fiction writer Stanley Grauman Weinbaum in his short story “Pygmalion’s Spectacles”; see Project Gutenberg http://www.gutenberg.org/files/22893/22893-h/22893-h.htm.

[5] Given the current interest in VR and AR, it is tempting to turn to 3D model sites, such as TurboSquid, to purchase ready-to-use models; however, evaluating these models for historical accuracy is essential. For example, searching on “medieval castle” brings up a wide selection of 3D models from fairly realistic structures to fantasy, fairytale confections that should be avoided for virtual historic sites; searching on “medieval woman” is even more problematic in terms of the results.

[6] Rousso expresses similar concerns about virtual heritage representation: “First, the issue of validity of information, commonly referred as authenticity. Second, the importance of accuracy in the representation of this information. Authenticity and accuracy are characteristics that archeologists, historians, and museum people strive to achieve and that the general public comes to expect from them. On the other hand, technologists dealing with the visualization of certain content are more concerned with the technical issues that pertain to implementation of the visualization and less concerned with authenticity and accuracy of the content itself” (Rousso 2002, 93).

[7] For a fuller analysis of Lévy’s understanding of actualization, see Ryan 1999, 78–107.

[8] For a detailed analysis of Milgram’s concept, see Craig 2013, 28–35.

[9] For an example of a 360º mural, see the Mural Room of the Santa Barbara County Courthouse which depicts the history of Santa Barbara, California, painted by Daniel Sayre Groesbeck in the early twentieth century: https://www.billheller.com/vr/Santa-Barbara-County-Courthouse-Mural-Room-360/.

[10] Stereoscopic imaging is the technique of creating an illusion of depth by using two offset images, one for the left eye and the other for the right, so that the brain processes both as a single, 3D image.

[11] We are making a distinction here between virtual heritage sites, which are 3D reconstructions of archaeological sites, architecture, or any other type of object, and 3D “real virtual worlds,” which combine 3D with “community, creation, and commerce,” such as World of Warcraft and Second Life; see Sivan 2008, 1–32.

[12] The phenomenon was first described by Masahiro Mori in 1970 and translated as “uncanny valley” by Jasia Reichardt (Mori 1978).

[13] The project team has, in fact, used the 3D models to produce an award-winning stereoscopic short film, APA Etruscan (2012), for the Museum of the History of Bologna in which APA, an Etruscan character (see Figure 3), takes the viewer through a virtual history of the city.

[14] It is perhaps worth noting that, even though such virtual reconstructions are typically informed by the real world, the 3D digital exterior model of Notre-Dame de Paris created by Dassault Systèmes for the 3D Paris Saga as well as the 3D interior model created by Ubisoft for the game Assassin’s Creed Unity may prove to be valuable resources for the rebuilding of the Cathedral after it was severely damaged by fire on April 15, 2019. Ironically, the real may now be informed by the virtual; see Wong 2019.

[15] For an example of how the book works, please see the following video: https://www.youtube.com/watch?v=sbZuQcXchkM.

[16] Capitalizing on the Centre’s success with odor and its notoriety, York’s tourism board published Britain’s first scented tourist guidebook in 2014 (Gordon 2014).

Bibliography

Aggleton, J.P., and L. Waskett. 1999. “The Ability of Odors to Serve as State-Dependent Cues for Real-World Memories: Can Viking Smells Aid the Recall of Viking Experiences?” British Journal of Psychology 90: 1–7.

Armstrong, Paul B. 2013. How Literature Plays with the Brain: The Neuroscience of Reading and Art. Baltimore, MD: The Johns Hopkins University Press.

Baudrillard, Jean. 1994. Simulacra and Simulation. Trans. Sheila Faria Glaser. Ann Arbor: University of Michigan Press.

Billinghurst, Mark, Hirokazu Kato, and Ivan Poupyrev. 2001. “The MagicBook: A Transitional AR Interface.” Computers & Graphics 25, no. 5.

Craig, Alan B. 2013. Understanding Augmented Reality: Concepts and Applications. Waltham, MA: Elsevier.

Cuff, Paul. 2015. A Revolution for the Screen: Abel Gance’s Napoleon, Film Culture in Transition. Amsterdam: Amsterdam University Press.

Dassault Systèmes. 2012. “Revivez Paris en 3D! Dassault Systèmes nous raconte l’histoire fascinante de la Ville Lumière.” EXALEAD Blog. September 27, 2012. Accessed December 17, 2019, https://blogs.3ds.com/exalead/fr/2012/09/27/revivez-paris-en-3d-dassault-systemes-nous-raconte-lhistoire-fascinante-de-la-ville-lumiere/.

Dieleman, Hans, and Donald Huisingh. 2006. “Games by which to Learn and Teach about Sustainable Development: Exploring the Relevance of Games and Experiential Learning in Sustainability,” Journal of Cleaner Production 14.

Douglas, J. Yellowlees. 2000. The End of Books-Or Books without End: Reading Interactive Narratives. Ann Arbor, MI: The University of Michigan Press.

Fleming, Bill. 1998. 3D Photorealism Toolkit. New York: John Wiley & Sons.

Frith, Jordan. 2012. “Splintered Space: Hybrid Spaces and Differential Mobility.” Mobilities 7, no. 1.

Gordon, Sarah. 2014. “The scratch ‘n sniff tourist guide! York issues the UK’s FIRST scented guidebook to tempt the noses of tourists.” DailyMail.com, March 7, 2014. Accessed December 17, 2019. http://www.dailymail.co.uk/travel/article-2575496/York-releases-UKs-scented-guidebook-tempt-noses-tourists.html.

Guidazzoli, Antonella, Maria Chiara Liguori, and Mauro Felicori. 2013. “Open Creative Framework for a Smart Cultural City: Bologna Porticoes and the Involvement of Citizens for a UNESCO Candidacy.” Information Technologies for Performing Arts, Media Access, and Entertainment Lecture Notes in Computer Science, Vol. 7990: 58–65.

Jorvik Viking Centre. 2015. “Forget 3D – Discover Vikings in 4D this Summer.” Jorvik Viking Centre. Accessed December 17, 2019. http://www.thejorvikgroup.com/press/press-releases/forget-3d-discover-vikings-in-4d-this-summer/.

LaViola, Jr., Joseph J. 2000. “A Discussion of Cybersickness in Virtual Environments.” SIGCHI Bulletin 32, no. 1: 47–56.

Lévy, Pierre. 1998. Becoming Virtual: Reality In The Digital Age. Trans. Robert Bononno. New York: Plenum Press.

Martin, Erik J. 2017. EContent Magazine. May/June 2017.

Mori, Masahiro. 1978. Robots: Fact, Fiction, and Prediction. Trans. Jasia Reichardt. New York: Viking Press.

McCarty, Willard. 2003. “Humanities Computing.” Encyclopedia of Library and Information Science. New York: Marcel Dekker.

Milgram, Paul, et al. 1995. “Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum.” Proceedings of the SPIE Conference on Telemanipulator and Telepresence Technologies 2351: 282–92.

Niijima, Arinobu and Takefumi Ogawa. 2016. “Virtual Food Texture by Electrical Muscle Stimulation.” Proceedings of the 2016 ACM International Symposium on Wearable Computers. New York: ACM.

Ogleby, Clifford. 2007. “The ‘Truthlikeness’ of Virtual Reality Reconstructions of Architectural Heritage: Concepts and Metadata.” 3D-ARCH 2007: 3D Virtual Reconstruction and Visualization of Complex Architectures. International Society for Photogrammetry and Remote Sensing (ISPRS). 36, no. 5: n.p. Accessed December 17, 2019, https://www.semanticscholar.org/paper/THE-%22-TRUTHLIKENESS-%22-OF-VIRTUAL-REALITY-OF-%3A-AND-Ogleby/87533316e22a172b015d10e67cd69f758f18c887.

Ranasinghe, Nimesha, and Ellen Yi-Luen Do. 2016. “Digital Lollipop: Studying Electrical Stimulation on the Human Tongue to Simulate Taste Sensations.” ACM Transactions on Multimedia Computing, Communications, and Applications 13, no. 1.

Rousso, Maria. 2002. “Virtual Heritage: From the Research Lab to the Broad Public.” Virtual Archaeology: Proceedings of the VAST Euroconference. Ed. Franco Niccolucci. BAR International Series 1075.

Ryan, Marie-Laure. 1999. “Cyberspace, Virtuality, and the Text.” Cyberspace Textuality: Computer Technology and Literary Theory. Ed. Marie-Laure Ryan. Bloomington: Indiana University Press. 78–107.

Sherman, William R., and Alan B. Craig. 2003. Understanding Virtual Reality: Interface, Application, and Design. The Morgan Kaufmann Series in Computer Graphics. San Francisco: Morgan Kaufmann Publishing.

Sivan, Yesha. 2008. “3D3C Real Virtual Worlds Defined: The Immense Potential of Merging 3D, Community, Creation, and Commerce.” Journal of Virtual Worlds Research 1, no. 1: 1–32.

Søraker, Johnny Hartz. 2011. “Virtual Entities, Environments, Worlds and Reality: Suggested Definitions and Taxonomy.” Trust and Virtual Worlds: Contemporary Perspectives. Eds. Charles Ess and May Thorseth. New York: Peter Lang Publishing. 44–72.

Szabo, Victoria. 2018. “Apprehending the Past: Augmented Reality, Archives, and Cultural Memory.” The Routledge Companion to Media Studies and Digital Humanities. Ed. Jentery Sayers. New York: Routledge.

Tan, Teri. 2018. “Making AR and VR Work in Publishing.” Publishers Weekly. July 2, 2018.

Thompson, Seth. 2015. “VR Panoramic Photography and Hypermedia: Drawing from the Panorama’s Past.” ISEA 2015: Proceedings from the 2015 International Symposium on Electronic Art. n.p. Accessed December 19, 2019. http://isea2015.org/proceeding/submissions/ISEA2015_submission_46.pdf.

Vitaliev, Vitali. 2013. “AR and 3D in Travel and History Applications.” Engineering and Technology Magazine 8, no. 4. Accessed December 17, 2019, https://eandt.theiet.org/content/articles/2013/04/ar-and-3d-in-travel-and-history-applications/.

Virtual Reality Society. 2016. “The History of Virtual Reality.” Virtual Reality Society. Accessed December 17, 2019. http://www.vrs.org.uk/virtual-reality/history.html.

“Voici à quoi ressemblait la Sainte-Chapelle au 14e siècle.” 2015. Sciences et Avenir. June 25, 2015. Accessed December 17, 2019, http://www.sciencesetavenir.fr/voyage/20150622.OBS1313/voici-a-quoi-ressemblait-la-sainte-chapelle-au-14eme-siecle.html.

Wong, Kenneth. 2019. “Dassault Systèmes, Ubisoft Pledge to Help Rebuild Notre-Dame.” Digital Engineering 247. April 17, 2019.
https://www.digitalengineering247.com/article/dassault-systemes-unisoft-pledge-to-help-rebuild-notre-dame/simulate.

Acknowledgments

We are extremely grateful to Alan B. Craig for reading and commenting on an earlier version of this article and for drawing our attention to the Bologna 3D Open Repository.

About the Authors

Tamara F. O’Callaghan is a Professor of English at Northern Kentucky University where she teaches medieval literature, history of the English language, and introductory linguistics as well as digital humanities approaches to literature. She received a Ph.D. in medieval studies from the Centre for Medieval Studies, University of Toronto, with a specialization in Middle English and Old French literature and medieval manuscript studies. She is the co-author of the textbook Introducing English Studies (Bloomsbury, 2020) and has published on medieval literature and manuscript studies as well as on the digital humanities and teaching. She also co-directs The Augmented Palimpsest Project, a digital humanities tool which explores how the medium of AR can be used in teaching medieval literature.

Andrea R. Harbin is an Associate Professor of English at the State University of New York, Cortland where she teaches medieval literature, the history of English, and Shakespeare and serves as department chair.  She has worked as a digital humanist since 1998 as curator/editor of NetSERF: an Internet Database of Medieval Studies. She received a Ph.D. in Medieval English Literature with a specialization in medieval drama from The Catholic University of America, and has published articles on digital humanities, pedagogy in medieval studies, and medieval drama. She is likewise a co-director of The Augmented Palimpsest Project.

Screenshot of the Chirality VR experience displaying two 3D models being manipulated by virtual hands.
1

Virtual Chirality: A Constructivist Approach to a Chemical Education Concept in Virtual Reality

Abstract

Extended reality (XR) is of growing interest in academia as instructors seek out new ways to engage students beyond traditional learning material. At the University of Florida, a team of library workers at Marston Science Library identifies and designs virtual reality (VR) learning objects for faculty and staff across campus. In summer 2019, the team at Marston Science Library and faculty in the Department of Chemistry partnered to pilot the implementation of a VR learning object in a general chemistry course. The Library team met with chemistry faculty and teaching assistants and developed a corresponding experience in virtual reality using the Unreal Engine, a game engine used in VR development. The VR learning object was designed for the section of the course related to chirality, an important chemistry concept that requires spatial awareness to understand. This article explores VR as an approach to constructivist pedagogy and its application in chemistry education, specifically as a tool to positively impact spatial awareness. The pilot implementation of the VR learning object was successful: chemistry faculty anecdotally noted increased student engagement and understanding of the course material. After the successful pilot, the learning object was also deployed in two organic chemistry courses. A survey collected information from the students’ perspectives and demonstrated that the experience was beneficial for users developing spatial awareness of molecules in chemistry education.

Introduction

Extended reality (XR) and its subsets, virtual reality (VR), augmented reality (AR), and mixed reality (MR), have expanded their roles in academia as researchers continue to seek out emerging technologies to solve modern problems. In the realm of teaching and learning, instructors are turning to XR learning objects as potential improvements on traditional learning objects. In response to this interest, universities have grappled with deploying VR learning objects in spite of associated costs and complicated logistics (Kavanagh et al. 2017). At the University of Florida (UF), Marston Science Library (hereafter Library) created MADE@UF, a virtual reality development space run by library staff, with the vision of providing VR technology to all of campus. The Library’s mission for MADE@UF is two-fold: supporting student learning and development of virtual reality, and aiding faculty in identifying, developing, and implementing VR experiences in curriculum.

In maintaining and coordinating a VR development space, the Library collaborates with faculty across campus from various disciplines including English, medieval studies, astronomy, psychology, and tourism, to name a few. These collaborations can involve identifying existing VR experiences to deploy as well as creating VR experiences for specific courses. In summer 2019, faculty from the Department of Chemistry approached the Library with an idea for a VR experience to be designed for an Accelerated General Chemistry course in fall 2019. The Library assembled its team of experts, consisting of the Engineering Education Librarian, who is also the director of MADE@UF, the Chemical Sciences Librarian, and the 3D and Emerging Technologies Manager. Each Library team member brought their individual expertise in learning theories and pedagogies, chemistry education, and virtual reality development, respectively, to consider how creating a virtual simulation would benefit teaching and learning in this Accelerated General Chemistry course.

Constructivist Pedagogy in Virtual Reality

Constructivism as a learning theory holds that learners construct their knowledge from experiences in which they are active participants (Glasersfeld 2003). Most VR experiences discussed in scholarly literature are not built on constructivist pedagogy; rather, most practitioners focus on and research intrinsic factors, such as immersion, motivation, and enjoyment, as essential to using virtual reality applications in teaching and learning (Kavanagh et al. 2017). This initial oversight is understandable: before the pedagogy of a virtual reality experience can be addressed, the most fundamental aspects of the technology itself must be established. However, early VR researchers reference the importance of presence, accommodation, and collaboration while advocating for VR as a framework for constructivism (Bricken 1990). Immersion, another intrinsic factor, is fundamental in creating a virtual reality experience that is compatible with constructivism, specifically that “immersion in a virtual world allows us to construct knowledge from direct experience, not from descriptions of experience” (Winn 1993). These fundamental aspects of a VR framework must emphasize the importance of establishing identity, presence, and collaboration within a virtual space, all of which would be self-evident in physical spaces; this would then allow the learner to experience conceptualization, construction, and dialogue, which are staples of constructivist pedagogy (Fowler 2015). These intrinsic factors guide the creation and implementation of VR learning environments as frameworks for pedagogies to build upon (O’Connor and Domingo 2017).

In addition to focusing on intrinsic factors, researchers are also attempting to categorize VR learning objects retroactively under various pedagogies and learning theories such as experiential learning, situated cognition, or constructivism (Johnston et al. 2018). Of all the pedagogies and learning theories used in VR, constructivism is the most referenced pedagogy to accompany virtual reality experiences in education (Kavanagh et al. 2017). Constructivism is not inherent to all virtual reality experiences, as it requires more than VR can independently provide. Rather, constructivist VR experiences should aim to provide feedback that results in revision and restructuring of previous knowledge constructs (Aiello et al. 2012). The VR experience also needs to include active learning, a component of constructivism in which learners derive meaning from their sensory inputs, so learners can freely explore and manipulate their environment while receiving sensory feedback (Chen 2009). VR experiences using a constructivist approach can facilitate knowledge construction and reflection as well as social collaboration (Neale et al. 1999). A benefit of a constructivist approach in virtual reality is improving the learners’ perceived usefulness of the learning material, which is the most significant contributor to positive learner attitude (Huang and Liaw 2018). A constructivist approach to VR has also led to gains in knowledge, skills, and personal development in a VR learning environment (Bair 2013). Spatial visualization, an important factor in chemistry education, has proven malleable and positively impacted by VR designed with constructivist pedagogy (Samsudin et al. 2014).

Chemistry Education Background

The molecular properties and chemical reactivity of compounds rely heavily on the way molecules are arranged and oriented in three-dimensional space, which is referred to as the stereochemistry of a molecule (Brown et al. 2018). A fundamental skill for chemistry students is the development of spatial awareness at the molecular level: understanding the structural geometries and relative sizes of molecules, as well as how to mentally translate between different visual representations of molecules, is a prerequisite to understanding and predicting chemical phenomena (Oliver-Hoyo and Babilonia-Rosa 2017). Teaching students how to visualize molecules in space is one of the quintessential challenges in chemistry education, particularly because the nebulous nature of chemistry concepts can be difficult to make tangible. Because there is no way to directly observe a molecule or molecular interactions at the sub-nanometer scale, models are used to represent chemistry concepts in both chemistry education and practice. A particular stereochemistry concept introduced at the undergraduate level is the chirality, or handedness, of organic molecules. Chirality refers to the relationship between objects that are mirror images of one another but cannot be perfectly aligned (or “superimposed”) on top of each other (Brown et al. 2018). This property is visible in everyday objects that are not perfectly symmetric, such as a person’s left and right hands, threaded screws, and headphone earbuds. At the molecular level, organic compounds have a chiral center at any carbon atom with four different groups attached to it. Recognizing chirality and systematically naming chiral molecules are particularly troublesome tasks for undergraduate students due in large part to the difficulty of mental 3D visualization required to “see” these properties (Ayorinde 1983; Beauchamp 1984).
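
For readers who wish to experiment with the concept computationally, the following short Python sketch uses RDKit, an open-source cheminformatics library that is not part of the course described in this article, to locate the chiral center of bromochlorofluoromethane and to confirm that its two mirror-image forms receive opposite CIP descriptors (R and S).

    from rdkit import Chem

    # Bromochlorofluoromethane (CHBrClF): one carbon bearing four different
    # groups (H, F, Cl, Br), and therefore one chiral center.
    flat = Chem.MolFromSmiles("C(F)(Cl)Br")
    print(Chem.FindMolChiralCenters(flat, includeUnassigned=True))
    # -> [(0, '?')]: atom 0 is a chiral center of unspecified handedness.

    # The two enantiomers, encoded with explicit stereochemistry (@ vs. @@).
    for smiles in ("[C@H](F)(Cl)Br", "[C@@H](F)(Cl)Br"):
        mol = Chem.MolFromSmiles(smiles)
        Chem.AssignStereochemistry(mol, cleanIt=True, force=True)
        print(smiles, Chem.FindMolChiralCenters(mol))
    # The mirror images receive opposite CIP descriptors: they are
    # distinct, non-superimposable molecules.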

Research has indicated that handling concrete and pseudo-concrete representations of molecules (tactile models and computer-generated graphics) improves students’ spatial understanding of molecular structures in comparison to abstract 2D representations (Ferk et al. 2003). Educators have deployed a variety of visualization tools to help students translate the 2D representations of compounds in the pages of their textbooks into visualized 3D objects, including handheld “ball-and-stick” modeling kits and computer-based modeling programs. Ball-and-stick models were first employed in the mid-nineteenth century (Schlecht 1998) and are still the most widely used method for 3D visualization in undergraduate chemistry curricula. However, these model kits make a number of assumptions about molecular structures that are not accurate, including that bond lengths and atom sizes are all uniform. Commercially available modeling kits vary widely and leave more advanced visualization nuances to the imagination of the students. Computer modeling programs have the ability to represent individual molecules more accurately in terms of bond lengths, bond angles, and atom sizes because they do not rely on fixed physical pieces that the user assembles. Most of these programs are streamlined for ease of use and there are many free and open source software options for students to access (Pirhadi, Sunseri, and Koes 2016), including the popular programs Avogadro, JMol, MolView, and Visual Molecular Dynamics. The largest drawback of computer graphic representations for student learning is that they are not tactile and are typically viewed on a computer screen. A comparison of the 2D structural drawings common in chemistry materials, 3D models built with ball-and-stick model kits, and pseudo-3D digital images generated by computer software is shown in Figure 1.

The chiral molecule bromochlorofluoromethane as represented by the typical 2D line-angle formula created in ChemDraw, two different commercial ball-and-stick model kits, and computer modeling generated in MolView.
Figure 1. The chiral molecule bromochlorofluoromethane (CHBrClF) as represented by (a) the typical 2D line-angle formula created in ChemDraw; (b) two different commercial ball-and-stick model kits; and (c) computer modeling generated in MolView.

Now that developing XR learning objects and obtaining the equipment necessary for students to experience them have become more affordable, chemistry educators are exploring the use of AR, MR, and VR in the classroom. A review on the use of XR in education highlighted that course content being presented in a novel and exciting way, the ability to physically interact with the media, and the direction of students’ attention to the important learning objectives were all positive factors in the success of XR lessons (Radu 2014). Some examples specific to the chemistry domain include laboratory experiments designed in virtual worlds like Second Life (Pence, Williams, and Belford 2015), AR smartphone applications that allow molecules to jump off the pages of lecture notes as 3D structures (Borrel and Fourches 2017), molecule building and structure interactions with AR (Singhal et al. 2012), environmental chemistry fieldwork simulated through VR (Fung et al. 2019), and VR experiences involving interactive computational chemistry (Ferrell et al. 2019). For teaching students about stereochemistry and chirality, the power of VR to bridge the divide between the structural accuracy of computer modeling and the tactile advantage of ball-and-stick model kits seems promising.

Many chemistry-education protocols have proposed that using multiple model types is the most beneficial approach for teaching students who may learn in different ways (Dori and Barak 2001). While there is evidence that watching instructors manipulate computer models on a screen improves student understanding in large chemistry lecture courses (Springer 2014), allowing students to directly manipulate the model themselves has been suggested as the ideal approach to implementing computer modeling whenever feasible (Wu and Shah 2004). Encouraging students to translate between 2D and 3D representations during a facilitated interaction with 3D models has also been suggested to improve students’ ability to reason with chemical formulae, as opposed to students using models on their own with no instructor intervention (Abraham, Varghese, and Tang 2010). Combining these constructivist and chemistry education pedagogical insights, we chose to design and implement a lesson on visualizing, handling, and naming chiral organic molecules using an in-house built VR experience. During this lesson, the following strategies were employed:

  1. Undergraduate chemistry students in the class were previously instructed on the concept of chirality in their lecture course and had been exposed to 2D representations of chiral molecules.
  2. Each student had the opportunity to individually participate in the VR experience.
  3. Students were able to freely handle, rotate, and superimpose the molecules in the 3D virtual space.
  4. Students in groups were asked to make observations and explain the chemical phenomena in the virtual experience.

Design and Implementation of the VR Learning Object

The VR chirality experience was designed for CHM 2047, a one-semester accelerated undergraduate General Chemistry course for students with a strong high school chemistry background who are interested in moving into upper-level chemistry courses. The course met three times a week with two lecture periods and one discussion period. The faculty member led the weekly lectures and split the students into five groups for the weekly discussion periods; each of the discussion groups was led by a peer mentor, an undergraduate student who had recently completed CHM 2047 and finished at the top of the class. Chemistry doctoral students were also involved in the course as teaching assistants (TAs) and participated in some supervised instruction as well as oversaw the undergraduate peer mentors. For the discussion period related to chirality, the faculty member for CHM 2047 solicited the expertise of the Library team to incorporate a virtual reality learning object. The Library team created a virtual reality template for classes to use in an assignment that allows learners to interact with 3D molecular models using virtual tactility and physics. Around this interaction, the Library team devised a constructivist approach for the learning object.

Learners would recall knowledge from prior and current chemistry courses, specifically knowledge related to chirality and to systematically describing chiral geometries. Drawing on this knowledge, students in groups would hypothesize about and discuss their observations of the virtual environment and the molecules within it. Students would interact with other group members, testing their ideas about the virtual experience and constructing an understanding of the learning object. Ultimately, the objective for the students was to locate the chiral center of a molecule, describe the geometry of this chiral center, and recognize the non-superimposable nature of chiral pairs. Additionally, students might build a mental visualization of the molecules and improve their spatial awareness.

In order to prepare the VR template for use in the course, the instructing professors were asked to compile a list of relevant chiral molecule examples, generate computer models of these molecules using the software of their choice, and provide the models to the Library team as .PDB files. Although this activity focused on small organic molecules, the Library team proposed this workflow because the .PDB file type can accommodate small molecules as well as large macromolecules, such as proteins and polymers. This practice would allow protein structures from the Protein Data Bank (PDB), a global archive of 3D structure data of biological macromolecules (wwPDB consortium 2019), to be used with ease in future VR activities. It is also possible to have students directly generate structures and provide them to the Library team, rather than the course instructors doing so, as part of the chirality lesson. The Library team was then able to import these 3D models into the game engine while retaining all color information provided in the original software. To process the models, the Library team used RealityConvert (Borrel and Fourches 2017), an in-browser file converter designed by chemists to rapidly generate XR-ready files from chemical structure files; it converts the original .PDB files into .OBJ 3D models with associated .PNG and .MTL files for mapping color to the model’s topography.
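
Because this workflow hinges on the fixed-column atom records of the .PDB format, a minimal Python sketch of that layout may help clarify why a single file type serves both small molecules and macromolecules. The sketch below is illustrative only: the team’s actual conversion was performed by RealityConvert, and production parsers such as Biopython handle many more record types and edge cases.

```python
# Minimal reader for the fixed-column ATOM/HETATM records of a .PDB file.
# Illustrative only: the actual conversion used RealityConvert, and real
# parsers (e.g., Biopython) handle many more record types and edge cases.

def read_pdb_atoms(path):
    """Yield (element, x, y, z) for each atom record in a .PDB file."""
    with open(path) as handle:
        for line in handle:
            if line.startswith(("ATOM", "HETATM")):
                x = float(line[30:38])         # columns 31-38: x coordinate (angstroms)
                y = float(line[38:46])         # columns 39-46: y coordinate
                z = float(line[46:54])         # columns 47-54: z coordinate
                element = line[76:78].strip()  # columns 77-78: element symbol
                yield element, x, y, z

# The same record layout serves a three-atom fragment or a 100,000-atom
# protein, which is why standardizing on .PDB eases future reuse.
```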

The team chose Unreal Engine 4.20 because it is free to use for educational purposes and includes pre-built VR interaction tools. Aside from its practicality, Unreal Engine can package a project for Windows, Mac, mobile, HTML5, and other platforms. Once the VR template was set up, it was relatively easy to drag and drop a new molecule model into the program and view it in immersive VR. The template was designed to show every loaded model in a museum-style room on a pedestal, with the name of the molecule displayed above. The learner can approach each model, walk around it, and see it from every angle. They can pick it up using motion controls and rotate the model in their hands. They can also grab a model in each hand to move the models freely and compare them. Once released, a model snaps back to its original position. For increased usability, the team felt it was necessary to design a physics object that felt natural when the viewer grabbed the model and rotated it using their own wrist and controller movement; notably, the team removed any physics interaction created by overlapping objects as well as the game engine’s own preset “gravity.”
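
The grab-and-snap-back behavior described above can be summarized engine-agnostically. The following Python sketch is a hypothetical reconstruction of that logic (the actual template was built with Unreal Engine’s tools, not Python), and all class and method names are illustrative.

```python
# Engine-agnostic sketch of the template's grab behavior: while held, the
# model follows the controller's pose with gravity and inter-object
# collision disabled; on release it snaps back to its pedestal.
# Hypothetical reconstruction; the real template was built in Unreal Engine.

from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple  # (x, y, z) in world space
    rotation: tuple  # quaternion (w, x, y, z)

class GrabbableModel:
    def __init__(self, home_pose: Pose):
        self.home_pose = home_pose  # pedestal pose recorded at load time
        self.pose = home_pose
        self.held = False

    def on_grab(self):
        self.held = True            # physics: no gravity, no collision response

    def on_controller_moved(self, controller_pose: Pose):
        if self.held:
            self.pose = controller_pose  # model tracks wrist/controller motion 1:1

    def on_release(self):
        self.held = False
        self.pose = self.home_pose  # snap back to the pedestal
```

The snap-back rule has a practical side effect: the room effectively resets itself between learners, without anyone needing to tidy the virtual space.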

The team modeled a deliberately plain room with rectangular topography so as not to distract the learner from the molecules placed throughout the space, and generated ample lighting to create a well-lit environment to explore. Additional lights were added below each model to highlight its topography and heighten the sense of three-dimensionality. The experience allowed the learner to move around the room by two separate methods, depending upon the configuration of the VR setup: the learner could physically move through the space if using a VR setup that allows full-range motion tracking, or the learner could use a trigger on the hand controller to point to a specific spot in the virtual space and “jump” to it upon releasing the trigger. A simple text document was provided to explain the controls.
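
The point-and-jump method reduces to a simple ray-floor intersection. The sketch below shows that math under the assumption of a flat floor at height y = 0; the coordinate conventions are ours, not taken from the actual template.

```python
# Minimal point-to-teleport math: cast a ray from the controller and, if it
# points downward, jump to where it meets a flat floor at y = 0.
# Illustrative assumption: y is "up" and the floor is the plane y = 0.

def teleport_target(origin, direction):
    """Return the (x, 0, z) floor point the controller ray hits, or None."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy >= 0:       # ray points level or upward: no floor hit
        return None
    t = -oy / dy      # parameter where origin + t * direction has y == 0
    return (ox + t * dx, 0.0, oz + t * dz)

# Example: controller at head height, aimed forward and down.
print(teleport_target((0.0, 1.5, 0.0), (0.0, -0.5, 1.0)))  # -> (0.0, 0.0, 3.0)
```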

The pedestals in the room were arranged in a grid with three pedestals in each row. The learning objective of the VR experience was for students to compare the two versions of chiral arrangement for each molecule selected by the instructor. Chiral molecules are systematically classified as having either the R (rectus, right-handed) or the S (sinister, left-handed) configuration. In each row of three pedestals, the R and S versions of the molecule were placed on the far left and far right of the row. On the center pedestal of each row, a side-by-side display of both the R and S versions was shown for the students to view. Because students were expected to determine and assign R or S configuration to the molecules they viewed, the positions of the R and S structures on the “left” or “right” side of the room were intentionally randomized so as not to indicate chiral configuration. For example, one molecule might be arranged across its row as R on the left, the R-and-S pair in the center, and S on the right, while another might be arranged as S on the left, the R-and-S pair in the center, and R on the right.
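
This randomized row layout can be expressed in a few lines of code. The following Python sketch mirrors the arrangement described above; the molecule names and data structures are illustrative, not the course’s actual configuration.

```python
import random

# Sketch of the row layout: each row holds [left, center pair, right]; the
# R and S versions are shuffled onto the outer pedestals so that position
# gives no hint about configuration. Molecule names are illustrative.

def build_rows(molecules, seed=None):
    rng = random.Random(seed)
    rows = []
    for name in molecules:
        outer = [f"{name} (R)", f"{name} (S)"]
        rng.shuffle(outer)  # randomize which version sits left vs. right
        left, right = outer
        center = f"{name} (R and S side by side)"
        rows.append((left, center, right))
    return rows

for row in build_rows(["bromochlorofluoromethane", "alanine"], seed=7):
    print(row)
```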

Notably, because the 3D models were placed in the VR space as non-rigid bodies—that is, without collision physics, so the objects can clip through one another and occupy the same virtual space—students were able to experience the non-superimposability of chiral molecules in a unique way. The defining feature of chiral molecules is that they cannot be perfectly aligned on top of one another; typically, ball-and-stick models of the two versions are held side by side as closely as possible to demonstrate this property. In this VR environment, however, students were able to hold one version of each chiral pair in the same space as the other and see that no matter how they manipulated the models, they could not align all atoms in a way that matched.
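
This observation can also be verified numerically: no proper rotation superimposes an enantiomer on the original, which is exactly what students experience when the clipped models refuse to align. The numpy sketch below demonstrates the point using the Kabsch best-fit algorithm on an idealized chiral center; the coordinates are illustrative, not taken from the lesson’s molecules.

```python
import numpy as np

# Numerical check of non-superimposability: mirror an idealized chiral
# center, then find the best proper rotation (Kabsch algorithm) aligning
# the mirror image to the original. A nonzero residual RMSD means no
# rotation superimposes them. Coordinates are illustrative.

def kabsch_rmsd(P, Q):
    """Best-fit RMSD after optimally rotating centered P onto centered Q."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # force a rotation, not a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

# Four distinguishable substituents around a central atom, with distinct
# bond lengths so the corners of the tetrahedron are not interchangeable.
original = np.array([[0.0, 0.0, 0.0],     # central atom
                     [1.0, 1.0, 1.0],     # substituent 1
                     [-1.2, -1.2, 1.2],   # substituent 2
                     [-1.4, 1.4, -1.4],   # substituent 3
                     [1.6, -1.6, -1.6]])  # substituent 4
mirror = original * np.array([-1.0, 1.0, 1.0])  # reflect through the yz-plane

print(f"residual RMSD: {kabsch_rmsd(mirror, original):.3f}")  # > 0: chiral
```

Running the sketch reports a residual RMSD well above zero, the numerical counterpart of the students’ inability to superimpose the two models in the virtual room.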

Figure 2. Screenshot of the Chirality VR experience displaying two 3D models being manipulated by virtual hands.

Once the template was updated to include the student-created models, the VR learning object was installed on VR-ready computers in the MADE@UF space at the Marston Science Library. Library workers set up three Oculus Rift headsets on VR-ready computers in MADE@UF for five consecutive class periods on discussion days. Groups of two to four students rotated through the VR stations, each station equipped with an Oculus Rift headset for the student and a monitor for the supervisor, a role filled by the peer mentor, a teaching assistant, the faculty member, or the chemistry librarian. The supervisor’s role was to answer logistical questions with minimal input about the content of the experience, although supervisors would intervene if the students’ conclusions about the virtual experience were incorrect. The students interacted with the five sets of molecules, each set increasing in complexity as the student progressed through the virtual space. The students were able to manipulate, compare, and superimpose the two models in each set in order to assign R/S configuration.

Further Use and Assessment

The CHM 2047 course instructor was looking to expose students to more advanced chemical concepts beyond the typical first-year general chemistry curriculum in an innovative way. Chirality is a concept that is sometimes introduced at the general chemistry level but is universally taught during the subsequent organic chemistry sequence. After the VR program was created and implemented in CHM 2047 during the fall of 2019, the same program was used for facilitated VR experiences in Fundamentals of Organic Chemistry (CHM 2200) and Organic Chemistry and Biochemistry 1 (CHM 3217) during the spring 2020 semester.

After these VR sessions were completed, a brief survey instrument (see Appendixes) was deployed to assess students’ perceptions of the VR experience’s effectiveness in improving their understanding of chirality, including in comparison with other chemistry model types, and to determine whether the students experienced any accessibility barriers during the process. Responses were collected from twenty-one students in total across the three chemistry courses.

Students were asked which molecular visualization methods they had used while studying chemistry and which they found most valuable to their understanding of chemical concepts. The four methods were VR (used by nineteen), ball-and-stick models (used by sixteen), drawings (used by nineteen), and non-VR computer models (used by nine).

Figure 3. Student use of visualization methods in chemistry.

When ranking how valuable each visualization method was, ten of twenty-one students ranked VR as their top choice, followed by drawings (seven students) and ball-and-stick models (four students). Two students ranked VR as their lowest-choice method. Although nine students answered that they had used non-VR computer modeling before, none of the respondents ranked computer modeling as their preferred visualization tool, which may be related to the intangible nature of computer modeling for novice chemistry learners.

Figure 4. Student ranking of preferred visualization method while studying chemistry.

The majority of students believed that virtual reality benefited their spatial awareness of molecules. Eighteen of twenty-one students believed that manipulating the molecules in the virtual reality experience improved their ability to make R/S assignments. Sixteen of twenty-one believed that it improved their understanding of the non-superimposability of enantiomers. Lastly, seventeen of twenty-one believed that it improved their ability to mentally visualize molecules. Students who answered these questions in the affirmative often noted that being able to see, visualize, move, hold, and touch the molecules was a benefit. One student described the VR session as an “incredibly helpful experience for someone like me that isn’t the best at spatial configurations,” while another mentioned that they “can still visualize how the molecules looked in the virtual reality experience and it has helped me to visualize molecules in my head.” A small group of students did not believe the VR experience was helpful, indicating that they already understood the concepts or that ball-and-stick models were superior. One student noted that “ball and stick models do the same without all the fancy equipment.”

The student responses to the survey highlight a need for improved methods of teaching content that requires spatial reasoning. While some students already have the requisite spatial reasoning skills, others struggle with converting 2D, non-tangible drawings into a 3D mental construction. VR in chemistry can serve as a tool for creating more accessible content for the subset of students who have historically struggled with spatial reasoning, used in conjunction with traditional 2D drawings and ball-and-stick models.

One area for improvement in the VR experience was the visual accessibility of the program. Survey responses recorded that one of the twenty-one respondents experienced “barriers” during the lesson, but this respondent did not disclose specific details of the accessibility issue. However, during one of the sessions hosted in the library, a library facilitator was needed to identify the colors of specific atoms aloud and indicate their identities to a user with color blindness. This accessibility concern is widespread in chemistry and chemistry education because periodic table elements are typically designated by a common color scheme, and visualized molecules usually do not carry textures or patterns in addition to color coding. In future iterations of this VR experience, finding ways to depict atom identities that do not rely on color perception will increase accessibility.
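
One plausible approach is redundant encoding: pairing each element’s conventional CPK-style color with a second, non-color cue such as a surface pattern or a floating text label. The mapping below is a hypothetical sketch; the pattern assignments are ours and not an established standard.

```python
# Sketch of redundant atom encoding: conventional CPK-style colors paired
# with a non-color cue (surface pattern and text label), so atom identity
# never depends on color perception alone. Pattern choices are illustrative.

ATOM_STYLES = {
    #  element: (CPK-style color, hypothetical pattern, always-visible label)
    "H":  ("white",  "smooth",      "H"),
    "C":  ("gray",   "matte",       "C"),
    "N":  ("blue",   "stripes",     "N"),
    "O":  ("red",    "dots",        "O"),
    "S":  ("yellow", "cross-hatch", "S"),
    "Cl": ("green",  "rings",       "Cl"),
}

def describe_atom(element):
    color, pattern, label = ATOM_STYLES[element]
    return f"{label}: {color} sphere with {pattern} texture, labeled '{label}'"

print(describe_atom("O"))  # -> "O: red sphere with dots texture, labeled 'O'"
```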

Reflection and Conclusion

Overall, the VR experience was successful. Chemistry faculty and TAs conducted informal debriefing sessions with the students following the Library VR session. Students provided positive feedback, with several noting an increased understanding of chirality following the VR experience. The faculty and instructors noticed that the students were more engaged during the VR session than during other discussion or lecture periods, a level of engagement they observed to be uncommon in undergraduate chemistry courses. The course instructor mentioned that in previous years the typical assignment on chirality involved students drawing 2D representations of 3D structures on paper; after the VR experience this semester, students commented on the ease of model manipulation the experience granted and said that they “truly understood” what the concept of chirality meant. The professor also noted that the undergraduate peer mentors (who had been students in the course before the VR lesson was implemented) “were particularly content on the new way to look at molecules, describing it as a more direct way to understand the role of 3D in chemistry.” The chemistry faculty are already interested in using the experience again in their fall 2020 coursework, and several other chemistry faculty have contacted the Library about deploying a similar VR learning object for their classes. The CHM 2047 professor commented that “it is clear from the success of this assignment that teaming up chemistry instructors with experienced librarians is the best combination to implement new technologies within the chemistry curricula.”

Bibliography

Abraham, Michael, Valsamma Varghese, and Hui Tang. 2010. “Using Molecular Representations To Aid Student Understanding of Stereochemical Concepts.” Journal of Chemical Education 87, no. 12: 1425–29. https://doi.org/10.1021/ed100497f.

Aiello, P., F. D’Elia, S. Di Tore, and M. Sibilio. 2012. “A Constructivist Approach to Virtual Reality for Experiential Learning.” E-Learning and Digital Media 9, no. 3: 317–24. https://doi.org/10.2304/elea.2012.9.3.317.

Ayorinde, F. O. 1983. “A New Gimmick for Assigning Absolute Configuration.” Journal of Chemical Education 60, no. 11: 928. https://doi.org/10.1021/ed060p928.

Bair, Richard A. 2013. “3D Virtual Reality Check: Learner Engagement and Constructivist Theory.” PhD diss., Capella University. https://search.proquest.com/docview/1447009219/abstract/902A5625FF94EA2PQ/1.

Beauchamp, Philip S. 1984. “‘Absolutely’ Simple Stereochemistry.” Journal of Chemical Education 61, no. 8: 666–67. https://doi.org/10.1021/ed061p666.

Borrel, Alexandre, and Denis Fourches. 2017. “RealityConvert: A Tool for Preparing 3D Models of Biochemical Structures for Augmented and Virtual Reality.” Bioinformatics 33, no. 23: 3816–18. https://doi.org/10.1093/bioinformatics/btx485.

Bricken, William. 1990. “Learning in Virtual Reality.” HITL-TR-M-90-5. Seattle, WA: University of Washington, Washington Technology Center. https://files.eric.ed.gov/fulltext/ED359950.pdf.

Brown, William Henry, Eric V. Anslyn, Christopher S. Foote, and Brent L. Iverson. 2018. Organic Chemistry. 8th ed. Boston: Cengage Learning.

Chen, Chwen Jen. 2009. “Theoretical Bases for Using Virtual Reality in Education.” Themes in Science and Technology Education 2: 71–90. https://files.eric.ed.gov/fulltext/EJ1131320.pdf.

Dori, Yehudit Judy, and Miri Barak. 2001. “Virtual and Physical Molecular Modeling: Fostering Model Perception and Spatial Understanding.” Educational Technology & Society 4, no. 1: 61–74.

Ferk, Vesna, Margareta Vrtacnik, Andrej Blejec, and Alenka Gril. 2003. “Students’ Understanding of Molecular Structure Representations.” International Journal of Science Education 25, no. 10: 1227–45. https://doi.org/10.1080/0950069022000038231.

Ferrell, Jonathon B., Joseph P. Campbell, Dillon R. McCarthy, Kyle T. McKay, Magenta Hensinger, Ramya Srinivasan, Xiaochuan Zhao, Alexander Wurthmann, Jianing Li, and Severin T. Schneebeli. 2019. “Chemical Exploration with Virtual Reality in Organic Teaching Laboratories.” Journal of Chemical Education 96, no. 9: 1961–66. https://doi.org/10.1021/acs.jchemed.9b00036.

Fowler, Chris. 2015. “Virtual Reality and Learning: Where Is the Pedagogy?” British Journal of Educational Technology 46, no. 2: 412–22. https://doi.org/10.1111/bjet.12135.

Fung, Fun Man, Wen Yi Choo, Alvita Ardisara, Christoph Dominik Zimmermann, Simon Watts, Thierry Koscielniak, Etienne Blanc, Xavier Coumoul, and Rainer Dumke. 2019. “Applying a Virtual Reality Platform in Environmental Chemistry Education To Conduct a Field Trip to an Overseas Site.” Journal of Chemical Education 96, no. 2: 382–86. https://doi.org/10.1021/acs.jchemed.8b00728.

Glasersfeld, Ernst von. 2003. Radical Constructivism: A Way of Knowing and Learning. Vol. 6. Studies in Mathematics Education Series. London: Routledge Falmer. https://eric.ed.gov/?id=ED381352.

Huang, Hsiu-Mei, and Shu-Sheng Liaw. 2018. “An Analysis of Learners’ Intentions Toward Virtual Reality Learning Based on Constructivist and Technology Acceptance Approaches.” The International Review of Research in Open and Distributed Learning 19, no. 1. https://doi.org/10.19173/irrodl.v19i1.2503.

Johnston, Elizabeth, Gerald Olivas, Patricia Steele, Cassandra Smith, and Liston Bailey. 2018. “Exploring Pedagogical Foundations of Existing Virtual Reality Educational Applications: A Content Analysis Study.” Journal of Educational Technology Systems 46, no. 4: 414–39. https://doi.org/10.1177/0047239517745560.

Kavanagh, Sam, Andrew Luxton-Reilly, Burkhard Wuensche, and Beryl Plimmer. 2017. “A Systematic Review of Virtual Reality in Education.” Themes in Science and Technology Education 10, no. 2: 85–119. https://eric.ed.gov/?id=EJ1165633.

Neale, H. R., D. J. Brown, S. V. G. Cobb, and J. R. Wilson. 1999. “Structured Evaluation of Virtual Environments for Special-Needs Education.” Presence: Teleoperators and Virtual Environments 8, no. 3: 264–82. https://doi.org/10.1162/105474699566224.

O’Connor, Eileen A., and Jelia Domingo. 2017. “A Practical Guide, With Theoretical Underpinnings, for Creating Effective Virtual Reality Learning Environments.” Journal of Educational Technology Systems 45, no. 3: 343–64. https://doi.org/10.1177/0047239516673361.

Oliver-Hoyo, Maria, and Melissa A. Babilonia-Rosa. 2017. “Promotion of Spatial Skills in Chemistry and Biochemistry Education at the College Level.” Journal of Chemical Education 94, no. 8: 996–1006. https://doi.org/10.1021/acs.jchemed.7b00094.

Pence, Harry E., Antony J. Williams, and Robert E. Belford. 2015. “New Tools and Challenges for Chemical Education: Mobile Learning, Augmented Reality, and Distributed Cognition in the Dawn of the Social and Semantic Web.” In Chemistry Education, edited by Javier García-Martínez and Elena Serrano-Torregrosa, 693–734. Weinheim, Germany: Wiley-VCH Verlag GmbH & Co. KGaA. https://doi.org/10.1002/9783527679300.ch28.

Pirhadi, Somayeh, Jocelyn Sunseri, and David Ryan Koes. 2016. “Open Source Molecular Modeling.” Journal of Molecular Graphics and Modelling 69 (September): 127–43. https://doi.org/10.1016/j.jmgm.2016.07.008.

Radu, Iulian. 2014. “Augmented Reality in Education: A Meta-Review and Cross-Media Analysis.” Personal and Ubiquitous Computing 18, no. 6: 1533–43. https://doi.org/10.1007/s00779-013-0747-y.

Samsudin, Khairulanuar, Ahmad Rafi, Ahmad Zamzuri Mohamad Ali, and Nazre Abd Rashid. 2014. “Enhancing a Low-Cost Virtual Reality Application through Constructivist Approach: The Case of Spatial Training of Middle Graders.” The Turkish Online Journal of Educational Technology 13, no. 3: 8. https://files.eric.ed.gov/fulltext/EJ1034227.pdf.

Schlecht, Matthew F. 1998. “Historical Overview of Molecular Modeling.” In Molecular Modeling on the PC, 3–10. New York: Wiley-VCH.

Singhal, Samarth, Sameer Bagga, Praroop Goyal, and Vikas Saxena. 2012. “Augmented Chemistry: Interactive Education System.” International Journal of Computer Applications 49, no. 15: 1–5. https://doi.org/10.5120/7700-1041.

Springer, Mike T. 2014. “Improving Students’ Understanding of Molecular Structure through Broad-Based Use of Computer Models in the Undergraduate Organic Chemistry Lecture.” Journal of Chemical Education 91, no. 8: 1162–68. https://doi.org/10.1021/ed400054a.

Winn, William. 1993. “A Conceptual Basis for Educational Applications of Virtual Reality.” Technical Publication R-93-9. Seattle, WA: Human Interface Technology Laboratory, Washington Technology Center, University of Washington. http://www.hitl.washington.edu/research/learning_center/winn/winn-paper.html~.

Wu, Hsin-Kai, and Priti Shah. 2004. “Exploring Visuospatial Thinking in Chemistry Learning.” Science Education 88, no. 3: 465–92. https://doi.org/10.1002/sce.10126.

Appendix A: Survey Instrument


Appendix B: Survey Results


About the Authors

Samuel R. Putnam is the Engineering Education Librarian at the University of Florida, where he serves as liaison to mechanical and aerospace engineering and engineering education and directs the MADE@UF virtual reality development space. He received his MLIS from Florida State University in 2009, focusing on library management and leadership. Samuel’s current research focuses on multimodal and multimedia instruction as a means to promote information literacy and active learning.

Michelle Nolan is the Chemical Sciences Librarian at the Marston Science Library at the University of Florida, where she serves as the reference and instruction specialist for users pursuing chemical research. She received her PhD in chemistry from the University of Florida in 2018, where her doctoral studies focused on organometallic synthesis and materials deposition, and she transitioned from bench scientist to library employee later that year. Michelle’s current interests include student-centered learning related to chemical information and the promotion of social justice in STEM disciplines.

Ernie Williams-Roby is a visual artist and designer based in Gainesville, Florida. He holds an MFA in Art + Technology from the University of Florida. He has contributed internationally to digital media artmaking and invention in the academic and public spheres for over a decade.
