
Alice in Wonderland sitting in a chair playing with her kitten and a ball of yarn.

Introduction: Issue Fifteen

For many, imagining the possibilities of digital technologies, in classrooms and in our lives, conjures up two dystopian extremes: unregulated chaos or constant surveillance. These nightmares are animated by a fear that the digital is something created for us, something we receive rather than construct. Headlines promise us that we are falling into our screens like Alice into the rabbit hole, and we may never emerge from the mind- and reality-bending places we go. This might be true—and perhaps it’s to our benefit. For teachers and scholars, engaging with digital environments need not be a lockstep march toward automation or a devaluation of our profession, but can instead offer chances to examine and revise knowledge and the many frames that shape it, for ourselves and for our students. Using new tools to examine old ideas can create a mutual sense of agency and empathy between participants in teaching and learning.

Games, archives, and assignments require scholars and teachers to consider what end-users should know and what experiences they should have, and also offer many opportunities to reflect upon how knowledge is constructed. Active learning environments draw students and teachers alike into spaces that require trust, attention, reflection, and openness. Decisions should be intentional and purposeful. Commitment to the deep inquiry that these experiences demand invites students to engage with content in generative ways, but also—and very importantly—requires scholars to be in an ongoing, exhilarating, and reflective relationship with the materials they teach and the methods they use. These are not transactional modes of education, but rather approaches that honor the complex ways in which learners can generate knowledge and engage with the disciplines.

The Journal of Interactive Technology and Pedagogy has regularly pushed back against the notion that digital technologies are neutral. Our fifteenth issue presents pieces about archives and archive building, games, the pedagogical implications of digital tools, and various elements of digital pedagogy in undergraduate courses. Together they unravel the mystique of digital scholarship and pedagogy while asking practical questions about prior knowledge and assumptions, labor and the dynamics of collaboration, and the challenges of sustainability and corralling institutional support.

Drilling down into tools, environments, and processes, asking how they work and don’t work, where they lead users and what they lead users away from—these are all crucial parts of digital scholarship and teaching. The pieces that follow situate the project-based work of interactive technology and pedagogy within the university. They interrogate decisions big and small, weighing how biases may shape how various audiences perceive information. It’s important that this thinking be made explicit to students and to audiences, and these pieces do just that. Such pedagogical work differentiates scholars from entrepreneurs, and open systems from closed ones. It propels teaching away from transactional models of learning, forcing instructors to make the process transparent at every turn. Learning happens not only in the doing of things, but in processing and reflecting upon the why and the how of that doing. The eight pieces that we present in this issue explore different facets of these principles.

In “‘So You Want to Build a Digital Archive?’ A Dialogue on Critical Digital Humanities Graduate Pedagogy,” Bibhushana Poudyal of the University of Texas at El Paso and Laura Gonzales of the University of Florida provide an account of building a digital archive about Nepal, interrogating the role that search engines and algorithms play in how we experience and know the world, and the gaps that they leave. The authors explain the steps and hurdles they needed to negotiate—including platform and materials selection, technical expertise, and user-experience testing—in ways that honor and amplify the local expertise of Kathmandu residents. Their work is an example of digital archiving that espouses a feminist and decolonial agenda and explicitly acknowledges the tensions that underlie all knowledge-creating endeavors.

Another piece in this issue also takes up the need to critically examine how the medium influences the agenda behind digital material. In “Confidence and Critical Thinking Are Differentially Affected by Content Intelligibility and Source Reliability: Implications for Game-Based Learning in Higher Education,” Robert O. Duncan of York College and The Graduate Center, CUNY, presents a study on how the intelligibility of information and the reliability of sources influence performance and confidence among participants in a critical-thinking game. The results indicate that the greater the environmentally induced difficulty of reading a text, the more critically students engaged with it. The type of information source, however, appeared to be less influential on students’ performance, with little variation between conditions in which participants were or were not told which information was derived from a reliable source. These findings point toward practical implications for instruction and game design around information literacy, and highlight opportunities to teach students how to evaluate the reliability of sources before critically evaluating and using the information those sources provide.

While games can be used to promote critical thinking, how instructors implement digital games matters as well. Cristyne Hébert of the University of Regina and Jennifer Jenson of the University of British Columbia describe nine strategies for instructors of grade-school students in “Digital Game-Based Pedagogies: Developing Teaching Strategies for Game-Based Learning.” The themes they identify were derived from a content analysis of material collected through observations and interviews conducted during a professional development session for teachers of children in grades 6 to 8. They recommend three general categories of digital game-based strategies: those focused on gameplay, those focused on lesson planning and delivery, and those focused on how technology is framed within the game. These strategies provide a practical framework for integrating game-based learning into primary and secondary education.

In “Music Making in Scratch: High Floors, Low Ceilings, and Narrow Walls?” William Payne and S. Alex Ruthmann of New York University evaluate how Scratch, a prominent, block-based free programming language used extensively by young learners, both facilitates and frustrates digital music making. They’re hopeful that this approach can become more accessible to the community of learners who engage with computer science through Scratch, but are also concerned that the structural elements of the tool may impede students who want to pursue such a path. They detail these concerns, drawing upon theories of music cognition and coding, and offer concrete suggestions for addressing the shortcomings in the tool that will be of use both to teachers who use Scratch and to software developers building out digital music-making environments.

Taking into account the broader instructional context, particularly for collaborative work, can help educators create a more productive learning experience. “Creating Dynamic Undergraduate Learning Laboratories through Collaboration Between Archives, Libraries, and Digital Humanities,” by Kent Gerber, Charlie Goldberg, and Diana L. Magnuson of Bethel University, presents both a rationale and a procedure for collaborative work between departments and between faculty and students. They detail their process for creating an entry-level Digital Humanities course that taught students both physical and digital archival management, while providing a venue for teachers to grapple with what students needed to learn, and what parts of their own institutional history needed to be prioritized for preservation. They present us with a flexible model for creating fruitful partnerships between departments, centers, and libraries that also centers student learning goals within its structure.

While learning through digital pedagogy may be a collaborative experience, for the learner it must also enable the pursuit of personally meaningful knowledge construction. In “Teaching with Objects: Individuating Media Archaeology in Digital Studies,” University of Mary Washington’s Zach Whalen details an Introduction to Digital Studies course built around student inquiry into the physical artifacts of digital media. The assignment requires students to intensively research and then creatively present on artifacts they select, situating them in economic, ethical, social, and political histories. Drawing heavily from theories of media archaeology and positioning students as detectives who define and then pursue their own questions about tools, this project immerses students in thinking about, around, and through the material goods of digital culture. It builds upon claims from other digital studies scholars that the field should do more to uncover and confront the social implications of the digital world.

In addition to analyzing what the learner knows and understands about a digital tool, it may be just as useful to consider what the learner does not yet know. Filipa Calado of the Graduate Center, CUNY, presents a refreshing look at digital tools for reading in “‘Imagining What We Don’t Know’: Technological Ignorance as Condition for Learning.” Examining both Voyant Tools and Women Writers Online, Calado delves into the ways that these tools force readers into unfamiliar ways of interacting with text. By working carefully with these tools, reader-users can step into new pedagogical and epistemological territory, regardless of whether or not they possess the technical acumen to control a tool’s source code. Her focus on the pleasure of discovery reminds us of the delight that can come from open exploration in the classroom.

We close the issue with “What Do You Do with 11,000 Blogs? Preserving, Archiving, and Maintaining UMW Blogs—A Case Study.” Angie Kemp of the University of Mary Washington, Lee Skallerup Bessette of Georgetown University, and Kris Shaffer of New Knowledge walk through the process of archiving ten years of activity on a large, university-based publishing platform. The piece demonstrates the range of knowledge, skills, and persistent community and scholarly engagement necessary to ethically and effectively manage an open system that operates at an enterprise scale. Collaboration and transparency are key, and this piece will benefit scholars at any institution who are grappling with how to honor, preserve, and protect the exponentially increasing amount of digital work our students and colleagues produce.

Knowledge construction should be a joyful process. The authors who have contributed to this issue have fully integrated that understanding into their approaches to scholarship, teaching, and preservation work through the use of digital technologies. Students and instructors alike stand to benefit from appreciating the joy that goes into learning, regardless of the choices of digital tools and strategies. It is our hope that this appreciation of the process of knowledge construction benefits the readers of this issue, as well. In the spirit of appreciating knowledge as that which is collaboratively built and generative, we hope that readers of JITP may be inspired to pursue new and innovative digital pedagogical approaches in their teaching and scholarship.

About the Editors

Lisa Brundage is Director of Teaching, Learning, and Technology at Macaulay Honors College at CUNY. She oversees Macaulay’s unique Instructional Technology Fellow program, which provides doctoral candidates with comprehensive training in the digital liberal arts and student-centered pedagogy methods, and pairs them with honors seminar faculty to implement digital projects in their classrooms. She holds a Ph.D. in English from the Graduate Center, CUNY, and is herself an alumna of the Instructional Technology Fellow program. She is chair of the CUNY IT Conference, is a member of the Interactive Technology and Pedagogy Certificate Program, and teaches Macaulay’s Springboard senior thesis course. She has recently published, with Emily Sherwood and Karen Gregory, in Bodies of Information: Intersectional Feminism and Digital Humanities, edited by Elizabeth Losh and Jacqueline Wernimont.

Teresa Ober is a doctoral candidate in Educational Psychology at the Graduate Center, CUNY. Teresa is interested in the role of executive functions in language and literacy. Her research has focused on the development of cognition and language skills, as well as how technologies, including digital games, can be used to improve learning.

Luke Waltzer is the Director of the Teaching and Learning Center at the Graduate Center, CUNY, where he supports graduate students and faculty in their teaching across the CUNY system, and works on a variety of pedagogical and digital projects. He previously was the founding director of the Center for Teaching and Learning at Baruch College. He holds a Ph.D. in History from the Graduate Center, serves as Director of Community Projects for the CUNY Academic Commons, is a faculty member in the Interactive Technology and Pedagogy Certificate and MA in Digital Humanities programs, and directs the development of Vocat, an open-source multimedia evaluation and assessment tool. He has contributed essays to Matthew K. Gold’s Debates in the Digital Humanities and, with Thomas Harbison, to Jack Dougherty and Kristen Nawrotzki’s Writing History in the Digital Age.

Vintage advertisement showing a typewriter and the words “a variety of work on the same machine.”

Table of Contents: Issue Fifteen

Introduction
Lisa Brundage, Teresa Ober, and Luke Waltzer

“So You Want to Build a Digital Archive?” A Dialogue on Critical Digital Humanities Graduate Pedagogy
Bibhushana Poudyal and Laura Gonzales

Confidence and Critical Thinking Are Differentially Affected by Content Intelligibility and Source Reliability: Implications for Game-Based Learning in Higher Education
Robert O. Duncan

Music Making in Scratch: High Floors, Low Ceilings, and Narrow Walls?
William Payne and S. Alex Ruthmann

Digital Game-Based Pedagogies: Developing Teaching Strategies for Game-Based Learning
Cristyne Hébert and Jennifer Jenson

Creating Dynamic Undergraduate Learning Laboratories through Collaboration Between Archives, Libraries, and Digital Humanities
Kent Gerber, Charlie Goldberg, and Diana L. Magnuson

Teaching with Objects: Individuating Media Archaeology in Digital Studies
Zach Whalen

“Imagining What We Don’t Know”: Technological Ignorance as Condition for Learning
Filipa Calado

What Do You Do with 11,000 Blogs? Preserving, Archiving, and Maintaining UMW Blogs—A Case Study
Angie Kemp, Lee Skallerup Bessette, and Kris Shaffer

Issue Fifteen Masthead

Issue Editors
Lisa Brundage
Teresa Ober
Luke Waltzer

Managing Editor
Patrick DeDauw

Copyeditors
Param Ajmera
Elizabeth Alsop
Patrick DeDauw
Jojo Karlin
Benjamin Miller
Angel David Nieves
Brandon Walsh
Nicole Zeftel

Staging Editors
Inés Vañó García
Lisa Brundage
Anne Donlon
Laura Wildemann Kane
Krystyna Michael
Benjamin Miller
Teresa Ober
Danica Savonick
sava saheli singh
Luke Waltzer

“So You Want to Build a Digital Archive?” A Dialogue on Critical Digital Humanities Graduate Pedagogy

Abstract

This article presents conversations between an Assistant Professor and a graduate student as they negotiate various methods and approaches to designing a digital archive. The authors describe their processes for deciding to develop a digital archive of street art in Kathmandu, Nepal, through an anticolonial, feminist perspective that highlights community knowledge-making practices while also leveraging the affordances of digital representation. Written in the style of a dialogue, this article illustrates the various tensions and negotiations that interdisciplinary student-instructor teams may encounter when deciding how to design a digital archive through critical frameworks. These challenges include making decisions about how to represent cultural practices and values in online spaces, negotiating technological and cultural literacies to make an accessible, usable archive, and putting together a team of researchers who are invested in a specific digital archiving project. The purpose of the article is to extend a conversation about possible approaches and challenges that new faculty and students may encounter when engaging in digital archiving work.

Image depicts the streets of Kathmandu, Nepal. People walk in various directions as birds fly overhead. Pigeons, strollers, and passersby surround Boudhanath Stupa in the morning.

Figure 1. Image of Kathmandu streets by Bibhushana Poudyal.

Bibhushana

I want to start with a particular incident to introduce and contextualize myself and my project, though this was not the only or the most significant thing to trigger my interests in my current work. It happened during my first semester in graduate school, in my second month of living in the U.S. I was waiting for the campus shuttle to get back to my apartment. A guy came up to me and started talking. After some casual exchanges, he asked,

“Where are you from?”

“Nepal,” I said.

“Where is that?” He asked.

I felt like he had to know Nepal without any further references. Then, I remembered that there are countries I don’t know about, too. Because “no one” talks about them. [The question here is also: who is/are “no one”?]

And so I said, “It’s in South Asia.”

“You mean the Philippines?” He asked.

“Isn’t that a different country? Maybe you should try to Google Nepal,” I told him.

At this point, I just wanted to be done with this conversation.

“Yeah, you are right. I will,” he said.

I smiled and turned my head to the street, continuing to wait for the bus. Then, something even more dreadful occurred to me. I considered what Google might say about Nepal, aside from providing some tourist guide kind of thing. Earthquakes? Floods? The Chhaupadi system? Discrimination against women? Some local rituals? And so on. Well, all of these statements are true. But is that all that’s true about Nepal? What about the other multiple narratives that are easily overshadowed by the dominant and much disseminated, algorithmic, exotic, and damaging narrative of Nepal? Don’t I, you, she, he, they, it, we, that, this also exist? I feared that this man from the bus stop might Google Nepal and start feeling sorry for me in a way I would never feel for myself.

I hastily turned towards the stranger and said, “Actually, I don’t recommend you Googling. Google doesn’t tell you much about the places you don’t know and want to know more about.” I knew he wouldn’t Google anyway. Perhaps he did not even remember my country’s name anymore. But from then on, I knew that I would never again say, “why don’t you Google Nepal?” to a stranger.

I always knew there was something “wrong” with Google. Consider, for example, Safiya Umoja Noble’s (2018) Algorithms of Oppression, where she discusses the multiple ways in which digital algorithms oppress non-Western communities and communities of color. The representation of Nepali culture in digital spaces started becoming a major concern for me after I moved to the U.S. I felt like postcolonialism and its debates (which I had studied in Nepal) started making much more sense after my move to a “new” or “foreign” country. In the U.S., people frequently conclude things about me based on my skin color and the way I speak English in an “un-English” way. Why would—or what makes—someone conclude things about me in an absolute manner before even knowing me? These questions and experiences, among many others, triggered my curiosity in and decision to make interventions in digital archiving. Specifically, I decided to create a digital archive of street photography in Kathmandu, Nepal (non-West) from the physical location of the U.S. (West). The purpose of this archive is to illustrate the many artistic expressions and ways of being that may not traditionally or inherently be used to describe my culture and my country in current digital spaces.

Through some preliminary research, I found some digital representations of Nepal that may be considered archives, even if they are not formally identified as such (see De Kosnik’s (2016) discussion of “metaphorical instances of ‘archives’ in the digital age”). What I found is that most of these digital representations of Nepal are created and maintained by non-Nepalis, most of whom are situated in Western contexts. For example, the Digital Archeology Foundation was founded after the 2015 earthquake in Nepal to collect data “for research, heritage preservation, heritage appreciation, reconstruction planning, educational programs and 3D replication to aid in rebuilding and restoration work.” Data collected through this site is “sent to the IDA in Oxford for referencing and preservation.” The site is privately funded by David Ways through his travel-guide project The Longest Way Home. Another example, Archive-It, hosts an archive of the “2015 Nepal Earthquake.” The purpose of the archive is to host “a collection of web resources related to the April 25th, 2015 earthquake and its after effects in Nepal. Contributors to this collection include Columbia University Library, Yale University Library, and the Bodleian Library,” none of whom are identified as being from Nepal.

Academic representations of Nepal in online spaces also portray essentializing notions of Nepali culture. For example, many U.S. universities have established websites for their South Asian Studies departments (see for example: https://www.southasia.upenn.edu/, http://southasia.rutgers.edu/, https://www.brown.edu/academics/south-asia/, and http://piirs.princeton.edu/sas). While much of the information on these sites is useful, all of these sites include photographs depicting Nepali culture (or South Asian cultures) as subjects of inquiry, with images of traditional Nepali religious rituals and ceremonies contrasted with the images of classroom discussion and intellectual engagement used to depict U.S. students in U.S. classrooms. In short, contemporary digital representations of Nepal remain largely what Said (1978) describes as a “great collective appropriation of one country by another,” leaving much room to expand and (re)build Nepali online identities (84).

Laura

During my graduate program at a large, public, Predominantly White Institution (PWI) in the Midwest, I had the opportunity to participate in and lead a few Digital Humanities projects. As a South American, White-presenting Latina immigrant in the U.S., it was my dream to apply my training in Digital Humanities in a context that would directly benefit (and stem from) minoritized communities. I dreamt of designing digital platforms that were not only designed “for” minoritized communities, but that were also co-designed with communities for our communities’ specific expertise, desires, and ideas. After graduating and coming to work at a university in the Southwest with an 80% Latinx student population, and in a graduate program that has the privilege of hosting a large international student population, predominantly from Southeast Asia, Africa, and Mexico, I was immediately motivated to start working with our brilliant students to not only critique the current normalized Western-dominant state of technology innovation in and beyond Digital Humanities, but to also build, along with students and communities, tools, technologies, and platforms that were designed by and for non-Western, non–English-dominant audiences.

It was in this context that I met Bibhushana, a brilliant PhD student who has interest and experience in Digital Humanities and who also entered our PhD program with extensive experience in critical theory (one of my favorite first memories of Bibhushana is from one of the first days I had her in class, when she told a story and casually mentioned that a year back she was “having lunch with Spivak and discussing critical theory”). In short, Bibhushana, like many of our international students pursuing graduate education in the U.S., has extensive training, experiences, and ideas that can and should inform U.S.-based institutions and disciplines and their orientations to Digital Humanities research. Yet, even in my short time at an institution that hosts a much more diverse student population than where I had my own graduate training, I note drastic discrepancies in how this innovative digital building work is supported when it stems from “non-traditional” students—students who are positioned by the institution to need “traditional” training in canonical disciplinary texts and practices (Sanchez-Martin et al. 2019). The flexibility, trust, and material support for innovative DH work is something that needs to be fostered and grown in the context in which Bibhushana and I met, and, we imagine, in other contexts hosting historically minoritized student populations.

As researchers with interdisciplinary research interests and from significantly different backgrounds, we are working to establish ethical protocols for building digital archives that are culturally sustaining rather than representative, and that intentionally avoid cultural essentialism. The stories and perspectives that we share in this dialogue illustrate some of the ways in which we are aiming to practice this type of ethical protocol both in the development of an archive and by reflecting on our participatory, community-driven methods. Drawing on Jacqueline Jones Royster and Gesa E. Kirsch’s (2012) notion of strategic contemplation, we write this piece in the style of a dialogue to search for, reflect upon, and make visible the ways in which feminist rhetorical practices and relationships influence this critical digital archiving project and Critical Digital Humanities projects more broadly. According to Royster and Kirsch (2012), strategic contemplation “allows scholars to observe and notice, to listen to and hear voices often neglected or silenced, and to notice more overtly their own responses to what they are seeing, reading, reflecting on, and encountering during their research processes” (86). In this way, using strategic contemplation through our dialogue structure helps us both reflect on and illustrate the importance of embracing a critical awareness during decision-making processes in DH work.

Bibhushana

October 5, 2017

Dear Dr. Gonzales,

I am Bibhushana, RWS doctoral candidate, 1st semester …. I just wanted to ask you ‘are you into DH?’ I was just searching for someone in UTEP to talk about it. It’s my newly found curiosity about which I don’t know much. Right after [attending a] DH seminar in Nepal, I was curious (and excited) to know about UTEP’s approach to ‘rhetoric and technology’. Only couple of days back, I got to know that you will be teaching the class. I am very much looking forward to be in your class next semester.

Sincerely,

Bibhu

From the time I got into my PhD program in the U.S., I started searching for people who are working or willing to work in DH. For me, it was difficult to even imagine having to spend my doctoral years without being engaged in conversation with DH scholarship, theories, and praxis. I am not implying that every researcher should do DH work, but I do think that every university space should have some established DH infrastructures. My email to Dr. Gonzales was driven by my desire to work in DH after learning about this field during my last two months in Nepal. During this time, I participated in the #DHNepal2017 Summer Institute led by Professor Scott Kleinman, director of the Center for Digital Humanities at California State University, Northridge. During this workshop, I became interested in creating DH projects through postcolonial and feminist lenses that connect to my own disciplinary background in critical theory. However, I also realized that in order to create such projects, I would need to be connected to other DH scholars and have access to DH infrastructures and resources after the conclusion of the workshop. Professor Arun Gupto, director of the Institute of Advanced Communication Education and Research (IACER) in Nepal, where the #DHNepal2017 workshop was hosted, insisted that despite our lack of formal DH infrastructures in Nepal, we should keep trying to establish DH projects and programs in collaboration with other scholars. Having more opportunities to engage with DH projects would allow students and scholars to establish a broader network and audience for the critical humanities work that is already taking place in Nepal and in South Asia more broadly. Thus, as I began my doctoral program in the U.S. with many questions, I continued seeking opportunities to bring together work in DH, literary theory, and critical cultural studies.

Laura

When I first received the email from Bibhu asking if I was “into” DH, I wasn’t sure how to respond. Sure, I had worked on DH projects myself, but did I really have the training and expertise to guide a graduate student into this field? What would this guidance require, and how should I prepare? And, perhaps more importantly, what resources could I really provide Bibhu, particularly given the fact that I was only in my second year as an Assistant Professor in my current institution, and that I hadn’t heard the term DH used on our campus? I was excited that Bibhu had interests in DH and that she would be in my “Rhetoric and Technology” course the following semester. In that course, I try to incorporate opportunities for students to define for themselves what the terms “Rhetoric and Technology” might mean in their careers, finding ways to combine our course readings with their own projects and interests. Although the “Rhetoric and Technology” course that is incorporated into our PhD program in Rhetoric and Writing Studies does not always cover Digital Humanities scholarship, my hope was that Bibhu would continue to develop her interests in DH and find ways to make connections in our course content. So, on the first day of class, I asked Bibhu (along with all my students) to describe their interests, and I encouraged them to continue pursuing this work throughout our class. Bibhu mentioned she was interested in DH and was thinking about building a “non-representational Nepali digital archive.” I was immediately intrigued, and I suggested that she might look into building this type of archive as her final course project. Our first class took place on January 17, 2018.

Bibhushana

January 19, 2018

Hi Dr. Gonzales,

It’s Bibhushana.

I wanted to tell you that I liked … your idea [from class] about creating a database from South Asia. I talked to my professor Arun Gupto about it and he too liked the idea. We discussed about the ways I can use it in my other research projects as well. But the problem is I have absolutely no idea about creating a database. I am excited about this project because it is a kind of initiation towards working with technology the way I have never done before. Even if it’s very intimidating to start from the scratch, I am looking forward to it. So, I will be needing your guidance from the very first step. It would be wonderful if you could tell me how and where do I start. What should be my first step? I know it’s going to be very challenging and I might tire you with my questions too. 🙂

Regards,

Bibhu

Laura

Shortly after I received the message from Bibhu expressing her continued interest in building her digital archive (i.e., database), I decided to try to connect her with our university library resources to provide some background in the “technicalities” training that Bibhu mentions above. I know that Bibhu has extensive training in postcolonial and decolonial scholarship, and that she is incredibly qualified to build the archive she wants to build. However, I also recognize and understand her hesitance toward identifying as a “tech-savvy” DH scholar who can build an archive from scratch. Further, I recognize Gabriela Raquel Ríos’s important clarification that terms like “colonial” and “decolonial,” while now deployed frequently “in recent scholarship on the rise of new media and digital humanities,” should not be used metaphorically without considering “the relationship between colonization and indigeneity (broadly) that the trope evokes” (Cobos et al. 2018, 146). As Ríos explains in Cobos et al. (2018), “for scholars of indigenous rhetoric, the trope of colonization matters differently … than it might for other scholars, and it probably matters differently for students and faculty who are marked as indigenous or who identify as indigenous as well” (146). Thus, to say that we want to build Bibhu’s archive through anticolonial frameworks means that we have to have the resources, awareness, skills, and community commitments and connections to do so in ethical, participatory ways.

Although our immediate university resources at the time were not able to provide much training in programming and digital archive design, what Bibhu and I learned through our early conversations is that the core of developing this archive lies in the willingness to try something new, and in the understanding that despite what may be perceived as a “lack” of “tech-savvy” knowledge, students like Bibhushana have the critical and cultural knowledge and experiences to help digital archivists rethink their approaches to cultural (re)presentation in online spaces. What Bibhushana had, even in our early conversations, was a willingness to engage and experiment with various interfaces and to fail and try again when certain digital elements did not work as she had initially hoped. She also has a clear understanding of and commitment to avoiding fetishization and false claims of representation in her work. Thus, as we began exploring platforms and resources, Bibhushana and I were able to also continue expanding our theoretical, practical, and disciplinary frameworks for approaching this archiving project.

Bibhushana and Laura Discuss Critical DH Methodologies

As we began the process of conceptualizing and collaborating on Bibhushana’s archive, we found it important to also read and write together about the specific epistemological groundings that this project would entail. For example, to work toward decolonizing knowledge production and representation in Digital Humanities research and pedagogy, we embraced a shift from Digital Humanities (DH) to Critical Digital Humanities (CDH). Arguing that the Digital Humanities, as evidenced in its “digital humanities associations, conferences, journals, and projects,” lacks cultural critique, Alan Liu (2012) writes,

While digital humanists develop tools, data, and metadata critically (e.g., debating the “ordered hierarchy of content objects” principle; disputing whether computation is best used for truth finding or, as Lisa Samuels and Jerome McGann put it, “deformance”; and so on), rarely do they extend their critique to the full register of society, economics, politics, or culture. (n.p.)

In building our archive, we want to remain mindful of the ongoing, continuous relationship between critiquing and building, embracing the value of strategic contemplation while also remaining grounded in the everyday tasks and skills that DH projects require. To frame the beginning of our project, we took up Liu’s question, “How [do] the digital humanities advance, channel, or resist today’s great postindustrial, neoliberal, corporate, and global flows of information-cum-capital?” and asked how we might build an archive that tells many stories, from multiple perspectives, without claiming to represent a perceivably homogenous country, culture, and community.

To work toward these aims, we looked to examples of existing critical digital archives and decolonial DH projects, including: Slave Voyages (Emory Center for Digital Scholarship), which maps the “dispersal of enslaved Africans across the Atlantic world”; Torn Apart/Separados, “a deep and radically new look at the territory and infrastructure of ICE’s financial regime in the USA” that seeks to peel “back layers of culpability behind the humanitarian crisis of 2018”; Mapping Inequality, which offers “an extraordinary view of the contours of wealth and racial inequality in Depression-era American cities and insights into discriminatory policies and practices that so profoundly shaped cities that we feel their legacy to this day”; and SAADA: South Asian American Digital Archive, which “digitally documents, preserves, and shares stories of South Asian Americans.” In addition to these projects, we are inspired by feminist digital archives such as Rise Up!, a digital archive of “feminist activism in Canada from the 1970s to the 1990s”; and Digital Feminist Archives, which offers “a snapshot of feminist history in the 1960s and 1970s.” Together, these projects, along with others listed in a Women’s Studies online database of the University of Michigan Library, provide us with useful models and inspiration for developing a digital archive of Nepal street photography that is both feminist and decolonial in its orientation to and claims about cultural (re)presentation.

Bibhushana

Through my decision to build a digital archive in the U.S., I thread my own experiences of gender discrimination in Nepal with racial and gender relations in the U.S. Engaging with “questions at the intersections of theory and praxis as we consider how tools can be theorized, hacked, and used in service of decolonization” (Risam and Cardenas 2015), my goal is to problematize “imperialist archives that establish Western tradition by collecting and preserving artifacts from othered traditions” (Cushman 2013, 118). As Miriam Posner (2016) argues, CDH (and, we argue, decolonial digital archiving specifically) is “not only about shifting the focus of projects so that they feature marginalized communities more prominently; it is about ripping apart and rebuilding the machinery of the archive and database so that it does not reproduce the logic that got us here in the first place.” This is no easy task.

As alluring as the idea of building a digital archive might sound, actually building one is even more challenging. Digital archiving is collaborative work, and this kind of project needs a team, which I was still in search of at my new institution. In addition, I had another conceptual dilemma. As Kurtz (2006) explains in his discussion of the relationship between postcolonialism and archiving, archiving “is a literal re-centering of material for the construction and contestation of knowledge, whereas postcolonialism often works toward a figurative decentering of that same material” (25). With digital archiving, this contradiction between postcolonialism and archiving takes on another dimension. As challenging as it is, my aim in this project is not (only) to build a digital archive, but to document the journey itself, acknowledging my own positionality in this process. It is important not only to talk about what should be done to decolonize digital archives, but also to document and tell the stories of what happens when one undertakes this journey. My confusion and lack of tech-savviness is not only my personal story, but it is also a story that reveals a lot about resources within and beyond academia and many other socio-cultural-economic ecologies, particularly those situated in non-Western contexts.

Currently, I am building a prototype of the digital archive, which can be found at http://cassacda.com/. I named the project Rethinking South Asia via Critical Digital Archiving: Political, Ethical, Philosophical, and Aesthetic Journeys to emphasize the necessity of studying and building digital archives through critical lenses that help me relentlessly dig out and exhibit the complexities involved in the performance of archiving. If the goal is to decolonize and depatriarchalize digital archives and/in DH, then the purpose of my digital archive is to demonstrate such complexities and to show that there is no way one can represent any phenomenon, country, or culture in an ethical way.

In order to work toward building this archive through anticolonial and feminist frameworks, I designed my site with an emphasis on collaboratively selecting and showcasing visuals and metadata. My goal is to avoid any insinuation of a singular, homogeneous representation of my home country and its various communities. Based on feedback from an initial IRB-approved online usability study that I conducted with participants in Nepal, I plan to insert my audiences’ experiences of and responses to the photographs in my archive as metadata. In this way, I seek to tell multiple narratives through various layers of the archive, remaining only one of many authors and designers on this project. Currently, I include both Nepali and English in the archive, and hope to extend to include other languages through further usability testing and collaboration.
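
To make this layering concrete, here is a minimal sketch, in Python, of how audience responses could sit alongside descriptive metadata for a single photograph. Every field name, filename, and participant ID below is hypothetical; this illustrates the idea of multi-voiced metadata, not the archive’s actual schema.

```python
# A minimal sketch of the layered-metadata idea, assuming a simple
# record structure. Field names, the filename, and participant IDs are
# hypothetical, not the archive's actual schema.
photograph = {
    "file": "boudhanath_morning.jpg",  # hypothetical filename
    "description": {
        "en": "Passersby surround Boudhanath Stupa in the morning.",
        "ne": "",  # Nepali description to be supplied by collaborators
    },
    # Responses accumulate here, so the record carries many narratives
    # rather than a single authorial caption.
    "audience_responses": [],
}

def add_response(record, participant_id, language, text):
    """Layer one audience response into the photograph's metadata."""
    record["audience_responses"].append(
        {"participant": participant_id, "language": language, "text": text}
    )

# Example response, paraphrasing a comment from the usability study.
add_response(photograph, "P01", "en",
             "The unnoticeable everyday life of local people in Nepal.")
```

Because responses are stored as a list, the archive can display several narratives for the same image, in multiple languages, without ranking any one of them as authoritative.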

Rather than categorizing collections in the archive through conventional tropes of religion, rituals, and landscapes, I continuously change the photographs on the landing page of the archive, all of which depict Nepali community members partaking in everyday tasks like strolling down the streets, making tea in roadside stalls, and riding motorcycles. Figure 2 is a recent screenshot of the archive’s landing page, in which four images showcase community members engaging in everyday activities in the streets of Kathmandu, Nepal.

Figure 2: The image depicts the landing page of our digital archive. Four pictures of the streets in Kathmandu are positioned in block format and labeled “Boudhanath Stupa” and “Streets in Thamel, Asan, & Indrachowk.”

Figure 2. Archive landing page by Bibhushana Poudyal.

Through a feminist perspective, I also seek to counter conventional notions of Nepali women in my archive, specifically by showcasing images of women in their everyday lives perceivably defying traditional narratives and representations. Figures 3, 4, and 5, for example, portray women of various ages riding in cars and motorcycles, changing a flat tire, shopping, making tea, and going to school.

Figure 3: Two side-by-side images, one depicting a woman in the back of a car holding a baby and waving; the other depicting two young girls with braids walking with backpacks on.

Figure 3. Archive landing page by Bibhushana Poudyal.

Figure 4: Two side-by-side images, one depicting a woman riding a motorcycle down the street and another depicting a person changing the tire of a mini-bus.

Figure 4. Archive landing page by Bibhushana Poudyal.

Figure 5: Two side-by-side images depict women at an early morning coffee stall at Asan market and walking through its residential square.

Figure 5. Archive landing page by Bibhushana Poudyal.

Images such as those portrayed in Figures 3, 4, and 5 are prevalent throughout my archive, and further illustrate the ways in which I seek to rebuild and reposition common portrayals of Nepal in online spaces. My goal is to shift my potential audiences’ and my own expectations and wishes to see Nepal portrayed in certain ways. One of the hardest dilemmas I face in building this archive is the challenge of both weaving and representing visual stories of Nepal through my images without narrowing or limiting representations in one way or another. To deal with that to a certain extent, the images on the landing page keep changing so that the archive does not stick to one (or my) way of de/re/presenting Nepal and Nepalis. As I continue uploading thousands of images, my so-called authorship will be challenged in a subtle and necessary way as I continue building the archive through feedback from various audiences.
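
The rotation itself can be implemented simply. Omeka themes are typically customized in PHP, so the following Python sketch is only conceptual: on each visit, the landing page draws a fresh sample from a pool of images, so that no fixed selection becomes the archive’s canonical face. All filenames are hypothetical.

```python
import random

# Conceptual sketch: draw a different sample of landing-page images on
# each request so no fixed set becomes the "official" view of Kathmandu.
# Filenames are hypothetical placeholders.
landing_pool = [
    "boudhanath_morning.jpg",
    "thamel_tea_stall.jpg",
    "asan_flat_tire.jpg",
    "indrachowk_schoolgirls.jpg",
    "motorcycle_commute.jpg",
]

def landing_images(n=4):
    """Return n images for the landing page, varying between visits."""
    return random.sample(landing_pool, k=min(n, len(landing_pool)))
```

Any comparable mechanism in the production theme would serve the same purpose: the point is that the selection logic, rather than an editor’s fixed choice, decides what a visitor sees first.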

Despite my attempts to unpack and decenter a singular perspective of culture, there is always a problem with the concept of representation, particularly in archiving. Archiving is always situated. There is always what Mignolo (2003) defines as a “locus of enunciation” (5). The locus of enunciation, according to Mignolo (2002), references “the geopolitics of knowledge and the colonial difference” in the push for representation, which is never neutral (61). My goal through this project thus is to demonstrate that locus of enunciation and problematize the assurance of representation through depictions that may be deemed unconventional or unusual or that counter established assumptions about a specific group of people.

Currently, the archive hosts several photography collections that illustrate the streets of Kathmandu, like those presented in Figures 2, 3, 4, and 5. My overarching definition of critical digital archiving appears on the landing page of the archive, stating from the outset that the purpose of the site is not to represent, but rather to present possibilities for questioning the whole idea of representation through the archiving of my own street photography. On the archive’s About page, I offer guiding questions and exigencies for the project, which include the goal of “building a depatriarchal-decolonial digital archive … in a non-representational manner.”

After conducting a landscape analysis of popular blogging and content-management platforms (like Squarespace, WordPress, Wix, Weebly, and Drupal) and noting and comparing the affordances and constraints of each, I decided to build the archive using Omeka. Omeka is a “web publishing platform and a content management system (CMS), developed by the Center for History and New Media (CHNM) at George Mason University,” that was “developed specifically for scholarly content, with particular emphasis on digital collections and exhibits” (Bushong and King 2013). Because Omeka is a CMS designed for projects like my archive, I did not have to worry about my “lack” of coding literacy to start building an archive of Nepali street photography. At the same time, however, Omeka does not have abundant online tutorials available, so it took a long time to figure out how to create items, collections, and exhibitions in this platform and to change themes and insert plugins. Further, working on this project in an institution that does not have formal infrastructures to support digital research emerging from the humanities made it more difficult and isolating to undergo the process of learning Omeka’s features and possibilities. Besides extensive metadata support (via the fifteen-element Dublin Core set), Omeka has plugins like Neatline that allow users to weave narratives with maps and timelines and interact with different elements within the archive. My goal after setting up this initial prototype is to continue working with plugins like Neatline as I also continue having conversations and collaborating with various stakeholders who can contribute to the dynamic nature of this project.
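
For readers curious about what this platform work can look like in practice, below is a minimal sketch, in Python, of creating an item through Omeka Classic’s REST API. It assumes a site with the API enabled and an API key issued to the archivist; the endpoint URL is a placeholder, and the Dublin Core element IDs vary by installation and should be checked against the site’s /api/elements resource before use.

```python
import requests

# Minimal sketch, assuming an Omeka Classic site with its REST API
# enabled and an API key issued to the archivist. The endpoint and the
# element IDs below are placeholders: verify the Dublin Core element
# IDs against your own site's /api/elements before relying on them.
OMEKA_ITEMS_ENDPOINT = "https://example-archive.org/api/items"
API_KEY = "YOUR_API_KEY"

TITLE_ELEMENT_ID = 50        # Dublin Core "Title" in a default install
DESCRIPTION_ELEMENT_ID = 41  # Dublin Core "Description" in a default install

new_item = {
    "public": True,
    "element_texts": [
        {"element": {"id": TITLE_ELEMENT_ID}, "html": False,
         "text": "Morning at Boudhanath Stupa"},
        {"element": {"id": DESCRIPTION_ELEMENT_ID}, "html": False,
         "text": "Street photograph from Kathmandu, with audience "
                 "responses layered in as additional metadata."},
    ],
}

resp = requests.post(OMEKA_ITEMS_ENDPOINT,
                     params={"key": API_KEY}, json=new_item)
resp.raise_for_status()
print("Created Omeka item:", resp.json()["id"])
```

Batch-uploading photographs with bilingual metadata then becomes a loop over records like the one sketched above, which is one reason a CMS with an API can lower the coding-literacy barrier that the paragraph above describes.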

Laura

In addition to setting up the initial infrastructure, we are also in the process of conducting participatory design and usability studies with several stakeholders who may be interested and invested in the project. As I agreed to continue working with Bibhu on this project as her dissertation advisor, I also made the decision to leave my current institution in the upcoming year. Together, Bibhu and I realized that we needed to find more resources if we were really going to have the time and space to devote to this project cross-institutionally during Bibhu’s time as a PhD student. Thus, as Bibhu was setting up the infrastructure of her project, I started seeking grant opportunities that could help us expand on her work. It was at this point that I found a grant opportunity that allowed us to work with a team of design and user-experience researchers to develop future plans for this project. Most importantly, this grant will allow Bibhu and me to travel to Nepal in the summer of 2019 to conduct design, usability, and user-experience testing with community members in Kathmandu, Nepal, allowing us to incorporate critical perspectives from Nepali communities as we continue building this project from the U.S. Through a participatory research framework (Rose and Cardinal 2018; Simmons and Grabill 2007), user testing in Nepal will allow us to get on-the-ground perspectives from Nepali communities about the things that they value in online representations of their home country. Furthermore, conducting in-person usability tests and participatory interviews with participants in Nepal will allow us to establish a team and a network for updating and maintaining future iterations of the archive.

Bibhushana

During our trip to Nepal, we will share prototypes of the archive with Nepali students, professors, and community members, as well as with other (non-Nepali) individuals who want to know more about Nepal. We are hoping that these user tests will help us make careful and responsible decisions regarding photographs and the nature of metadata. In a previous stage of the project, I conducted an online usability study that asked participants in Nepal to visit my archive and answer questions regarding the structure and usability of the site in its current stage. Questions included in this study helped me make decisions about the header text and landing page of the archive, where participants commented that the archive allowed audiences to see “the unnoticeable everyday life of local people in Nepal.” The full list of survey questions can be accessed here.

Although the online survey allowed me to gain some insights into Nepali community members’ perceptions of my archive, I was only able to get 16 responses to my study, despite my many efforts to disseminate my online survey through various platforms. This limited response echoes discrepancies in digital access that are common in my home country, as well as the hesitation of community members who may not feel comfortable sharing their perspectives through online mediums that have historically fetishized and misrepresented non-Western contexts. For this reason, visiting Nepal in person to share the archive and conduct further testing will allow us to gain more responses that can continue shaping the direction of this project.

Through further in-person usability tests, we also hope to see if and how the digital archive is producing and/or reproducing traditional (i.e., colonial) representations of Nepali practices, and if/how the archive encourages further imagining of the multiple narratives embodied in places and people across cultures. Although we may not have all the necessary material and physical resources to consult with at our current institution, by increasing the visibility of this project in its prototype stages, and by working with participants in Nepal to continue designing and testing the archive, we hope to build and connect with networks of DH scholars beyond our local context who have experience designing archives and who may be interested in contributing to the development of this project. As this will be my first trip back to Nepal since coming to study in the U.S., I hope that this will be an opportunity to share my work with scholars and researchers in Nepal who are working in the area of Nepali Studies and South Asian Studies, continuing to develop frameworks for participatory, cross-institutional DH research.

Conclusion

Fostering the space to innovate digital archiving practices is a collaborative effort that requires individual researchers and institutions to move beyond binaries and disciplinary boundaries, engaging in a paradigmatic shift necessary to decolonize knowledge and its production and dissemination. As Jamie “Skye” Bianco (2012) explains, “This is not a moment to abdicate the political, social, cultural, and philosophical, but rather one for an open discussion of their inclusion in the ethology and methods of the digital humanities” (102). As Bianco (2012) continues, “we [in the Digital Humanities] are not required to choose between the philosophical, critical, cultural, and computational; we are required to integrate and to experiment,” particularly through ethical frameworks that value and centralize community knowledge (101).

Our project engages in rigorous conversations and questions that have been central to the work of the humanities, particularly in relation to cultural criticism, capitalism, and digital making. Critical digital archiving, and the process of engaging in CDH work more broadly, does not provide any static formula to decolonize or depatriarchalize digital archives, and neither do we. Instead, developing ethical protocols for creating DH projects, at least as we document in this article, requires researchers and student-teacher teams to explore multiple methods that purposely work against fetishization and essentialism through collaboration and participatory research.

By presenting a dialogue and preliminary plan for creating an anticolonial digital archive of Nepali street photography, we hope to engage in further conversations about the non-hierarchical interdisciplinary methodologies, inquiries, concerns, theories, and praxes that can be incorporated into CDH research. Through our collective conversations, we hope to further illustrate how issues of access, innovation, and cultural training intersect in the design and dissemination of contemporary digital archives and archiving practices, and how collaboration and participatory research, which have always been at the heart of DH, can also be critical components of building CDH infrastructures in perceivably “non-traditional” spaces. We hope that other teacher-researcher DH teams can thus learn from and build on our stories.

Bibliography

Bianco, Jamie “Skye.” 2012. “This Digital Humanities Which is Not One.” In Debates in the Digital Humanities, edited by Matthew K. Gold, 96–112. Minneapolis: University of Minnesota Press.

Bushong, Anthony, and David King. 2013. “Intro to Digital Humanities.” UCLA Center for Digital Humanities.
http://dh101.humanities.ucla.edu/?page_id=104.

Capozzi, Mae. 2014. “A Postcolonial ‘Distant Reading’ of Eighteenth and Nineteenth Century Anglophone Literature.” Postcolonial Digital Humanities. http://dhpoco.org/blog/2014/04/08/a-postcolonial-distant-reading-of-eighteenth-and-nineteenth-century-anglophone-literature/.

Critical Digital Archiving. http://cassacda.com/.

Cobos, Cassie, Gabriela Raquel Ríos, Donnie Johnson Sackey, Jennifer Sano-Franchini, and Angela M. Haas. 2018. “Interfacing Cultural Rhetorics: A History and a Call.” Rhetoric Review 37, no. 2: 139–154.

Cushman, Ellen. 2013. “Wampum, Sequoyan, and Story: Decolonizing the Digital Archive.” College English 76, no. 2: 115–135.

DeVoss, Dànielle, Angela Haas, and Jacqueline Rhodes. 2019. “Introduction by the guest editors.” TechnoFeminism: (Re)Generations and Intersectional Futures.
http://cconlinejournal.org/techfem_si/00_Editors/.

Kurtz, Matthew. 2006. “A Postcolonial Archive? On the Paradox of Practice in a Northwest Alaska Project.” Archivaria 61.
https://archivaria.ca/index.php/archivaria/article/view/12535.

Decolonising Archives. 2016. L’Internationale.
https://www.internationaleonline.org/media/files/decolonisingarchives_pdf_def_02.pdf.

De Kosnik, A. 2016. Rogue Archives: Digital Cultural Memory and Media Fandom. Cambridge: The MIT Press.

Digital Feminist Archives. Barnard Center for Research on Women. http://bcrw.barnard.edu/digital-feminist-archives/.

Liu, Alan. 2012. “Where Is Cultural Criticism in the Digital Humanities?” In Debates in the Digital Humanities, edited by Matthew K. Gold. Minneapolis: University of Minnesota Press. http://dhdebates.gc.cuny.edu/debates/text/20.

Mapping Inequality: Redlining in New Deal America.
https://dsl.richmond.edu/panorama/redlining/#loc=4/36.71/-96.93&opacity=0.8.

Mignolo, Walter D. 2003. The Darker Side of the Renaissance: Literacy, Territoriality, and Colonization. Ann Arbor: University of Michigan Press.

—. 2002. “The Geopolitics of Knowledge and the Colonial Difference.” The South Atlantic Quarterly 101, no. 1: 57–96.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Posner, Miriam. 2016. “What’s Next: The Radical Unrealised Potential of Digital Humanities.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press.
http://dhdebates.gc.cuny.edu/debates/text/54.

Risam, Roopika and micha cardenas. 2015. “De/Postcolonial Digital Humanities.” http://dhtraining.org/hilt/course/depostcolonial-digital-humanities/.

Risam, Roopika. 2018. New Digital Worlds: Postcolonial Digital Humanities in Theory, Praxis, and Pedagogy. Evanston: Northwestern University Press.

Rise Up! https://riseupfeministarchive.ca/.

Rose, Emma and Alison Cardinal. 2018. “Participatory Video Methods in UX: Sharing Power with Users to Gain Insights into Everyday Life.” Communication Design Quarterly 6, no. 2: 9–20.

Royster, Jacqueline Jones and Gesa E. Kirsch. 2012. Feminist Rhetorical Practices: New Horizons for Rhetoric, Composition, and Literacy Studies. Carbondale: Southern Illinois University Press.

SAADA: South Asian American Digital Archive. https://www.saada.org/.

Said, Edward. 1978. Orientalism. London: Penguin Books.

Sanchez-Martin, Cristina, Lavinia Hirsu, Laura Gonzales, and Sara P. Alvarez. 2019. “Pedagogies of Digital Composing through a Translingual Approach.” Computers and Composition: 1–18.

Sayers, Jentery. 2016. “Dropping the Digital.” In Debates in the Digital Humanities, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press. http://dhdebates.gc.cuny.edu/debates/text/88.

Simmons, W. Michele, and Jeffrey T. Grabill. 2007. “Toward a Civic Rhetoric for Technologically and Scientifically Complex Places: Invention, Performance, and Participation.” College Composition and Communication: 419–448.

Slave Voyages. Emory Center for Digital Scholarship. https://www.slavevoyages.org/.

Torn Apart/Separados. http://xpmethod.plaintext.in/torn-apart/volume/2/index.

“Women’s Studies.” The University of Michigan Library. https://guides.lib.umich.edu/c.php?g=282777&p=1884212.

About the Authors

Bibhushana Poudyal is currently a doctoral student and Assistant Instructor in the Rhetoric and Writing Studies program at The University of Texas at El Paso (UTEP). Her research areas are rethinking South Asia, depatriarchal/feminist and de/post/anticolonial critical digital archiving, Critical Digital Humanities, and Digital Humanities in transnational contexts. She is also a researcher and Honorary Overseas Digital Humanities Consultant at two South Asian research centers, CASSA (Center for Advanced Studies in South Asia) and SAFAR (South Asian Foundation for Academic Research).

Laura Gonzales studies and practices multilingual, community-driven technology design. She is the author of Sites of Translation: What Multilinguals Can Teach Us About Digital Writing and Rhetoric (University of Michigan Press 2018), which was awarded the 2016 Sweetland/University of Michigan Press Digital Rhetoric Collaborative Book Prize. In the Fall of 2019, Laura will be an Assistant Professor of Digital Writing and Cultural Rhetorics in the English Department at the University of Florida. You can learn more about her work at www.gonzlaur.com.

Confidence and Critical Thinking Are Differentially Affected by Content Intelligibility and Source Reliability: Implications for Game-based Learning in Higher Education

Abstract

Game-based learning can foster critical thinking in undergraduate students. However, less is known about how cognitive and metacognitive factors interact to support critical thinking in games. The intelligibility of information and the reliability of its source were manipulated to affect performance and confidence in a simple critical thinking game. Undergraduates (N=864) were presented with questions in a four-alternative forced-choice paradigm.

Choices were accompanied by icons representing four different sources. In the reliable-source condition, an icon representing peer-reviewed sources was paired with the correct answer 50% of the time. In the unreliable-source condition, icons were randomly paired with answers. In Experiment 2, additional assessments of confidence were obtained. In Experiment 3, the perceptual intelligibility of the content was manipulated. Participants in the intelligible-content condition were presented with high-contrast text, and participants in the unintelligible-content condition were presented with masked, low-contrast text. Participants in the reliable-source condition did not perform better than those in the unreliable-source condition, despite the hints provided by the iconography. Curiously, participants did not use the source of the information to guide decisions.

Performance for the unintelligible-content condition was better than for the intelligible-content condition. Unintelligible content may have prompted closer inspection, resulting in improved performance. A double dissociation between confidence and performance implies two cognitive systems: (1) an intuitive system providing higher confidence but poorer performance; and (2) a deliberate system providing better performance but lower confidence. Content intelligibility and source reliability should be considered in game-based learning because they differentially affect cognitive and metacognitive influences on decision making.

During the last few decades, technology has radically altered the distribution and assimilation of knowledge (Jenkins 2008; Tyner 1994). Different media channels offer different opportunities and challenges for learning (Norman 1988). A migration to Internet-based platforms poses problems for educators and students. There is an overwhelming amount of content available to learners, but the reliability of Internet sources is highly variable. Without proper curation, the burden of determining the reliability of Internet sources is placed upon the student. The lack of editorial review results in content that appears reliable to the student but might be false, unsubstantiated, or mere opinion. The source a student uses may depend more on accessibility, intelligibility, and cognitive load than on reliability (Ginns and Leppink 2019). The availability of pithy Internet sources might draw students away from more reliable primary sources that are difficult to access or understand. Research is needed to quantify the relationship between the accessibility and intelligibility of source materials, the decision to use these materials, and performance measures of learning outcomes. Understanding the relationship between cognitive and metacognitive processes in critical thinking will inform educators on how to guide undergraduates who are sifting through a variety of information sources.

Several metacognitive factors are known to influence critical thinking. In Tversky and Kahneman’s (1973) seminal work, individuals overestimated the probable outcome of unlikely events if the outcome was highly desired (e.g., the state lottery). It is now widely recognized that two systems contribute to decision making (Stanovich and West 2000). System 1 automatically processes incoming information without placing too large a demand on attention. System 2 requires effort, places significant demands on attention, and typically serves deliberated decisions. Individuals are more likely to provide an incorrect answer that is immediately supplied by System 1 than to spend the effort to arrive at the correct solution provided by System 2. Complex mental calculations like 6543 × 438 are more likely to involve System 2, which is typically engaged when System 1 fails to produce an automatic solution to a problem. However, a classic cognitive reflection test (CRT) demonstrates that System 1 may still influence judgments when System 2 is engaged. Take the following for example: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? _____ cents.” The correct answer is 5 cents, but people usually rush to the more intuitive answer of 10 cents. Even when participants provide correct answers, evidence suggests the intuitive response was considered first (Frederick 2005).
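The arithmetic behind the bat-and-ball item is worth spelling out, since the correct answer is fully determined by the two constraints in the problem; the worked solution below is implied by the problem statement rather than taken from the original article.

```latex
% Let x be the price of the ball in dollars; the bat then costs x + 1.00.
\begin{align*}
  x + (x + 1.00) &= 1.10 \\
  2x &= 0.10 \\
  x &= 0.05 \quad \text{(5 cents)}
\end{align*}
% The intuitive answer fails the total-cost constraint:
% 0.10 + (0.10 + 1.00) = 1.20 \neq 1.10.
```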

The relationship between cognitive Systems 1 and 2 may be particularly relevant for undergraduate students who are challenged with difficult texts. In recent years, STEM majors have increasingly been asked to read the primary scientific literature, either for class or to support undergraduate research. In addition to the facts that find their way into textbooks, students need to be acculturated to scientific thinking and the iterative nature of research. One of the reasons these texts are challenging is that they contain domain-specific vocabulary and jargon. Providing annotations or paraphrasing journal articles are two scaffolding methods that have helped students understand scientific literature (Falk et al. 2008; Kararo and McCartney 2019). Nevertheless, there is much to learn about how Systems 1 and 2 are engaged when reading difficult texts. Understanding how these cognitive and metacognitive factors influence the reading of difficult texts might be useful to educators.

Fluency is the degree to which information can be processed automatically. Fluently processed stimuli are presumably processed without effort. Conversely, disfluently processed stimuli require more effort to process. Processing fluency has been shown to influence judgments in various modalities, including semantic priming (e.g., Begg, Anas, and Farinacci 1992), visual clarity (e.g., Reber and Schwarz 1999), and phonological priming (e.g., McGlone and Tofighbakhsh 2000). The most researched variety of fluency, physical perceptual fluency, varies the saliency or discriminability of the target, including manipulating the legibility of font (Alter and Oppenheimer 2009). In these experiments, intelligible stimuli are judged to be more truthful (Schwarz 2004) and more likeable (Zajonc 1968) than less legible stimuli, and people are more confident about choices under intelligible conditions (Kelley and Lindsay 1993; Koriat 1993).

Recent studies suggest the effects of fluency on decision making are not entirely clear. Besken and Mulligan (2014) presented participants with intact or degraded auditory stimuli. Even though participants perceived their ability to be worse in disfluent conditions, performance was paradoxically better than in fluent conditions. Backward masking studies have also provided evidence of a double dissociation between fluency and performance (Hirshman, Trembath, and Mulligan 1994; Mulligan and Lozito 2004; Nairne 1988). Another group of studies found enhanced recall for words that were made disfluent by transposing or removing letters (Begg et al. 1991; Kinoshita 1989; Mulligan 2002; Nairne and Widner 1988). In these cases, disfluency may function as a signal that motivates the perceiver to further scrutinize the target, which may lead to improved performance (Alter et al. 2007).

Game-based learning systems have been defined as tabletop or digital ecosystems that respond to player choices in a manner that shapes behavior or fosters learning outcomes toward an explicit goal (All et al. 2015, 2016). Recently, game-based learning systems are being designed to foster critical thinking. For example, “Mission US” guides students through critical epochs in American history while teaching evidence-based reasoning (Mission US 2019). However, there are very few studies on either critical thinking in games or how fluency affects critical thinking in game-based learning systems (e.g., Cicchino 2015; Eseryel et al. 2014).

Understanding the relationship between fluency and critical thinking should inform educators, game designers, and educational technologists about how to design games for critical thinking. According to a naïve theory of fluency, intelligible and reliable sources of information should be preferred by students, judged as more useful, and result in improved performance on assessments of critical thinking. Theories that postulate a double dissociation between fluency and performance make different predictions. Specifically, relative to fluent conditions, unintelligible information or unreliable sources should result in decreased confidence and increased performance. To determine which of these theories predicts performance in game-based learning systems, the intelligibility of information and the reliability of its source were manipulated in a simple critical thinking game. This game-like experience comprised elements one might find in a critical thinking game. However, the experiment required a stripped-down, reductionist model of a game to avoid confounding variables. Self-reports of confidence were compared to performance, and a double dissociation between the two measures was predicted. When either the information presented in the game was unintelligible or its source was unreliable, confidence was predicted to be lower and performance to improve. When the information was intelligible or its source was reliable, confidence was expected to be higher and performance to decline.

Experiment 1

The purpose of Experiment 1 was to determine if participants could make an association between the reported reliability of a source and its accuracy. For the purposes of this experiment, source reliability is defined as an observer’s perception of the probability that the source is accurate. In classic definitions, source reliability refers to the actual probability that the source is accurate (e.g., Bovens and Hartmann 2003). Kahneman and Tversky would describe both of these situations as decision making under uncertainty, where probability is never truly known by the observer. However, in this experiment, participants were informed about the relative accuracy associated with each source. Thus, source reliability in this experiment can be considered equivalent to decision making under conditions of relative certainty. In the reliable-source condition, correct answers were more frequently associated with peer-reviewed sources. According to naïve models of fluency, participants were expected to use the source of information as a cue for the correct answer. In the unreliable-source condition, correct answers were pseudorandomly associated with each type of source 25% of the time. Performance in this condition was expected to be relatively worse because the cues did not reliably predict the correct answer.

Methods

Participants

York College is one of eleven senior colleges in the City University of New York (CUNY) consortium. The student body comprises just over 8,500 students pursuing more than 70 B.A. and B.S. degrees. York is a Minority Serving Institution, and approximately 95% of students are supported by the state Tuition Assistance Program. Undergraduate students enrolled in lower-division behavioral and social science courses (N = 78) were recruited from the York College Research Subject Pool using a cloud-based participant management software system (Sona Systems, Bethesda, MD). Participants received course credit for participation. Demographic data were not explicitly collected for this study, but the mean age (18.6 years), gender (65.3% of students identified as women and 34.7% as men), and ethnic identity of the sample are thought to reflect the general population of the college (42.9% Black, 21.8% Hispanic, 27.1% Asian or Pacific Islander, 1% Native American, and 7.2% Caucasian). All policies and procedures comply with APA ethical standards and were approved by the York College Institutional Review Board. Participants were equally and randomly distributed between the reliable-source and unreliable-source conditions.

Apparatus

Participants were presented with a simple game where they selected the best of four possible solutions to a question in a four-alternative, forced-choice (4-AFC) paradigm. The technology and the number of game mechanics were intentionally minimized to reduce the number of potential confounding variables. Ninety-four questions and answers were derived from the Logical Reasoning portion of the 2007 Law School Admissions Test (LSAT). Questions and possible answers were presented simultaneously on a 15” MacBook Pro laptop computer (Apple, Cupertino, CA) using PowerPoint (Microsoft Corp., Redmond, WA) (Figure 1).

Figure 1. The visual stimulus for experiment 1. A representative question is presented along with four possible answers and iconography indicating the source of information. The correct answer is underlined on the screen. The media sources corresponding to each icon are displayed in text at the bottom for reference. Icons represent peer-reviewed journals, newspapers, websites, and friends.

Four different icons accompanied each of the four possible answers (book, newspaper, computer, and phone). Participants were told the book icon represented a peer-reviewed source that was “highly reliable.” The newspaper icon represented a “somewhat reliable” periodical that was not peer-reviewed. The computer icon represented a “somewhat unreliable” Internet source that was not peer-reviewed. The phone icon represented “highly unreliable” information from a friend. The book, newspaper, computer, and phone icons were reliably paired with the correct answer 50%, 25%, 12.5% and 12.5% of the time, respectively. Participants indicated their responses (A through D) using a score sheet and pencil. Participants were not allowed to correct answers from previous trials. After indicating their choice, participants could advance to the next slide where the correct answer was underlined on the screen. Participants were monitored to prevent cheating.
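The stimuli for this study were assembled by hand in PowerPoint, so no presentation code accompanies the article. Purely as an illustration of the pairing scheme just described, the Python sketch below assigns one icon to each of the four answer slots on a trial; the function name and trial structure are hypothetical.

```python
import random

# Probability that each icon accompanies the CORRECT answer in the
# reliable-source condition (book 50%, newspaper 25%, computer and
# phone 12.5% each, as described above).
RELIABLE_WEIGHTS = {"book": 0.50, "newspaper": 0.25,
                    "computer": 0.125, "phone": 0.125}
ICONS = list(RELIABLE_WEIGHTS)

def assign_icons(correct_index: int, reliable: bool) -> list:
    """Return one icon per answer slot (A-D) for a single trial."""
    if reliable:
        # Reliable condition: the icon marking the correct answer is
        # drawn with the stated 50/25/12.5/12.5 weights.
        correct_icon = random.choices(
            ICONS, weights=list(RELIABLE_WEIGHTS.values()))[0]
    else:
        # Unreliable condition: every icon is equally likely (25%).
        correct_icon = random.choice(ICONS)
    remaining = [icon for icon in ICONS if icon != correct_icon]
    random.shuffle(remaining)  # scatter the other three icons
    remaining.insert(correct_index, correct_icon)
    return remaining

# Example trial where the correct answer is option C (index 2).
print(assign_icons(correct_index=2, reliable=True))
```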

Procedures

After providing informed consent, participants were directed to one of three rooms measuring approximately 2 m2. Each room was isolated and quiet, with only a laptop, desk, and chair. Participants were read instructions from a script, and they could ask questions for clarification. A sample question and answer were presented as a demonstration. Participants were instructed to correctly answer as many questions as they could within one hour. At the conclusion of the experiment, participants were debriefed about the general nature of the experiment using a script.

Discussion

Experiment 1 was conducted to determine whether students refer to the source of information when confronted with difficult content (detailed results are presented in Appendix B). Participants did not perform exceptionally well, which was expected given the difficulty of the task. However, they did perform better than chance. More importantly, there was no evidence that participants used the icons to guide their decisions. These results are less consistent with a naïve model of fluency and more consistent with dual-process theories (e.g., Petty and Cacioppo 1986; Kahneman 2003; Kruglanski and Van Lange 2012). Specifically, the difficulty of the content in Experiment 1 may have led participants to rely upon the deliberate cognitive System 2 rather than the more intuitive cognitive System 1, even though the intuitive route of following the icons would likely have revealed the correct answers on the assessment.

Considering that participants were told which sources were the most reliable, it is not unreasonable to expect that participants in the reliable-source condition could have achieved scores of at least 50% correct on average, and such a difference should have been easy to observe even with very few participants. Instead, participants appeared too focused on the difficulty of the task to take advantage of the icons that reliably indicated the correct answers. These results suggest that, when faced with challenging texts, undergraduate students may ignore the source of the information. Instructors in higher education should use disciplinary literacy strategies such as sourcing to lead students away from random Internet sources and toward more authoritative, peer-reviewed sources (Reisman 2012). Although this experiment uses a simplified model of game-based learning systems, the results imply that students engaged in more complicated game-based learning systems might also be too distracted by the difficulty of the content to use the source of information as a guide to the accuracy of the content.

Experiment 2

Additional participants were included in Experiment 2 to determine if increased statistical power would reveal a significant advantage for participants who were presented with reliable cues. All other methods were identical to Experiment 1. An additional assessment was used to measure the confidence of participants. Confidence and performance are correlated on a variety of tasks, including critical thinking (Bandura 1977; Ericsson, Krampe, and Tesch-Römer 1993; Harter 1978; Kuhl 1992; Nicholls 1984). Measurements of confidence and other affective assessments have provided insight as to why critical thinking varies between automatic System 1 and effortful System 2 cognitive processes (Alter and Oppenheimer 2009). This experiment assessed whether there was a double dissociation between the conditions of the independent variable (reliable-source versus unreliable-source) and the dependent variables (performance and confidence). According to the naïve model of fluency, participants who took advantage of cuing in the reliable-source condition were expected to have high confidence and perform better than students in the unreliable-source condition, who were expected to have low confidence and poor performance because of the lack of reliable cuing. By contrast, more recent models of fluency predicted that confidence and performance would be inversely correlated.

Methods

Participants

Data were collected from 386 additional participants, 193 of whom were randomly assigned to each of the reliable- and unreliable-source conditions. Participants in the reliable-source group were presented with icons that predicted correct and incorrect answers with the same probabilities as those in Experiment 1. Participants in the unreliable-source condition were presented with icons that were pseudorandomly associated with correct answers; the correct answer was paired with each icon 25% of the time.

Apparatus

In addition to the critical thinking assessment used in Experiment 1, participants were asked to complete a self-assessment of confidence. The assessment was derived from several validated measures of academic self-efficacy (Bandura 1977; Honicke and Broadbent 2016; Pintrich and de Groot 1990; Robbins et al. 2004), which is defined as a learner’s assessment of their ability to achieve academic goals (Elias and MacDonald 2007). Rather than make predictive statements about academic success, participants reflected on their performance of the task. Five questions were counterbalanced to measure split-half reliability (e.g., “Was the game easy?” versus “Was the game difficult?”), yielding a total of ten questions. Participants indicated their responses to each question using a 5-point Likert scale (2 = agree; 1 = somewhat agree; 0 = neutral; −1 = somewhat disagree; −2 = disagree). The questions were presented in pseudorandom order. The complete list of questions appears in Appendix A.
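No scoring script is published with the article. As one plausible sketch, the Python below reverse-scores the negatively keyed items and computes a split-half correlation across participants; the item keying is inferred from Appendix A, and all function names are hypothetical.

```python
from statistics import correlation, mean  # correlation requires Python 3.10+

# Keying inferred from Appendix A: +1 for positively keyed items
# ("easy", "confident", ...), -1 for their reverse-scored counterparts.
KEYS = [+1, -1, -1, +1, +1, -1, +1, -1, +1, -1]

def confidence_score(responses):
    """Mean keyed score for one participant (ten Likert responses in -2..2)."""
    return mean(k * r for k, r in zip(KEYS, responses))

def split_half_reliability(all_responses):
    """Correlate keyed scores on odd- versus even-numbered items
    across participants, one split-half estimate of reliability."""
    keyed = [[k * r for k, r in zip(KEYS, resp)] for resp in all_responses]
    half_a = [mean(row[0::2]) for row in keyed]
    half_b = [mean(row[1::2]) for row in keyed]
    return correlation(half_a, half_b)

# Example with three (fabricated) participants' responses:
sample = [[2, -1, -1, 1, 2, -2, 1, -1, 1, 0],
          [0, 0, 1, -1, 1, -1, 0, 1, -1, 1],
          [1, -2, -2, 2, 1, -1, 2, -2, 2, -1]]
print([round(confidence_score(r), 2) for r in sample])
print(round(split_half_reliability(sample), 2))
```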

Procedures

All procedures were identical to those of Experiment 1 except participants were given additional instructions for completing the confidence assessment, which were read by the experimenters from a script. Participants were given 10 minutes to complete the 10-item assessment, which proved to be a sufficient amount of time.

Discussion

Experiment 2 included assessments of confidence to determine whether there was a double dissociation between each condition of the independent variable (reliable- and unreliable-source conditions) and the two dependent variables (confidence and performance); detailed results are presented in Appendix B. Participants in both conditions performed poorly, but each group performed better than chance. There were also no speed-accuracy trade-offs that could account for differences between groups. Note that reaction-time data were not collected explicitly; rather, they were inferred from the total number of questions completed in one hour. Therefore, no conclusions can be made about changes in reaction time over the course of the session. Data indicated that participants in the reliable-source condition did not use the provided iconography to guide them to the correct answer. Surprisingly, participants in the unreliable-source group outperformed participants in the reliable-source condition by a slight margin. Participants in the unreliable-source condition may have initially referred to the icons but, having discovered their lack of reliability, focused more intently upon the content of the questions.

Confidence ratings for the reliable-source condition were significantly higher than for the unreliable-source condition. Conversely, performance for the unreliable-source condition was better than the reliable-source condition. Consequently, there was a double dissociation between the levels of the independent variable and the two dependent variables. Specifically, despite having lower confidence, participants in the unreliable-source condition performed better on the critical thinking assessment than those in the reliable-source condition.

Participants who were faced with a difficult task appeared to rely upon the more deliberate cognitive system (System 2) even when the more intuitive system (System 1) might have provided a more accurate answer. Even though the icons in the reliable-source condition were more trustworthy than those in the unreliable-source condition, the icons were never 100% reliable. Participants may have observed instances where the icons failed to predict the correct answer and subsequently discounted the validity of the icons. These results may explain why students often ignore the source of information when challenged by difficult content in the primary literature. If students are unclear about the reliability of the source, they may ignore the source and focus on the content. Participants in the unreliable-source condition may have outperformed participants in the reliable-source condition because it may have taken them less time to recognize that the icons were not 100% reliable. Participants in the reliable-source condition may have used the icons until they proved less than 100% reliable, and then shifted their attention to the content.

The double dissociation between the source reliability and the dependent variables (confidence and performance) provides evidence in support of dual-process theory. Traditionally, dual-process experiments manipulate task quality or difficulty as an independent variable. Task difficulty was constant in this experiment. However, the two conditions had differential effects on confidence and performance. It is speculated that when source cuing is reliable, participants are more confident and rely upon cognitive System 1, which provides fewer correct conclusions. Conversely, when source cuing is unreliable, participants lack confidence and rely primarily upon cognitive System 2, which provides more correct answers.

Experiment 3

Experiments 1 and 2 manipulated the reliability of source materials while the intelligibility of the content was held constant. In game-based learning, students may be challenged with content that is easy or difficult to understand. There may be interaction effects between comprehension of the content and identification of reliable sources. Consequently, the intelligibility of the content was manipulated in Experiment 3. Intelligibility can be manipulated using a variety of techniques, including perceptual, cognitive, and linguistic methods (Begg, Anas, and Farinacci 1992; McGlone and Tofighbakhsh 2000; Reber and Schwarz 1999). In this experiment, perceptual intelligibility was manipulated by degrading the text. Dual-process theories of fluency predicted that degrading the intelligibility of the text would engage cognitive System 2 and improve performance relative to conditions where text was not degraded.

Methods

Participants

Participants were recruited and randomly assigned to one of four possible groups (N = 400): (1) reliable-source/intelligible-content; (2) reliable-source/unintelligible-content; (3) unreliable-source/intelligible-content; and (4) unreliable-source/unintelligible-content. Participants in the reliable-source conditions were presented with icons that predicted correct answers with the same probabilities as those in Experiments 1 and 2. Participants in the unreliable-source conditions were presented with icons that were pseudorandomly associated with correct answers; the correct answer was paired with each icon 25% of the time. Participants in the intelligible-content conditions were presented with unmasked, high-contrast text, and participants in the unintelligible-content conditions were presented with masked, low-contrast text. Participants did not complete the confidence assessment because comparing source reliability, content intelligibility, and confidence scores in a three-way ANOVA would be difficult, if not impossible, to interpret.
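The article does not include its analysis code. As a rough sketch only, a 2 × 2 between-subjects design of this kind is conventionally analyzed with a two-way ANOVA; the Python below illustrates the computation on simulated placeholder data (cell sizes, means, and variances are invented, not the study's).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for source in ("reliable", "unreliable"):
    for content in ("intelligible", "unintelligible"):
        # Placeholder cell means: unintelligible content modestly
        # improves scores, mirroring the direction reported below.
        mu = 33 + (3 if content == "unintelligible" else 0)
        for score in rng.normal(mu, 9, size=100):
            rows.append({"source": source, "content": content, "score": score})
df = pd.DataFrame(rows)

# Two-way between-subjects ANOVA with main effects and the
# source x content interaction.
model = smf.ols("score ~ C(source) * C(content)", data=df).fit()
print(anova_lm(model, typ=2))
```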

Apparatus and procedures

All materials and procedures were identical to the previous experiments with the exception that some conditions contained degraded text. For conditions where the contrast of the text was degraded, two of the four possible answers were presented at random on a low-luminance background (Figure 2).

Figure 2. The stimulus for Experiment 3. The stimulus is identical to that of Figure 1 with the following exceptions: In addition to manipulating the source reliability during the critical thinking assessment, the perceptual intelligibility of the content was manipulated by altering the contrast of the font. All stimuli and procedures for the critical thinking assessment were the same as in Experiments 1 and 2. For unintelligible-content conditions, two answers were presented at random on a low-contrast background field. The background field was also altered to look like crumpled paper, which acted as a high-frequency mask.

This background effectively reduced the contrast of the text to near-threshold levels. The contrast range of laptop computers varies greatly with viewing angle and distance from the center of the screen. Participants viewed the screen from a comfortable angle, and they were allowed to adjust the screen to ensure they could read the low-contrast text. The maximum luminance from the center to the edge of the screen varied from approximately 260 to 203 cd/m2. Weber contrast ratios ((LMAX − LMIN)/LBACKGROUND) for high- and low-contrast text were 90% and 10%, respectively [high-contrast text: background = 200 cd/m2, text = 20 cd/m2; low-contrast text: background = 100 cd/m2, text = 90 cd/m2]. The background field was also altered to look like crumpled paper using a filter in PowerPoint, which acted as a high-frequency mask. The text was always above the threshold for detectability. However, participants reported that the text was more difficult to read than unaltered text. Note that the contrast of the text in Figure 2 is greater than that of the actual stimulus.
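Substituting the reported luminance values into the Weber contrast formula reproduces the two contrast levels:

```latex
C_W = \frac{L_{\max} - L_{\min}}{L_{\mathrm{background}}}, \qquad
C_{\mathrm{high}} = \frac{200 - 20}{200} = 0.90, \qquad
C_{\mathrm{low}} = \frac{100 - 90}{100} = 0.10
```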

Discussion

The results of Experiment 3 indicate that content intelligibility affects the performance of students participating in critical thinking assessments (detailed results are presented in Appendix B). Performance in the unintelligible-content conditions was better than in the intelligible-content conditions. The degraded text in the unintelligible-content condition may have demanded closer inspection, resulting in better performance. Similar to Experiments 1 and 2, participants failed to capitalize on the information provided by the icons in the reliable-source conditions. Even though there was a significant effect for the interaction of source reliability and content intelligibility, the effect size was modest and post-hoc tests did not find a difference between source conditions across both levels of the content factor. Strictly interpreted, the interaction effect suggests that performance is further improved under unintelligible-content conditions when the source is reliable. Degrading the stimulus most likely engages cognitive System 2, which improves overall performance. Additionally, degrading the stimulus may also slow participants to the point where they consider the meaning of the icons and use these clues to come to a correct decision. Nevertheless, it is difficult to claim additional benefits for the reliable-source condition considering (1) the modest effect size for the interaction and (2) the fact that content intelligibility did not significantly vary across the levels of the source-reliability factor. The effect of content unintelligibility seems to engage System 2 to the point where source reliability does not make a significant difference, thus echoing the results and interpretations of Experiments 1 and 2. When challenged with difficult texts, participants appeared to ignore the reliability of the source type, depending mostly upon the content to make decisions.

The more difficult the content was to read, the more likely students were to pay close attention and answer correctly. Performance was further enhanced when the already difficult task was made more difficult by degrading the text. There may be a continuum within System 2, whereby performance may improve the more scrutiny a target gets. More research must be conducted to determine if System 2 participates in a gradient-like fashion, or if it is engaged using a winner-take-all mechanism that suppresses the output of System 1. These results may have serious implications for educators and designers of game-based learning systems, who may label sources of information to guide instruction, practice, or assessment.

General Discussion

This study manipulated content intelligibility and source reliability in a simple critical thinking game and found differential effects for confidence and performance. This double dissociation provides evidence for the dual-process model of cognition originally proposed by Tversky and Kahneman (1973). The results also contradict naïve theories of fluency that predict confidence and performance are correlated.

Processing fluency is a metacognitive factor that has a pronounced effect on undergraduates’ critical thinking and on their decision to consider sources of information. Participants were informed that certain icons would provide the correct answer 50% of the time, yet performance indicated that participants were not using this information to guide decisions. Even when content intelligibility was not manipulated, the difficulty of the critical thinking game was sufficient to divert attention away from the icons and toward the content itself. These results support a dual-process model of cognition (e.g., Petty and Cacioppo 1986; Kruglanski and Van Lange 2012; Kahneman 2003), whereby decision making is influenced by metacognitive factors (Alter and Oppenheimer 2009).

A double dissociation between confidence and performance suggests that two cognitive systems, a faster intuitive system and a slower deliberative system, can be distinguished in a game-based learning environment. Overall, confidence and performance were inversely correlated. Compared to fluent conditions, when the content was unintelligible or the source was unreliable, confidence was relatively lower and performance was relatively better. Other studies have provided evidence of a double dissociation between fluency and performance (Hirshman, Trembath and Mulligan 1994; Mulligan and Lozito 2004; Nairne 1988; Pansky and Goldsmith 2014). However, the current study is most similar to reports of a double dissociation between confidence and decision making in audition, where participants made more correct judgments for degraded stimuli even though they had less confidence than participants who rated intact stimuli (Besken and Mulligan 2014). Together, these studies provide evidence of two independent processes: one that supports intuitive decision making and a second that supports deliberative decision making.

In lieu of an observational study of game play, this laboratory experiment was conducted to better control internal validity. However, the limited range of difficulty in the critical thinking game may have affected the interpretation of the results. The difficulty of the questions could have served as an independent variable, ranging from easy to difficult. If this were the case, the easy and difficult questions should have elicited fluent and disfluent processing in the observer, respectively. However, because the focus of the study was on metacognitive factors in the presence of difficult texts, only difficult questions were used. Nevertheless, even when the content was difficult, as is often the case when undergraduates read primary literature, further manipulations of content intelligibility affected performance. Degrading the text to create a perceptually disfluent condition increased performance regardless of the reliability of the source of the material. This result suggests that participants who had already discounted the source of information were paying even closer attention to the content. Alternately, participants who were using the icons to solve problems may have switched their attention to the content because of the manipulation to the text. These results are similar to studies where transposing letters or removing letters resulted in improved word recall (Begg et al. 1991; Kinoshita 1989; Mulligan 2002; Nairne and Widner 1988). Unintelligible text creates a disfluent condition that may function as a signal to motivate the perceiver to further scrutinize the target, which may lead to improved performance (Alter et al. 2007).

It remains to be seen whether this controlled laboratory study will generalize with respect to more complicated game-based learning experiences in naturalistic settings. The content and stimulus presentation for this study were initially developed as a prototype for a digital game to help students practice critical thinking skills. This study was motivated largely by the results from pilot studies that were difficult to interpret. While the information presented in the critical thinking game was as difficult as the primary source materials undergraduates encounter in many disciplines, the presentation of the sources could have been more realistic. Students are likely to encounter information from various sources via a web browser. To improve external validity, future studies could present discipline-specific content in a web-based game that mimics various sources of information students are likely to encounter (e.g., Wikipedia, peer-reviewed journals, the popular scientific press, and web blogs). The more realistic condition could then be compared to an even more elaborate game-based learning system, where students are challenged to solve scientific mysteries using various sources of information.

Additionally, the study may be limited because the participants were not described in detail. The study hypothesis was not directed at the effect of student demographics on critical thinking. Accordingly, detailed demographics were not collected to evaluate the influence of gender identity, ethnicity, socioeconomic status, first-generation status, GPA, course load, and other factors. Study participants in the research pool are varied and were randomly assigned. Consequently, the conclusions are likely to generalize to similar populations. Any questions about differences along one of these axes should be addressed in a subsequent study where the demographic of choice serves as the independent variable to be manipulated.

Another challenge comes from attempting to generalize our results to the complex situations presented in game-based learning environments. Unlike controlled laboratory experiments, many games challenge students with multiple sources of information along various timelines. The unpredictability of games is partially what makes them so engaging. However, the interaction between cognitive and metacognitive factors in critical thinking would be difficult to study with more than a few independent variables. Fortunately, the most important effects were replicated across these three experiments, and thus future experiments will attempt additional replication using more game mechanics in a classroom setting. If these efforts are successful, the intent is to move the experiment to a fully online format, where students can practice domain-specific critical thinking skills on demand.

Task difficulty may be a potential limitation of the study. Undergraduate participants were asked to read texts and solve problems derived from the LSAT, which is far beyond their experience and training. Performance was below 50% correct for most participants. Even so, the results indicate that students performed significantly above chance, and the effect size was quite large despite the difficulty of the reading and the challenges to critical thinking. Ultimately, however, generalization of the conclusions may be limited to tasks of equivalent difficulty, and the conclusions may not extend to easier tasks.

This study has major implications for game-based learning. Digital learning environments present information through various channels and multiple sensory modalities (i.e., vision, hearing, etc.). Monitoring each channel can be as important as monitoring the information itself. For example, navigation cues in an open-world adventure game might be presented through audio information, capturing attention immediately because it is salient relative to the visual information presented via a potentially crowded heads-up display. If the information presented in a channel is complicated or competes with other information, the learner might ignore the source of information altogether. Imagine a scene where helpful advice originates from an angelic character whispering in a player’s ear, and bad advice is touted by a devilish character in the other ear. If the messages get too complicated, players might discount which channel the message is coming from and mistakenly follow the bad advice of the devilish character. This study also addresses the foundational question about how to make better games for learning. Game design and psychology have accepted constructs and vocabulary that do not necessarily align with each other. The two fields must work to understand how game design concepts such as “meaningful choice” align with psychological constructs including critical thinking and decision making. Aligning the vocabulary and constructs from each camp will not only improve how we measure the impact of games but, more importantly, how we can construct better games for learning.

This research also has broader implications for higher education in general. The results provide a possible explanation for successful approaches that foster critical thinking by teaching a step-wise process to problem solving. For example, “Mission US” (Mission US, 2019) is modeled after a successful curriculum, Reading Like a Historian, that explicitly teaches disciplinary literacy to high school students (Reisman 2012). To evaluate texts, students engage in four discrete disciplinary literacy strategies that encourage critical thinking: (1) contextualization, placing a document in temporal and geographical context; (2) sourcing, identifying the source and purpose of a document; (3) close reading, carefully considering the author’s use of language and word choice; and (4) corroboration, comparing multiple sources against each other. Similarly, academics have used variations of disciplinary literacy to guide students through primary literature (Hoskins 2008, 2010; Hoskins et al. 2011; Stevens and Hoskins 2014). In the C.R.E.A.T.E. method (Consider, Read, Elucidate the hypotheses, Analyze and interpret the data, and Think of the next Experiment), biology students analyze a series of papers from a particular lab, appreciate the evolution of the research program, and use other pedagogical tools to understand the material (e.g., concept mapping, sketching, visualization, transformation of data, creative experimental design). Both of these approaches succeed by disrupting the more intuitive cognitive System 1, which might lead to erroneous thinking, and engaging the more deliberative cognitive System 2, which supports evidence-based decision making and critical thinking. The current experiment provides evidence to support pedagogical methods such as disciplinary literacy and C.R.E.A.T.E., and our results suggest a cognitive mechanism that explains both the positive and negative outcomes that occur when students are engaged in reading difficult texts. These pedagogic approaches are particularly significant in today’s media landscape, where students are exposed to complicated ideas from legitimate and dubious sources alike. For example, a student might encounter a challenging article from a questionable source on social media that argues against vaccination. Without proper training in disciplinary literacy, the student might fail to consider the source of information and come to the inaccurate conclusion that all vaccines are harmful. Consequently, by training students in disciplinary literacy, educators may inoculate students against the cognitive biases that might lead to erroneous decision making.

Bibliography

All, Anissa, Castellar, Elena Patricia Nuñez, Van Looy, Jan. 2015. “Towards a Conceptual Framework for Assessing the Effectiveness of Digital Game-Based Learning,” Computers & Education 88: 29–37. http://dx.doi.org/10.1016/j.compedu.2015.04.012.

All, Anissa, Castellar, Elena Patricia Nuñez, Van Looy, Jan. 2016. “Assessing the Effectiveness of Digital Game-Based Learning: Best Practices,” Computers & Education 92–93: 90–103. http://dx.doi.org/10.1016/j.compedu.2015.10.007.

Alter, Adam L., and Oppenheimer, Daniel M. 2008. “Easy on the Mind, Easy on the Wallet: The Roles of Familiarity and Processing Fluency in Valuation Judgments,” Psychonomic Bulletin and Review 15, no. 5: 985–990. http://dx.doi.org/10.3758/PBR.15.5.985.

Alter, Adam L., and Oppenheimer, Daniel M. 2009. “Uniting the Tribes of Fluency to Form a Metacognitive Nation,” Personality and Social Psychology Review 13, no. 3: 219–235. http://dx.doi.org/10.1177/1088868309341564.

Alter, Adam L., Oppenheimer, Daniel M., Epley, Nicholas, and Eyre, Rebecca N. 2007. “Overcoming Intuition: Metacognitive Difficulty Activates Analytic Reasoning,” Journal of Experimental Psychology: General 136, no. 4: 569–576. http://dx.doi.org/10.1037/0096-3445.136.4.569.

Bandura, Albert. 1977. “Self-Efficacy: Toward a Unifying Theory of Behavioral Change,” Psychological Review 84, no. 2: 191–215. http://dx.doi.org/10.1037/0033-295X.84.2.191.

Begg, Ian M., Anas, Ann, and Farinacci, Suzanne. 1992. “Dissociation of Processes in Belief: Source Recollection, Statement Familiarity, and the Illusion of Truth,” Journal of Experimental Psychology: General 121, no. 4: 446–458. http://dx.doi.org/10.1037/0096-3445.121.4.446.

Begg, Ian, Vinski, Ede, Frankovich, Linda, and Holgate, Brian. 1991. “Generating Makes Words Memorable, But so Does Effective Reading,” Memory and Cognition 19, no. 5: 487–497. http://dx.doi.org/10.3758/BF03199571.

Besken, Miri, and Mulligan, Neil W. 2014. “Perceptual Fluency, Auditory Generation, and Metamemory: Analyzing the Perceptual Fluency Hypothesis in the Auditory Modality,” Journal of Experimental Psychology: Learning, Memory, and Cognition 40, no. 2: 429–440. http://dx.doi.org/10.1037/a0034407.

Bovens, Luc, and Hartmann, Stephan. 2003. Bayesian Epistemology. Oxford: Clarendon Press.

Cicchino, Marc I. 2015. “Using Game-Based Learning to Foster Critical Thinking in Student Discourse,” Interdisciplinary Journal of Problem-Based Learning, 9(2). https://doi.org/10.7771/1541-5015.1481.

Elias, Steven M., and MacDonald, Scott. 2007. “Using Past Performance, Proxy Efficacy, and Academic Self-Efficacy to Predict College Performance,” Journal of Applied Social Psychology, 37(11): 2518-2531. http://dx.doi.org/10.1111/j.1559-1816.2007.00268.x.

Ericsson, K. Anders, Krampe, Ralf T., and Tesch-Römer, Clemens. 1993. “The Role of Deliberate Practice in the Acquisition of Expert Performance,” Psychological Review 100, no. 3: 363–406. DOI: 10.1037/0033-295X.100.3.363.

Eseryel, Deniz, Law, Victor, Ifenthaler, Dirk, Ge, Xun, and Miller, Raymond. 2014. “An Investigation of the Interrelationships between Motivation, Engagement, and Complex Problem Solving in Game-Based Learning,” Educational Technology & Society, 17(1): 42–53.

Falk, Hedda, Brill, Gilat, Yarden, Anat. 2008. “Teaching a Biotechnology Curriculum Based on Adapted Primary Literature,” International Journal of Science Education, 30: 1841–1866. https://doi.org/10.1080/09500690701579553.

Frederick, Shane. 2005. “Cognitive Reflection and Decision Making,” Journal of Economic Perspectives 19, no. 4: 25–42. DOI: 10.1257/089533005775196732

Ginns, Paul, and Leppink, Jimmie. 2019. “Special Issue on Cognitive Load Theory: Editorial,” Educational Psychology Review 31: 1–5. https://doi.org/10.1007/s10648-019-09474-4.

Harter, Susan. 1978. “Effectance Motivation Reconsidered: Toward a Developmental Model,” Human Development 21: 34–64. https://doi.org/10.1159/000271574.

Heitz, Richard, and Schall, Jeffrey D. 2012. “Neural Mechanisms of Speed-Accuracy Tradeoff,” Neuron 76: 616–628. https://doi.org/10.1016/j.neuron.2012.08.030.

Hirshman, Elliot, Trembath, Dawn, and Mulligan, Neil W. 1994. “Theoretical Implications of the Mnemonic Benefits of Perceptual Interference,” Journal of Experimental Psychology: Learning, Memory, and Cognition 20: 608–620. http://dx.doi.org/10.1037/0278-7393.20.3.608.

Honicke, Toni, and Broadbent, Jaclyn. 2016. “The Relation of Academic Self-Efficacy to University Student Academic Performance: A Systematic Review,” Educational Research Review, 17: 63–84. http://dx.doi.org/10.1016/j.edurev.2015.11.002.

Hoskins, Sally G. 2008. “Using a Paradigm Shift to Teach Neurobiology and the Nature of Science—a C.R.E.A.T.E.-Based Approach,” The Journal of Undergraduate Neuroscience Education, 6(2): A40–A52.

Hoskins, Sally G. 2010. “‘But if It’s in the Newspaper, Doesn’t that Mean It’s True?’ Developing Critical Reading and Analysis Skills by Evaluating Newspaper Science with CREATE,” The American Biology Teacher, 72(7): 415–420.

Hoskins, Sally G., Lopatto, David, and Stevens, Leslie M. 2011. “The C.R.E.A.T.E. Approach to Primary Literature Shifts Undergraduates’ Self-assessed Ability to Read and Analyze Journal Articles, Attitudes about Science, and Epistemological Beliefs,” CBE—Life Sciences Education, 10: 368–378.

Jenkins, Henry. 2008. Convergence Culture: Where Old and New Media Collide. New York: NYU Press.

Kahneman, Daniel. 2003. “A Perspective on Judgment and Choice: Mapping Bounded Rationality,” American Psychologist 58: 697–720. DOI: 10.1037/0003-066X.58.9.697.

Kelley, Colleen M., and Lindsay, D. Steven. 1993. “Remembering Mistaken for Knowing: Ease of Retrieval as a Basis for Confidence in Answers to General Knowledge Questions,” Journal of Memory and Language 32: 1–24. https://doi.org/10.1006/jmla.1993.1001.

Kararo, Matthew, and McCartney, Melissa. 2019. “Annotated Primary Scientific Literature: A Pedagogical Tool for Undergraduate Courses,” PLOS Biology, 17(1): e3000103. https://doi.org/10.1371/journal.pbio.3000103.

Kinoshita, Sachiko. 1989. “Generation Enhances Semantic Processing? The Role of Distinctiveness in the Generation Effect,” Memory and Cognition 17: 563–571. http://dx.doi.org/10.3758/BF03197079.

Koriat, Asher. 1993. “How Do We Know That We Know? The Accessibility Model of the Feeling of Knowing,” Psychological Review 100, no. 4: 609–639. http://dx.doi.org/10.1037/0033-295X.100.4.609.

Kruglanski, Arie W., and Van Lange, Paul A. M. 2012. Handbook of Theories of Social Psychology. London: Sage.

Kuhl, Julius. 1992. “A Theory of Self-Regulation: Action versus State Orientation, Self-Discrimination, and Some Applications,” Applied Psychology: An International Review 41: 97–129. http://dx.doi.org/10.1111/j.1464-0597.1992.tb00688.x.

Martin, Lillie J., and Müller, G. E. 1899. Zur Analyse der Unterschiedsempfindlichkeit. Leipzig: J. A. Barth.

McGlone, Matthew S., and Tofighbakhsh, Jessica. 2000. “Birds of a Feather Flock Conjointly (?): Rhyme as Reason in Aphorisms,” Psychological Science 11: 424–428. https://doi.org/10.1111/1467-9280.00282.

“Mission US,” Electric Funstuff, accessed March 13, 2019, https://www.mission-us.org.

Mulligan, Neil W. 2002. “The Emergent Generation Effect and Hypermnesia: Influences of Semantic and Nonsemantic Generation Tasks,” Journal of Experimental Psychology: Learning, Memory, and Cognition 28: 541–554. http://dx.doi.org/10.1037/0278-7393.28.3.541.

Mulligan, Neil W., and Lozito, Jeffrey P. 2004. “Self-Generation and Memory.” In Psychology of Learning and Motivation, edited by B. H. Ross, 175–214. San Diego: Elsevier Academic Press.

Nairne, James S. 1988. “The Mnemonic Value of Perceptual Identification,” Journal of Experimental Psychology: Learning, Memory, and Cognition 14: 248–255.

Nairne, James S., and Widner, R. L. 1988. “Familiarity and Lexicality as Determinants of the Generation Effect,” Journal of Experimental Psychology: Learning, Memory, and Cognition 14: 694–699. http://dx.doi.org/10.1037/0278-7393.14.4.694.

Nicholls, John G. 1984. “Achievement Motivation: Conceptions of Ability, Subjective Experience, Task Choice and Performance,” Psychological Review 91: 328–346. http://dx.doi.org/10.1037/0033-295X.91.3.328.

Norman, Donald A. 1988. The Psychology of Everyday Things. New York: Basic Books.

Pansky, Ainat, and Goldsmith, Morris. 2014. “Metacognitive Effects of Initial Question Difficulty on Subsequent Memory Performance,” Psychonomic Bulletin and Review 21, no. 5: 1255–1262. http://dx.doi.org/10.3758/s13423-014-0597-2.

Petty, Richard E., and Cacioppo, John T. 1986. “The Elaboration Likelihood Model of Persuasion,” Advances in Experimental Social Psychology 19: 123–181. https://doi.org/10.1016/S0065-2601(08)60214-2.

Pintrich, Paul R., and de Groot, Elisabeth V. 1990. “Motivational and Self-Regulated Learning Components of Classroom Academic Performance,” Journal of Educational Psychology, 82(1): 33–40. http://dx.doi.org/10.1037/0022-0663.82.1.33.

Reber, Rolf, and Schwarz, Norbert. 1999. “Effects of Perceptual Fluency on Judgments of Truth,” Consciousness and Cognition 8: 338–342. https://doi.org/10.1006/ccog.1999.0386.

Reisman, Abby. 2012. “Reading Like a Historian: A Document-Based History Curriculum Intervention in Urban High Schools,” Cognition and Instruction, 30(1): 86–112. DOI: 10.1080/07370008.2011.634081.

Robbins, Steven B., Lauver, Kristy, Le, Huy, Davis, Daniel, Langley, Ronelle, and Carlstrom, Aaron. 2004. “Do Psychosocial and Study Skill Factors Predict College Outcomes? A Meta-Analysis,” Psychological Bulletin, 130(2): 261–88. DOI: 10.1037/0033-2909.130.2.261

Schwarz, Norbert. 2004. “Metacognitive Experiences in Consumer Judgment and Decision Making,” Journal of Consumer Psychology 14: 332–348. http://dx.doi.org/10.1207/s15327663jcp1404_2.

Stanovich, Keith E., and West, Richard F. 2000. “Individual Differences in Reasoning: Implications for the Rationality Debate?” Behavioral and Brain Sciences 23, no. 5: 645–726.

Stevens, Leslie M., and Hoskins, Sally G. 2014. “The CREATE Strategy for Intensive Analysis of Primary Literature Can Be Used Effectively by Newly Trained Faculty to Produce Multiple Gains in Diverse Students,” CBE—Life Sciences Education, 13: 224–242. DOI: 10.1187/cbe.13-12-0239

Tyner, Kathleen R. 1994. “Access in a Digital Age.” San Francisco: Strategies for Media Literacy.

Tversky, Amos, and Kahneman, Daniel. 1973. “Availability: A Heuristic for Judging Frequency and Probability,” Cognitive Psychology 5: 207–232. https://doi.org/10.1016/0010-0285(73)90033-9.

Wickelgren, Wayne A. 1977. “Speed-Accuracy Tradeoff and Information Processing Dynamics,” Acta Psychologica 41: 67–85. https://doi.org/10.1016/0001-6918(77)90012-9.

Woodworth, Robert S. 1899. “The Accuracy of Voluntary Movement,” Psychological Review: Monograph Supplements 3: 1–114.

Zajonc, Robert B. 1968. “Attitudinal Effects of Mere Exposure,” Journal of Personality and Social Psychology 9: 1–27. http://dx.doi.org/10.1037/h0025848.

Appendix A

Assessment of Confidence for Experiment 2

  1. I found the game to be easy.
  2. I found the game to be difficult.
  3. I feel anxious about my performance.
  4. I feel confident about my performance.
  5. I found the game engaging.
  6. I found the game boring.
  7. My performance made me feel proud.
  8. My performance made me feel embarrassed.
  9. The game is a good assessment of my critical thinking skills.
  10. The game is a poor assessment of my critical thinking skills.

Appendix B

Results for Experiment 1

Data for the reliable- and unreliable-source conditions are presented in Figure 3. Performance on the critical thinking assessment is plotted as the mean score for each condition.

alt=""
Figure 3. A bar graph of the data for Experiment 1. Average score is presented for the reliable- and unreliable-source conditions. No effect of source condition on performance in Experiment 1. Performance on the critical thinking assessment is plotted for the reliable-source and unreliable-source conditions. The mean scores for the reliable-source and unreliable-source conditions were 32.95% and 33.92%, respectively. Performance in the reliable-source condition did not differ from that in the unreliable-source condition.

Scores for each participant are computed as the percent correct of the total questions answered for that participant. The mean score for the reliable-source condition, where icons were reliably paired with correct and incorrect answers, was 32.95% (SD = 10.53). The mean score for the unreliable-source condition, where icons were pseudorandomly paired with correct answers, was 33.92% (SD = 8.58). There was no difference in performance for the reliable-source condition compared to the unreliable-source condition, t(77) = 0.41, p = 0.66.

Given the difficulty of the LSAT, participants were not expected to perform exceptionally. The mean score across all participants and conditions was 33.42% (SD = 9.55), but this score was significantly greater than chance (25%), which indicates that participants were not merely guessing, z(77) = 33.42, p < 0.0001, d = 0.88 (Figure 4). Computing the effect size using Cohen’s d for z-scores yields a value greater than 0.80, which is typically considered a large effect. This difference corresponds to 9.93 more correct answers on average, which is nearly a full letter grade if this were a classroom assessment.

alt=""
Figure 4. A histogram of the number of subjects binned by mean score in Experiment 1. Participants performed better than chance in Experiment 1. The mean score for all participants was significantly better than chance (25%), which suggests that participants were not guessing.

Speed-accuracy trade-offs are well studied in cognitive psychology (Heitz and Schall 2012; Martin and Müller 1899; Wickelgren 1977; Woodworth 1899). In general, accuracy is sacrificed for speed and vice versa. Accordingly, poor performance might be explained if participants were rushing to finish the assessment. The number of correctly answered questions is plotted as a function of the total questions answered for each participant (Figure 5).

alt=""
Figure 5. A scatterplot of the number of correct answers for each participant as a function of the number of questions answered. The data are fit with a line using linear regression; the slope of the line is 0.35, the offset is -0.86, and the R-squared value is 0.71. No speed-accuracy trade-off was observed in Experiment 1. The number of correctly answered questions is plotted as a function of the total questions answered for each participant. Performance did not decline as a function of the number of questions answered, which is inconsistent with a speed-accuracy trade-off. The data suggest vigilant participants perform better.

Linear regression revealed that the number of correct answers increased reliably with the total number of questions answered, F(1, 77) = 188, p < 0.0001; performance did not decline as more questions were answered. This result is inconsistent with predictions derived from speed-accuracy trade-offs, suggesting vigilant participants performed better.

Results for Experiment 2

Performance on the critical thinking assessment is plotted as mean scores (Figure 6). The mean score for the reliable-source condition was 36% (SD = 17). The mean score for the unreliable-source condition was 43% (SD = 18). Performance in the unreliable-source condition was greater than in the reliable-source condition, t(384) = 4.17, p < 0.0001.

alt=""
Figure 6. A bar graph for performance on the critical thinking assessment in Experiment 2. The average score for all participants is plotted as a function of source reliability. Performance in the unreliable-source condition was better than for the reliable-source condition.

The mean score across both conditions was 39.36% (SD = 17.67), which was significantly better than chance (25%), z(385) = 39.36, p < 0.0001. Performance improved as a function of the number of questions answered, F(1, 385) = 335, p < 0.0001, r2 = 0.47, which indicates that participants who completed more trials performed better. The purpose of this analysis was to determine whether participants who rushed through the assessment performed poorly (i.e., a speed-accuracy trade-off). No such trade-off was observed. It is easy to imagine that participants who completed very few questions became discouraged and started daydreaming. However, participants were monitored for compliance and remained engaged with the task throughout the experiment. There was also no difference between the regression slopes for the reliable- and unreliable-source conditions, t(382) = 0.08, p = 0.93.

Confidence scores were aggregated for each participant, and the sign of the mean score for the five negatively phrased questions was reversed so that positive numbers indicate increased confidence. The mean confidence rating for the reliable-source condition was -0.03 (SD = 0.49), and the mean for the unreliable-source condition was -0.15 (SD = 0.54). The confidence ratings for the reliable-source condition were greater than confidence ratings for the unreliable-source condition, t(193) = 1.95, p = 0.03.

The five positively phrased questions were compared to the five negatively phrased questions in a split-half reliability test to determine if participants were providing reliable answers during the confidence assessment. Mean scores were averaged across all participants in both the reliable- and unreliable-source conditions. The mean confidence score for the positively phrased questions was -0.07 (SD = 0.58), and the mean score for the adjusted negatively phrased questions was -0.12 (SD = 0.62). There was no significant difference between groups, t(385) = 0.49, p = 0.31.
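
To make the aggregation concrete, here is a minimal Javascript sketch of the confidence-score computation described above. The item polarities are assumptions read off Appendix A (items 1, 4, 5, 7, and 9 positively phrased; items 2, 3, 6, 8, and 10 negatively phrased), and ratings are assumed to be centered so that sign reversal is meaningful.

```javascript
// Aggregate one participant's ten Appendix A ratings into a single
// confidence score. The sign of negatively phrased items is reversed
// so that positive numbers indicate increased confidence.
function confidenceScore(responses) {      // responses: array of 10 ratings
  const positive = [1, 4, 5, 7, 9];        // positively phrased items (assumed)
  return (
    responses.reduce((sum, rating, i) => {
      const item = i + 1;                  // Appendix A numbers items from 1
      return sum + (positive.includes(item) ? rating : -rating);
    }, 0) / responses.length
  );
}
```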

Data from the reliable- and unreliable-source conditions were combined, and the overall relationship between confidence and performance was assessed using linear regression (Figure 7). The more confident participants were, the worse they performed. Increased confidence scores were associated with decreased performance on the critical thinking assessment, F(1, 385) = 48.43, p < 0.0001, r2 = 0.11. Confidence scores were also negatively correlated with performance on the critical thinking assessment for the reliable-source condition, F(1, 192) = 45.52, p < 0.0001, r2 = 0.20, and the unreliable-source condition, F(1, 192) = 17.21, p < 0.0001, r2 = 0.08.

alt=""
Figure 7. The relationship between confidence and performance in Experiment 2. Data from reliable- and unreliable-source conditions were combined to determine the overall relationship between confidence and performance. There was a negative correlation between confidence scores and performance on the critical thinking assessment.

Results for Experiment 3

Performance on the critical thinking assessment was compared for all four conditions using a two-way ANOVA with source reliability and content intelligibility as independent factors. The source factor was composed of two levels (reliable-source and unreliable-source), and the content factor was composed of two levels (intelligible-content and unintelligible-content). The means and confidence intervals are plotted in Figure 8.

alt=""
Figure 8. A line graph of performance as a function of content intelligibility for both source conditions in Experiment 3. Content intelligibility, but not source reliability, affected performance in Experiment 3. Performance on the critical thinking assessment was compared using a 2 x 2 ANOVA with source reliability and content intelligibility as independent factors. The reliable-source condition was not significantly different from the unreliable-source condition, but performance in the unintelligible-content condition was significantly greater than in the intelligible-content condition.

There was no main effect for the source factor. Specifically, the reliable-source condition was not significantly different from the unreliable-source condition, F(1, 396) = 0.25, p = 0.61. A main effect was found for the content factor; the unintelligible-content condition was significantly greater than the intelligible-content condition across both levels of the source factor, F(1, 396) = 183.78, p < 0.0001, r2 = 0.31. There was also a modest but significant interaction between the source and content factors, F(1, 396) = 4.13, p = 0.043, r2 = 0.007. Post-hoc tests were conducted using Scheffé’s contrast among pairs of means to understand the interaction between source reliability and content intelligibility. Consistent with the significant main effect for content type, performance for the reliable-source condition differed across the two levels of the content factor, F(1, 396) = 11.02, p < 0.05. Performance for the unreliable-source condition also varied across content types, F(1, 396) = 8.15, p < 0.05. Conversely, performance for the intelligible-content type did not vary across the two levels of the source factor, F(1, 396) = 1.08, p > 0.10, and performance for the unintelligible-content type did not differ between the two levels of the source factor, F(1, 396) = 1.79, p > 0.10. Speed-accuracy checks were conducted to determine whether participants in the unintelligible conditions performed better because they took longer than participants in the intelligible conditions. A comparison of slopes test did not reveal differences in performance as a function of the number of completed items, t(396) = 0.08, p > 0.10.

About the Author

Robert O. Duncan is an Associate Professor of Behavioral Science at York College of the City University of New York (CUNY) with joint appointments in Psychology and Biology at the CUNY Graduate Center. Dr. Duncan’s primary research interests are (1) to study the physiological mechanisms of visually guided behavior in healthy individuals and (2) to develop novel functional magnetic resonance imaging (fMRI) techniques to quantify neuronal, vascular, and metabolic contributions to neurodegenerative visual disorders. Dr. Duncan’s current research uses fMRI to compare measurements of neuronal activity and blood flow throughout the retino-cortical pathway to standard clinical measures of visual function. His additional research interests include game-based learning, undergraduate research, virtual reality, and interactive digital narrative.

Author Note
Funded in part by a Title III grant from the U.S. Department of Education. The author is grateful for the technical assistance of Rasha Alsaidi, Dipali Arshad, and Janina Medina, who were undergraduate research students on this project. Financial disclosures: none.

Correspondence concerning this article should be addressed to Robert O. Duncan, Department of Behavioral Science, York College, The City University of New York, 94–20 Guy R. Brewer Blvd., AC-4D06, Jamaica, NY 11451. Contact: rduncan@york.cuny.edu



Music Making in Scratch: High Floors, Low Ceilings, and Narrow Walls?

Abstract

Music programming is an increasingly popular activity for learning and creating at the intersection of computer science and music. Perhaps the most widely used educational tool that enables music programming is Scratch, the constructionist visual programming environment developed by the Lifelong Kindergarten Group at the MIT Media Lab. While a plethora of work has studied Scratch in the context of children creating games and coding interactive environments in general, very little has homed in on its sound- and music-specific creative functionality. Music and sound are such an important part of children’s lives, yet children’s ability to engage in creating music in coding environments is limited by the deep knowledge of music theory and computing needed to realize musical ideas. In this paper, we discuss the affordances and constraints of Scratch 2.0 as a tool for making, creating, and coding music. Through an analysis of limitations in music and sound code block design, a discussion of bottom-up music programming, and a task breakdown of building a simple drum loop, we argue and illustrate that the music and sound blocks as currently implemented in Scratch may limit and frustrate meaningful music making for children, the core user base for Scratch. We briefly touch on the history of educational music coding languages, reference existing Scratch projects and forums, compare Scratch with other music programming tools, and introduce new block design ideas to promote lower floors, higher ceilings, and wider walls for music creation in Scratch.

Background

Music programming is the practice of writing code in a textual or visual environment to analyze audio input and/or produce sonic output. The possibilities for creative music coding projects are virtually endless, including generative music-makers, audio-visual instruments, sonifications, interactive soundtracks, music information retrieval algorithms, and live-coding performances. Music programming has become an increasingly popular pursuit as calls to broaden and diversify the computer science field have led to attempts at integrating programming and computational thinking into other topics and curricula.[1] Today, music programming languages are widespread and predominantly free. Yet, as evidenced by their history and purpose, most cater to expert computer musicians rather than novice programmers or musicians. For example, Max/MSP, one of the most popular music programming environments for professionals and academics, was first developed at IRCAM in Paris to support the needs of avant-garde composers (Puckette 2002). Usually, prior knowledge of music theory, programming, and signal processing is needed just to get started with a music programming language. Writing about the computer music programming environment SuperCollider, Greher and Heines (2014) note that “the large learning curve involved in understanding the SuperCollider syntax makes it inappropriate for an entry-level interdisciplinary course in computing+code” (Greher and Heines 2014, 104–105). However, recent platforms such as SonicPi (Aaron et al. 2016), EarSketch (Freeman et al. 2014), and the approaches piloted by Gena Greher, Jesse Heines, and Alex Ruthmann using Scratch in the Sound Thinking course at the University of Massachusetts Lowell (Heines et al. 2011) attempt to find ways to teach both music and coding to novices at the same time.

Music and Coding Education

The idea of using computational automata to generate music can be traced back to Ada Lovelace, possibly the world’s first programmer, who in the mid-19th century imagined an Analytical Engine capable of completing all sorts of tasks, including composing complex musical works (Wang 2007). One hundred years later, the first programming languages that synthesized sound were developed, beginning with Music I in 1957 by Max Matthews at Bell Labs. Educational environments that treated music and sound as an engaging context for learning coding appeared relatively soon after, when Jeanne Bamberger collaborated with Seymour Papert at the MIT AI Lab to adapt the Turtle Graphics programming language Logo to support music (Schembri 2018). Bamberger and the Lab’s creation, MusicLOGO, opened up exciting opportunities for children making music, since it enabled them to bypass the tedium and complexity of learning traditional notation and theory and dive right into composing and playing back their work to reflect and iterate (Bamberger 1979). Following MusicLOGO, Bamberger developed Impromptu, a companion software to her text Developing Musical Intuitions (Bamberger and Hernandez 2000), which enables young learners to explore, arrange, and create music using “tune blocks” in creative music composition, listening, and theory concept projects.

Within the last ten years, the landscape of free, educational programming languages designed explicitly around music has begun to bloom. While languages vary across style of programming (functional, object-oriented) and music (classical, hip-hop), all that follow enable novice programmers to sonically express themselves. SonicPi (Aaron, Blackwell, and Burnard 2016) is a live-coding language based on Ruby that enables improvisation and performance. It is bundled in the Raspbian OS resulting in widespread deployment in Raspberry Pi computers around the world, and it has been studied in UK classrooms. EarSketch (Freeman et al. 2014) is another platform that combines a digital audio workstation (DAW) with a Python or Javascript IDE (integrated development environment), and was originally designed to teach programming to urban high school students in Atlanta, Georgia using hip-hop grooves and samples. Among its innovations is the inclusion of a library of professional-quality samples enabling learners to make music through combining and remixing existing musical structures rather than adding individual notes to a blank canvas. JythonMusic (Manaris and Brown 2014) is another open source Python-based environment that has been used in college courses to teach music and programming. Beyond music-centric languages, numerous media computing environments such as AgentSheets and Alice support sound synthesis and/or audio playback in the context of programming games, animations, and/or storytelling projects (Repenning 1993; Cooper, Dann, and Pausch 2000; Guzdial 2003).

Music Coding in the Browser

At the time of their initial release, most of the programming environments described above were not web-based. The additional steps of downloading and installing the software and media packages to a compatible operating system presented a barrier to easy installation and widespread use by children and teachers. In 2011 Google introduced the Web Audio API, an ambitious specification and browser implementation for creating complex audio functions with multi-channel output, all in pure Javascript. Soon after, Chrome began to support its many features, and Firefox and Safari quickly followed suit. As the developers of Gibberish.js, an early web audio digital signal processing (DSP) library, point out, the Web Audio API is optimized for certain tasks like convolution and FFT analysis at the cost of others like sample-accurate timing, which is necessary for any kind of reliable rhythm (Roberts, Wakefield, and Wright 2013). Over the past few years, the Web Audio API has added functionality, a more accessible syntax, and new audio effects. The aforementioned Gibberish.js and its companion Interface.js were among the first libraries to add higher-level musical and audio structures on top of the base Web Audio API, which has enabled the rapid prototyping and implementation of complex web-based music and audio interfaces. These libraries are increasingly being used in educational settings, such as middle schools in which girls are taught coding (Roberts, Allison, Holmes, Taylor, Wright and Kuchera-Morin 2016).

Newer Javascript audio and music libraries include Flocking (Clark and Tindale 2014), p5.js (McCarthy 2015), and tone.js (Mann 2015). Flocking uses a declarative syntax, similar to SuperCollider’s, meant to promote algorithmic composition, interface design, and collaboration. The p5.sound library adds an audio component to the popular web animation library p5.js, and most recently tone.js provides a framework with syntax inspired by DAWs and a sample-accurate timeline for scheduling musical events (Mann 2015).
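
A short sketch may clarify what “a sample-accurate timeline for scheduling musical events” looks like in practice. The API names below are assumptions based on recent releases of tone.js, not a canonical example from the library’s documentation.

```javascript
// tone.js-style scheduling (names assumed): a global transport keeps the
// tempo, and scheduled callbacks receive the exact audio-clock time at
// which to sound, rather than relying on imprecise JS timers.
const synth = new Tone.Synth().toDestination();
Tone.Transport.bpm.value = 120;            // tempo lives on the transport
Tone.Transport.scheduleRepeat((time) => {
  // "time" is a sample-accurate timestamp on the audio clock.
  synth.triggerAttackRelease("C4", "8n", time);
}, "4n");                                  // repeat every quarter note
Tone.Transport.start();                    // begins after a user gesture
```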

Scratch

Scratch is a programming environment designed and developed in the Lifelong Kindergarten Group at the MIT Media Lab. First released in 2007, Scratch 1.0 was distributed as a local runtime application for Mac and Windows, and Scratch 2.0 was coded in Adobe’s Flash environment to run in the web browser beginning in 2013. With the impending deprecation of Flash as a supported web environment, a new version of Scratch (3.0) has been completely reprogrammed from the ground up using Javascript and the Web Audio API. This version went live to all users in January 2019.

The design of Scratch is based on constructionist design metaphors in a visual environment where users drag, drop, and snap together programmable LEGO-style blocks on the screen (Resnick et al. 2009). An important part of the Scratch website is its large community of users, who comment on and support each other’s work, and its project gallery, which includes a “remix” button for each project that enables users to look inside projects, build upon and edit them to make custom versions, and copy blocks of code known as scripts into their “backpacks” for later use. While Scratch is used to create all kinds of media ranging from simulations to animations to games, it comes with a set of sound blocks and advertises its capability for creating music. In fact, the name “Scratch” derives from a DJ metaphor: a user may tinker with snippets of code created by herself and others, similar to how a DJ scratches together musical samples (Resnick 2012). Scratch is also widely used to make music in classrooms and homes around the world, often through hardware such as the Makey Makey. Middle school music educator Josh Emanuel, for example, has posted comprehensive documentation on building and performing Scratch instruments with his students in Nanuet, New York.

Scratch is accessible even to novice coders, in classes ranging from elementary school through college-level introductory coding courses, and to curious adults. Recently, in an undergraduate course entitled Creative Learning Design, the authors assigned a project for a diverse set of undergraduate students to design and build music-making environments in Scratch, followed by user-testing sessions with local first and seventh graders.[2] The music component of Scratch has been used to teach computational thinking in an interdisciplinary general education Sound Thinking class co-designed and taught by Alex Ruthmann at the University of Massachusetts Lowell (Greher and Heines 2014). The instructors of the course covered topics such as iteration, boolean operators, concurrency, and event handling through building music controllers, generating algorithmic music, and defining structures for short compositions (2014, 104–131). They chose Scratch as the music programming environment for their course because it “makes the threshold of entry into the world of programming very low” and “includes some great music functionality,” such as the ability to “play a number of built-in sounds, or sounds stored in mp3 files” and, more importantly, “its ability to generate music using MIDI” (2014, 104–105).

Motivation

Scratch has a massive user base, with over 33 million registered users and over 35 million projects shared on its platform to date. In supporting this wide audience, the Scratch team intends to put forward an environment with “low floors, high ceilings, and wide walls,” that is to say, a low barrier to entry with ample room to grow and pursue unique interests (Resnick et al. 2009). However, when it comes to music projects, we believe Scratch has limitations. In terms of easily coding and playing pre-recorded and user-created audio files, one finds low floors. But users wishing to create music through sequencing musical notes, or with engaging sounds and instruments, often face high floors, low ceilings, and narrow walls due to complex numeric music mappings and mathematical representations, as well as data structures that limit musical expression.

In this work, we present and critique three major challenges with the implementation of music and sound blocks in Scratch 2.0. First, the functionality of music blocks is immediately accessible, but designed to play sounds at the level of “musical smalls” (i.e., the “atoms” or “phonemes” of music, such as an individual note, pitch, or duration) rather than “musical simples” (i.e., the “molecules” or “morphemes” of music, such as motives and phrases) (Bamberger and Hernandez 2000). Second, arising from Scratch’s bottom-up programming style (i.e., building music code block by block, and note by note, from a blank screen), the act of realizing musical sequences is tedious, requiring a deep mathematical understanding of music theory to (en)code music. Third, and perhaps most challenging to the end user for music, is a timing mechanism designed to privilege animation frame rates over audio sample-level accuracy. As illustrated by numerous Scratch projects and our own task breakdown, a user must take extra, often unintuitive steps to achieve adequate musical timing for basic musical tasks such as drums grooving together in time, or melodies synchronized with harmony. For the balance of this article, we analyze the design constraints of the music and sound blocks, the user experience and implications of using them in the Scratch environment, and finally the quality of the audio they produce. We conclude with a preview of new music block design ideas that aim to better address music making and coding in the Scratch environment for musical and coding novices.

Music Functionality in Scratch 2.0

The music capabilities of Scratch 2.0 center around the Sound blocks (Figure 1)—audio primitives that enable users to trigger individual sounds, notes and rests, and to manipulate musical parameters such as tempo, volume and instrument timbre.

13 purple puzzle piece-shaped Scratch blocks arranged vertically. Blocks 1-3 correspond to audio files. Blocks 4-7 correspond to drums and pitches. Blocks 8-13 correspond to volume and tempo.
Figure 1. Blocks included in the Sound category in Scratch.

Scratch makes the first steps in crafting sounds and music at the computer easy and accessible. Unlike many audio programming languages, it provides a learner with immediate audio feedback via a single block of code that can be clicked and activated at any point. Other music coding environments often require more complex structures to play a sound, including instantiating a new instrument, locating and loading a sample, and turning audio output on or running the program before producing a result (Figure 2).

Three screenshots of code side by side. Left: a single Scratch block with the text 'play sound meow.' Center: four grey Max MSP blocks. Right: two lines of Javascript code.
Figure 2. Playing a sound “meow” in Scratch (left) uses only one block. It is more complex in Max/MSP (center) and Tone.js (right).
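
For readers curious about the extra structures Figure 2 alludes to, a raw Web Audio version of “play a sample” is sketched below (the file name is hypothetical). Every step that Scratch’s single block hides is visible here.

```javascript
// Playing one sample in plain Web Audio: fetch and decode the file,
// create a one-shot source node, wire it to the output, then start it.
const ctx = new AudioContext();
async function playMeow() {
  const data = await fetch("meow.mp3").then((res) => res.arrayBuffer());
  const buffer = await ctx.decodeAudioData(data); // decode the sample
  const source = ctx.createBufferSource();        // one-shot playback node
  source.buffer = buffer;
  source.connect(ctx.destination);                // route to the speakers
  source.start();                                 // finally, the "meow"
}
```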

Scratch supports three types of audio playback—sounds (audio files), MIDI drums, and MIDI pitched instruments. A library of free audio files is included in the Scratch environment, and sounds are linked intuitively to relevant sprites (e.g. when the Scratch cat is chosen, its default sound option is “meow”). Users can easily upload and record new sounds by clicking on the “sounds” tab. General parameters such as instrument, tempo, and volume are set locally for each sprite.

Unfortunately, the three types of audio playback in Scratch lack consistency. In the design of novice programming languages, consistency means that “the language should be self-consistent, and its rules should be uniform” (Pane and Meyers 1996, 55). Setting a tempo affects pitched instruments and drums, but not audio files (e.g., sounds). This makes sense for games and animations where strict rhythmic timing is unnecessary, e.g. characters communicating with each other. But if users wanted to load in their own musical sounds to create a sampler, they would need to use a combination of “play sound” blocks and “rest” blocks to emulate tempo. The syntax “play sound until done” is also confusing to new learners and unique to sound playback. In reality, this line means “do not move on until the sound has finished” and results in timing derived from the current sound’s length.[3]

While pitched instruments and drums use a consistent timing mechanism, they differ in that “play drum” requires a percussion instrument as an argument, and “play note” requires not an instrument but a pitch. Following a real-world metaphor of musicians playing notated music, this distinction is appropriate (Pane and Meyers 1996), as separate percussion instruments are often notated on different lines of the same staff, just like pitches. A single percussionist is required to play multiple instruments, just as an instrumentalist is required to play multiple notes. However, the “set instrument” function constrains sprites to a single instrument. It is often a challenge to teach that a single sprite can play any number of percussion sounds in parallel with a pitched instrument but is limited to one pitched instrument.[4] A more consistent design might treat a drum like any other instrument, requiring it to be set for the entire sprite while providing smart selection/labelling to indicate the resulting drum sound.[5]

The design of the “play note” and “play drum” blocks is challenging in other ways. For one thing, the vertical, numeric representation of pitches and instruments conceals melodic content that is more easily visualized with other representations like traditional score notation or pitch contour curves. A learner has to memorize, for example, that pitch 64 refers to “E,” that drum 1 is the snare drum, and that instrument 9 is the trombone. Rhythm is also expressed numerically as fractions or multiples of the beat rather than in traditional formal music terms like “sixteenth notes” and “quarter notes.” While this notation affords a more fundamental conception of timing and can lead to crazy behaviors (1.65 beats against 1.1 beats), it constrains the types of rhythms possible, e.g. triplet eighth notes, which would need to be rounded down to 0.33 of a beat. Depending on what one is teaching about music, a notation-based rhythm representation may be more accessible than decimal fractions of beats.
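
The memorization burden is easy to state in code. The sketch below is a hypothetical helper, not part of Scratch, showing that the mapping from MIDI numbers to pitch names is pure arithmetic rather than anything a learner can see on the block.

```javascript
// Convert a MIDI note number to a pitch name (MIDI 60 = C4, middle C).
const NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];
function midiToName(n) {
  const octave = Math.floor(n / 12) - 1;
  return NAMES[n % 12] + octave;
}
midiToName(64); // "E4" — the "E" a Scratch learner must memorize as 64
```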

The quality of the audio output may limit users’ expressiveness and may be a major ceiling on creativity for young users. Pitches and drums are played via a Flash soundbank. While Scratch provides a large variety of drums and pitched instruments, they vary in quality and are output as mono sounds. High-quality audio, such as instrument sample libraries, takes up large amounts of storage space and internet bandwidth. This trade-off between sound quality and load speed limits many web-based music and sound environments. Because Scratch 2.0 does not support synthesizing new sounds or uploading new MIDI instrument libraries, users are less able to pursue their own unique musical instruments. In contrast, EarSketch uses a library of loops created by professional DJs and producers that its developers write contributes to an experience that is personally meaningful and deeply authentic (Freeman et al. 2014). In Scratch, the lack of high-quality audio, in combination with the other challenges discussed, results in many users bypassing the instrument blocks entirely in favor of using audio samples and uploading long audio files that play on repeat. For example, in one Scratch forum post entitled “Add more instruments,” one response to the poster’s request states, “Why can’t you download music? It’s better.”

Learning Music Coding/Coding Music in Scratch

Coding in Scratch is a bottom-up process. The developers argue that this style of constructionist learning supports exploration, tinkering, and iterative design (Resnick et al. 2009), and Scratch’s remix feature lessens the load by allowing many users to begin from an existing project rather than a blank canvas. Critics, however, have pointed out that it may lead to bad programming habits such as incorrect use of control structures and extremely fine-grained programming where programs consist of many small scripts that lack coherency (Meerbaum-Salant, Armoni, and Ben-Ari 2011). For our purposes, we must interrogate whether Scratch’s bottom-up programming style is a useful method for composing music. For one thing, bottom-up programming takes time. Each time a new block is added to the screen, a user must click and drag, select a new pitch/instrument and duration, and often rearrange existing blocks. Music scripts are long, and as a result of a finite amount of screen space, they quickly become unwieldy: even simple tunes like “Twinkle, Twinkle Little Star” have tens if not hundreds of notes. One might argue that hard-coding all of the pitches, as one might in a DAW, defeats the purpose of using a programming environment to represent music. It is true that a powerful exercise in Scratch is to represent a tune with as little code as possible using loops as repeats, messages to trigger sections, and data structures to store groups of notes. In formal settings, the tedium required to code a tune block by block opens up the opportunity to learn more efficient methods of coding (Greher and Heines 2014). However, abstraction is a difficult concept for novice programmers to internalize, and problem-solving usually occurs first at the concrete level. Collecting the results of multiple studies, Pea, Soloway, and Spohrer (2007) note that “there is a well-known tendency of child Logo programmers to write simple programs using Logo primitives rather than hierarchically organized superprocedures that call other procedures even after examples of superprocedures and discussions of their merits for saving work have been offered” (24). Similarly, in Scratch, beginning music coders must slog through dragging blocks and blocks of code before realizing even a simple melody, let alone a multi-part composition (Figure 3).

Large, complicated structure of more than 40 Scratch sound blocks connected vertically. The top block starts the music. Each following block indicates the pitch and duration of a single note.
Figure 3. “Twinkle, Twinkle Little Star” requires a large number of “play note” blocks in Scratch.

A more fundamental argument against bottom-up music coding can be drawn from Jeanne Bamberger’s work Developing Musical Intuitions (2000). In it, she points out that conventional music instruction, like Scratch, begins with the smallest levels of detail—notes, durations, intervals, etc. She compellingly makes the case that this approach ignores the intuitions we already possess as the starting point for musical instruction, e.g. our love of “sticky” melodies and our pleasure in finding and moving to danceable grooves. Conventional instruction asks us to begin our educational work with the smallest musical components, often taken out of context, rather than with those that our prior experience and developed intuitions with music may have provided.

In Bamberger’s curriculum, conversely, she begins with the middle-level concept of melodic chunks. The accompanying learning tool Impromptu presents musical building blocks called “tuneblocks” that are hierarchically grouped into entire tunes, phrases, and motives. The user begins by listening to the various “chunks” of melodies, and arranges and rearranges them to create or re-create a composition. The user is not working initially at the level of individual pitches and rhythms, but instead at a higher, more intuitive level of abstraction. Students begin by learning that the base units of a composition are melodies and motives, rather than pitches or frequencies. It is not until Part 3, entitled Pitch Relations, that Bamberger zooms in, introduces intervals, and assigns students to uncover and reverse engineer scales from melodies. She effectively reverses the order of traditional music theory, teaching children how to theorize with music rather than accept it as a set of bottom-up, well-structured rules. One notices this approach is the reverse of the way many people approach teaching coding in Scratch (and in many other languages), but it is congruent with the original design intention behind Scratch, where users would first browse and play games created by other users in the Scratch community, then click the “see inside” button to remix others’ code as the starting point for learning.

From a music technology angle, Freeman and Magerko (2016) classify educational music programming languages by the layer of musical interaction they afford and thus the form of musical thinking they prescribe: subsymbolic programming languages prioritize synthesis and control of time at an extremely low level; symbolic languages offer editing at the individual note level, usually via MIDI; hierarchical languages deal with musical form and allow loops or motives to be layered and remixed.[6] The most powerful of the bunch are clearly the subsymbolic languages, which offer near-limitless flexibility and control for expert and professional coders. Yet the manipulation of low-level processes requires patience, determination, and a lot of time and experience to build up the knowledge to implement in a practical setting.[7] We have observed that novice Scratch users sometimes get hung up and frustrated after the first few short programs they write, never experiencing a moment of true self-expression and failing to develop the knowledge and motivation to progress further.

Hierarchical languages do not always support lower-level sound editing, meaning that advanced users will inevitably reach a ceiling on what they can express with the tool. (EarSketch, for example, allows the programming of beats but disallows modifying single pitches within a melody.) However, this constraint is not necessarily a bad thing. Educational programming languages should not serve as the end-all-be-all, but rather as a meaningful stage in a learner’s development. Maloney et al. (2010) write that their intent with Scratch is to “keep [their] primary focus on lowering the floor and widening the walls, not raising the ceiling,” noting it is important for some users to eventually move on to other languages (66). Of course, this growth step should not be taken out of desperation, but rather in search of new mediums that afford more creative potential after one has undergone multiple positive experiences.

Ideally, users of a music programming environment should have the opportunity to compose entire pieces and/or perform for others using custom instruments and interfaces, all while encountering code challenges that push their critical thinking skills along the way. Through this process, they can begin to theorize about and through code using musical materials and ideas they care about, testing and developing their musical intuitions and hunches as part of their creative process. Environments like EarSketch and Hyperscore (Farbood, Pasztor, and Jennings 2004) offer innovative, hierarchical methods of interaction that demonstrably achieve these goals, but we believe there is still unrealized potential for environments and educators to support parallel artistic and computational growth and then scaffold the transition into more advanced and powerful environments. (Hyperscore is not a programming language, but a unique computer-assisted composition system that enables users to control high-level structures, e.g. composition-length harmonic progressions, and lower-level structures, e.g. motivic gestures, all through drawing and editing curves.)

Inconsistent Timing in Scratch Music Playback

The Scratch music implementation contains a tempo object that is used to specify the duration of notes, drums, and rests. Even though tempo is key to almost all music-making, many other audio programming languages require a user to time playback in second- or millisecond-long durations. One obvious application in which tempo is useful is creating rhythms or drum beats. Consider a popular Scratch drum machine developed by user thisisntme. Both the code of thisisntme and many thoughtful comments indicate exciting possibilities for making music in Scratch.[8] Other comments indicate that the app is imperfect, especially in regards to rhythm. For example, user Engin3 writes, “only problem to me (that i doubt can be fine tuned [sic], is the timing of the repeat. It feels a bit off to me.” The developer thisisntme writes in the instructions, “TO MAKE THE BEATMAKER MORE ACCURATE, ENABLE TURBO MODE AND REDUCE TEMPO WITH DOWN ARROW.” These comments are indicative of a much more serious problem in Scratch: the lack of accurate timing. In the rest of this section, we describe the buggy path to building a simple drum loop consisting of hi-hat, kick drum, and snare drum. Consider a challenge to implement the drumbeat in Michael Jackson’s “Billie Jean” (Figure 4).

One measure of music represented with two different approaches to notation. Top: Traditional notation showing eight hi-hat eighth notes and alternating kick and snare quarter notes. Bottom: Piano roll notation where notes are represented as filled in squares on a grid.
Figure 4. “Billie Jean” drum pattern in traditional and piano roll notation.

An initial solution to this problem requires listening to determine the relative rates of each drum and envisioning three musical lines running in parallel. Learners would need to understand loops, as well as implement a solution to trigger the three instruments to start at the same time. A Scratch representation of all the notes above looks as follows:

Four groups of Scratch blocks that play the rhythm in Figure 4. The first group sets the tempo to 120 bpm. Groups 2-4 trigger hi-hat with eight blocks, kick drum with two blocks, and snare drum with four blocks respectively. Sounds are set to loop forever.
Figure 5. “Billie Jean” Solution 1 representing all notes in measure.

A follow-up challenge adapted from the Sound Thinking curriculum (Greher and Heines 2014, 109–110) may involve asking students to implement a simplified version of the groove using as few blocks of code as possible. This approach teaches students to look for patterns and opportunities to code the music more efficiently, exploring the computational concepts of repeats, loops, and variables.

Identical layout of Scratch code as in Figure 5 with reduced code. Only one hi-hat block, one kick-drum block, and two snare drum blocks are used.
Figure 6. “Billie Jean” Solution 2, a reduced-code recreation of the pattern coded in Figure 5.

When played back, the above solutions drift away from each other, increasingly out of time. Individually, each of the drum parts sounds fine, but executed together in parallel they sound cacophonous, as if three musicians were asked to play together wearing blindfolds and earplugs. In an informal experiment with undergraduates in the NYU Music Experience Design Lab, each student who encountered the problem unsuccessfully attempted a different fix. One student tried explicitly setting the tempo in each group, as if each loop required a constant reminder or reset for how fast it is supposed to play. Another tried adjusting the beat values to approximations of the beat like 0.99 and 1.01, attempting to account for perceived differences in playback speed across the drums but actually augmenting the differences in tempo.

Similar layout of Scratch code as Figure 6, but with a 'set tempo' block inserted into each instrument.
Figure 7. “Billie Jean” attempted solution by setting tempo at the start of every loop.

In reality, the problem is beyond the user’s control, stemming from the fact that Scratch is optimized for animation frame-rate timing rather than the higher-precision timing needed for audio and music. Greher and Heines (2014) write that Scratch 1.4 intentionally slows down processing so that animations do not run too fast on modern computers, but “what’s too fast for animations is sometimes too slow for music, where precise timing is necessary” (112). In Scratch 2.0, the problem may be compounded by its Flash implementation, in which background processes within and outside the web browser can interrupt audio events, pushing music and sound out of sync. We should note that timing on the web is a longstanding challenge: before the Web Audio API, programmers relied on imprecise functions like setTimeout() and Date.now(). Even though the Web Audio API exposes the computer’s hardware clock, its implementation is inflexible and often misunderstood (Wilson 2013). Ideally, in a novice programming environment like Scratch, these timing challenges should be addressed so that users can think about and implement code with intuitive musical results.
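
The standard remedy in web audio programming, described by Wilson (2013), is a “lookahead” scheduler: an ordinary JS timer wakes up frequently, but every note is scheduled slightly in the future on the audio hardware clock. The sketch below illustrates the pattern with a bare click; it is an illustration of the technique, not a description of Scratch’s internals.

```javascript
// Lookahead scheduling: setTimeout only decides *when to plan*; the
// AudioContext clock decides *when sound happens*, sample-accurately.
const ctx = new AudioContext();
const secondsPerBeat = 60 / 120;          // 120 BPM, as in Figure 5
const lookahead = 0.1;                    // plan 100 ms into the future
let nextNoteTime = ctx.currentTime;

function playClick(time) {                // stand-in voice: a short blip
  const osc = ctx.createOscillator();
  osc.connect(ctx.destination);
  osc.start(time);
  osc.stop(time + 0.05);
}

function scheduler() {
  while (nextNoteTime < ctx.currentTime + lookahead) {
    playClick(nextNoteTime);              // timed by the audio clock...
    nextNoteTime += secondsPerBeat;
  }
  setTimeout(scheduler, 25);              // ...not by this imprecise timer
}
scheduler();
```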

There are a few “hacks” to work around these timing issues in Scratch. Toggling the “Turbo Mode” button helps, but it merely executes code blocks as fast as possible, which does not prevent timing issues. Discussions from users in the Scratch online forums reveal that they are also perplexed about what the mode actually accomplishes. An algorithmic solution to improve musical timing involves conceptualizing one of the musical lines as the leader or conductor. Rather than each of the lines looping and gradually getting out of time relative to each other, one line loops and signals the others to follow suit:

Similar layout of Scratch code as Figure 6. The hi-hat grouping now includes a block that states 'broadcast message start.' The snare and kick groupings include corresponding blocks within the text 'When I receive start.'
Figure 8. “Billie Jean” solution 3 with more accurate timing.

Of course, this solution is less readable and not immediately intuitive to a novice coder. It is also more viscous (Pane and Meyers 1996), since the user must always ensure the “conductor” script is equal in length to or longer than all other scripts. Changes to the “conductor” script may affect playback of the other scripts, so the user must check multiple scripts after making a change. More importantly, it is an incomplete solution. As each line moves forward in its own loop, it will get out of time relative to the other loops before snapping back in time once the “start” message is broadcast, causing a feeling of jerkiness. As loops lengthen, the effect becomes more dramatic. A slightly better solution requires creating a single forever loop that triggers all current drums at the correct moments in time. Unfortunately, this solution ignores the durations included in each “play drum” call, since rests are necessary to separate the asynchronous “broadcast” calls. While it prevents these few music loops from getting out of sync, it does not solve the fundamental problem of inaccurate timing: the entire rhythm still has the potential for musical drift (a code sketch of this single-timeline idea, and of why the drift persists, follows Figure 9).

Drastically changed version of Scratch code. A longer grouping of Scratch blocks on the left represents the full rhythm with a combination of broadcast and rest blocks. Three groupings on the right receive broadcast messages and trigger corresponding drums.
Figure 9. “Billie Jean” solution 4 with slightly more accurate timing.
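
Transplanting Solution 4’s single-timeline idea into ordinary Javascript makes both its strength and its weakness visible. In this sketch, playDrum is a hypothetical stand-in for the scripts that receive each broadcast; one loop owns every rest, so the parts cannot drift apart from one another, yet the timer-based loop still lets the groove as a whole wander.

```javascript
// "Billie Jean" on an eighth-note grid: one conductor loop triggers
// every drum, so the three parts always sound together.
const playDrum = (name) => console.log(name);   // stand-in for real playback
const steps = [
  ["hihat", "kick"],  ["hihat"],                // beat 1 and its offbeat
  ["hihat", "snare"], ["hihat"],                // beat 2
  ["hihat", "kick"],  ["hihat"],                // beat 3
  ["hihat", "snare"], ["hihat"],                // beat 4
];
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function playGroove(bpm = 120) {
  const eighthMs = 60000 / bpm / 2;             // one grid step
  while (true) {
    for (const drums of steps) {
      drums.forEach(playDrum);                  // this step's drums, together
      await sleep(eighthMs);                    // shared rest: no relative drift,
    }                                           // but setTimeout error accumulates,
  }                                             // so the whole groove still wanders
}
playGroove();
```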

One may argue that because at least one solution exists, the programming environment is not at fault, and further, that all programming languages have affordances and constraints unique to their implementations. We hope that a solution may be found in a future version of Scratch, or via a new extension mechanism, to solve many of these timing issues so that novice musicians and coders may use the Scratch paradigm to create musically synchronized code for their creative projects. This is in part because new programmers are often less attuned to precisely what to look for when they encounter unexpected results. Pea, Soloway, and Spohrer (2007) write that “since the boundaries of required explicitness vary across programming languages, the learner must realize the necessity of identifying in exactly what ways the language he or she is learning ‘invisibly’ specifies the meaning of code written” (25). To solve this problem, users must learn to mistrust Scratch’s implementation of tempo and work around the issue of timing by using their ears. Our solutions can be thought of more as “hacks” than as useful exercises in building musical computational thinking skills.

Consider the same challenge executed in another visual programming environment, Max/MSP. Max/MSP, as noted above, is not nearly as immediately accessible and does not abstract away as much music functionality. As a result, more code blocks are necessary to accomplish the same tasks. First, there is no library of built-in drum sounds, so the user must devise his or her own method to synthesize or play back the drums. In the solution below (Figure 10), local audio files for each drum type are loaded into areas of Max/MSP memory called buffers. Unlike Scratch’s sound playback, which requires one object, Max/MSP’s sound playback requires a minimum of four objects: a buffer to store the audio file, a play~ object to play it, a start button to trigger playback, and an audio output to ensure sound reaches the user’s speakers. Second, rather than specifying a global tempo in beats per minute (BPM), Max/MSP includes a metro object which produces output at regular intervals specified in milliseconds. To convert BPM to the correct note duration, the user must perform the calculation themselves (a sketch of this calculation follows Figure 11). All that said, the Max/MSP program utilizes an algorithm identical to the original Scratch solution, but works as expected in that the drum beats sound in time with each other.

Recreation of Scratch code in Figure 6 with the visual programming language Max MSP. Each grouping is much more complicated with additional required calculations.
Figure 10. Max/MSP Implementation of “Billie Jean” drumbeat.
Figure 11. Four audio examples of the Billie Jean beat as discussed: First, Solution 1 in Scratch where the instruments drift. Second, Solution 1 implemented in Max MSP with accurate timing. Third, Solution 3 in Scratch where one instrument broadcasts “start” to other instruments. Fourth, Solution 4 in Scratch where the beat is notated as a single sequence using “broadcast” and “rest” functions.
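
As an aside, the BPM-to-milliseconds conversion that Max/MSP leaves to the user is a single calculation, sketched here for the tempo used throughout this example.

```javascript
// Milliseconds per beat at a given tempo, e.g. for a metro object.
const msPerBeat = (bpm) => 60000 / bpm;
msPerBeat(120);     // 500 ms per quarter note at 120 BPM
msPerBeat(120) / 2; // 250 ms per eighth note — the hi-hat rate in Figure 10
```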

New Opportunities for Music Blocks in Scratch

As we have shown, music making in Scratch may be limited by the level of representation coded into the blocks, differing user motivations in using and reading the blocks, and inaccurate timing that affects music playback. However, there are ways forward that may enhance music making through a redesign of the music and sound blocks in Scratch. Scratch’s accessibility as a browser-based environment, its wide and diverse user community, and its insistence on making all code and assets shared on its platform copyable and remixable create a democratized space for children to explore music making and learning. Imagine if young people could not only use the web as a tool to listen to music and look up song lyrics, but actually step inside and remix the creations of their peers. With a new extension mechanism due to be implemented in Scratch 3.0, educators and more advanced coders will be able to design their own blocks to extend or modify the functionality of Scratch, leading towards a more open and customized coding environment.

As described in the wiki entry for Scratch 3.0, there are many notable changes. Scratch 3.0 only supports sound blocks in the default blocks interface, while music blocks, such as “play note” and “set tempo,” must now be loaded via an extension. This separation of blocks addresses our concerns about consistency by formally separating how sound and MIDI music blocks are used.[9] However, the issues of steady musical timing and the representation of music as “musical smalls” remain. As other developers step in to add music functionality, it is important that they explore ways to avoid the pitfalls of the current music implementation.

One exciting development is the ScratchX + Spotify extension for the ScratchX experimental platform.[10] This extension interfaces with the Spotify API in only a few powerful Scratch blocks, while addressing many of the issues we discussed above.[11] Rather than using low-level notes and beats as musical building blocks, it loads thirty-second-long clips of songs streamed instantly from Spotify. It abstracts away many of the complexities of using the API while including exciting features, such as chunking a song based on where Spotify’s algorithm believes the beat to be, and triggering animations or motions based on cues timed to structures within the music. Crucially, the ScratchX + Spotify website provides starting points for using the extension, such as one app that makes sprites “dance” and another that lets a user remix a song. From a user’s point of view, they can access a song or piece of music that is personally meaningful to them, and begin to code, play, and experiment directly. They can work directly with musically simple chunks of music they care about, in turn developing the curiosity that lays the foundation for the meaningful exploration of lower-level musical details and code at a later time. This is a powerful idea and a positive step towards making music coding projects more accessible and motivating to young coders.

New Design Ideas for Music Blocks in Scratch

In the NYU Music Experience Design Lab (MusEDLab), we have been experimenting with new block designs for music using Scratch 2.0’s “Make a Block” functions, as well as ScratchX’s extension platform. In beginning work on new Scratch 3.0 music extensions, we are intimately aware of the challenges in designing for novice programmers and in representing music in code. To move from music blocks that are based on the musical smalls of “play note” and “play drum,” we propose creating new blocks that are modeled at a higher level of musical representation, closer to the “musical simples” that Bamberger and Hernandez (2000) advocate. This work builds on the creation of new individual blocks that perform chords or entire sequences of music, experimenting with possible syntaxes for creative interaction, exploration and play.[12]

As a first step, we are prototyping new functionality to launch various MusEDLab web-based apps via blocks within Scratch, such as the circular drum sequencer Groove Pizza, then returning the audio or MIDI output for use in Scratch projects. This functionality enables users to use our apps to play with and create rhythmic groove “simples” in an interface designed for musical play. Then, when they are finished creating their groove, the groove is already embedded into a new “play groove” Scratch block for use and further transformation through code.

In this prototype block (Figure 12), the user clicks on the yellow Groove Pizza icon on the “play groove” block, which pops up a miniature 8-slice Groove Pizza in the user’s browser. The user clicks to create a visual groove in the pop-up pizza, and the “play groove” block is virtually filled with that 8-beat groove. The user can then use this block in their Scratch project to perform the rhythmic groove. If the user wants to speed up or slow down the groove, they can modify it with an existing “set tempo” or “change tempo” block. This approach brings the musical creation and playful coding experiences closer together within the Scratch 3.0 interface.

Single Scratch block with the text 'play groove' and an image that can be clicked on to open up the Groove Pizza app.
Figure 12. MusEDLab prototype block creating and playing a Groove Pizza groove in Scratch 3.0.

Curious users of the MusEDLab apps who are interested in learning more about the computational thinking and coding structures behind them may already launch, step inside, and remix simplified versions created in Scratch, such as the aQWERTYon and Variation Playground apps. Where novice coders may not initially be able to understand the Javascript code behind our apps when clicking “View Source” in their browser, they can get a middle-level introduction to our approaches by exploring and remixing the simplified versions of these apps in Scratch.

Another approach we’ve taken is to utilize Scratch’s “Make a Block” feature to create custom music blocks such as “play chord” and “play sequence.” In this approach, the user can copy our blocks and use them without having to know how the inner coding works. Users who are curious can always take a deeper look at how the blocks, built using musical and computational simples, were coded, because in this implementation the structure behind the custom blocks is left intact.

Side by side comparison of traditional and experimental Scratch blocks. Detailed description in article text below.
Figure 13. Left: Scratch 2.0 blocks necessary to play a single chord; Right: MusEDLab experimental music blocks for playing a chord progression in ScratchX.

The leftmost block of code in Figure 13 presents a common Scratch 2.0 method for performing a musical triad (a C major chord) when the space key is pressed on the computer keyboard. This code requires three instances of the “when space key pressed” block, and it requires the user to know the MIDI pitch numbers that build a C major chord: C is MIDI note 60, E is 64, and G is 67. The implementation takes six Scratch blocks and presumes a basic understanding of music theory and MIDI.[13] To lower the floor and widen the walls for chord playback, we implemented a new “play chord” block in which users specify the chord root and chord quality directly. These blocks can then be sequenced to form a chord progression. This approach encapsulates music at a level higher than individual notes and rhythms: the level of the “musical simple” of a chord. While there is a time and place for teaching that chords are made up of individual notes, the “play chord” block enables novices to work more easily with chords, progressions, and harmony.
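The mapping that a “play chord” block hides from the user can be sketched in a few lines of TypeScript. This is an illustration of the idea, not the block’s actual implementation; it borrows Tone.js (Mann 2015) for playback, and the function names are ours.

```typescript
// Sketch: chord root and quality in, sounding notes out.
import * as Tone from "tone";

const INTERVALS: Record<string, number[]> = {
  major: [0, 4, 7], // root, major third, perfect fifth
  minor: [0, 3, 7], // root, minor third, perfect fifth
};

function chordToMidi(rootMidi: number, quality: "major" | "minor"): number[] {
  return INTERVALS[quality].map((semitones) => rootMidi + semitones);
}

// (In a browser, call `await Tone.start()` after a user gesture first.)
const synth = new Tone.PolySynth(Tone.Synth).toDestination();

// "play chord C major" -> MIDI 60, 64, 67: exactly the three notes the
// six-block Scratch 2.0 version on the left of Figure 13 spells out by hand.
function playChord(rootMidi: number, quality: "major" | "minor"): void {
  const freqs = chordToMidi(rootMidi, quality).map((m) =>
    Tone.Frequency(m, "midi").toFrequency()
  );
  synth.triggerAttackRelease(freqs, "2n");
}

playChord(60, "major"); // C major
playChord(57, "minor"); // A minor
```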

Another approach to a simpler music block is illustrated in Figure 14. To perform a melody or a sequence of notes and rests, a Scratch 2.0 user needs one “play note” block for each note in the melody. The “play seq” block instead lets the user input a sequence of numbers (1–9, plus 0 for a tenth note) and hyphens for musical rests to represent and perform a melodic sequence. The variable “major” indicates that the sequence is mapped onto a major scale starting on a root note of 60, or middle C; the number 1 maps onto that root pitch. Within the same block, the melodic sequence can be transposed, mapped onto different musical scales, and given rhythmic durations as fractions of whole notes. When used in a live-coding approach, a sequence running inside a repeat or forever loop can be adjusted to update the melody on the fly (a sketch of the underlying degree-to-pitch mapping follows Figure 14).

Comparison of traditional and experimental Scratch blocks. Detailed description in article text above.
Figure 14. Top: sequence of notes coded in Scratch 2.0; Bottom: experimental music block for playing an identical sequence.
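As referenced above, the parsing that a “play seq” block implies can be sketched briefly: digits become scale degrees, hyphens become rests, and degrees are offset from a root MIDI note. This TypeScript version is illustrative only and limited to pitch; the real block also handles rhythmic durations.

```typescript
// Degree tables spanning ten scale degrees (the "0" digit is the tenth).
const SCALES: Record<string, number[]> = {
  major: [0, 2, 4, 5, 7, 9, 11, 12, 14, 16],
  minor: [0, 2, 3, 5, 7, 8, 10, 12, 14, 15],
};

// "1353-1" on a major scale rooted at middle C -> [60, 64, 67, 64, null, 60].
function parseSeq(seq: string, scale: string, rootMidi = 60): (number | null)[] {
  return [...seq].map((ch) => {
    if (ch === "-") return null; // rest
    const degree = ch === "0" ? 9 : parseInt(ch, 10) - 1; // degrees are 1-indexed
    return rootMidi + SCALES[scale][degree];
  });
}

console.log(parseSeq("1353-1", "major")); // [60, 64, 67, 64, null, 60]
```

Transposition falls out of changing `rootMidi`, and remapping to another scale is a different lookup table, which is why a single block can expose all three controls.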

The prototype blocks described above offer just a few ideas for making it easier for novices to code and explore musical concepts in Scratch. Our approach focuses on designing Scratch music blocks modeled after Bamberger’s musical simples rather than musical smalls, potentially opening a new pathway for children to explore creating and learning music through coding, intuitively and expressively. Given Scratch’s huge user base among school-age children, there is a great opportunity for the environment to be a platform not only for teaching coding and computational thinking, but also for musical creativity and expression. The affordances of a well-designed, wide-walled coding environment provide opportunities for children to “tinker” with the musical intuitions and curiosities they already have, and to share them easily with peers, teachers, and families. While many technical and design challenges lie ahead, and no environment can be perfect for everyone, we look forward to seeing how developers will extend Scratch to promote lower floors, higher ceilings, and wider walls for creative music and sound coding projects for all.

Notes

[1] The recent book New Directions for Computing Education: Embedding Computing Across Disciplines (Fee, Holland-Minkley, and Lombardi 2017) provides a good starting point for reading about how computer science may be taught in broader contexts. One chapter, by Daniel A. Walzer, is particularly relevant, as it examines CS in the context of a music technology program (143).

[2] The website for the course may be found at http://creativelearningchina.org/. Some of the Scratch projects created in the course may be found in the Creative Learning Design Scratch Studio. Broadly, in their reflections, our students found that the experiences they designed were fun for first graders but overly simplistic for seventh graders, who were quite critical and ready to implement their own remixes!

[3] The new version of Scratch (3.0) changes the syntax for audio: the old “play sound” block is now “start sound,” more accurately indicating that the block simply starts the sound without waiting for it to finish before moving on to the other blocks in a stack.

[4] We often use the pedagogical metaphor of a sprite as a musician. A clarinetist can play only the clarinet (but any number of pitches); a percussionist has two hands and two feet and can play up to four different drum sounds at once in layers. That is, if we are trying to enforce human metaphors; one of the appealing things about computer environments is that they can create music that humans cannot. Many of Alex Ruthmann’s early Scratch experiments are “hacks” that take advantage of overdriving Scratch’s Flash sound engine.

[5] Tone.js uses a more consistent design (Mann 2015). Even though sample-playback objects (called “Players”) and synthesis objects require different arguments upon instantiation, both essentially produce sound, and as a result they share many methods, such as “start,” “stop,” “connect,” and “sync.”
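For instance, in this hypothetical TypeScript snippet (the sample URL is a placeholder), the two sources are constructed differently but driven with the same vocabulary:

```typescript
import * as Tone from "tone";

async function demo(): Promise<void> {
  // Construction differs by source type...
  const osc = new Tone.Oscillator(440, "sine").toDestination();
  const player = new Tone.Player("https://example.com/drum-loop.wav").toDestination();

  await Tone.loaded(); // wait for the player's sample to download

  // ...but the lifecycle is identical.
  osc.start();
  player.start();
  osc.stop("+1");    // both accept Tone's relative time syntax
  player.stop("+1");
}
```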

[6] While programming environments often blur the lines, a good litmus test is to ask what a typical “Hello World” program looks like. Is it generating a sine wave, perhaps mapping mouse position to frequency? Subsymbolic. Is it scheduling a few notes and rests? Symbolic. Is it triggering a loop? Hierarchical.
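For illustration, the three “Hello World”s might look like this in Tone.js; the pairing of snippets to categories is our own, and the loop URL is a placeholder.

```typescript
import * as Tone from "tone";

// Subsymbolic: raw signal control; mouse position becomes frequency.
const osc = new Tone.Oscillator(440, "sine").toDestination().start();
document.addEventListener("mousemove", (e) => {
  osc.frequency.value = 100 + e.clientX * 2;
});

// Symbolic: a few notes scheduled in musical time.
const synth = new Tone.Synth().toDestination();
synth.triggerAttackRelease("C4", "8n");
synth.triggerAttackRelease("E4", "8n", "+0.5"); // half a second later

// Hierarchical: trigger a whole pre-made loop.
const loop = new Tone.Player({
  url: "https://example.com/drum-loop.wav",
  loop: true,
  autostart: true,
}).toDestination();
```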

[7] The first author recalls a conversation with Miller S. Puckette about the degree of functionality that Pure Data, the open source music programming language he actively develops, should have built in, and what should be left to users to implement with its basic constructs. For example, Puckette teaches students to play an audio file stored in a buffer by determining its size and then iterating through each sample at the rate at which it was recorded. Only once this is understood does he mention the existence of a simple playback command (sfread~). This is a worthwhile exercise for the budding music technologists in his course, but less so for novice musicians who simply want to make some sounds before they have learned about loops and sampling rates.
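A rough Web Audio analogue of this pedagogy in TypeScript, our illustration rather than Puckette’s Pure Data code: first play a decoded buffer “by hand” by stepping through its samples, then reveal the built-in one-liner. ScriptProcessorNode is deprecated but keeps the manual version compact, and the sketch assumes the buffer’s sample rate matches the context’s.

```typescript
const ctx = new AudioContext();

async function loadBuffer(url: string): Promise<AudioBuffer> {
  const data = await (await fetch(url)).arrayBuffer();
  return ctx.decodeAudioData(data);
}

// The hard way: iterate through every sample ourselves.
function playByIteration(buffer: AudioBuffer): void {
  const samples = buffer.getChannelData(0);
  let index = 0;
  const node = ctx.createScriptProcessor(1024, 0, 1);
  node.onaudioprocess = (e) => {
    const out = e.outputBuffer.getChannelData(0);
    for (let i = 0; i < out.length; i++) {
      out[i] = index < samples.length ? samples[index++] : 0; // silence at the end
    }
  };
  node.connect(ctx.destination);
}

// The simple command: let the engine do the iteration.
function playBuiltIn(buffer: AudioBuffer): void {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
}
```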

[8] One of the authors of this work, Alex Ruthmann, has posted a Scratch Live coding session inspired by the composer Arvo Pärt. Another neat example of live coding and pushing Scratch to its timing limits is demonstrated by Eric Rosenbaum.

[9] For example, one user writes, “Mine Is Kick: 1000 0000 1010 1001 Clap: 1010 0110 0010 1000 Hat: 1010 0110 0010 1010 Snare: 1000 0100 0000 0001,” showing an exploration of musical ideas and a willingness to share a creative idea with others. The comment also indicates an understanding of Boolean logic, where ones indicate onsets and zeros indicate rests. Many others share grooves in the same Boolean notation, and most remixes are otherwise-identical projects preloaded with a custom groove. Another user posts, “Put all of the lines on for all the instruments at the same time and put it at full volume. You’re welcome,” demonstrating expressivity in pushing an instrument to its limit to hear the result.

[10] Another simple but welcome fix in Scratch 3.0 is that instrument names are included with their numbers, and pitch letters appear next to MIDI values, making all music code more readable.

[11] The extension is currently available for ScratchX, an experimental version of Scratch created to test and share extensions; ScratchX lacks many of Scratch’s community features.

[12] Some early prototypes of these musical blocks may be explored on the Performamatics@NYU Scratch studio page.

[13] An excellent resource for learning the basics of music theory and MIDI through making music in the browser can be found at https://learningmusic.ableton.com.

Bibliography

Aaron, Samuel, Alan F. Blackwell, and Pamela Burnard. 2016. “The development of Sonic Pi and its use in educational partnerships: Co-creating pedagogies for learning computer programming.” Journal of Music, Technology & Education 9, no. 1 (May): 75-94.

Bamberger, Jeanne. 1979. “Logo Music projects: Experiments in musical perception and design.” Retrieved March 21, 2019 from https://www.researchgate.net/publication/37596591_Logo_Music_Projects_Experiments_in_Musical_Perception_and_Design.

Bamberger, Jeanne Shapiro, and Armando Hernandez. 2000. Developing musical intuitions: A project-based introduction to making and understanding music. Oxford University Press, USA.

Clark, Colin B. D., and Adam Tindale. 2014. “Flocking: a framework for declarative music-making on the Web.” In SMC Conference and Summer School, pp. 1550-1557.

Cooper, Stephen, Wanda Dann, and Randy Pausch. 2000. “Alice: a 3-D tool for introductory programming concepts.” Journal of Computing Sciences in Colleges 15, no. 5: 107-116. Consortium for Computing Sciences in Colleges.

Farbood, Morwaread M., Egon Pasztor, and Kevin Jennings. 2004. “Hyperscore: a graphical sketchpad for novice composers.” IEEE Computer Graphics and Applications 24, no. 1: 50-54.

Fee, Samuel B., Amanda M. Holland-Minkley, and Thomas E. Lombardi, eds. 2017. New Directions for Computing Education: Embedding Computing Across Disciplines. Springer.

Freeman, Jason, and Brian Magerko. 2016. “Iterative composition, coding and pedagogy: A case study in live coding with EarSketch.” Journal of Music, Technology & Education 9, no. 1: 57-74.

Freeman, Jason, Brian Magerko, Tom McKlin, Mike Reilly, Justin Permar, Cameron Summers, and Eric Fruchter. 2014. “Engaging underrepresented groups in high school introductory computing through computational remixing with EarSketch.” In Proceedings of the 45th ACM Technical Symposium on Computer Science Education, pp. 85-90. ACM.

Green, Thomas R. G., and Marian Petre. 1996. “Usability Analysis of Visual Programming Environments: A ‘Cognitive Dimensions’ Framework.” Journal of Visual Languages and Computing 7, no. 2 (Summer): 131-174.

Greher, Gena R., and Jesse Heines. 2014. Computational Thinking in Sound. New York: Oxford University Press.

Guzdial, Mark. 2003. “A media computation course for non-majors.” ACM SIGCSE Bulletin 35, no. 3: 104-108.

Heines, Jesse, Gena Greher, S. Alex Ruthmann, and Brendan Reilly. 2011. “Two Approaches to Interdisciplinary Computing + Music Courses.” Computer 44, no. 12: 25-32. IEEE.

Maloney, John, Mitchel Resnick, Natalie Rusk, Brian Silverman, and Evelyn Eastmond. 2010. “The Scratch Programming Language and Environment.” ACM Transactions on Computing Education (TOCE) 10, no. 4 (Winter): 16.

Manaris, Bill, and Andrew R. Brown. 2014. Making Music with Computers: Creative Programming in Python. Boca Raton, FL: Chapman and Hall/CRC.

Mann, Yotam. 2015. “Interactive music with tone.js.” In Proceedings of the 1st Annual Web Audio Conference. Retrieved March 20, 2019 from https://medias.ircam.fr/x9d4352.

McCarthy, Lauren. “p5.js.” https://p5js.org.

Meerbaum-Salant, Orni, Michal Armoni, and Mordechai Ben-Ari. 2011. “Habits of Programming in Scratch.” In Proceedings of the 16th Annual Joint Conference on Innovation and Technology in Computer Science Education, pp. 168-172. ACM.

Pane, John F., and Brad A. Myers. 1996. “Usability Issues in the Design of Novice Programming Systems.” Technical Report CMU-HCII-96-101, School of Computer Science, Carnegie Mellon University.

Puckette, Miller. 2002. “Max at seventeen.” Computer Music Journal 26, no. 4 (Winter): 31-43.

Repenning, Alex. 1993. “Agentsheets: a Tool for Building Domain-Oriented Visual Programming Environments.” In Proceedings of the INTERACT’93 and CHI’93 Conference on Human Factors in Computing Systems, pp. 142-143. ACM.

Resnick, Mitchel. 2012. “Reviving Papert’s dream.” Educational Technology 52, no. 4 (Winter): 42-46.

Resnick, Mitchel, John Maloney, Andrés Monroy-Hernández, Natalie Rusk, Evelyn Eastmond, Karen Brennan, Amon Millner et al. 2009. “Scratch: Programming for All.” Communications of the ACM 52, no. 11 (November): 60-67.

Roberts, Charlie, Jesse Allison, Daniel Holmes, Benjamin Taylor, Matthew Wright, and JoAnn Kuchera-Morin. 2016. “Educational Design of Live Coding Environments for the Browser.” Journal of Music, Technology & Education 9, no. 1 (May): 95-116.

Roberts, Charles, Graham Wakefield, and Matthew Wright. 2013. “The Web Browser as Synthesizer and Interface.” In Proceedings of the International Conference on New Interfaces for Musical Expression, Daejeon, Republic of Korea, pp. 313-318. NIME.

Schembri, Frankie. 2018. “Ones and zeroes, notes and tunes.” MIT Technology Review, February 21, 2018. https://www.technologyreview.com/s/610128/ones-and-zeroes-notes-and-tunes/.

Wang, Ge. 2007. “A history of programming and music.” In The Cambridge Companion to Electronic Music, 55-71.

Wilson, C. 2013. “A Tale of Two Clocks—Scheduling Web Audio with Precision.” January 9, 2013. https://www.html5rocks.com/en/tutorials/audio/scheduling/.

About the Authors

Willie Payne is a PhD candidate in Music Technology at NYU Steinhardt, where he develops tools and pedagogies that enable people to express themselves regardless of skill level or ability. He is especially interested in using participatory design methods to craft creative experiences with and for underrepresented groups, e.g., those with disabilities. When not writing code, Willie can often be found wandering New York City in search of (his definition of) the perfect coffee shop.

S. Alex Ruthmann is Associate Professor of Music Education and Music Technology, and Affiliate Faculty in Educational Communication & Technology at NYU Steinhardt. He serves as Director of the NYU Music Experience Design Lab (MusEDLab.org), which researches and designs new technologies and experiences for music making, learning, and engagement together with young people, educators, industry and cultural partners around the world.
