
ACERT presentation at Hunter College. Photo Credit: Jessie Daniels @JessieNYC

JITP Roundup: “Why Failure Matters”, a Lunchtime Presentation for ACERT

On October 27th, 2016, the Academic Center for Excellence in Research and Teaching (ACERT) at Hunter College held a lunchtime seminar entitled “Why Failure Matters: Editors from CUNY’s Journal of Interactive Technology and Pedagogy on Learning from ‘Teaching Fails.’” The Managing Editor of JITP, Laura W. Kane, introduced the aims and editorial guidelines of the journal, and discussed how the journal operates through a collaborative effort among 23 faculty members, graduate students, and academic staff at CUNY and other institutions.

Also joining the lunch was Sarah Ruth Jacobs, the editor of the journal’s Teaching Fails section. The Teaching Fails section provides an opportunity for faculty members from all disciplines to reflect on the ways in which their use of technology in the classroom fell short of their expectations. These failures can help instructors gain insight and improve in their future class plans. For example, in her Teaching Fails piece, Professor Karen Gregory reflected on how her public-facing course inadvertently failed in giving students a private space for assignments and online discussion.

As part of the session, attendees were asked to reflect on how their uses of technology had failed in the classroom. One insight that came out of this discussion was the importance, when introducing a new technology to students, of explaining not just “the how” but “the why”: why the technology is necessary and the ways in which it benefits students. When students don’t understand the motivation for learning a new technology, they are less engaged and less willing to use it. Attendees also reflected on how students need ample time and detailed instruction in order to properly use new technologies in their assignments; that is, the myth of the “digital native” who perfectly implements technologies can be a faulty line of thinking.

You can read more about the presentation on the ACERT blog. Details about our Teaching Fails section can be found on our sections of the journal page. We encourage submissions about ideas that didn’t work in the classroom – assignments that fell flat, readings that none of your students understood – that may help others to fail better. Questions about our Teaching Fails section should be sent to teaching.fails@jitpedagogy.org.

The InQ13 POOC: A Participatory Experiment in Open, Collaborative Teaching and Learning

Jessie Daniels, Hunter College, CUNY School of Public Health, and the Graduate Center, CUNY

Matthew K. Gold, City Tech and the Graduate Center, CUNY

with Stephanie M. Anderson, John Boy, Caitlin Cahill, Jen Jack Gieseking, Karen Gregory, Kristen Hackett, Fiona Lee, Wendy Luttrell, Amanda Matles, Edwin Mayorga, Wilneida Negrón, Shawn(ta) Smith, Polly Thistlethwaite, Zora Tucker



This article offers a broad analysis of a POOC (“Participatory Open Online Course”) offered through the Graduate Center, CUNY in 2013. The large collaborative team of instructors, librarians, educational technologists, videographers, students, and project leaders reflects on the goals, aims, successes, and challenges of the experimental learning project. The graduate course, which sought to explore issues of participatory research, inequality and engaged uses of digital technology with and through the New York City neighborhood of East Harlem, set forth a unique model of connected learning that stands in contrast to the popular MOOC (Massive Open Online Course) model.




In the spring semester of 2013, a collective of approximately twenty members of the Graduate Center of the City University of New York created a participatory, open, online course, or “POOC,” titled “Reassessing Inequality and Re-Imagining the 21st-Century: East Harlem Focus,” or InQ13. The course was offered for credit as a graduate seminar through the Graduate Center and was open to anyone who wanted to take it through the online platform. Appearing at a moment when hundreds of thousands of students were enrolling in Massive Open Online Courses (or MOOCs) offered through platforms such as Coursera, Udacity, and EdX, InQ13 was notable as an attempt to openly share the usually cloistered experience of a graduate seminar (typically composed of 10–12 students and an instructor) with a wider, public audience. Exploring various aspects of inequality in housing and education, the course emphasized community-based research in a dynamic New York neighborhood through a range of “knowledge streams” and interactive modalities.

Developing, designing, launching, and running the POOC was an enormous undertaking on every level. In this article, we provide a conceptual framework for a “participatory” open course and share thoughts about the challenges inherent in translating the ordinarily private world of the graduate seminar into a shared, public, online experience. This article provides an overview of the background, structure, and theoretical underpinnings of the course; a discussion of its connection to East Harlem as the site of inquiry and learning; and a brief exploration of how we might begin to assess the impact of such an experiment. Befitting a course that brought together a widely diverse range of perspectives, the article features a multivocal reflection by many of its participants, including faculty, students, project managers, librarians, web developers, educational technologists, videographers, and community members. This experiment in participatory learning is further contextualized by a podcast related to our course.


The Context of the POOC

In order to understand the development of InQ13, which launched in early 2013, it is important to appreciate the particular historical and political moment in which the course emerged. The term “MOOC”—an acronym for Massive Open Online Course—was coined by educational technologists Dave Cormier and George Siemens in 2008 to describe an innovative, and inherently participatory, open, online course (Cormier and Siemens, 2010). In the fall of 2011, Stanford University opened some of its computer science courses to the world through an online platform and found hundreds of thousands of students enrolling. At about the same time, venture capitalists began pouring millions of dollars into businesses such as Coursera hoping to find a revenue model in MOOCs (The Economist, 2013). As a result, MOOCs moved from niche discussions among educational technologists to coverage in The New York Times, which proclaimed 2012 “the year of the MOOC” (Pappano, 2012). When we began development of InQ13, there was no shortage of hyperbole about MOOCs. In perhaps the most egregious example of this hype, New York Times columnist Thomas Friedman extolled the revolutionary possibilities of MOOCs, saying, “Nothing has more potential to enable us to reimagine higher education than the massive open online course, or MOOC” (Friedman, 2013). As a number of scholars have pointed out, such claims about the revolutionary potential of MOOCs are not unique in the landscape of higher education but instead harken back to similar, even identical, claims made about educational television in the middle of the twentieth century (Picciano, 2014; Stewart, 2013). Still, we were intrigued by the potential of digital technologies for opening education.

Premised on extending the experience of traditional university courses to massive audiences, MOOCs have provoked an array of responses. Commentators who believe that higher education is in need of reform argue that MOOCs offer a productively disruptive force to hidebound educational practices (Shirky, 2014). According to such arguments, the educational experiences offered at elite institutions can now be made available to students across the world, for free, thus making higher education possible for students who would not otherwise be able to afford it. Critics of MOOCs often view them in the context of a higher education system that is being defunded, worrying that higher education administrators see, in MOOCs, possibilities for both revenue generation through increased enrollments and cost-cutting through reduced full-time faculty hires (Hall, 2013).

To date, most MOOCs have consisted of video lectures, sometimes accompanied by discussion forums and automated quizzes. Students are expected to absorb and repeat information delivered via video in ways that seem consonant with what Paulo Freire described as the banking model of education, where students are imagined as empty vessels into which the instructor deposits knowledge (Freire, 1993). Within the mostly one-way communication structure of the truly massive MOOCs, the interaction between faculty members and students is necessarily constrained due to the scale. While some MOOCs attempt to foster interaction between the professor and his (or her)[1] students, this has not met with much success (Bruffet et al., 2013, 187). There is little in the corporate MOOC model to recommend it as a vehicle for a graduate seminar, in which intimacy and sustained discussion, rather than massiveness and openness, are most prized. We coined the neologism of “POOC” —a participatory, open online course—to better capture the meaningful participation and co-production of knowledge that we hoped to achieve. Our participatory approach was layered and nested, bringing together two interlocking components: 1) direct engagement with specific readings, people, neighborhoods, and technologies (Cushman, 1999; Daniels, 2012; Gold, 2012; Rodriguez, 1996; Scanlon, 1993); and 2) collaborative rather than individually-oriented community-based research projects.


Studying Inequality

The course focus on inequality grew out of discussions among faculty at the Graduate Center of the City University of New York (CUNY) about how to bring together research about inequality across disciplinary boundaries and extend those conversations beyond the walls of the institution in ways that mattered within communities.[2] There was wide agreement that any effort should find a way to engage with the vibrancy of New York City and its history of struggle for social and economic justice, and thus reflect CUNY’s public educational mission to “educate the children of the whole people.” Among the questions we hoped the course would explore were: What does inequality look like in 2013? How might we imagine our future differently if we did so collectively, across a variety of disciplines and in conjunction with community-based partners?  And, given our particular historical moment, how might the affordances of digital technologies augment the way we both research inequality and resist its corrosive effects?


The Neighborhood of East Harlem

East Harlem is a neighborhood that has simultaneously fostered a vibrant, multi-ethnic tradition of citizen activism and borne the brunt of urban policies that generate inequality. Several of the people in the InQ13 collective had ties to East Harlem as residents, researchers, community activists and workers, so we began to discuss the possibility of locating the course there. In addition, CUNY had recently located a new campus in this neighborhood with the explicit goal of developing academic-community partnerships. These factors taken together—the unique history and present of East Harlem, the connection to the neighborhood from those in the InQ13 collective, and the new CUNY campus—provided a compelling case for situating the course in East Harlem. Thus, the original questions that framed the course were joined by another set of questions: Could a course such as this one “open” the new CUNY campus to the East Harlem community in innovative ways? Given the troubled relationship of university campuses to urban neighborhoods, could we forge different kinds of relationships? And, were there ways that the digital technologies used in the course could offer a platform that would be useful to community activists engaged in the struggle against the forces of inequality in East Harlem?

Given the limited amount of time the collective had to prepare the course and the complexity of staging the POOC, the process of forming in-depth engagements with community partners did not progress as far as we had initially hoped, a point discussed further below (see Mayorga in the “Perspectives” section). That said, the course served as a useful opening for future, ongoing efforts involving the East Harlem community at the uptown CUNY campus.


The Structure of the Course

The overall structure of the course was designed to serve multiple groups of learners: 1) traditionally enrolled students through the CUNY system, 2) online learners who wanted to participate, do assignments and complete the course, and 3) casual learners who wanted to drop in and participate as their schedule and desire for learning allowed.

In an effort to displace the MOOC model of a course led by a solitary, celebrity professor, each course session involved a guest lecturer or a panel of guests, a structure that served to highlight the collaborative nature of how knowledge is produced and activism is undertaken and sustained. Each session was livestreamed for those who wanted to participate synchronously; several days later, a more polished video recording of the class session was posted to the InQ13 course site for those who wanted to participate asynchronously. One of the ways we tried to build engagement with the East Harlem community into the structure of the course was to have class sessions that were also open community events at the uptown CUNY campus. Out of twelve regular sessions, four were held at the East Harlem campus and open to the public.

The course pivoted around leveraging digital technologies to enhance the skills and practices of community-based research; students were encouraged to work in partnership with community members in East Harlem. Students posted their completed assignments on the course blog at the InQ13 site. To facilitate group work, students could use the “groups” feature on the site to collaborate around specific projects. As designed, these groups were intended to foster connection between online learners and CUNY-based learners, but this potential was not realized as fully as it could have been in the execution of the course. Faculty provided feedback and grades on assignments for CUNY-based learners, while the digital fellow did so for the online learners (see Negrón in the “Perspectives” section below). At the end of the semester, students were invited to present their projects at a community event at La Casa Azul Bookstore in East Harlem (this was in addition to the four regular sessions held in the neighborhood).


Evaluating the Impact of the POOC

It is challenging to assess the impact of an experiment in graduate level education that took participatory learning as its chief goal. When the goal is for a course to be “massive,” the primary metric of evaluation is how many people registered for the course. With the POOC, this measure was not meaningful because participants were not required to register at the course site— a choice we made in our effort to open the course to as many different kinds of learners as possible. In its design and execution, the course allowed for multiple levels of participation, from Twitter users who joined conversations based on a Twitter hashtag (#InQ13), to those who watched the videos of the seminars or read some of the many open-access texts, to learners who created accounts and participated in group discussions on the course website.

Figure 1. Evaluation Metrics of the POOC


Part of the challenge of this experiment was the measurement of a broad spectrum of metrics meant to tap the distributed and participatory elements of the course (see Figure 1). For example, we were able to track the number of visits to the InQ13 course site during the semester, which totaled well over eight thousand (8,791). The videos garnered almost three thousand (2,824) views. While these numbers pale in comparison to the hundreds of thousands boasted by many MOOCs, they represent a significant reach when compared to that of a typical graduate seminar that enrolls ten to twenty students.

Some of the emerging scholarship on evaluating MOOCs points to the importance of gauging student experience (Odom, 2013; Zutshi, O’Hare and Rodafinos, 2013). For the POOC, students contributed nearly two hundred fifty (250) individual blog posts and digital projects to the course site. A more in-depth qualitative analysis from the perspective of two students is included here (see Hackett and Tucker in the “Perspectives” section below).

Traditional measures of learning assessment are valuable, yet they often overlook the variety of learners and the wide range of their goals in engaging with such a course. Given the participatory nature of the course, one of the most relevant metrics is the number of people who attended the open events in East Harlem, which was nearly five hundred (485). As further testimony to the global potential of online learning, we found that people from twenty-six countries visited the course site or watched the videos. Discussions happened both in person and through the Twitter hashtag #InQ13 where over three hundred (315) updates about the course were shared.

We began the POOC with an emphasis on participatory pedagogy—on concrete interactions between a student community and a geographically specific urban community—all of which necessitated a model far removed from the sage-on-a-stage, “broadcast” teaching environments employed in most MOOCs. While MOOCs have spurred discussions about online courses extending the reach of higher education institutions (and, in the process, proffering new, more profitable business models for them), our experiences with InQ13 suggest that online courses that emphasize interaction between faculty, students, and broader communities beyond the traditional academy incur significant institutional and economic costs that rely on often hidden labor. The “Perspectives” section that follows is our effort to make legible this otherwise hidden labor.


PERSPECTIVES on the Participatory Open Online Course (POOC)

On the InQ13 website, our page about the collective lists nineteen different individuals who played a role in creating the course experience (http://inq13.gc.cuny.edu/the-inq13-collective). If MOOCs are imagined by administrators and venture capitalists to be a labor-saving, cost-cutting disruption for higher education, the POOC model was disruptive in another way. The POOC was, in reality, a job creation program, requiring significant investments of time, money, and labor to produce. Within the neoliberal context of devastating economic cuts to public higher education, this reversal of that trend points to an alternative model.[3] In the section that follows we offer insights from many of the people who were involved in producing the POOC and some lessons they draw from their particular roles and participation in the course.


Community Perspectives on the POOC

Community Engagement Fellow Edwin Mayorga

Our approach to community engagement drew on traditions of community-based research, where respectful collaboration with community is central to documenting the local and global dimensions of structural inequality. The commitment to centering community was intended to move us away from reproducing the often exploitative relationships between outside institutions and communities, setting up a number of challenges that we are still learning from. This sort of approach to community engagement is a time-intensive one, and one that was often at odds with the limited time frame for the launch of the POOC. Due to the experimental nature of the grant that funded this work, the POOC was conceived over the summer of 2012, launched in spring of 2013, but not fully staffed until late December 2012–early January 2013. Thus, building trusting relationships with community groups, effectively integrating community groups into course sessions, and connecting them with course students was a challenge that we did not always meet.

The strategy we used to engage community groups was to reach out to various organizations and host a community meeting. The initial community meeting, held at a restaurant in East Harlem, was small but productive. Following that, we worked to establish a relationship with the Center for Puerto Rican Studies (Centro). Centro’s place as a product of struggle, its long-standing relationships to East Harlem, and its definitive archive of the Puerto Rican diaspora made it an ideal starting point for the course.

By the end of the course, we had much to be proud of with respect to our community engagement work. We were able to facilitate community-centered sessions at locations in East Harlem where researchers and activists who either live or work in East Harlem could speak to key issues affecting the community, such as education, housing, and gentrification. We were excited to see students who worked with various community-based organizations produce hundreds of knowledge streams in the forms of bibliographies, blogs, infographics, slides, visuals, and videos on issues of inequality both theoretical and specific to East Harlem, and open to any one to read, explore, and engage.

Still, there were a number of humbling setbacks. Most poignant were the critiques by community-connected scholars and participants about what they saw as reductive depictions of the community and the exploitative “parachuting in” of community speakers. We worked to address some of these important critiques by holding another community meeting, and by reducing the number of organizations we worked with in order to ensure we maintained and nourished relationships with our project partners. To be sure, there was a need for more community-building work in the run-up to the course.

Upon reflection, our attempt to be both digitally and critically bifocal (paying attention to the local and the global—see Weis and Fine, 2012) was ambitious and inadequately presented to community people. Creating a clear focus in partnership with communities is essential to future community-oriented POOCs. Most importantly, time (at least a year) and financial resources must be allotted to allow for the creation of well-considered opportunities to share and build across institutions, networks, and people.

The sustained work of community building can seem daunting, but it is central to providing a successful foundation for participatory social-justice education.


Faculty Perspectives on the POOC

Professors Caitlin Cahill and Wendy Luttrell

With a leap and a bound, together we held hands and dove head-first into the InQ13 POOC. The course made history at the Graduate Center for its cross-listings across so many disciplines and programs (Urban Education, American Studies, Earth and Environmental Science, Psychology, Anthropology, Sociology, Geography, Women’s Studies, and Liberal Studies). We were aware not only of this cross-disciplinary breadth, but also of the multiple groups and levels of learners as we developed graduate-level course readings and assignments. Our materials were posted on the public course platform so that all students could engage course materials and each other. Ensuring that these materials were open access became a collective effort described below in more detail by Polly Thistlethwaite and Shawn(ta) Smith.

As instructors, we shared two goals: first, to frame the course as an inquiry into the links between public matters and private troubles (Mills 1959), or put differently, an inquiry into the structural inequalities and public policies that imbue our everyday lives. Our second goal was to marry community-based inquiry with digital technologies, in part to counter the no-placeness and too-smooth, ubiquitous, sanitized space of many online courses. We created a series of scaffolded graduate–level assignments for students to address how global restructuring takes shape in the everyday life struggles of a real place, engaging community-based research and digital technologies to learn and leverage change with East Harlem community partners.


Please turn off your cell phone

For the first assignment, students were asked to go to East Harlem without using any digital technologies. This felt like a bold move at a time when so much of our everyday experience is mediated by screens. We encouraged students simply to “be” in East Harlem, to draw upon their senses of smell, sight, sound, touch, taste, and texture as they paid attention to and experienced their surroundings (Rheingold, 2012). As part of this assignment, we asked students to reflect upon their relationship to East Harlem and their positionality. For their final projects, students would experiment with at least three digital tools from a set of twelve categories (such as mapping, audio & soundscapes, and digital storytelling). But first, we needed to raise critical questions about the voyeuristic gaze of researchers engaging in working class communities of color. Through in-person and online discussions about personal experiences, readings, and the film Stranger with a Camera (2000), we began the course around questions of ethics, the politics of representation, and the meaning of community engagement. All of this was meant to prepare students to enter and engage East Harlem as a site of learning and activism, and to set the tone for the explorations that followed.


On stage – off stage

Each week, the class met for two hours; during the first hour, we livestreamed video of a lecture or discussion as part of the public-facing course, and during the second hour, we met privately with the Graduate Center students. This was a key pedagogical move: we learned that the performativity of the POOC was intimidating for many involved, and so we were committed to maintaining dedicated face-to-face time each week with the Graduate Center students enrolled in the course. While some students were at ease in the online environment whether on camera, on the blog, or on Twitter, for others the public nature of working and learning was uncomfortable, even paralyzing. With hindsight, we wonder if this discomfort was even more pronounced after the sense-of-place exercise in East Harlem described above because it surfaced messy questions about insiders, outsiders, border-crossers, structural racism, anxiety, and attending to the necessary “speed bumps” of doing research where one must slow down and reflect before moving forward. This reflection was on-going and needed to be nurtured through multiple formats and spaces—weekly blog posts, class discussions on and off stage, one-on-one in-person conversations with students, meetings between students and community partners, and posts to a private online space where students could exchange views they didn’t want to share with a broader public.


Plurality of publics

Our experience builds out the pedagogical and ontological significance of acknowledging the plurality of publics. As Nancy Fraser (1996, 27) has suggested, the constitution of alternative public spaces, or counterpublics, functions to expand the discursive space and realize “participatory parity,” in contrast to a single comprehensive public sphere. This was the promise of the POOC as we strove to create and hold different publics together. We believed in the productive tensions between digital technologies, community-based spaces and research, and the more intimate, reflective pedagogical spaces of the course. The course reflected these three dimensions in terms of format and ways one could participate. The community-based inquiry projects also placed emphasis upon using technology in exciting and interesting ways to feature the critical counterpublics of East Harlem and their emancipatory potential in addressing structural inequality and injustices. This was reflected in the variety of final projects, which focused on documenting contemporary and historical community spaces such as Mexican restaurants, Afro-Latina hair salons, alternative educational spaces, youth-led collective social justice movements (the Young Lords/the Black Panthers), and the memories embedded in everyday spaces in El Barrio.

One of the most exciting ideas was how the POOC might serve as a resource at two levels: at the local level, connecting with members of different East Harlem community efforts, and at a global level, connecting with historic Latino neighborhoods (Barrios) across the US and around the world. For example, how might the POOC serve as a resource for undocumented students in Georgia or Arizona where access to education has been denied? Or how might it help trace networks of Puerto Rican migration across the United States? These remain potentialities for future iterations of the course; in this first instance of the course, the most developed form of participation came out of the community-based partnerships students formed through face-to-face relationships where the thorny questions of outcomes, sustainability, and representation were negotiated over time and in relationship.


On the edge of knowing

When we started the class, we did not know what to expect. We were wary of the online neo-liberalization of higher education, especially at this particular political moment. Still, we were excited by the possibilities of participatory digital technologies to create bridges that connect the plurality of publics in more collaborative rather than exploitative ways (as evidenced in some of the amazing student projects). Critical questions of appropriation, labor, access, pedagogy, and privatization loom large in our minds. But what stays with us is best conveyed by the wise words excerpted from the blog of Sonia Sanchez, a student in the course who wrote about the world we inherit but want to reimagine, a world where “everything can be turned around and stamped with a barcode,” including education, housing, and space. As Sonia points out, we are surrounded by screens, by “a million little vacuums with bright screens” that make people “unaware they are standing next to each other.” We see InQ13 as part of a larger and much-needed process of connecting screens and souls in the service of social, economic, and educational equity, and justice.


Open Access and the POOC

Librarians Polly Thistlethwaite and Shawn(ta) Smith

Libraries have traditionally supported faculty with course reserve services, copyright advice, and scanning services to shepherd extension of licensed library content for exclusive use by a well-defined set of university-affiliated students. However, under current licensing models, this content can rarely be extended to the massive, unaffiliated, undefined, and unregistered body of MOOC enrollees without tempting lawsuits filed by publishers with deep pockets. Course content, usually in the form of books, book chapters, articles, and films, is not licensed to universities for open, online distribution.

Additionally, use of licensed content of any kind is arguably incongruent with a MOOC’s aim and purpose. Licensed content requires some form of reader authentication to regulate access. In contrast, open-access scholarship requires no registration or license. It is available to any reader, including students affiliated with a university and non-university students living and working in East Harlem. Linking interested students to the open reports, films, books, and articles reflecting work focused on inequality and East Harlem, the POOC’s open access course materials raise the profile and increase the impact of the academic, activist, and artist authors.

Authors featured in or engaged with the InQ13 POOC were generally eager to make their work open access. The Directory of Open Access Journals verified that several significant course readings were already “gold” open access, reaching the widest possible audience and ready to be assigned for any course reading. The Centro Journal of the CUNY Center for Puerto Rican Studies, for example, is completely open access; work by many of its authors was assigned across several course modules.

Some journals allow self-archiving: authors may post their own articles on their professional websites or in institutional repositories. Journals that permit this are sometimes referred to as “green” open access. While author self-archiving is widely permitted by traditional academic journal publishers, the opportunity to self-archive is not ubiquitously exercised or even understood by authors. Authors publishing in journals that are not completely open required both prompting and advice about how to place their work in open access contexts. Librarians supporting the POOC spent a great many hours checking the policies of journals using the SHERPA/RoMEO tool and corresponding with course authors about how to make their scholarship available in open access repositories, accessible to any student in the course.

A few book publishers were willing to make traditionally published, print-based academic books open access, at least temporarily. The University of Minnesota Press, NYU Press, and the University of California Press made copyright-restricted book chapters, and in one case an entire book, openly available to accompany an author’s video-recorded guest lecture.

Publisher restrictions are not at all immediately obvious to authors or to faculty forming course reading lists. Librarians played a crucial role in supporting this open online course by identifying open access materials, promoting their use, and advising faculty and their publishers about open access self-archiving.


Coordination of the POOC

Project Manager Jen Jack Gieseking

Producing the POOC involved a multitude of staff across the span of InQ13’s development, enactment, and follow-up. In order to manage the project’s many moving parts, we set about outlining our goals, sketching out a plan to accomplish those aims, and making sure each contingent piece was ready in time for the next element. In the few weeks we had to plan, we also involved educational technologists to help us think through user experience (UX) and information architecture (IA). They also helped us conceptualize the educational technology functions and support needed for InQ13 to succeed. The next step was to hire staff to develop this work based on our colleagues’ expertise.

Oversight and management involves a great deal of listening. As project manager, I was responsible for seeing each element of the project to completion. For instance, when someone asked who would handle the UX or IA, I would turn around and assign that element to the person who already had a great deal of insight into it. Our work as co-developers involved many check-ins before any final work was completed, so that we could bring together concerns and questions.

My own position bridged these parallel teaching and learning processes. I was simultaneously a developer, teaching assistant, online user, videographer, educational technologist, and the primary technical and logistic support for the live event seminars. I sometimes appear in the course videos because I invited the guest speakers for those weeks, or because someone was needed to run the laptop. I live-tweeted class sessions, I enrolled in the course, and, more than anything, I learned.

Each step forward in managing the POOC involved a million little, delicate steps. As Amanda Matles and Stephanie M. Anderson describe below, placing cameras in the classroom was a complicated issue that took weeks of discussion to resolve. Edwin Mayorga sent hundreds of emails requesting meetings with activists in East Harlem and making inroads to connect students to community partners. Our WordPress and Commons In A Box developer, Raymond Hoh, handled difficult fixes overnight and expanded the ways the site and course could afford a collaborative space for students and InQ13 team alike. Like the class itself, the process of producing the POOC involved a great deal of teaching, learning and knowledge-sharing.


Website development & Instructional Technology

Educational Technologists Karen Gregory, John Boy, Fiona Lee

There is a familiar heroic narrative about the genesis of new products and services in the tech sector (including educational technology) that goes something like this: “We worked 100 hours a week, slept under our desks, ate cold pizza and drank stale beer so we could write code and ship our product on time—and we liked it!” Like most heroic narratives, this narrative is as revealing for what it leaves out as for what it includes. While building a product, service or online course certainly requires concocting abstractions in the form of code, we have to unpack what we mean by “coding” in this context (Miyake 2013).

In addition to the time and energy that went into building the web infrastructure (setting up pages, categories, widgets etc.), there was a lot of discussion—online and in person—about course goals, envisioning what kind of work course participants would do and how they would use the site. In other words, the work of building the website was not just coding in the limited sense of creating and manipulating computer algorithms. It was also thinking, talking, debating, questioning, and imagining.

In this section, we reflect on our involvement, as graduate students, instructors, and educational technologists, in building the POOC, and we highlight three forms of labor that are likely to be missed in the usual narrative: pedagogical practice, aesthetic imagination, and the accumulated labor of the “code base.”


We Came as Teachers

Perhaps the first thing to stress when considering the hidden labor of the website is that those of us who came together to create the site had already taught for several years. We did not come to this task simply as “builders” or “coders,” but as educators, scholars, and instructional technologists. Each member of the site team was able to bring to bear several years of classroom experience, as well as experience collaborating with faculty across disciplines to design and implement “hybrid” assignments. This means that we had experience not only with what “works,” but also with what can fail, despite the designers’ (or teachers’) intentions.

The challenge of creating this particular course site was not only to design a functional site that could accommodate the course’s coordination and logistics (such as spaces for blogging or posting media artifacts), but also to lay out the site so as to structure, facilitate, and implement the course goals and intentions.


The Labor of Imagination & Design

In considering the question of labor, we cannot overlook the role the imagination played. Creating the POOC site was an act of giving form or realizing the ideas, goals, and desires for the course. If the POOC was to be a space for communication and conversation among participants, the challenge of this site was to imagine how to design a space that could foster community, across a series of mediated spaces and through the thoughtful use of the tools at hand, including WordPress and the Commons In A Box platform. At the same time, given that we were building the website for participants rather than for users, we had to re-imagine what “user experience” means. This required building a website that was not only functional, well organized and easy to navigate. The website also had to be designed in a way that encouraged participants to contribute their own ideas and goals for the course, and that was flexible enough to meet the course’s changing needs. To do so, we had to use our imagination to anticipate the perceptions and responses of participants, but in a way that remained open to their imagination of how they approached the course. In other words, the work of building the website did not just happen at the beginning, in anticipation of the start of the semester; it was an ongoing process of reflection and maintenance that involved engaging with participants’ needs.


The Political Economy of Service Provision

Another case in which we need to broaden our understanding of the kinds of labor coding entails concerns the tools or “code base” we worked with. Software products such as WordPress, BuddyPress, and the CUNY-developed Commons In A Box suite are not just abstractions all the way down; rather, they, too, are accumulations of people’s imaginative and creative work. To say simply that we built on or leveraged existing code bases is to reify this work and to blot out the political economy of free and open source software (FOSS) development. While the FOSS world is often seen as the epitome of the “sharing economy,” it also intersects in some ways with broader labor regimes: “FOSS development, with its flexible labor force, global extent, reliance on technological advances, valuation of knowledge, and production of intangibles, has fully embraced the modern knowledge economy” (Chopra and Dexter 2007, 20).


The Challenges of Videography

Videographers Amanda Matles and Stephanie M. Anderson

As doctoral candidates in the Critical Social Personality Psychology and Geography programs, and as Videography Fellows at the Graduate Center, we entered the InQ13 POOC collaboration well acquainted with the nuances of using video in academic settings. The task in the POOC, though—to livestream, capture, and immediately publish video recordings of the various classes online—presented a number of ethical, technical, and logistical challenges unique to participatory open online courses. The introduction of camera equipment into a social space often changes the dynamics and feelings of participants. While some students were comfortable having their likenesses seen by a mostly anonymous online audience, others expressed concerns and anxieties. Thus, in order to achieve an intimate feel for online participants, consent from all students was needed. This tension around consent was compounded by the video crew’s presence in the midst of intimate group discussions. The feeling of embeddedness for online viewers sometimes came at the risk of vulnerability for graduate students, instructors, and speakers.

Working within the instantaneous time-space of participatory open online courses, the transmission of pedagogical material in video form—available in real time or overnight—is actually the result of professional A/V and computer set-ups and many invisible hours of planning and labor. Each location and unique class structure required a specific A/V design. Because there were multiple presenters, audiences, rooms, and auditoriums, we needed not only a hardwired Ethernet connection in each location, but also flexibility and breadth in audio recording equipment. InQ13 used a two-person crew: one person operated the camera while the other live-mixed the audio, monitored the livestream, and received and reacted to feedback from other POOC collaborators watching the stream online. Additionally, an entire video postproduction process occurred within the 24 hours following each class. This included the addition of unique title cards and lower thirds for each speaker, sound mixing, exporting, file compression, and uploading new videos to the blog. Furthermore, long-format HD video files are extremely bulky and can be slow to work with. Once edited, the file for a one-hour course usually takes at least two hours to export, and must then be further compressed for internet streaming; the entire process could take up to twelve hours. A dedicated hard drive with at least 2TB of storage and a spindle speed of at least 7200 rpm was needed to produce one semester of the POOC.

As videographers, we had to continually negotiate between our ideals and what was practically achievable given the opportunities and limitations involved in the InQ13 POOC. To integrate online POOC student participation and learning through the InQ13 site, it was vital that access to online course videos be timely. This availability allowed students writing weekly assignments and participating in blog conversations to refer to the video archive at any time and as many times as needed. Online video provides learners with valuable repetition and open access.


The Labor of Supporting Students

Digital Fellow Wilneida Negrón

In the early planning stages of the POOC, the team identified the need for a Digital Fellow who could provide support in integrating technology and pedagogy to foster an active learning environment that would challenge students to think critically about inequality and about the technologies they would be utilizing. The literature on best practices for online instruction increasingly emphasizes interactive, skillful use of technology and a clear understanding of both technical and interpersonal expectations (Tremblay 2006, 96). The technology and participatory features of the POOC involved an online web platform, social media, and digital media technologies, the use of which bridged online and face-to-face learning contexts. This required me to take on various roles as facilitator, community-builder, instructional manager, coach, and moderator. While the fluidity of my role precluded, to some extent, clear parameters and role definitions, it also allowed for a kind of “distributed constructionism” (Resnick 1996), a key building block in the formation of knowledge-building communities.

The initial phase of the class consisted of helping students and professors navigate the multimodal nature of the POOC (see Kress 2003) and evaluate any barriers or enablers to participating and using technology for content-creation, collaboration, and knowledge-building (Vázquez-Abad et al. 2004, 239; Preece 2000, 152; Richardson 2006, 52). Since it was imperative that the students be able to utilize digital technologies, I conducted two short surveys, one completed in class and one completed online, which gauged the digital skills of students and their interest in a variety of digital tools they might use during the semester.

A majority of POOC students were interested in using Zotero, Flickr, and archiving-based projects for the class. This reflects what students already felt comfortable with, as many noted that the digital tools they most had experience with were Zotero and Flickr.

The majority of students expressed an interest in archiving but had no experience with it. Animation and information filters were the only two technologies with which none of the students had experience.

Although studies in computer-supported collaborative learning frequently under-expose the interaction between students and technology (Overdijk and van Diggelen, 2006, 5), my experience as a Digital Fellow revealed how essential this perspective is for identifying additional instruction and support needed. For example, through these assessments, I learned of the varying levels of digital media literacy among the students: some students were proficient and had been using digital technologies in their work and professional life, while others had no experience in digital technologies and/or limited use of social media. I sought to address these issues through individual and group instruction and through the creation of online groups and forums, which promoted peer-to-peer learning and problem solving.

As a Digital Fellow, I had to be prepared to negotiate the students’ own views about how they wanted to use digital technologies and their social media profiles. I could not assume, for instance, that all students would be at ease using these technologies, or that the asynchronous conversations between the graduate seminar students and the wider community of POOC students would go smoothly. Some students expressed early concerns about their privacy and seemed hesitant to use their public social media profiles in conjunction with the class. These kinds of moments provided challenges to the POOC’s objective of fostering transformative and open dialogue among students, but they were challenges that were met collaboratively by the InQ13 team.


Student Perspectives on the POOC: In the Physical Classroom

Student Kristen Hackett

Prior to taking the course, I had a Facebook account as my sole scholar-activist digital outlet. Within the first couple of weeks I had set up accounts with Twitter and Skype, had begun building a personal website, and was becoming an experienced blogger through my weekly contributions to the course blog. Further, within the first two months we had an assignment that required us to use three of the twelve knowledge streams suggested by the course in our community-based research projects, which ultimately entailed trying out many more than three before settling on which would be most useful (these, along with instructions for use, can be found at http://inq13.gc.cuny.edu/knowledge-streams/).

In the course I used digital technologies to facilitate communication and collaboration with other classmates (both GC- and community-based), my professors, the distant guest lecturers, the extensive digital support staff, and community partners and organizations in East Harlem who were cruising the website or Twitter hashtag (#InQ13). In a broader sense, technology was used as an avenue to communicate to others and spread awareness about social justice—blurring the boundaries between community and academy and incorporating and implicating each in the other—and about our research projects, which were predicated on the importance of this cause. In this vein, Twitter was a useful tool for positioning our work among other similar works and related information by using targeted hashtags such as #communityresearch, #eastharlem, or #inequality. Furthermore, Twitter was important for driving others back to the site to learn more about the course and our cause by using the hashtag #InQ13 with each tweet.

On a level specific to my situation as a doctoral student, the emphasis on technology was useful in expanding the way I think about my scholarship and myself as a scholar. A specific question that has repeatedly come to mind during my graduate study is why journal articles and written prose are deemed the best (and often the only) mode of communicating our ideas. By introducing new tools of digital communication into my lexicon, I could rethink or reimagine how I communicate my research, in what form, from what platform, and to whom. For example, being able to incorporate Flickr photos into my blogs brings my words and thoughts to life in a way that is not achievable in a journal article, where images, and color images in particular, are often not accepted. Additionally, posting a short article to my webpage as a blog post filled with photos and free of academic jargon, and then tweeting it to relevant yet potentially distant communities using hashtags, allows me to share my work with others whom I previously was not able to reach through traditional academic channels of sharing and publishing. In sum, the emphasis on these new and emerging technologies forced me to reconsider who my audience and co-researchers could, should, and might be and what forms that research could take.

Admittedly, given the highly supported environment we were in and the impending deadlines for assignments that required some kind of digital technology use, getting over our varying degrees of digital technology phobia occurred more rapidly and readily than others might expect. We had a few impromptu support group-like sessions in the beginning of the semester. At these sessions students voiced their fears of publishing online and putting their thoughts out there right away and/or their technical fears regarding actual use of a digital technology. Many of us didn’t have accounts for these different technologies and hadn’t engaged them before so our fears likely stemmed from a nagging anxiety about stepping into new territory.

For the former fear, some class time was carved out to talk, share, and support one another, and it helped that many of us were having the same concerns. When fears were connected to a lack of technical knowledge, we were referred to workshops in the library, or we could meet one-on-one with our digital technology support staff member or one of the librarians. In my own experience, my concerns were more along the lines of the latter, and while workshops and one-on-one sessions can be helpful in getting started, honestly a lot of my knowledge has come from doing and from playing around with the different technologies (for example, from building websites, from tweeting and using hashtags, and from talking @ others on Twitter). Doing so alleviated the fear and increased my comfort with the tools, and taught me, technically speaking, how to use them.

I also realized that part of my increased use of these digital technology tools was just knowing they existed. Furthermore, thinking about these tools in the context of rigorous academic research, and in a group that condoned and encouraged their use for that purpose, was new to me and reoriented my approach to these technologies in new ways—as tools. The focus of the course was not just on using these digital technologies, but using them as scholars and as scholar activists in pursuit of community-based research, and it was helpful that other respected scholars (our professors) and our academy were encouraging it.

Since the closing of the course, I have proceeded to emphasize the use of digital technologies in my own scholarship and in the scholarly endeavors of research groups I work with. At the moment I have focused my efforts on Twitter and on website and Facebook page creation. I think of the latter two in a geographical sense, as a way of creating a virtual place or home for me and my work, or the work of a research team. There one can find my current research projects and interests, publications and presentations, and approaches to teaching. Further, visitors can get a sense of my networks by following links to the page of my research team, the Graduate Center, or the Environmental Psychology subprogram.

While my use seems to be growing, and I am finding the tools helpful, there are many digital tools from the course and in general that I’m not engaging. But I don’t think that’s the point. It is helpful just to know they are there, to be on the lookout for more as they develop, and to consider how they may enhance a project, make it more accessible or carry its messages further.


Student Perspectives on the POOC: In the Online Classroom

Student Zora Tucker

This course was valuable to me in several distinct but interdependent capacities: I am a graduate student at another institution, a public school teacher, and a self-identified movement activist. As a graduate student in a program in Arizona designed for people who live and work elsewhere, it was a windfall to find this course to use for my self-designed program in Critical Geography at Prescott College. It is rare that I am able to find collegial relationships in this rather isolated process, and the multiple modalities available to me—webcasts, Twitter, and the capacity to come into the CUNY Graduate Center for the open sessions—were all excellent for the development of my independent scholarship. I was able to see and converse with scholar-activists I had known only through writing, such as Michelle Fine and Maria Torres. This format allowed me to engage the course with varying intensities at different times in my schedule.

When I took this course, I was looking for teaching work as a new arrival to NYC while simultaneously doing research on charter schools and public space for my graduate work. This course gave me the ability to get a sense of the landscape of public schooling in relation to space in East Harlem, and to think through my emergent understanding of the state of public schooling in this city. My learning in these two areas came primarily from paying attention to people on Twitter, following them if our interests converged, and engaging with the work of other students posted on the class website. This happened fluidly, through a process that allowed my research interests to converge and weave together in a positive feedback loop that sustained my understanding of my new home, my academic critiques, and my ambition to work as a teacher in New York City.

This course was well aligned with my movement philosophy of using academic space as a forum for broadcasting voices that are not always amplified in the halls of power. No one lives in the abstraction of neoliberalism; we all find our ways through the minutiae of its day-to-day realities. This course made space for this truth in multiple ways, but I will write here about two. First, the community forums created in InQ13 paired academic writing, which so often veers into the abstract and untenable, with the concrete analysis of those who do the work of living in and through sites of academic analysis. Second, the website itself was visible to people outside of the class, so I could share my posts and the posts of other scholars—and even the structure of the website itself—with my former students, my colleagues, and anyone who might be interested in either the format or the content (or both) of this course. Two colleagues at the college where I used to teach used my blog posts in their work with undergraduates.

In conclusion, as a person who came to this course through a friend who recommended it on Facebook, and as someone who participated in it primarily through the website and Twitter and shared it through social media, my experience of this POOC was holistically educational and useful beyond the expectations that I initially had of the experience.



We, the collective of the InQ13 POOC, shared what we learned while conducting this experiment in participatory, open education in the classroom, online, and among East Harlem community partners. As this essay suggests, and as the archived course website reveals, the InQ13 POOC was a valuable experience, not least of all because it offered an alternative to MOOCs at a crucial moment of their ascendance in the popular imagination. The InQ13 POOC provided a vision of digitally augmented learning that prizes openness, community-building, and participatory action above massiveness of scale. While this attempt to create an innovative model of what opening education could be sometimes resulted in messy struggles with the complex social, political, and economic issues related to inequality—not the least of which is the inequality between academics and community-partners—the POOC nevertheless reimagined what higher education might be if we took seriously the idea of “opening” education. Graduate education can and should engage with the possibilities to open education that MOOCs offer. But it must do so through thoughtful models, conceptualized with social justice in mind, and with an awareness of the labor, solidarity, and collectivity required behind the scenes. We proffer the InQ13 experiment in particular, and the idea of the POOC more generally, as one possible path for others considering future experiments in open education.




Bruff, Derek O., Douglas H. Fisher, Kathryn E. McEwen, and Blaine E. Smith. 2013. “Wrapping a MOOC: Student perceptions of an experiment in blended learning.” MERLOT Journal of Online Learning and Teaching 9 (2): 187–199. Accessed November 5, 2013. http://jolt.merlot.org/vol9no2/bruff_0613.htm. OCLC 61227223.

Chopra, Samir and Scott D. Dexter. 2007. Decoding Liberation: The Promise of Free and Open Source Software. New York: Routledge. OCLC 81150603.

Cook-Sather, Alison. 2013. “Unrolling Roles in Techno-Pedagogy: Toward New Forms of Collaboration in Traditional College Settings.” Innovative Higher Education 26, no. 2: 121–139. OCLC 425562481.

Cormier, Dave and George Siemens. 2010. “The Open Course: Through the Open Door–Open Courses as Research, Learning, and Engagement.” Educause Review 45, no. 4: 30–39. http://www.educause.edu/ero/article/through-open-door-open-courses-research-learning-and-engagement

Cushman, Ellen. 1999. “The public intellectual, service learning, and activist research.” College English: 328-336. OCLC 1564053.

Daniels, Jessie. 2012. “Digital Video: Engaging Students in Critical Media Literacy and Community Activism.” Explorations in Media Ecology, Volume 10 (1-2): 137-147. OCLC 49673845.

The Economist. 2013. “Higher Education: Attack of the MOOCs.” July 20. Accessed July 23, 2013. http://www.economist.com/news/business/21582001-army-new-online-courses-scaring-wits-out-traditional-universities-can-they. OCLC 1081684.

Fraser, Nancy. 1996. “Social Justice in the Age of Identity Politics: Redistribution, Recognition, and Participation.” Paper presented at The Tanner Lectures on Human Values, Stanford University April 30–May 2. Accessed November 10, 2013. http://tannerlectures.utah.edu/_documents/a-to-z/f/Fraser98.pdf. OCLC 45732525.

Freire, Paulo. 1993. Pedagogy of the Oppressed. New York: Continuum. OCLC 43929806.

Friedman, Thomas. 2013. “Revolution Hits the Universities.” The New York Times, January 26. Accessed January 26, 2013. http://www.nytimes.com/2013/01/27/opinion/sunday/friedman-revolution-hits-the-universities.html. OCLC 1645522.

Gold, Matthew K. 2012. “Looking for Whitman: A Multi-Campus Experiment in Digital Pedagogy.” In Digital Humanities Pedagogy: Practices, Principles and Politics, edited by Brett D. Hirsch. Cambridge: Open Book Publishers. OCLC 827239433.

Graham, Charles Ray. 2006. “Blended learning systems: Definition, current trends, and future directions.” In Handbook of blended learning: Global perspectives, local designs edited by Curtis Jay Bonk and Charles Ray Graham, 3–21. San Francisco, CA: Pfeiffer. OCLC 60776550.

Hall, Richard. 2012. “For a Critique of MOOCs/Whatever and the Restructuring of the University.” Accessed May 12, 2013.

Kress, Gunther R. 2003. Literacy in the new media age. London: RoutledgeFalmer. OCLC 50527771.

Mills, Charles Wright. 1959. The Sociological Imagination. USA: Oxford University Press. OCLC 165883.

Miyake, Keith. 2013. “All that is Digital Melts into Code.” GC Digital Fellows Blog, October 25. Accessed October 25, 2013. https://digitalfellows.commons.gc.cuny.edu/2013/10/25/all-that-is-digital-melts-into-code/.

Odom, Laddie. 2013. “A SWOT Analysis of The Potential Impact of MOOCs.” In World Conference on Educational Multimedia, Hypermedia and Telecommunications, vol. 2013, no. 1: 611–621. OCLC 5497569520.

Orner, Mimi. 1992. “Interrupting the Calls for Student Voice in Liberatory Education: A Feminist Poststructuralist Perspective.” In Feminisms and Critical Pedagogy, edited by Carmen Luke and Jennifer Gore. New York: Routledge. OCLC 24906839.

Overdijk, Maarten and Wouter van Diggelen. 2006. Technology Appropriation in Face-to-Face Collaborative Learning. Workshop at First European Conference on Technology Enhanced Learning, Crete, Greece, October 1–4. Accessed November 16, 2013. http://ceur-ws.org/Vol-213/ECTEL06WKS.pdf. OCLC 770966463.

Palloff, Rena M. and Keith Pratt. 1999. Building Learning Communities in Cyberspace: Effective Strategies for the Online Classroom. San Francisco, CA: Jossey-Bass. OCLC 40444568.

Parr, Chris. 2013. “MOOC Creators Criticise Courses’ Lack of Creativity.” Times Higher Education. October 17. Accessed May 8, 2014. http://www.timeshighereducation.co.uk/news/mooc-creators-criticise-courses-lack-of-creativity/2008180.fullarticle. OCLC 232121645.

Picciano, Anthony. 2014. “The Hype, the Backlash, and the Future of MOOCs.” University Outlook, February: 6–9. Accessed May 12, 2014. http://universityoutlook.com/archives.

Powazek, Derek M. 2002. Design for community: The art of connecting real people in virtual places. Indianapolis, IN: Pearson Technology Group. OCLC 47945525.

Preece, Jenny. 2000. Online communities: Designing usability and supporting sociability. Chichester, UK: Wiley. OCLC 43701690.

Resnick, M. 1996. “Distributed constructionism.” In Proceedings of the International Conference on the Learning Sciences. Association for the Advancement of Computing in Education. Northwestern University. OCLC 84739865.

Rheingold, Howard. 2012. Net Smart: How to Thrive Online. Cambridge, MA: MIT Press. OCLC 803357230.

Richardson, Will. 2006. Blogs, wikis, podcasts, and other powerful web tools for classrooms. Thousand Oaks, CA: Corwin Press. OCLC 62326782.

Rodriguez, Cheryl. 1996. “African American anthropology and the pedagogy of activist community research.” Anthropology & Education Quarterly 27, no. 3: 414–431. OCLC 5153400850.

Sanchez, Sonia. 2013. “More than panem et circenses.” Reassessing Inequality and Reimagining the 21st Century, April 18. Accessed November 5, 2013. http://InQ13.gc.cuny.edu/more-than-panem-et-circenses/.

Scanlon, Jennifer. 1993. “Keeping Our Activist Selves Alive in the Classroom: Feminist Pedagogy and Political Activism.” Feminist Teacher: 8–14. OCLC 424819999.

Stewart, Bonnie. 2013. “Massiveness + Openness = New Literacies of Participation?” Journal of Online Learning & Teaching 9, no. 2. Accessed May 12, 2014. http://jolt.merlot.org/vol9no2/stewart_bonnie_0613.htm. OCLC 502566421.

Straumheim, Carl. 2013. “Masculine Open Online Courses.” Inside Higher Ed, September 3. Accessed November 5, 2013. http://www.insidehighered.com/news/2013/09/03/more-female-professors-experiment-moocs-men-still-dominate. OCLC 721351944.

Tremblay, Remi. 2006. “‘Best Practices’ and Collaborative Software in Online Teaching.” The International Review of Research in Open and Distance Learning 7, no. 1. http://www.irrodl.org/index.php/irrodl/article/view/309/486. OCLC 424760690.

Vázquez-Abad, Jesus, Nancy Brousseau, Guillermina Waldegg C, Mylène Vézina, Alicia Martínez D, Janet Paul de Verjovsky. 2004. “Fostering Distributed Science Learning through Collaborative Technologies.” Journal of Science Education and Technology, 13(1): 227–232. OCLC 425946303.

Vázquez-Abad, Jesus, Nancy Brousseau, Guillermina Waldegg C, Mylène Vézina, Alicia Martínez Dorado, Janet Paul de Verjovsky, Enna Carvajal, Maria Luisa Guzman. 2005. “An Approach to Distributed Collaborative Science Learning in a Multicultural Setting.” Paper presented at the Seventh International Conference on Computer Based Learning in Science, Žilina, Slovakia, July 2–6. Accessed November 13, 2013. http://cblis.utc.sk/cblis-cd-old/2003/3.PartB/Papers/Science_Ed/Learning_Teaching/Vazquez.pdf.

Weis, Lois and Michelle Fine. 2012. “Critical Bifocality and Circuits of Privilege: Expanding Critical Ethnographic Theory and Design.” Harvard Educational Review 82(2): 173–201. OCLC 815792737.

Zutshi, Samar, Sheena O’Hare, and Angelos Rodafinos. 2013. “Experiences in MOOCs: The Perspective of Students.” American Journal of Distance Education 27, no. 4: 218–227. OCLC 5602810621.
[1] Most high-profile MOOCs have featured men as instructors; the POOC was co-led by two women. For more on the gender imbalance in MOOCs, see Straumheim 2013.

[2] This initial conversation included Michelle Fine, Steven Brier and Michael Fabricant and was made possible by the Advanced Research Collaborative (ARC), under the thoughtful leadership of Don Robotham (Anthropology).

[3] The POOC was made possible by funding from the Ford Foundation.



About the Authors

Jessie Daniels is Professor of Public Health, Psychology, and Sociology at Hunter College, CUNY School of Public Health, and the Graduate Center, CUNY. She has published several books, including Cyber Racism (2009) and White Lies (1997), along with dozens of articles. She leads the JustPublics@365 project.

Matthew K. Gold is Associate Professor of English and Digital Humanities at City Tech and the Graduate Center, CUNY, where he serves as Advisor to the Provost for Digital Initiatives. He is editor of Debates in the Digital Humanities (2012) and served as Co-PI on the JustPublics@365 project during its first year.

Co-Authors from the InQ13 Collective:
Stephanie M. Anderson
(Graduate Center, CUNY)

John Boy
(Graduate Center, CUNY)

Caitlin Cahill
(Pratt Institute)

Jen Jack Gieseking
(Bowdoin College)

Karen Gregory
(Graduate Center, CUNY)

Kristen Hackett
(Graduate Center, CUNY)

Fiona Lee
(Graduate Center, CUNY)

Wendy Luttrell
(Graduate Center, CUNY)

Amanda Matles
(Graduate Center, CUNY)

Edwin Mayorga
(Swarthmore College)

Wilneida Negrón
(Graduate Center, CUNY)

Shawn(ta) Smith
(Graduate Center, CUNY)

Polly Thistlethwaite
(Graduate Center, CUNY)

Zora Tucker
(Prescott College)

Talking with Students through Screencasting: Experimentations with Video Feedback to Improve Student Learning

Riki Thompson, University of Washington Tacoma
Meredith J. Lee, Leeward Community College


Evolving digital technology has allowed instructors to capitalize on digital tools to provide audiovisual feedback. As universities move increasingly toward hybrid classrooms and online learning, and consequently invest in classroom management tools and communicative technologies, communication with students about their work is also transforming. Instructors in all fields are experimenting with a variety of tools to deliver information, present lectures, conference with students, and provide feedback on written and visual projects. Experimentation with screencasting technologies in traditional and online classes has yielded fresh approaches to engage students, improve the revision process, and harness the power of multimedia tools to enhance student learning (Davis and McGrail 2009, Liou and Peng 2009). Screencasts are digital recordings of the activity on one’s computer screen, accompanied by voiceover narration, and can be used in any class where assignments are submitted in an electronic format. We argue that screencast video feedback serves as a better vehicle than traditional written comments for in-depth explanatory feedback that creates rapport and a sense of support for the writer.


“I can’t tell you how many times I’ve gotten a paper back with underlines and marks that I can’t figure out the meaning of.”

–Freshman Composition Student [1]


The frustration experienced by students after receiving feedback on assignments is not unique to the student voice represented here. Studies on written feedback have shown that students often have difficulty deciphering and interpreting margin comments and therefore fail to apply such feedback to successfully implement revisions (Clements 2006, Nurmukhamedov and Kim 2010). A number of years ago, one of us participated in a study about student perceptions of instructor feedback. The researcher interviewed several students, asking how they interpreted her feedback and what sorts of changes they made in response to it (Clements 2006). Students reported that some comments were indecipherable, others made little sense to them, and some were disregarded altogether. [2]

Clements (2006) suggests that the disconnect between feedback and revision is complicated by a number of factors, including the legibility of handwriting and editing symbols which sometimes read more like chicken scratch than a clear message. Students usually did their best to interpret the comment rather than ask for clarification. Other times, students made revision decisions based on a formula that weighed the amount of effort in relation to the grade they would receive. In other words, feedback that was easier to address gained priority, and feedback that required deep thinking and a great deal of cognitive work was dismissed. Sometimes these decisions were made out of sheer laziness. Other times students’ lack of engagement with feedback was a strategic triage move to balance the priorities of school, work, and home life. These findings motivated us to find more effective ways to provide feedback that students could understand and apply to improve their work.

We both rely upon a combination of written comments and conferences to provide feedback and guidance with student work-in-progress, but we find that written comments make it too easy to mark every element that needs work rather than highlight a few key points for the student to focus on. We often struggled to limit our comments to avoid overloading our students and making feedback ineffective, as research in composition studies shows that students get overwhelmed by extensive comments (White 2006). After years of using primarily written comments to respond to student papers, we were often frustrated by the limits presented by this form of feedback.

Wanting to intellectually connect with students and explore ideas collaboratively while reading a paper, we are often having a conversation in our own heads, engaging the text and asking questions. We experience moments of excitement when we read something that engages us deeply. We think, “Wow! I love this sentence!” or “Yes! I completely agree with the argument you’re making,” or “I hadn’t thought of it that way before.” We also ask questions: “What were you thinking here?” or “Why did you start a new paragraph here?” in hopes that the answers appear in the next draft. Unfortunately, written comments often compress complex explanations into concise remarks that students find difficult to unpack. That is, the supplemental explanation that students require for meaning-making remains largely in our heads rather than appearing on student papers. Thus, we wanted to make the feedback process more conversational, less confusing, and less intimidating for students, especially in online classes.

In both of our teaching philosophies, our primary motivation as writing teachers is to help our students improve upon their own ideas by revising their writing and utilizing feedback from us and their classmates. Thus, we recognize that our feedback needs to be personalized and conversational in nature. We don’t want our feedback to be perceived as a directive, which we know results in students focusing all their energy on low-priority errors rather than considering global issues. Instead, we want our feedback to inspire students to think about what they’ve written and how they might write it in a way that is more persuasive, clearer, or more nuanced for their intended audience. Moreover, we want their writing to be intentional; we don’t want students to think writing should be merely a robotic answer to an assignment prompt. With goals such as these, it’s no surprise that we deemed traditional feedback methods insufficient. We teach students that argumentation is about responding to a rhetorical situation, joining the conversation so to speak, and yet our written feedback was not effectively serving that purpose.

To remedy this problem, we experimented with screencasting technology as a tool to provide students with conversational feedback about their work-in-progress. Screencasts are digital recordings of the activity on one’s computer screen, accompanied by voiceover narration. Screencasting can be used by professors in any class to respond to any assignment that is submitted in an electronic format, be it a Word document, text file, PowerPoint presentation, Excel spreadsheet, Web site, or video. In reviewing uses of screen capture software (SCS), we found that screencasting has most commonly been used pedagogically to create tutorials that extend classroom lectures.

Screencasting has been used as a teaching tool in a variety of fields, with mostly positive results reported, specifically in relation to providing students with information and creating additional avenues of access to teaching and materials. In the field of chemical engineering, screencasting has served as an effective supplement to class time and textbooks (Falconer, deGrazia, Medlin, and Holmberg 2009). A study of student perceptions and test scores in an embryology course that used screencasting to present lectures demonstrated enhanced learning and a positive effect on student outcomes (Evans 2011). Asynchronous access to learning materials—both to make up for missed classes as well as to review materials covered in class—is another benefit of screencasting in the classroom (Vondracek 2011, Yee and Hargis 2010). An obvious advantage for online and hybrid classrooms, this type of access to materials also creates greater access for brick-and-mortar universities, especially those that serve nonresidential and place-bound student populations. Research on screencasting in the classroom is limited, but so far it points to this technology as a powerful learning tool.

While most of the research on screencasting shows positive results for learning, such studies focus on how this digital technology serves primarily as a tool to supplement classroom instruction; no research has yet shown how it can be used as a feedback tool that improves learning (and writing) through digitally mediated social interaction. This study examines the use of, and student reactions to, what we call veedback, or video feedback, as a means of providing guidance on a variety of assignments. We argue that screencast video feedback serves as a better vehicle than traditional written comments for in-depth explanatory feedback that creates rapport and a sense of support for the writer.

Literature Review

Best practices in writing studies suggest that feedback goes beyond the simple task of evaluating errors and prompting surface-level editing. The National Council of Teachers of English (NCTE) position statement on teaching composition argues that students “need guidance and support throughout the writing process, not merely comments on the written product,” and that “effective comments do not focus on pointing out errors, but go on to the more productive task of encouraging revision” (CCCC 2004). In this way, feedback serves as a pedagogical tool to improve learning by motivating students to rethink and rework their ideas rather than simply proofread and edit for errors. At the 2011 Conference on College Composition and Communication, Chris Anson (2011) presented findings on a study of oral- versus print-based feedback, arguing that talking to students about their writing provides them with more information than written comments.

The task of providing comments that students can engage with remains a challenge, especially when feedback is intended to help students learn from their mistakes and make meaningful revisions. Providing truly effective feedback, in terms of both quality and quantity, has long been a challenge not only for composition instructors but for any instructor who requires written assignments. Notar, Wilson, and Ross (2002) stress the importance of using feedback as a tool to provide guidance through formative commentary, stating that “feedback should focus on improving the skills needed for the construction of end products more than on the end products themselves” (quoted in Ertmer et al. 2007, 414). Even when it provides an adequate discussion of the strategies of construction, written feedback can often become overwhelming.

Written comments usually consist of a coded system of some sort, varying in style from teacher to teacher. Research about response styles has shown that instructors tend to provide feedback in categorical ways, with the most common response style focused primarily on marking surface features and taking an authoritative tone to objectively assess right and wrong in comments (Anson 1989). Writing teachers, for example, tend to use a standard set of editing terms and abbreviations, although phrases, questions, and idiosyncratic marks are also common. According to Anson (1989), other teachers used feedback to play the role of a representative reader within the discourse community, commenting on a broad range of issues, asking questions, expressing preferences, and making suggestions for revision. Comments can be both explicit (telling students when an error is made and recommending a plan of action) and indirect (implying that something went well or something is wrong). In this way, indirect feedback seems a bit like giving students a hint, similar to the ways in which adults give children hints about where difficult-to-find Easter eggs might be hidden in the yard. Although the Easter egg hunt is intended to challenge children to solve the puzzle of where colorful eggs are hidden from view, adults provide clues when children seem unable to figure out the riddle. In other words, adults give guidance when children seem lost, similar to the ways instructors give guidance to students who seem to have veered off track.

Written feedback tends to be targeted and focused, with writers filtering out the extraneous elements of natural speech that may further inform the reader/listener. All communication—whether it be written or spoken—is intrinsically flawed and problematic (Coupland et al. 1991), such that the potential for miscommunication is present in all communicative exchanges. Thurlow et al. (2004, 49) argue that nonverbal cues such as tone of voice usually “communicate a range of social and emotional information.” Everyday speech is filled with hesitations, false starts, repetitions, afterthoughts, and sounds that provide additional information to the listener (Georgakopolou 2004). Video feedback allows instructors to model a reader response, with the addition of cues that have the potential to help students take in feedback as part of an ongoing conversation about their work instead of a personal criticism. We recognize that this claim assumes that an instructor’s verbal delivery is able to mitigate the negativity that a student may interpret from written comments and that the instructor models best practices for feedback regardless of medium.

Serving as a medium that allows instructors to perform a reader’s response for students, digital technology can be an effective tool to continue the conversation about work-in-progress. By talking to students and reading their work aloud, instructors can engage students on an interpersonal level that is absent in written comments. It’s about hearing the reader perform a response full of interest, confusion, and a desire to connect with the ideas of the writer. This type of affective engagement with student work is something that students rarely see, hear, and sense—the response from another reader that’s not their own. Veedback offers students an opportunity to get out of their heads and hear the emotional response that is more clearly conveyed through spoken words than writing.

Thus, audiovisual feedback has the potential to motivate students and increase their engagement in their own learning, rather than just to assess the merits of a written product or prompt small-scale revision. Holmes and Gardner (2006, 99) point out that student motivation is multifaceted within a classroom and point to “constructive, meaningful feedback” as characteristic of a motivational environment. Changing digital technology has allowed instructors to capitalize on new or evolving digital tools in creating that motivational environment.

As universities move toward hybrid classrooms and online learning and consequently make investments in classroom management tools and communicative technologies, communication with students about their writing is also transforming. Instructors in all fields are experimenting with a variety of tools to deliver information, present lectures, conference with students, and provide feedback on written and visual projects.

Experimentation with digital technologies in traditional and online composition classes has yielded fresh approaches to engage student writers, improve the revision process, and harness the power of multimedia tools to enhance student learning (Davis and McGrail 2009, Liou and Peng 2009). By employing screencast software as a tool to talk to students about their work-in-progress, we are adding another level of interpersonal engagement—palpably humanizing the process.

Our Pedagogy

Because inquiry and dialogue are foundational to our pedagogical practice, writing workshops, teacher-student conferences, and extensive feedback in which we attempt to take on the role of representative reader are common in our courses. Although we each work hard not to be the teacher who provides students with feedback that they don’t understand, we know that, more often than we would like to admit, we too sometimes use underlines and marks that make little sense to our students, as this paper with written comments demonstrates (Figure 1).

Figure 1. A student paper with written comments.

Even after we explain our respective coding systems, many students remain confused. Figure 2 shows one instructor’s chart of editing marks, given to students with their first set of written feedback.

Figure 2. A chart of editing marks given to students with their first set of written feedback.

We know students are confused by written comments because some come to office hours and share their confusion over our statements and questions. Many confirm that they don’t really know what to do with the comments, how to make the move to improve their work, or how to transfer their learning to the next assignment or draft. Students’ difficulty in decoding comments may be based on their expectations of feedback as directive rather than collaborative and conversational. Moreover, students’ prior (learned) experiences with feedback may color the way they read and respond to comments. That is, many students expect directive feedback and believe that the appropriate response is merely to edit errors and/or delete sections that are too difficult to revise. Thus, students feel confused (and frustrated) when a comment does not yield a specific solution that fits into the paradigm of “what the teacher wants.”

Although we both require in-person student conferences (or digitally mediated conferences via phone, Skype, or Blackboard Collaborate) as one of the most important pedagogical tools to improve student writing, we acknowledge the limitations of conferences as the primary means of giving feedback. Time is the most obvious obstacle. While allowing the most personalized instruction for each student, one-on-one student-teacher conferences are labor-intensive for the teacher. Conferences are usually held only twice in a sixteen-week semester (or ten-week quarter) and are characterized by a non-stop whirlwind of twenty-minute appointments. For those teaching at nonresidential university campuses and community colleges, requiring students to schedule a writing conference outside of class time is even more challenging, as most are overextended with jobs and family responsibilities. The most important feature of writing conferences is their dialogic nature: the conversation about the work-in-progress and the collaborative planning about how to make improvements. Acknowledging both the effectiveness and limitations of face-to-face conferencing, we considered alternatives to the traditional writing conference.

Initially, one of us experimented with recording audio comments as a supplement to written comments and an extension of the writing conference, but was not satisfied with the results. This method requires the instructor to annotate a print-based text (which is problematic for online courses and digitally mediated assignments) in addition to creating a downloadable audio file. The separation of the annotated text from comments can create logistical problems for students finding and archiving feedback and create extra work for the instructor providing it.

When we discovered screencasting, we began to experiment with this digital tool as an alternative form of feedback. We each employed Jing screen-recording software to record five minutes of audiovisual commentary about a student’s work. This screencasting software enabled us to save the commentary as a Flash video that could be emailed or uploaded to an electronic dropbox. This screenshot shows what appears on the screen for students when they click a link to view video feedback hosted on the Dropbox site.

Opportunities and Obstacles

New methods of delivering instruction, such as in hybrid or online courses, create a need to solve the feedback dilemma in a variety of ways. We believe a key component of effective feedback is the collaborative nature of conversation built upon a rapport cultivated in “normal” classroom interaction. However, with limited (or no) face-to-face time between instructor and student (or between student and student), creating a collaborative environment conducive to writing is a challenge, as the tone of the class is often set by the “performance” of the instructor during class. In online environments, students cannot see or hear their instructors or their classmates, which can potentially stifle the creation of a positive learning community. The face-to-face experiences of the traditional classroom allow students to develop rapport with a teacher, which can mitigate the feeling of criticism associated with formative feedback.

Without these face-to-face experiences, students in online classes are more likely to disengage with course content, assignments, and their instructor and classmates. This increased tendency to disengage is evidenced in the lower completion rate for online classes. According to a Special Report by FacultyFocus, “the failed retention rate for online courses may be 10 to 20 percent higher than for face-to-face courses.” And according to Duvall et al. (2003), the lack of engagement by students in online courses is linked to the instructor’s “social presence.” They state that “social presence in distance learning is the extent that an instructor is perceived as a real, live person, rather than an electronic figurehead.” Research shows that the relationship between student and teacher is often an important factor for retention (CCSSE – Community College Survey of Student Engagement n.d., NSSE Home n.d.); this relationship is a compelling argument for why we should look for socially interactive ways to respond to our students’ work.

While multimedia technology has allowed instructors to create more “face time” with students in an online class, technological savvy does not automatically translate into more social presence. While we would agree that any use of audio/video formats in the online class contributes to creating a learning community, video lectures are impersonal in much the same way that face-to-face lectures are. In providing feedback on individual students’ writing, we are engaging in a conversation with our students about their own work—a prime opportunity to personalize instruction to meet student needs (also called differentiated instruction).

Logistically, screencasting has its challenges, such as additional time at the computer and the need for a quiet place to record the videos, but we both discovered ways to mitigate them. One author found that this medium confines the instructor to a quiet space; the other experienced limited storage capacity on her institution’s server. The first author discovered that a noise-cancelling headset allowed her to be mobile while using this feedback method. The second author created alternative means of delivery and archiving by giving students the option of receiving video files via email, downloading and deleting files from the dropbox, or accessing videos via Screencast.com, which is not considered “private” by her institution.

Initially, the process was time-consuming because it was difficult to get out of the habit of working with a hard copy; we each wrote comments or brief notations on a paper (or digital) version as a basis for the video commentary. Keeping to the five-minute time limit was also a challenge, but it helped us to focus on the major issues in students’ writing rather than on minor problems. Perhaps most importantly, as we have become accustomed to the process, recording video comments now takes us less time than when we started using screencasting for feedback. Moreover, positive student response has encouraged us to be innovative in addressing the drawbacks.

Veedback allows instructors to move the cursor over content on the screen and highlight key elements while providing audio commentary. Two samples, a response paper (Video 1) and an essay draft (Video 2), show how instructors can take advantage of the audiovisual aspects of screencasting to engage students in learning.

Video 1 (Click to Open Video). The instructor highlights key elements while providing audio commentary on a response paper.

Video 2 (Click to Open Video). A student essay.

This sample shows how, after providing commentary within a student paper, instructors can discuss overall strengths and weaknesses by pasting the evaluation rubric into the electronic version of the student essay and marking ranges (Video 3).

Video 3. The instructor has pasted the evaluation rubric into the electronic version of the student essay and marked ranges.

One of the many ways in which we used screencasting was to give feedback on work-in-progress posted to online workspaces, such as a course blog or discussion board. In this case, students posted drafts of their essay thesis statements to the blog, and we responded in batches, linking to the feedback on the course blog (Figure 3).

Figure 3. The instructors responded to thesis statements in batches and linked to the feedback on the course blog.

This method gave students access to an archive of feedback through the course blog and allowed for an extension of in-class workshops about work-in-progress to help students focus their research essays.

This snippet from one of the ten-minute videos mentioned above demonstrates how one of the authors uses the audiovisual medium so that students may see and hear their writing simultaneously (Video 4).

Video 4. A snippet of a video response to student work.

We have also found veedback to be especially useful for presentations because screencasting software allows us to start a conversation about the impact of visual composition and to manipulate the original document to present alternatives. In this particular sample veedback, the instructor used a sample presentation for an in-class workshop and ran screencasting software to provide an archive of notes that students could access when they were ready to revise (Video 5).

Video 5. A sample veedback presentation.


Screencasting was used in five sections of college-level writing courses by two instructors. Students from two sections of one author’s research and argument course were surveyed about screencasting feedback on essay drafts and PowerPoint presentations. In the second author’s three online sections of a research and writing course, students were informally asked about the use of veedback, and one online section was surveyed. The screencasts were produced on PCs using Jing software to create individual Flash movie files that were shared and posted to the classroom management system for student access. Both instructors also used screencasting as an extension of classroom lectures by offering mini-workshops on specific aspects of writing and providing tutorials for assignments and software use. Veedback was used instead of written comments, not in addition to them: assignments that received veedback got no written comments, only some highlighting or strikethrough font in the file versions returned to students. We employed a color-coding system to differentiate between types of comments; for example, yellow highlighting might signal grammatical errors while green highlighting marks problems with content or interpretation, as shown in this example of veedback on an annotated bibliography assignment.

Students in the first author’s classes were asked to fill out an optional, anonymous, web-based questionnaire that would provide feedback about the course. Along with questions that asked students to reflect on their ability to meet the learning objectives for the course, the midquarter surveys also included specific questions related to particular assignments, activities, or teaching technologies that were added to the course. During that quarter, this author added an additional question, eliciting a short response of 500 words maximum, targeted to student perceptions of using videos as a method of feedback. The questionnaire prompted students to speak about their experiences with Jing videos for the two particular assignments in which it was used and to specifically address whether it was beneficial (or not) to their learning. The final question was “Please tell me about your experience getting feedback through Jing screen capture videos on a response paper and your presentation. How did it improve your learning (or not)?”

The data set for this survey is limited as it was elicited from two sections of the same 200-level writing course at one institution, with a maximum potential of 40 respondents. Thirty-two students took the online survey, 22 responded to the short answer questions about Jing, and 3 responded about digital classroom tools other than Jing. An additional data set was elicited from one section of a similar 200-level course at a second institution, with a maximum potential of 16 respondents. All 11 students who participated in the survey provided short answer responses about the veedback. Six respondents also commented on the use of videos for instruction. Thus, the data used in this paper comes from 30 short answer responses, which were analyzed using content analysis. A number of key themes emerged and are discussed below. Most students who responded about Jing were extremely positive and found it beneficial to their learning. A few students, including those who found it beneficial, spoke of hearing and seeing through this digital tool as enhancing learning.
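The content-analysis step described above amounts to tallying which recurring themes appear across a set of short-answer responses. The toy sketch below shows one way such a tally could be automated; the theme keywords and sample responses are invented for illustration and do not reproduce the study's coding scheme.

```python
# Toy content-analysis sketch: count how many responses touch each theme.
# Theme keywords and sample responses are illustrative, not the study's codebook.
from collections import Counter

THEMES = {
    "audio": ("hear", "hearing", "voice"),
    "visual": ("see", "seeing", "watch"),
    "access": ("open", "load", "work"),
}

def tally_themes(responses):
    """Count, per theme, how many responses mention at least one keyword."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(k in lowered for k in keywords):
                counts[theme] += 1
    return counts

sample = [
    "Being able to hear your explanations was very helpful.",
    "I liked being able to hear you and see my paper at the same time.",
    "The videos were really hard to get to work.",
]
print(tally_themes(sample))  # audio: 2, visual: 1, access: 1
```

In practice, qualitative coding of this kind is done by human readers; a keyword tally like this one is only a first pass that a researcher would then verify against the full responses.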

With only two out of 30 students stating a preference for comments “written down,” Jing comments received rave reviews as a form of feedback that aided student learning. Student preference for this type of feedback demonstrates how important it is that teachers deliver feedback employing multiple modes of delivery, combining the auditory, visual, and kinesthetic. Many spoke directly to the importance of auditory feedback as a key factor that contributed to their learning, and others claimed that the auditory in combination with the visual made the difference. Many students implied that the auditory explanations, coupled with the visual representation of their essay, gave them enough information to make meaningful revisions and apply feedback.
Students overwhelmingly included statements like "I like the Jing screen capture videos a lot" and "I think the Jing videos are very helpful." Some students compared this video feedback form to traditional written comments, focusing on the negative side of the written comments rather than fully explaining the positives of the new form: "It felt as if I was talking with them – a much more friendly review rather than harsh critique." In such comments, student preferences were implied rather than stated directly, and were therefore analyzed for meaning.

Can You Hear Me Now? Can You See It Now?

Inherent in the student-teacher relationship is a power differential in which teachers hold more power and the student is positioned as deficient and in need of correction. Students expect correction from teachers, not dialogue about their work. Oftentimes, tone of voice is obscured in written comments, forcing students to imagine the teacher's voice in their heads; this imagined teacher often sounds harsh and punishing. For example, we might ask questions in our margin comments that are genuine questions. While we might be looking for further explanation or description, students might read these questions as rhetorical, not to be answered: flat statements that they made some unconscionable mistake that should not appear in a future version or assignment. Anything written in the margins becomes the "red marker that scolds" (White, 2006). Using one's actual voice makes the tone apparent. Audio feedback erases the red pen and replaces it with the sound of a human conveying "genuine" interest in the ideas presented. By giving veedback, we are able to use a conversational tone to talk about writing with students, to share how their writing sounds, and to offer a variety of options.
Students overwhelmingly pointed to auditory feedback as beneficial to their learning. “Hearing” what the teacher was saying was the most important reason that screencasting was found to be such a successful feedback tool, with many students stating a preference for hearing someone’s voice going through their paper.

“Being able to hear your explanations was very helpful.”

“The fact that you are hearing somebody’s voice instead of reading words on a piece of paper.”

“Instead of just writing comments it helps hearing the feedback. It helps a lot with knowing what specific things to work on.”

The feedback may be perceived as friendly because students can hear tone of voice, recognizing that we as teachers are encouraging them and not criticizing them. We surmise that students may be gaining a way into the conversation because they hear us talking with them about writing, not preaching or using teacherly discourse.

In commenting on veedback, students pointed to more than just the audio component as valuable to learning; for some, it was the combination of hearing feedback while simultaneously seeing the site where ideas may be re-imagined. These comments pointed to the importance of learning through multiple modes of delivery simultaneously, specifically audio and visual.

“I liked being able to hear you and see my paper at the same time.”

“It’s great to be able to get the feedback while watching it being addressed on the essay itself.”

“It’s one thing to just read your instructors feedback but to be able to see it and understand what you are talking about really helps!”

“I can see and follow the instructor as she reads through my writing with the audio commentaries. It helps me to pin-point exactly what areas need to be corrected, what is hard to understand, which areas I did well on, and which areas could be improved.”

Some students showed metacognition about learning preferences, judging the tool as beneficial to them specifically because they believed themselves to be visual learners who benefited from “seeing” what was being discussed. Reproducing discourse about learning styles, these students took on the identity of self-aware learners.

“This way seemed to be very good for visual learners like myself.”

“I like the capture videos. I’m a visual person.”

Making Connections

A number of students described their confusion and frustration after receiving feedback through traditional methods, demonstrating the challenges of making connections between feedback and learning goals. These negative experiences with written feedback stand in contrast to the positive responses to veedback quoted above.

“I can’t tell you how many times I’ve gotten a paper back with underlines and marks that I can’t figure out the meaning of.”

“Sometimes when you receive a paper back half of it is decoding what the teacher said before even seeing what was commented on.”

Despite the fact that (written) feedback is intended to communicate important information to students, the end result is often quite the opposite; students feel frustrated, disempowered, and unable to take the necessary steps to apply the comments.

Students noted that veedback simulates the student-teacher conference. Although this form of feedback is only one side of the conference, from teacher to student, the conversational nature of the feedback is clear. Students picked up on this intention, calling veedback an “interactive” form of feedback that is available beyond office hours (24/7) in comparison to a one-to-one writing conference with the teacher.

“it’s like a student-teacher one-on-one conference whenever I can get the time. Very helpful.”

“I really love the Jing screen capture videos that you have given as feedback. It’s very interactive and has helped me a lot. Thank you.”

“It helped my learning by answering questions I had about my writing.”

“Video feedback helped me to better improve my work because it was almost like a classroom setting that allowed the teacher to fill in the interaction gaps without actually having an in-class setting. Not only that, the information could be replayed repetitively, allowing me to review them and reflect on them once I need help with my work.”

While veedback does not allow students to ask questions as they would in a face-to-face, phone, or video conference, hearing the voice of the teacher going through the paper does give students the sense that they can ask more questions because it establishes a personal connection and rapport, creating a sense of availability.

Veedback does more than allow teachers to create a more personal mentoring connection with students; it allows us to take advantage of digital technologies—often thought to dehumanize interaction—to personalize instruction beyond the classroom. Many students noted that video comments improved their learning primarily because, unlike written comments that rely upon brief descriptions, they allowed teachers to provide deeper explanation.

“I think the Jing videos are excellent because they help me understand a lot better as to what I need to revise. They are a lot better and more helpful than regular comments.”

“It did help my learning, i was able to understand what i was doing wrong, and how to fix it.”

“I think I received more detailed feedback than I might have from written comments.”

“I like it better than normal comments because I can hear your thought process when you are making a comment so it is easier to understand what you’re trying to say.”

Students stated that explanations within video feedback made the thought process of the reader visible, allowing them to identify problems. Thus, veedback provided students with greater guidance about how to improve. One of the second author's online students stated that veedback "felt like you were explaining it to me," not just pointing out mistakes. In this way, veedback engages the student in ongoing learning rather than grade justification. Moreover, veedback invites a response and encourages revision as re-vision (seeing again), not as merely changing the text to whatever the teacher wants.

It is important to clarify here that it is the audio component of veedback that allows students to hear tone, which many students find difficult to infer from written comments alone. Moreover, the medium of audio comments encourages students to think of feedback as a conversation. Inexperienced or less experienced (student) writers tend to conflate medium with tone, register, and purpose. That is, students often perceive written comments as directive—even when these comments are phrased as questions to consider or presented as guidance for revision. What veedback allows is for instructors to convey tone in both what they say and how they say it, thereby increasing the likelihood that students will understand our comments to be part of an ongoing negotiation between the meaning-making a reader enacts and the intended meaning a writer attempts to create. While it is possible to transcribe spoken comments into written form, we posit that it is in hearing our voices that students are drawn into the conversation.

Accessing and Applying Veedback

The problem with traditional margin comments isn't necessarily in the marks themselves, but in the disconnect between what teachers communicate and how students interpret that feedback. Teachers comment on assignments in hopes of reaching students by providing feedback about what worked well (if a student is lucky) and what went wrong. Feedback is frequently given merely as a form of assessment: justification for a grade. Regularly, feedback is provided after an assignment is completed, with the belief that the student will be able to transfer knowledge about what went wrong and what to do right the next time. Students are expected to fill in the gaps of their own knowledge. If students are lucky, feedback is given on early attempts (practice activities or essay drafts) to provide guidance, helping those who have lost their way to find their way back to the path.

Although most reviews of screencasting in the classroom have been positive, one study in the field of computer science found no significant effect of screencasts on learning (Lee, Pradhan, and Dalgarno 2008), and another uncovered pedagogical challenges of integrating screencasting (Palaigeorgiou and Despotakis 2010). These critical reviews help us to see that this technology is not a panacea. As with other learning technologies, many of us are quick to see the benefits without fully assessing the problems a tool presents for learners. Many of the problems faced by the computer science students in the first study, such as access, speed, and uncertainty about how to use the tool, were also experienced by our writing students.

Although there are increasing expectations—for instructors as well as students—to use digital tools, sometimes there are additional obstacles based on students’ lack of digital literacy in new media that go beyond typical social networking and entertainment-based tools. The free version of Jing creates SWF files that require a Flash player to open and often requires students to specify which program to open the file with. For the click-and-open generation, this has proved to be a challenge. Alternative software programs include options to save video files in the MP4 format, which can be more easily opened or played on other media devices (such as iPods). However, MP4 files are larger than SWF files, which presents other problems for downloading and/or uploading.

Technological difficulties were one of the primary obstacles to using video feedback. Students participating in the survey overwhelmingly liked veedback, but some complained of difficulties accessing and/or using the technology. Despite written instructions and campus resources providing students with help using academic technologies, two of the nineteen respondents said that they did not even know how to access the videos. Because the survey was anonymous, the instructor could not determine which students had problems with access.

“Jing feedback videos and [Dropbox] comments still do not work on my end. I have talked with tech guys and they can’t figure it out. I can’t find out how I did and ways to improve my writing.”

“I like the videos but they were really hard to get them to work.”
“Sometimes it’s hard to open the videos.”

“I have no clue what Jing Feedback Video is and if I got a comment back it may have not opened because I tried to open some of the comments you left, but they would not open for me.”

“I think all the tech we use in class is great, but I have to teach myself how to use it :)”

The technological problems faced by these students resemble the difficulties faced by students unable to decipher the comments on the written page. That is, the technology acted as a barrier between our students and the conversation we tried to enact through veedback, in the same way that written comments—both marginal and end comments—are themselves a barrier to the rich conversation they are meant to convey. Until they asked for help or clarification, both groups of students—those with technological problems and those with written comments—remained in the dark, unable to access the feedback in any useful way.

After assessing video feedback in our early classes, we were surprised to learn that technological issues were not always the obstacle to learning. In fact, the obstacle was students’ difficulty understanding how to utilize the feedback in their revision process. Although only two of the respondents stated a preference for written feedback, the complaint brings to light important issues of how students access and apply feedback to make improvements to their work.

“personally i don’t like the jing videos. i’d rather have the comments written down so that I can quickly access the notes and not have to keep track of just where in the video a certain comment is.”

“Written feedback helps more because I get to see the description and review it again if I need to. It is more easier for me to see it written out than video”

The complaint about video feedback in this context can be compared to the specific problems described in the studies about computer science courses (Lee et al. 2008, Palaigeorgiou and Despotakis 2010). It is apparent from these two comments above that the students’ revision practices operate within print-based culture. That is, written feedback is a norm within education and students have background knowledge and a repertoire for working with this mode of feedback, which consequently creates a perception that working with written feedback is easier (even if it is not). Some students, therefore, feel frustrated by unfamiliar modes of feedback and resist new revision practices that require learning new strategies to engage with feedback. While it is not uncommon for new technologies to be resisted when they require some adaptation, students in other contexts show a propensity to develop strategies to overcome these challenges. Thus, continuing research needs to evaluate whether the potential difficulties of implementing veedback outweigh the benefits for learning.

Through this study, we found that students need instruction on strategies for interacting with written and digitally mediated forms of feedback before they can deeply engage in the revision process. Proposed solutions to improve student learning with video feedback include teaching students how to read and apply feedback, not unlike the ways in which we teach them to interpret the comments we put on a paper. We suggest that teachers encourage students to take advantage of the video format by re-watching sections and pausing when necessary to "digest" comments. We also recommend creating tutorials that demonstrate how to annotate a "hard copy" of the draft while watching the video, including highlighting and circling key points, time-stamping the draft to correspond with important places in the video, interpreting video feedback, and paraphrasing teacher comments in the margins. When students write their own comments, they do so in terms they understand and use writing to make sense of their own ideas through the act of rephrasing, reworking, and revising. Students already do this translation work of digesting feedback during class, in student-teacher conferences, and when they sit down to revise their work. What is valuable about students commenting on their own work is that in that moment they are actively engaging in the process of revision (and learning).


Even when students understand what we are saying in our comments, they often don’t know how to reconceive the structure of their writing and change it (that is, they don’t understand how to reconfigure their ideas in their own voice). Many students continue to use templates and try to fill in the blanks, rather than see the model and then use the comments to make decisions about the types of revisions that can be made. In the service of learning, finding richer ways to teach students to engage with the work is of the utmost importance. It is our contention that students should be taught how to apply feedback to improve their work. Feedback that engages multiple learning styles while providing deeper explanation offers the possibility of increased student learning in a variety of higher education contexts.

Screencasting allows instructors to provide students with in-depth feedback and/or evaluation. With response papers and short written assignments, veedback allows the teacher to zoom in and highlight portions for discussion while scrolling through the document. With visually-oriented work (e.g., art work, Web sites, and PowerPoint presentations), using the mouse to point at key elements, instructors can talk about the impact of the student’s choices. We suggest that instructors be mindful of time and create multiple videos if there is a need for extensive feedback. Conceivably, students can view each of the videos at different times, even on different days. It is debatable how long web-based videos should be (Agarwal 2011, Scott 2009, SkyworksMarketing 2010), but the need for concision and clarity remains vital for both the student and the instructor. We also recommend instructors inform students if only certain types of issues will be discussed in a particular feedback session.

Based on our pilot study, the majority of students perceived that they understood video comments in a more meaningful way than written comments. Veedback can be used to perform the "confused reader" instead of the "finger-wagging critical teacher." A margin comment that says "this is awkward" is different from hearing the awkward sentence read aloud by a real reader. The audio portion of veedback allows for communication that is conversational. In other words, teachers can speak the student's language with veedback in ways that are absent in written comments. When teaching multilingual speakers, teachers may find that reading sentences aloud models Standard English and possible alternative forms that are commonly spoken. Another way to use veedback is to give students a sense of a reader's experience, presenting alternatives through visual imagery and analogies.

Video feedback appears effective at engaging students in the revision process: we have noticed that students responding to video feedback tended to attend to big-picture issues, making global revisions rather than merely editing surface-level errors. With video feedback, students hear what is confusing about a sentence (rather than just a phrase identifying the error type) and are therefore more willing to attempt revision. Video feedback provides an opportunity to elaborate on problems in writing assignments, giving students more direct guidance about how to solve the communication problem.

Although students have responded positively to this multimodal teaching tool, additional studies comparing revisions that responded to written feedback and video feedback are needed to investigate specifically what it is about veedback that is so compelling. Student interaction with and application of veedback requires further investigation. Furthermore, assumptions that the current generation is more audio/visual-oriented, a claim that has yet to be proven, may create external pressures for teachers to incorporate digital media into their teaching before research proves its effectiveness. Debates about pedagogy and technology are intricately tied to these assumptions, which must be interrogated. The question remains whether veedback is in fact more effective in improving student performance, or if it is merely student perception because “it’s not your grandfather’s Oldsmobile.” That is, not only are we not using the scolding red pen, but we are also not using any of the traditional feedback methods with which students may have had prior negative experiences.

While redesigning e-learning pedagogy should yield improved student learning, the question of how to measure outcomes will likely remain a source of debate. Although studies have found subtle differences in the impact of technology on student learning, variation in study types and research methodologies continues to leave more questions than answers about the effectiveness of digitally mediated modes of instruction (Wallace 2004), with alternate modes of instructional delivery showing "no significant difference" in student outcomes (Russell 2010). Rather than assessing the effectiveness of e-learning tools like veedback as measured by improved grades, drawing upon the Seven Principles for Good Practice in Undergraduate Education to examine "time on task" (Chickering and Ehrmann 1996) would provide a better indicator of student engagement. We propose that further research utilizing digital tools like Google shared docs would provide an avenue to review writers' revision histories. This would allow researchers to examine the types of revisions students produce in response to different feedback modes during time on task and to garner information about how students engage in revision.
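If revision-history timestamps were exported from a shared document, "time on task" could be approximated by summing the intervals between consecutive revisions while discarding long gaps as breaks. The sketch below is a hedged illustration of that idea; the 30-minute gap threshold and the sample timestamps are our assumptions, not a method used in the study.

```python
# Hedged sketch: estimate "time on task" from exported revision timestamps.
# The gap threshold and sample data are illustrative assumptions.
from datetime import datetime, timedelta

def time_on_task(timestamps, max_gap_minutes=30):
    """Sum time between consecutive revisions, ignoring gaps longer than
    max_gap_minutes (treated as breaks between work sessions)."""
    stamps = sorted(timestamps)
    total = timedelta()
    for earlier, later in zip(stamps, stamps[1:]):
        gap = later - earlier
        if gap <= timedelta(minutes=max_gap_minutes):
            total += gap
    return total

revisions = [
    datetime(2012, 5, 1, 14, 0),
    datetime(2012, 5, 1, 14, 20),  # 20 minutes of revising
    datetime(2012, 5, 1, 19, 0),   # long break, not counted
    datetime(2012, 5, 1, 19, 25),  # 25 more minutes of revising
]
print(time_on_task(revisions))  # 0:45:00
```

A measure like this would let researchers compare engagement across feedback modes without relying on grades as the outcome variable, though the choice of gap threshold would need to be justified empirically.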

We argue that assessing video feedback solely in terms of performance or the most effective mode of delivery would miss the most important point our research is attempting to make. A more useful question is whether it is fruitful to deconstruct the idea of "engagement in the revision process" by discussing "engagement" and the "revision process" on their own terms. Although there are other ways to assess engagement in the revision process, we believe that students' attitudes about engaging with feedback provide a wealth of data about affective engagement in the revision process, which gets us closer to understanding what makes our students motivated and, thus, invested in their own learning. While scholars continue to debate effective ways to motivate students, we propose that veedback can be an effective way to address the affective component of motivation. That is, students who are invested in the interpersonal relationship with their instructor/reader are likely to engage in more extensive and/or intensive revision and, consequently, to learn at deeper levels.

One of the shortcomings of our study—the fact that our data on student attitudes cannot be linked to writing samples because the survey tool elicited anonymous responses—highlights the challenges of assessing the impact of video feedback on student learning. In our case, the use of anonymous surveys to elicit honest responses conflicted with the desire to triangulate data, leaving us with more answers about students' perceptions of their own engagement with feedback than proof that students who claimed veedback improved their learning did in fact make improvements.

In courses that teach skills acquisition through a cumulative drafting process, a number of variables at play further trouble the ability to assess the effectiveness of this particular tool. In writing intensive courses, for example, we might question whether improvement in skills from an early draft to a later draft is a product of the feedback method specifically; when assessing improvement in a course that aims to improve skills over the course of a term, supplemental instruction during class (or through online tutorials) and the cumulative effect of skills and knowledge gained between drafts are likely to skew the results. In addition, improvement in the final product (in the form of a revised draft) can differ widely across the data sample, in terms of both classroom dynamics and individual student motivation, background knowledge, ability, and commitment to the course.

Future research that attempts to mitigate some of these variables and triangulate the data may provide a more satisfying answer about the effectiveness of veedback. For example, one option that would allow for a comparison between feedback forms within a single class is to use both forms to respond to the same type of assignment (e.g., summaries of two different articles). While this method may eliminate one variable by using the same students, other problems may arise, such as whether the form used later in the quarter produces better results on account of cumulative learning or whether one of the assignments produces inferior results on account of its content. To compare across classes, researchers may want to use written feedback first in one class and video feedback first in another. While this may allow researchers to compare across classes and mitigate the problem presented by the order of the feedback forms, other variables remain.

While it may be tempting to ask only whether video feedback is superior to traditional modes, we suggest that instructors also consider how this method supplements written feedback through an integration of technology in educational environments (Basu Conger 2005). Because student response to veedback was overwhelmingly positive—despite technological issues, students preferred this form of engagement to traditional written comments—we intend to continue to evaluate how veedback may improve student learning and enrich teaching. The following student comment reminds us that taking the time for innovation with digital teaching technologies is valuable to student learning and doesn't fall on deaf ears: "It was a very unique feedback process that helped considerably. I know it's time consuming but more of this on other assignments would be great!"


Agarwal, Amit. 2011. “What’s the Optimum Length of an Online Video.” Digital Inspiration, February 17. http://www.labnol.org/.

Anson, Chris M. 1989. Writing and Response: Theory, Practice, and Research. Urbana, IL: National Council of Teachers of English. ISBN 9780814158746.

———. 2011. “Giving Voice: Reflections on Oral Response to Student Writing.” Paper presented at the Conference on College Composition and Communication, Atlanta, GA.

Basu Conger, Sharmila. 2005. “If There Is No Significant Difference, Why Should We Care?” The Journal of Educators Online 2 (2). http://www.thejeo.com/Archives/Volume2Number2/CongerFinal.pdf.

CCCC. 2004. “CCCC Position Statement on Teaching, Learning, and Assessing Writing in Digital Environments.” http://www.ncte.org/cccc/resources/positions/digitalenvironments.

Chickering, Arthur W., and Stephen C. Ehrmann. 1996. “Implementing the Seven Principles: Technology as Lever.” AAHE Bulletin 49, no. 2: 3-6. ISSN 0162-7910.

Clements, Peter. 2006. Teachers’ Feedback in Context: A Longitudinal Study of L2 Writing Classrooms. PhD diss., University of Washington. https://digital.lib.washington.edu/researchworks/handle/1773/9322.

Community College Survey of Student Engagement (CCSSE). (n.d.). http://www.ccsse.org/.

Coupland, Nikolas, Howard Giles, and John M. Wiemann. 1991. “Miscommunication” and Problematic Talk. Newberry Park, CA: Sage Publications. ISBN 9780803940321.

Davis, Anne, and Ewa McGrail. 2009. "'Proof-revising' with Podcasting: Keeping Readers in Mind as Students Listen To and Rethink Their Writing." Reading Teacher 62 (6): 522-529. ISSN 0034-0561.

Duvall, Annette, Ann Brooks, and Linda Foster-Turpen. 2003. “Facilitating Learning Through the Development of Online Communities.” Presented at the Teaching in the Community Colleges Online Conference.

Ertmer, Peggy A., Jennifer C. Richardson, Brian Belland, Denise Camin, Patrick Connolly, Glen Coulthard, Kimfong Lei, and Christopher Mong. 2007. "Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study." Journal of Computer-Mediated Communication 12 (2): 412-433. http://jcmc.indiana.edu/vol12/issue2/ertmer.html.

Evans, Darrell J. R. 2011. “Using Embryology Screencasts: A Useful Addition to the Student Learning Experience?” Anatomical Sciences Education 4 (2): 57-63. ISSN 1935-9772.

Falconer, John L., Janet deGrazia, J. Will Medlin, and Michael P. Holmberg. 2009. “Using Screencasts in ChE Courses.” Chemical Engineering Education 43 (4): 302-305. ISSN 0009-2479.

Furman, Rich, Carol L. Langer, and Debra K. Anderson. 2006. “The Poet/Practitioner: A New Paradigm for the Profession.” Journal of Sociology and Social Welfare 33 (3): 29-50. ISSN 0191-5096.

Georgakopoulou, Alexandra, and Dionysis Goutsos. 2004. Discourse Analysis: An Introduction. Edinburgh: Edinburgh University Press. ISBN 9780748620456.

Holmes, Bryn, and John Gardner. 2006. E-Learning: Concepts and Practice. London: Sage Publications. ISBN 9781412911108.

Lee, Mark J. W., Sunam Pradhan, and Barney Dalgarno. 2008. “The Effectiveness of Screencasts and Cognitive Tools as Scaffolding for Novice Object-Oriented Programmers.” Journal of Information Technology Education 7: 61-80. ISSN 1547-9714.

Liou, H.-C., and Z.-Y. Peng. 2009. “Training Effects on Computer-Mediated Peer Review.” System 37 (3): 514-525. doi:10.1016/j.system.2009.01.005.

National Survey of Student Engagement (NSSE). n.d. http://nsse.iub.edu/.

Notar, C. E., J. D. Wilson, and K. G. Ross. 2002. “Distant Learning for the Development of Higher-Level Cognitive Skills.” Education 122: 642-650. ISSN 0013-1172.

Nurmukhamedov, Ulugbek, and Soo Hyon Kim. 2010. “‘Would You Perhaps Consider …’: Hedged Comments in ESL Writing.” ELT Journal: English Language Teachers Journal 64 (3): 272-282. doi:10.1093/elt/ccp063.

Palaigeorgiou, George, and Theofanis Despotakis. 2010. “Known and Unknown Weaknesses in Software Animated Demonstrations (Screencasts): A Study in Self-Paced Learning Settings.” Journal of Information Technology Education 9: 81-98. ISSN 1547-9714.

Russell, Thomas L. 2010. “The No Significant Difference Phenomenon.” NSD: No Significant Difference. http://www.nosignificantdifference.org/.

Scott, Jeremy. 2009. “Online Video Continues Ridiculous Trajectory.” ReelSEO: The Online Video Business Guide. http://www.reelseo.com/online-video-continues-ridiculous-trajectory/.

SkyworksMarketing. 2010. “What’s the best length for an internet video?” SkyworksMarketing.com, February 2. http://skyworksmarketing.com/right-video-length/.

Thurlow, Crispin, Laura B. Lengel, and Alice Tomic. 2004. Computer Mediated Communication: Social Interaction and the Internet. Thousand Oaks, CA: Sage. ISBN 9780761949534.

Vondracek, Mark. 2011. “Screencasts for Physics Students.” Physics Teacher 49 (2): 84-85. ISSN 0031-921X.

Wallace, Patricia M. 2004. The Internet in the Workplace: How New Technology Is Transforming Work. New York: Cambridge University Press. ISBN 9780521809313.

White, Edward M. 2006. Assigning, Responding, Evaluating: A Writing Teacher’s Guide. Boston: Bedford/St. Martin’s. ISBN 9780312439309.

Yee, Kevin, and Jace Hargis. 2010. “Screencasts.” Turkish Online Journal of Distance Education 11 (1): 9-12. ISSN 1302-6488.


About the Authors

Riki Thompson is an Assistant Professor of Rhetoric and Composition at the University of Washington Tacoma. Her research takes an interdisciplinary approach to explore the intersections of the self, stories, sociality, and self-improvement. Her scholarship on teaching and learning draws upon discourse, narrative, new media, and composition studies to reflect upon, assess, and improve methods for using digital technology in the classroom.

Meredith J. Lee is currently a Lecturer at Leeward Community College in Pearl City, HI. Her pedagogy and scholarship draw upon discourse, rhetorical genre studies, composition studies, sociolinguistics, and developmental education. Dr. Lee’s work also reflects her commitment to open access education.



  1. Data for this study come from in-class surveys about assessing learning through written and video feedback. In compliance with Human Subjects review, student comments were collected anonymously through a web-based survey tool.
  2. Acknowledgements: We would like to thank the community of scholars whose constructive feedback made this article richer: reviewers George H. Williams and Joseph Ugoretz, editors Kimon Keramidas and Sarah Ruth Jacobs, as well as Colleen Carmean for her thoughts on the complexities of measuring meaningful outcomes when integrating technology with teaching.
