Issue One

Issue 1, Spring 2012

Introduction
Kimon Keramidas and Sarah Ruth Jacobs

“Let’s Go Crazy”: Lenz v. Universal in the New Media Classroom
xtine burrough and Emily Erickson

“City of Lit”: Collaborative Research in Literature and New Media
Bridget Draxler, Haowei Hsieh, Nikki Dudley, Jon Winet, et al.

MyDante: An Online Environment for Collaborative and Contemplative Reading
Frank Ambrosio, William Garr, Eddie Maloney, and Theresa Schlafly

Talking with Students through Screencasting: Experimentations with Video Feedback to Improve Student Learning
Riki Thompson and Meredith J. Lee

Philosophy through the Macroscope: Technologies, Representations, and the History of the Profession
Chris Alen Sula

Steps, Stumbles, and Successes: Reflections on Integrating Web 2.0 Technology for Collaborative Learning in a Research Methods Course
Kate B. Pok-Carabalona

 

Issue One Masthead

Issue Editors
Kimon Keramidas
Sarah Ruth Jacobs

Managing Editor
Sarah Ruth Jacobs

Copyeditors
Steve Brier
Benjamin Miller
Leila Walker

Web Content Management
Claire Fontaine
Sarah Ruth Jacobs

Philosophy through the Macroscope: Technologies, Representations, and the History of the Profession

Chris Alen Sula, Pratt Institute

Abstract

Macroscopes are tools for viewing what is too large, complex, or dynamic to perceive with the naked eye. This paper examines the use and history of macroscopy in philosophy to represent ideas, trends, and other aspects of the field. Particular emphasis is given to the growing Phylo project, which combines data, user collaboration, and visual analytics to advance the study of philosophy. This paper also presents two pilot studies focused on unique aspects of Phylo: one on the perceived importance of social connections in philosophy and the other on information visualization and academic philosophers. The second study, in particular, points toward several recommendations and areas of further research, and underscores the value of macroscopy in representing the field and suggesting interventions.
 

Few episodes in the history of technology are as curious as seventeenth-century opinion surrounding the first microscopes. Robert Boyle, an early proponent of microscopy, claimed he could prove his theory of color perception if only he “were sharp-sighted enough, or had such a perfect microscope,” as to discern the surfaces of objects ([1663] 1744, Vol. 5, 680). Robert Hooke announced that, now, “nothing is so small, as to escape our inquiry” (1665, Preface). Enthusiasm for the tool was so complete that the early microscopists were more focused on discussing the design, construction, and performance of their inventions than on making biological observations with them (Bracegirdle 1978, 8). “By 1692,” Catherine Wilson reports, “Hooke was already complaining of reaction against the microscope, of boredom and disenchantment” (1995, 67). Not until the nineteenth century did microscopy inspire a school that carried the tool forward—this time with scientific results (Singer 1970, 382).1

Nearly three centuries after the microscope was invented, Joël de Rosnay noted the emergence of a new tool: the macroscope. “The macroscope is unlike other tools,” he warned. “It is a symbolic instrument made of a number of methods and techniques borrowed from very different disciplines…It is not used to make things larger or smaller but to observe what is at once too great, too slow, and too complex for our eyes” (de Rosnay 1979, xiv). Within de Rosnay’s sights were topics of energy and survival, information and society, and time and evolution—systems-level phenomena that elude casual perception and exhibit various interaction effects between their constituent elements.

In contrast to the microscope, which was turned on tiny bits of matter, de Rosnay claimed that the macroscope would be focused on ourselves: “This time our glance must be directed towards the systems that surround us in order to better understand them before they destroy us” (xiv). Whereas the microscope was a tool of enhancement and understanding, the macroscope would also be an important tool of action. “Let us use the macroscope,” he urged, “to direct a new look at nature, society, and man to try to identify new rules of education and action” (xiv).

Whether the macroscope(s) will follow the same course as the early microscopes—disuse, disenchantment, delay in adoption—remains to be seen. One suspects its use in fields such as ecology, economics, or evolutionary biology (the principal fields de Rosnay examined) has been enthusiastic and substantive. Whether those in the humanities have embraced the tool is another matter. A telling example may be Franco Moretti’s Graphs, Maps, Trees: Abstract Models for a Literary History (2005). As opposed to the close (microscopic?) readings of a single text that typify literary scholarship, Moretti employs a “distant reading” (macroscopic) method: “instead of concrete, individual works, a trio of artificial constructs—graphs, maps, trees—[is used] in which the reality of the text undergoes a process of deliberate reduction and abstraction…. fewer elements, hence a sharper sense of overall interconnection. Shapes, relations, structures. Forms. Models” (1). While Moretti’s work is regarded by many as revolutionary, even he describes it as a “specific form of knowledge,” and Emily Eakin calls Moretti’s approach a “heretical blend of quantitative history, geography, and evolutionary theory” in her review of the book (New York Times, January 10, 2004)—perhaps suggesting that it is not literary or humanistic at all.

This marginalization is a fate for the macroscope that I wish to avoid. In the same pragmatic spirit in which de Rosnay advanced this tool, I suggest that the macroscope affords a wide and unpredictable range of applications across disciplines, including opportunities for understanding the ways in which ideas and knowledge are produced and exchanged, especially in the formal and structured channels of academia. This approach is interdisciplinary, requiring a hefty dose of empirical methodology that is not beyond the scope of criticism. As Cathy Davidson points out, “Data transform theory; theory, stated or assumed, transforms data into interpretation. As any student of Foucault would insist, data collection is really data selection. Which archives should we preserve? Choices based on a complex ideational architecture of canonical, institutional, and personal preferences are constantly being made” (2008, 710). Though critical perspectives on empiricism should not be lost, it is equally misguided to allow them to paralyze knowledge construction, especially where new methods and their applications are concerned. In such cases, it is worth suspending critical concerns provisionally for the sake of further inquiry. An analogy with engineering is helpful here: whatever the epistemic status of a set of scientific facts, those facts may still help build bridges, launch rockets, or cure diseases. In the end, this pragmatic test may be sufficient to justify a place for those facts in what is considered knowledge; if not, at least we are left with rockets, vaccines, and other technologies. So too, the macroscope as a tool may yield interesting and useful results, and epistemological questions may be reserved for a later date, once some results are in.

This paper introduces a macroscopic view of philosophy by discussing Phylo (http://phylo.info), a developing, interactive search tool that will visualize connections and trends between individuals, institutions, and ideas in philosophy. To provide a context for this tool, I begin by surveying past and present attempts to represent philosophy, noting the technologies and media involved in each, as well as their merits and limitations. Following this historical review, I discuss the data and interface for Phylo in greater detail, paying particular attention to two unique aspects of Phylo: its focus on social connections and its use of information visualization. I then present the results of two surveys that separately address each of these aspects. The first survey focuses on social connections in the field and the importance philosophers ascribe to them in their research. The second survey is a pilot study in which philosophers were asked to interact with macroscopic representations of the field and report their experiences. The data from these studies support the overall approach taken with Phylo and suggest several ways in which macroscopic representations might be refined to aid comprehension and facilitate user interaction.

It should be noted that this paper acts as a précis to the macroscopy of philosophy as a whole. In a fuller treatment, it would be necessary to address the modal aspects of philosophy (e.g., philosophers, institutions, schools, ideas) separately by discussing past and possible representations of each in detail, as well as empirical research on various types of visualizations. In this paper, I discuss the field cross-modally, switching liberally between representations of people, places, and ideas in philosophy. My immediate goal here is to demonstrate the power of the macroscopic perspective, and for that purpose, a wide swath of material will suffice.

Representing philosophy

In a macroscopic sense, representations of philosophy present top-level, abstract views of the field, either generally or for some particular subfield of philosophy or topic area. In some cases, they are created to give introductory overviews to orient non-specialists to the history and content of ideas. In others, they are tools for facilitating research or advancing substantive views about the development, merit, and trajectory of particular schools, movements, ideas, and arguments. Representations of philosophy tend toward the textual; few visual examples exist. Still, it is worth surveying the variety of examples and their merits to inform the design of current, macroscopic representations of the field. Since macroscopy will, to some extent, rely on empirical methods and quantitative analysis, it will be helpful to place historical representations within the framework of figure 1, which is based on idealized poles of purely quantitative and purely qualitative representations. This figure also positions three general categories of textual representations (narratives, encyclopedias, bibliographies), as well as the few instances of visual representations, all of which are discussed in greater detail in this section.

Figure 1. An organizational framework for representations of philosophy.

In assessing these types and representations, it will be helpful to consider issues of neutrality and completeness. Creators of representations are forced to decide not only which material to include and which to exclude (the selection problem), they are also burdened with categorizing that material in particular structures (the organization problem). Both of these processes involve some amount of bias that, to varying degrees, compromises the ideals of neutrality and completeness to which these representations aspire. Of course, it may be equally naïve to suppose that any nonbiased representation of the field is possible. Still, a survey of these representations and their limitations can prove instructive for discovering ways to reduce bias and harness new technologies in the service of more neutral and more complete representations.

Narrative representations

By far, narrative accounts form the most common representations of philosophy and are the stock-in-trade of most historians of philosophy. In the past decade, significant attention has been given to contemporary analytic philosophy with such notable works as Scott Soames’s two-volume Philosophical Analysis in the Twentieth Century (2003), Avrum Stroll’s Twentieth-Century Analytic Philosophy (2001), and the American Philosophical Association’s Centennial Supplement to the Journal of Philosophical Research, Philosophy in America at the Turn of the Twentieth Century (2003). These and other narratives tend to portray the field in terms of key figures and major movements, leaving aside the minor figures that, in Randall Collins’s view, form the social structure crucial for disseminating ideas: “To speak of . . . a little company of genius would be to misread the sociological point entirely. It is the networks which write the plot of this story, and the structure of network competition over the attention space, which determines creativity, is focused so that the famous ideas become formulated through the mouths and pens of a few individuals” (1998, 78). Collins chooses to represent the field in terms of maps of social connections, which I discuss in a later section on visual representations.

In evaluating narrative accounts and Collins’s criticism, it is worth recalling Marshall McLuhan’s rejection of the notion that content or ideas are simply delivered through the use of various media. On the contrary, “it is the medium that shapes and controls the scale and form of human association and action” (1994, 9). If McLuhan is correct, understanding these delivery methods is at least as important as understanding their actual content, and representing the field strictly in terms of ideas excludes the vehicles by which those ideas are propagated or silenced—the vehicles which also shape and transform those ideas. This fact has long been appreciated by bibliometricians who study patterns of authorship and citation within scholarly book and journal communication. Presently, this research suggests that social connections play an important role in determining co-citation patterns in which two authors cite one another (White, Wellman, and Nazer 2004; Pepe 2011). One must approach these studies with caution, however, because most bibliometric research has been based on scientific literature and citation patterns have been found to differ across disciplines (Lancho-Barrantes, Guerrero-Bote, and Moya-Anegón 2010). Humanists, in particular, are known for crediting each other less frequently than scientists (Hellqvist 2010), but marked patterns of co-citation still exist in the humanities (Leydesdorff, Hammarfelt, and Salah 2011) and humanists have begun to credit each other more frequently over time (Cronin, Shaw, and La Barre 2003). While the influence of social connections on citation patterns in the humanities is not fully understood, omitting these connections and the minor figures that serve as bridges, liaisons, and transmitters weakens the power of most narrative representations. It is important to note that this weakness is perhaps more the fault of the narrative form itself than that of particular authors; a narrative structure seems ill suited for recording a high volume of connections between people (about whom little else may be known). With this criticism in mind, it is worth reviewing more quantitative textual approaches that may be capable of representing the high-volume data required for macroscopy.

Bibliographic representations

General bibliographies of philosophy date back as early as the fourteenth century, though Johann Jacob Frisius’s 1592 bibliography may be regarded as the pioneer of the form, which was followed for several centuries by larger compendia. As Michael Jasenas notes, these bibliographies all contained some amount of bias: “minor authors whose writings were related to the curriculum of a university often had more chance of being listed in the bibliographies of philosophy of the time than a philosopher whose views were so advanced that they remained for a while peripheral to the field” (1973, 41). The last of these general bibliographies came in 1905 with Benjamin Rand’s massive compilation, which contained over sixty thousand entries and has been succeeded only by more limited subject bibliographies. While Rand’s bibliography gave exhaustive coverage of the nineteenth century, it still reflected the compiler’s strong interest in psychology, which spans nearly a quarter of the entire work—more than logic, aesthetics, philosophy of religion, and ethics combined. Indeed, Gilbert Varet’s ambitions for an “indiscriminate” catalogue remain unfulfilled to this day.2

Beyond problems of inclusion and exclusion, these bibliographies also suffer from various limitations imposed by their printed form. One is the need to aid users in finding sources: Should entries be arranged alphabetically or chronologically? By author/philosopher or subject matter? Should entries be cross-listed (perhaps with a limit on the number of times one source appears)? These questions were most commonly answered with subject classifications such as metaphysics, aesthetics, and ethics. These classifications, however, exhibit biases similar to those involved with inclusion and exclusion of sources. As Jasenas notes: “philosophy has always been in a state of flux, with differing conceptions of scope and interest prevailing at different times. In classical antiquity philosophy encompasses almost all fields of knowledge; but later many of them gradually became separated from philosophy as independent disciplines. Consequently, the history of the subject is largely colored by the continuing tendency to re-define and emphasize special areas of interest formerly included in larger definitions” (12). A variety of historical classifications is presented in figure 2 below.

Figure 2 (Click to Enlarge). Comparative diagrams of selected bibliographies of philosophy from 1498 to 1905.

Though there is noticeable overlap between these classifications (e.g., inclusion of the trivium and quadrivium across several centuries), the differences are also telling, both in terms of the position of certain subjects within the author’s taxonomy and the presence or absence of certain subjects (e.g., medicine, agriculture, law) in one bibliography as compared to others. Jasenas attributes these differences to the influence of past bibliographers (e.g., Gesner on Frisius), the importance of the subject as perceived by the bibliographer (e.g. Spach’s inclusion of ethics), and the university curriculum familiar to each bibliographer. Though each attempted a cosmopolitan rendering of the field, every one fell short of the goal of a neutral and complete representation. These bibliographies should be appreciated, however, if only for their ability to catalogue the growing size of the field: from approximately one thousand authors (Gesner) to five thousand authors (Spach) to more than sixty thousand discrete works (Rand).

Encyclopedic Representations

Perhaps in response to the growing literature in the field, bibliographies gave way to encyclopedias, most notably Macmillan’s Encyclopedia of Philosophy (1967). These collectively authored volumes have attempted to treat individual topics in a general and comprehensive manner. Contrary positions are presented with equal regard—or at least given their day in court for the benefit of novices, scholars of other (sub)disciplines, and experts alike. Rather than advocating for specific views or positions, these entries purport to give accurate, thorough, and undistorted representations of philosophy. But is such an expectation really fulfilled, and, to ask the prior question, is such an expectation even warranted? Decisions about the content of encyclopedias and the exposition of their topics are the choices of lone scholars, who, incidentally, have their own views to defend on the very topics they attempt to present in an unbiased fashion. It is not unreasonable to think that these views compromise the neutrality advertised by encyclopedias of the field. Moreover, there have been few steps toward transcending the single-author model for encyclopedic entries. Though we should question the assumption that collectives always produce better results than individuals, it is reasonable to suspect that collaboration helps to diminish the distorting effects of individuals’ opinions. Nevertheless, individual authorship and the myth of lone expertise are mainstays, and problems of selection and organization affect encyclopedic representations no less (and perhaps much more) than bibliographic representations.

Visual representations

Though past visual representations of philosophy are scarce, three examples stand out. The first is the work of Milton Hunnex, who created chronological and thematic charts that “take advantage of the fact that philosophical theories tend to recur in the history of thought, and tend to cluster within the limitations of several possibilities of explanation” ([1961] 1983, 1). Hunnex’s book contains twenty-four diagrams, each spread over two full-sized pages, which show various fissions and fusions between ideas and schools of thought over time, with time depicted on the horizontal axis. Examples of diagrams include “Platonic Dualism,” “Hedonism,” “Kantianism (Dualism),” “Analytical Philosophy,” and “Existentialism and Phenomenology.” These diagrams attempt to show both antecedent and consequent influences, and the expected relationships (e.g., Socrates begat Plato, who begat Aristotle) are indeed reflected. Altogether lacking in this work is any discussion of criteria and methodology for drawing the charts, and Hunnex repeatedly affords his contemporaries (now largely forgotten) an equal place with canonical authors. Despite its methodological opacity and weakness, his work can safely be praised for its innovation and utter uniqueness in the field.

The second and third visual examples specifically address the role of people in philosophy. As we saw in discussing narrative representations, this stress on social connection may be especially important for explaining how, in a macroscopic sense, ideas flow across time and space in the field. Collins, who develops this idea more formally using sociological theory, also includes nearly forty diagrams depicting social influence in the field. “Major” and “secondary” or “minor” philosophers are shown in ties of acquaintanceship (including master–pupil ties), as well as conflict based on their work.3 Collins does not include all philosophers, noting in an appendix that “the proportion of publications by and citations to (and hence the influence of) intellectuals falls off rapidly as one leaves the central core [of scholars]. This kind of work would only add to the tails of the distribution” (Collins 1998, 890). As with the bibliometric studies discussed earlier, this observation is based on scientific literature dating back to the modern period, and the validity of these claims for the humanities probably deserves further scrutiny. Collins’s representations nevertheless advance a general understanding of social influence in the field, though other questions remain: On what specific bases do attributions of acquaintanceship and conflict rest? How much disagreement is sufficient to characterize one philosopher as “attacking” another (Collins’s term)? Might it be possible for philosophers to agree on some issues yet disagree on others, and how would such a relationship be rendered? These problems arise, in part, from the fact that space is limited and the diagrams are static; only so much can be shown at one time.

A more recent, interactive example is Josh Dever’s Philosophy Family Tree (2005). Visitors to the website (https://webspace.utexas.edu/deverj/personal/philtree/philtree.html) can click through a tree layout or “file browser” display to view a hierarchy of over eight thousand philosophers since the seventeenth century. While the Tree attracted early attention in the philosophical blogosphere, it records only a single relationship—an “academic parent–child” relationship—across “generations,” and there is neither a consistent methodology for attributing this relationship (sometimes it is one’s dissertation advisor, sometimes a major influence) nor documentation for these attributions. Also, in contrast to Collins’s diagrams, only acquaintanceship is tracked, not disagreements in philosophical views, making it harder to trace the flow of ideas across time. Like the two other visual examples discussed earlier in this section, the Tree should be praised for its sheer existence and ambitions, but greater attention must be paid to methodology before it can have serious research and teaching applications.

Phylo as macroscope

In the foregoing sections, I criticized textual and visual representations of philosophy for their various biases (esp. through their organizational structure) as well as their incompleteness, either in omitting social connections or failing to provide sufficient information about those connections. In this section, I describe the data and interface of Phylo, which attempt to address these weaknesses and provide a representation of the field that is both more neutral and more complete. I also address the limits of representation in general—that is, the distortion that is bound to occur with any representation. In discussing the organization, completeness, and representation-effects of Phylo, it is important to remember that the methodology, rather than the status of its current dataset or interfaces, best captures the macroscopic nature of the tool.

Phylo is built in Drupal, an open-source content management platform, which calls information from one or more SQL databases and offers thousands of add-on modules as well as the ability to design custom modules. Phylo contains several add-ons as well as a suite of custom modules that are planned for release to the public later this year, so that other disciplines may use the same framework for gathering and displaying data. Drupal was chosen for the project both for its flexibility and because its nature as an open-source tool aligns with the principles adopted in the mission statement for Phylo (http://phylo.info/mission).

The Phylo database currently contains information about

  • philosophers (e.g., name, date of birth, date of death);
  • faculty positions (e.g., appointments, job openings);
  • institutions (e.g., colleges, universities); and
  • formal scholarly communications (e.g., articles, books, dissertations).

Additional frameworks are being explored for informal scholarly communications (especially correspondence) and teaching materials, such as syllabi. From this data, a number of relations may be constructed, including advisor–advisee ties, dissertation committee service, and departmental colleagues; authorship and citation patterns are also planned. At present, all major datasets on North American philosophers since the 1880s (the date of the first dissertations) have been consulted, and nearly all recorded dissertations and faculty appointments are included, covering 17,334 philosophers. Data on dissertations was obtained by cross-referencing Dissertation Abstracts International, Thomas Bechtle’s Dissertations in Philosophy Accepted at American Universities, 1861-1975 (1978), and annual lists of doctoral degrees printed in the Review of Metaphysics, as well as archival research at seventeen institutions. Faculty appointment data was obtained by cross-referencing American Philosophical Association membership lists, the biannual Directory of American Philosophers, and departmental records and webpages across various institutions. Information on publications and citations, a significantly larger body of material, remains an area of ongoing data collection. Data is also collected from user submissions, extending the scalability of the tool and encouraging the development of a community of users who add, explore, and update information. Users are strongly encouraged to provide one or more sources for each item of information (e.g., degree date) they submit, which both increases the transparency of each data point and maintains longstanding practices of citations and scholarly warrants.
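
To make this relational structure concrete, the sketch below shows, in Python rather than Phylo’s actual Drupal/SQL implementation, how records of this kind might be stored and how ties such as advisor–advisee relationships and departmental colleagueship could be derived from them. The record types, field names, and derivation logic are illustrative assumptions, not the project’s schema.

```python
# Hypothetical sketch of dissertation and appointment records, and of two
# relations derivable from them; not Phylo's actual data model.
from collections import defaultdict
from dataclasses import dataclass
from itertools import combinations
from typing import List, Optional, Set, Tuple


@dataclass(frozen=True)
class Dissertation:
    author: str
    institution: str
    year: int
    advisor: Optional[str] = None
    source: str = ""  # provenance, e.g. an annual degree list or archival record


@dataclass(frozen=True)
class Appointment:
    philosopher: str
    institution: str
    start: int
    end: int


def advisor_ties(dissertations: List[Dissertation]) -> List[Tuple[str, str]]:
    """Derive (advisor, advisee) pairs from dissertation records."""
    return [(d.advisor, d.author) for d in dissertations if d.advisor]


def colleagues(appointments: List[Appointment]) -> Set[Tuple[str, str]]:
    """Derive pairs of philosophers whose appointments overlap at one institution."""
    by_institution = defaultdict(list)
    for a in appointments:
        by_institution[a.institution].append(a)
    pairs = set()
    for faculty in by_institution.values():
        for a, b in combinations(faculty, 2):
            if a.start <= b.end and b.start <= a.end:  # overlapping years
                pairs.add(tuple(sorted((a.philosopher, b.philosopher))))
    return pairs
```

Carrying a source field on each record mirrors the practice, described above, of asking users to document every item of information they submit.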

Interactive displays have been planned to present this information in various network, chronological, and geographical visualizations. Two sample visualizations are presented in figures 3 and 4, which show a network diagram of individual ties and a timeline of departmental history (respectively).

Figure 3 (Click to Enlarge). Sample network graph of individual ties.

Figure 4 (Click to Enlarge). Sample timeline of a philosophy department.

Each of these visualizations is meant to show, at a glance, the various people and areas associated with one person or a department—providing larger context for that person or department. The timeline display has already been implemented and tested during development, and work on several additional displays with more analytic power is underway in spring 2012.

The digital form of Phylo and its associated computing capabilities afford a unique opportunity for studying the discipline in an empirical way, one that improves upon traditional representations that are at best subjective and at worst merely anecdotal. The usefulness of this method has already been previewed by two recent articles: Jason Byron (2007) has used bibliometric data to dispel the received view that philosophy of science neglected biology in the 1930s, 1940s, and 1950s. This confirms Moretti’s conclusion that “we see them [quantitative data] falsifying existing theoretical explanations” (30). In a very different use of quantitative methods, Shaun Nichols (2007) has proposed a hypothesis about conjoint belief in separate elements of theory-pairs (e.g., determinists will be more likely than indeterminists to be compatibilists) and tested this by “observing” the views of past philosophers and analyzing pairings of those views for statistical correlations. The novelty of Byron and Nichols’s results, which center on relatively small datasets, suggests even greater potential for Phylo, which is several orders of magnitude larger in size.

The digital nature of Phylo also ensures that hard-and-fast decisions about selection and organization need not be made. So-called “fringe” figures and ideas can be included, along with their connections to other data; if indeed these entities are marginal, then their relative unimportance should be reflected in visualizations of the data. The data framework can also extend to capture new sources of information. While institutional relationships and formal scholarly communications dominated the lives of twentieth-century philosophers, less formal relationships and correspondence were important during earlier periods. The Drupal installation can accommodate such differences by building broader content types with more flexible fields. For example, the content type for documents need not require listing a publisher or exact date; such a framework for documents might be neutral between, say, formal publishing and letters or written notes. In addition, through multiple taxonomies (and crosswalks relating terms in each), documents can be categorized under multiple systems representing subjects and ideas, and users can switch back and forth between different systems of arrangement, eliminating the need for any single, “canonical” organization. These powers of scale and flexibility help to reduce the biases imposed by single authors, editors, or bibliographers that we examined in discussing existing representations. Indeed, comparing different arrangements can itself be a valuable exercise in understanding, and even correcting, the biases of each.
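
As an illustration of this flexibility, the following hypothetical record leaves publisher and date unspecified and carries subject terms under two different taxonomies, with a small crosswalk relating terms between them. The field names, taxonomy labels, and terms are invented for the example and are not Phylo’s content types.

```python
# Hypothetical flexible document record and taxonomy crosswalk; the fields,
# taxonomies, and terms are illustrative, not Phylo's own content types.
from typing import Dict, Optional, Tuple

document = {
    "title": "Letter on universals",
    "author": "A. Philosopher",
    "kind": "correspondence",  # could equally be "article", "book", or "note"
    "publisher": None,         # optional: letters and notes have no publisher
    "date": None,              # optional: an exact date may be unknown
    "subjects": {
        "lcsh": ["Universals (Philosophy)"],   # one system of arrangement
        "local": ["metaphysics/properties"],   # the same item under another system
    },
}

# A crosswalk maps a (taxonomy, term) pair to its counterpart elsewhere, letting
# users switch between arrangements without a single "canonical" organization.
crosswalk: Dict[Tuple[str, str], Tuple[str, str]] = {
    ("lcsh", "Universals (Philosophy)"): ("local", "metaphysics/properties"),
}


def translate(subject: Tuple[str, str]) -> Optional[Tuple[str, str]]:
    """Return the corresponding term in another taxonomy, if one is recorded."""
    return crosswalk.get(subject)


print(translate(("lcsh", "Universals (Philosophy)")))  # ('local', 'metaphysics/properties')
```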

One issue that remains, however, is the distorting effect of all representations. For any phenomenon with a complexity of n, a representation of it will only be as complex as n − m, where m is based on the expressive power of that representation. A 2-D map, for example, will inevitably distort actual geography because of dimension reduction, while a 3-D map may lessen the distortion but is unlikely to completely eliminate it. Similar concerns apply to non-geographic representations, such as those made possible by Phylo. Two points may be offered in response. First, given the empirical basis of Phylo as a project, statistical and event analyses may be performed on the data to yield more accurate, non-representational results, such as the factors that predict the emergence of a major scholar or article. These analyses may, in turn, inform design choices to yield visualizations that reflect statistical realities. Second, as a digital tool, Phylo can represent datasets in a variety of different visualizations; no one type or design need be favored above all others. These various visualizations may then be evaluated, particularly through user studies, to reveal which are most useful, revealing, etc. Though Phylo’s visualizations may exhibit some limitations, its digital foundation allows empiricism and pluralism to counteract bias, incompleteness, and distortion more thoroughly than traditional representations do.

Once suitable data have been gathered and visualizations created, this framework will help to reveal which kinds of social influence have resulted in interesting and original philosophical ideas and which have led us down more frustrating or less fruitful paths. These observations about the beneficial or detrimental effects of various kinds of social influence would then help to inform practical decisions about admission, employment, tenure and promotion, even research and teaching in the field—all in keeping with de Rosnay’s notion of the macroscope as, above all, a tool for action.

Given the promise of this approach, it is worth asking how empirical evidence can contribute to the design and construction of Phylo, both in terms of data on philosophers and interfaces that would be useful to the philosophical community. The following two sections address the issues of data and display by discussing empirical studies of social connections in philosophy, as well as the role of information visualization in displaying data about the field. Each section presents a pilot study conducted with philosophers about the value they assign to social influences or their experience using visual representations of the field.

Social influence in philosophy

Social connections are often discounted in a field that prides itself on rationality. As I have argued elsewhere, it is a mistake to regard the progress of philosophical ideas as the march of reason and rationality through time; social influence in the form of teachers, colleagues, and critics plays a central role in practices of reward and reputation, and an empirical study of these influences is sorely needed (Morrow and Sula 2011).

As a preliminary attempt to study this influence, seventy-five philosophers were asked about the information sources that contribute to their understanding of other analytic philosophers. One participant was chosen at random from each of the top fifty Leiter-ranked institutions,4 and one participant was chosen at random from faculty working in the history of analytic philosophy at the top twenty-five Leiter-ranked departments for the study of the history of analytic philosophy. Participants were asked to rank the importance (on a scale of one to six) of various information sources in their understanding of a particular philosopher from the past 125 years. Sources included the individual’s doctoral institution, dissertation advisor, graduate school peers, students he/she advised, and publications. Participants were also asked how much those same things have influenced their own research and/or professional career. The order of the list was changed from the first question to the second, and the wording was shifted from third person (e.g., “dissertation advisor,” “graduate school peers”) to second person (e.g., “who your dissertation advisor was,” “who your peers were in graduate school”). The results of this survey are presented in table 1.

In general, participants rated the influence of most items on their own research/careers as higher than the importance of the same items in understanding the work of others. Even after adjusting for this trend (an average difference of 0.67), participants still ranked personal factors (italicized) with an average importance of 2.12 in others’ work compared to 2.86 (3.53 raw) in their own. The average importance of publications was ranked 5.82 in others’ work and 4.85 (4.18 raw) in their own. The average difference between publications and personal factors was thus 3.70 for others’ work and 1.99 (0.65 raw) for their own. This is a striking asymmetry in perceptions of personal influences: respondents weighted publications nearly twice as important as people in understanding others’ work, yet the difference between the two shrank by half when respondents were asked about their own influences.

In short, the survey respondents acknowledged that personal ties have an important influence on their own research/careers, yet assign little importance to these ties in understanding other philosophers. The magnitude and mechanisms of personal influences, as well as the cause or causes of this asymmetry in perceived value, are areas for further inquiry. The self-assessment data from the study nevertheless indicates that philosophers recognize the role of social connections in their own work. On the basis of these self-reports, they ought, rationally, to assign the same weight to those connections in approaching the work of others. In incorporating data on social connections as well as publications, Phylo serves an important purpose in facilitating more comprehensive research, and it will be interesting to repeat this survey (or a similar one) with Phylo users in the future to determine if the tool has changed perceptions within the field.

The role of information visualization

While the scale of Phylo offers clear advantages, it also brings with it certain challenges of representation. Chief among them is the need to make high-volume, macroscopic data cognitively salient for users. Much of the information contained in Phylo cannot be easily comprehended in textual form; thus, information visualization, “the use of computer-supported, interactive visual representations of data to amplify cognition,” plays a crucial role (Card, Mackinlay, and Shneiderman 1998). To test the usefulness of different visualization types (and even the usefulness of information visualization itself among philosophers), a pilot study was conducted using data from Phylo as well as publication records contained in PhilPapers, a free online catalog of books, articles, and other publications in philosophy. These experimental visualizations are not included in Phylo at present, and were constructed simply for this study using external software. The methodology of this study as well as the visualizations employed in it are presented below, along with the survey results. I conclude by offering several recommendations for the types of visualizations that could become part of Phylo’s online presence in the future.

Methodology

Four visualizations were prepared for this study based on three different visualization techniques: population graphs, Lexis diagrams, and network diagrams. The methodology and data sources of each chart are described below with the accompanying visualization. Participants were asked to rate (on a five-point scale) how informative each visualization was and how much it reflected their understanding of the topic. They were also asked to provide qualitative feedback on whether they learned anything new from the visualization and how, if at all, their understanding of the topic changed as a result of the visualization. The survey was sent to fifty-five faculty, graduate students, and undergraduate majors in philosophy within the City University of New York. Random selection was not employed in this pilot study, though efforts were made to recruit individuals from a variety of areas and methodologies. Twenty-eight respondents completed at least some part of the survey; nineteen completed all parts. Seventeen provided demographic information on their gender, age, primary status in the field, and primary affiliation. These results are presented in table 2. There was no significant correlation between any demographic categories and participants’ responses. Given the small number of respondents, this is unsurprising, and a larger study would do well to test for such correlations.

Visualization 1: Population graph

Population graphs show the overall structure of a population based on age group (i.e., birth year cohort); male and female figures are often shown separately, since sex has important implications for other demographic trends involving birth rate, education, health, and so on. Population graphs often highlight the growth and age structure of the population, as well as changes in female and male birth rate and mortality.

For the purpose of this study, the graph was adapted in several ways. Although individuals enter populations through birth or immigration, entry into academic disciplines is usually characterized by completing a doctoral degree in the field. Thus, the year in which a person’s doctoral degree was awarded was used as a proxy for his/her birth year in the field. In this study, there was no correlate for exit because many actual years of death were unknown and, even if they were known, individuals likely exit the field through retirement or unemployment in the years or decades before they die. As a result, figure 5 is not a true population graph because it does not show a single existing population at one point in time, but rather a succession of populations across the period indicated. Nevertheless, the chart captures overall trends of growth and decline and gives a sense of the age and gender of the current population.

Figure 5 (Click to Enlarge). Population chart showing North American PhDs in philosophy between 1905 and 2005.

Data on doctoral degrees was compiled from dissertations housed in libraries at seventeen North American institutions, as well as records contained in Dissertation Abstracts International, Thomas Bechtle’s Dissertations in Philosophy Accepted at American Universities, 1861-1975 (1978), and annual lists of doctoral degrees printed in the Review of Metaphysics. Sex was imputed to degree recipients based on first and middle names, as well as additional research about specific individuals. Of the original 14,926 degree recipients, the sex of 911 (6.1 percent) could not be determined, and these were excluded from the visualization.
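
For readers who want to see how such a chart comes together, the sketch below assembles a population-style graph from a handful of invented degree records, with the year of the doctoral degree standing in for “birth” into the field as described above. The sample data and column names are illustrative only; the actual figure was produced from the dataset described in the preceding paragraph.

```python
# Minimal sketch of a population-style chart built from (degree year, sex)
# records; the records here are invented for illustration.
import matplotlib.pyplot as plt
import pandas as pd

records = pd.DataFrame({
    "degree_year": [1952, 1967, 1971, 1974, 1989, 1995, 1996, 2003, 2004, 2005],
    "sex": ["M", "M", "F", "M", "F", "M", "F", "F", "M", "M"],
})

# Bin degree years into five-year cohorts and count each sex per cohort.
records["cohort"] = (records["degree_year"] // 5) * 5
counts = records.groupby(["cohort", "sex"]).size().unstack(fill_value=0)

# Population-chart convention: one sex is plotted as negative values so the two
# distributions extend in opposite directions from a shared vertical axis.
fig, ax = plt.subplots()
ax.barh(counts.index, -counts["M"], height=4, label="Male")
ax.barh(counts.index, counts["F"], height=4, label="Female")
ax.set_xlabel("PhDs per five-year cohort (male left, female right)")
ax.set_ylabel("Cohort (year of doctoral degree)")
ax.legend()
plt.show()
```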

Visualizations 2 and 3: Lexis diagrams

Lexis diagrams (Lexis 1875) show the exposure of cohorts to a given condition (e.g., giving birth, contracting a disease) over a continuous period of time. They are useful for looking beyond aggregate population trends to show the contribution of different segments of the population to those trends.

The visualizations below were created with Kirill Andreev’s Lexis Map Viewer, which plots age on one axis and time on another and uses hue to represent density of exposure to the condition of interest: in this case, publishing a journal article. As in the previous visualization, adaptations were made to accommodate this special population. In the standard case, cohorts are constructed out of birth years and represent individuals who move through time and historical events together. Academics, however, receive their PhDs at various ages, and a philosopher with n years of experience is more comparable to another philosopher with n years of experience, regardless of the age of each, than to a philosopher of the same age with n + m years of experience. As a result, a better choice than birth year for constructing academic cohorts is the year in which a person’s doctoral degree was awarded. Since year of “death” (i.e., exit from the field) was again unknown, an artificial cutoff of fifty years from receipt of degree was used on the assumption that most individuals receive their degrees, at the earliest, in their late twenties and would likely be retired or dead fifty years from that date.

Figure 6 (Click to Enlarge). Lexis diagram showing individually-authored journal articles by North American philosophy PhDs, between 1876 and 2010.

Figure 7 (Click to Enlarge). Lexis diagram showing the ratio of female to male individually authored journal articles by North American philosophy PhDs between 1876 and 2010.

Data for figure 6 was prepared by matching articles listed in PhilPapers with degree and sex information from the first visualization. The number of articles displayed is absolute.5 The ratio of female to male publication in figure 7 was computed by direct comparisons of year × “age” sets (e.g., articles published in 1926 by males who received their degrees eighteen years ago with articles published in 1926 by females who received their degrees eighteen years ago). Of the original 228,078 articles obtained, 56,665 (24.8 percent) were matched with degree data to produce these visualizations. This percentage yield is explained by several factors. First, PhilPapers contains several thousand entries written by psychologists, classicists, and scholars from other disciplines. In addition, 29,799 (13.1 percent) articles were written after 2005. Since the degree dataset ends at 2005, recent publications by junior scholars could not be matched. Finally, North American philosophers account for only a fraction of the overall output of the field. It is thus reasonable to suspect that the data reflected in these visualizations are far from complete, but there is no reason to suspect any systematic bias that would distort their general accuracy in reflecting overall trends.
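
The sketch below illustrates, under the same adaptations, how the underlying surface for such diagrams might be tabulated from matched records: each row is one article with its publication year, its author’s degree year, and the author’s sex. The column names and sample rows are hypothetical; the actual figures were produced with Andreev’s Lexis Map Viewer from the data described above.

```python
# Sketch of tabulating a Lexis-style surface from matched article records;
# the sample rows and field names are invented for illustration.
import pandas as pd

articles = pd.DataFrame({
    "pub_year":    [1926, 1926, 1950, 1950, 1980, 1980, 1980],
    "degree_year": [1908, 1908, 1946, 1947, 1962, 1962, 1975],
    "sex":         ["M", "F", "M", "M", "F", "M", "F"],
})

# "Age" in the field = years since the doctoral degree, capped at fifty years
# on the assumption that most authors have retired or died by then.
articles["field_age"] = articles["pub_year"] - articles["degree_year"]
articles = articles[articles["field_age"].between(0, 50)]

# Article counts per (publication year, field age) cell, which is the surface
# a Lexis map colors by hue.
surface = articles.groupby(["pub_year", "field_age"]).size().unstack(fill_value=0)

# Female-to-male ratio computed cell by cell, as in the second diagram.
by_sex = articles.groupby(["pub_year", "field_age", "sex"]).size().unstack(fill_value=0)
ratio = by_sex["F"] / by_sex["M"].replace(0, float("nan"))

print(surface)
print(ratio.dropna())
```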

Visualization 4: Network diagram

Network diagrams display connections and relationships between parts of a system. Each network is made up of a set of discrete elements (i.e., vertices, nodes, actors) and a set of connections (i.e., edges, links, relational ties) between them (Barabási 2002; Buchanan 2002; Newman, Barabási, and Watts 2006; Watts 2003). Network analysis can yield information about individual elements, including their prominence and the roles they play as isolates, liaisons, bridges, etc.; pairs of elements, including distance and reachability; and group-level properties, including centralization, density, prestige, and recurring structural patterns (equivalence classes and blockmodels). Current work by network analysts includes the study of multiple relations, dynamic networks, and longitudinal network data.

Data for this visualization was based on Barbara C. Berman’s Library of Congress Subject Headings in Philosophy: A Thesaurus (2001), which provides a hierarchical taxonomy of subject headings. Each subject was treated as a separate node, and the parent–child relationship was recorded as an edge. In some cases, homonyms were disambiguated to aid in clarity.

Figure 8 (Click for Interactive Visualization). A network diagram of subject headings in philosophy published by the Library of Congress.

This visualization was prepared using the network display in IBM’s Many Eyes online data visualization software. The software is designed to keep strongly related items close to each other and weakly related ones farther apart. The network display is interactive; it can be zoomed or panned to obtain a detailed view of different parts of the diagram. Clicking on a node highlights its edges and immediately connected nodes.
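
As a sketch of how such a display is assembled, the example below builds a small graph from parent–child subject pairs and draws it with a force-directed layout, which, like the display described above, pulls strongly connected nodes toward one another. The handful of headings listed is a tiny illustrative sample rather than the thesaurus itself, and the networkx library is an assumption of this sketch; the actual figure was produced in Many Eyes.

```python
# Sketch of building a subject-heading network from parent-child pairs and
# drawing it with a force-directed (spring) layout; the pairs listed here are
# a tiny invented sample, not the Library of Congress thesaurus.
import matplotlib.pyplot as plt
import networkx as nx

parent_child = [
    ("Philosophy", "Ethics"),
    ("Philosophy", "Metaphysics"),
    ("Ethics", "Applied ethics"),
    ("Applied ethics", "Police ethics"),
    ("Metaphysics", "Universals (Philosophy)"),
]

G = nx.Graph()
G.add_edges_from(parent_child)  # each heading is a node; the hierarchy becomes edges

pos = nx.spring_layout(G, seed=42)  # force-directed placement of nodes
nx.draw_networkx(G, pos, node_size=300, font_size=8)
plt.axis("off")
plt.show()
```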

Results

Participants’ responses about the informativeness of each visualization and how much each visualization reflected their understanding of the topic are shown in figure 9.

Figure 9 (Click to Enlarge). Results of the information visualization pilot study.

Three-quarters of respondents found the population graph informative or very informative. In their qualitative responses, a number of participants noted the sharp rise in degrees in the 1970s and mid-1990s as well as several trends involving women, including the still-stark gender inequality in the field. Several expressed surprise at the number of women in the field in the early twentieth century while noting that it was still quite small. A few respondents thought that placing the male and female columns side-by-side (as in a traditional column chart) rather than in the population-chart style would have made the visualization clearer.

Data on the Lexis diagrams were inconclusive. Reactions to the first Lexis diagram (see figure 6) included almost equal numbers of favorable and unfavorable responses. Reactions to the second Lexis diagram (see figure 7) exhibited a bimodal distribution, with many respondents reporting that the visualization was very uninformative or neither informative nor uninformative. One possible explanation of the different reactions to these two Lexis diagrams is that the third visualization relies on more complex ratio data, while the second visualization represents whole numbers of articles. In addition, it is possible that the sequence of the questions influenced responses to the third visualization, with those who had unfavorable reactions to the second visualization exhibiting even more unfavorable reactions to the third (another Lexis diagram).

While the quantitative responses to the Lexis diagrams might recommend discontinuing their use, the qualitative responses tell another story. A number of participants said it took some time to interpret the diagrams, but once they did, the diagrams reflected their understanding of the topic. Those who expressed favorable responses said the diagrams reflected their understanding and made the information more “immediate,” “striking,” and “interesting.” Many respondents, including those who had unfavorable reactions in their quantitative evaluations, began to summarize interesting points of the diagram, offer explanations about trends in the data, or pose further questions about what was happening in the field. These responses suggest that although the visualizations may have been difficult to understand quickly, participants indeed engaged with them, enough to generate critical questions and topics for further inquiry. This reflects Plaisant’s observation that visualization can often be used to “answer questions you didn’t know you had” (2004, 111).

Responses to the network diagram exhibited a bimodal distribution similar to the third visualization, with most participants finding the diagram either very uninformative or neither informative nor uninformative. A number of respondents noted problems in accessing the diagram, either because of browser incompatibility or errors in JavaScript that prevented them from viewing it. Many thought there was too much information presented initially and some “filtering mechanism” was needed. Several expressed confusion over the network diagram itself, asking about the significance of the nearness relation or questioning the closeness of particular nodes (e.g., police ethics and surrealism). Despite these difficulties, a few respondents reported that the network diagram was interesting and “pleasant to use,” and one even noted that it “gave me a jolt to realize how much of philosophy is non so-called ‘western’.”

Recommendations

In this study, the four visualizations were presented purposely without explanation or interpretation to test subjects’ unvarnished reactions. In addition, the visualization types were mainly selected from demographic representations, since the information presented concerned populations. Respondents trained primarily in philosophy were probably unfamiliar with these displays and may have needed additional time or training to interpret them. Many seem to have interpreted them correctly, however, and their qualitative responses reflected critical engagement with the visualizations. In future presentations of these visualizations, a short introduction or tutorial explaining the format of the visualization and perhaps drawing attention to one or two points of interest would likely meet with more favorable reactions from participants.

In addition, the network diagram in particular presents challenging issues. The strength of network diagrams is their ability to display high-density information; at the same time, that sheer volume of data may prove overwhelming and stymie further exploration. As with interactive maps, network visualizations may benefit from pre-set levels of zoom centered on users’ interests. Should they wish, users can then explore other areas or more abstract levels of the diagram without needing to navigate these more complicated layers first. Additional recommendations for simplifying network displays can be found in Herman, Melançon, and Marshall (2000).
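
One way to realize the pre-set zoom idea is to render only the neighborhood within a small radius of a node the user cares about and let the view expand outward on demand. The sketch below uses networkx’s ego_graph for this purpose; the graph and the starting node are hypothetical examples rather than a specification of Phylo’s interface.

```python
# Sketch of neighborhood filtering for a large network display: show only the
# nodes within a given radius of a chosen center. The graph is an invented example.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Philosophy", "Ethics"), ("Ethics", "Applied ethics"),
    ("Applied ethics", "Police ethics"), ("Philosophy", "Aesthetics"),
    ("Aesthetics", "Surrealism"),
])

# ego_graph keeps only nodes within `radius` hops of the center, giving users
# a manageable starting view that can be expanded on demand.
focus = nx.ego_graph(G, "Ethics", radius=1)
print(sorted(focus.nodes()))  # ['Applied ethics', 'Ethics', 'Philosophy']
```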

Conclusion

As de Rosnay suggested, the macroscope is still in its infancy. Like the microscope, it too will require refinement as an instrument and sufficient practice to yield intelligent use. Still, it is worth reflecting on the timely range of issues in the field that even this pilot study raised: the persistence of gender inequality in North American philosophy, the potential overrepresentation of Western philosophy in perceptions of the field, and underlying structural trends in the population that impact scholarly output. Indeed, the macroscope invites critical inquiries that have been largely excluded from traditional representations of the field—no bibliography gives a subject heading for the gender gap in philosophy, no narrative shows so starkly the full scope of topics contained in the printed record—and affords us valuable opportunities for not only understanding the practices of the field but also for changing them.

For the macroscope to play this role in philosophy and other disciplines, it requires not only use, but intelligent use. It must be focused in the right ways, turned in the proper directions, and supported by a community of inquirers who understand its methods and their applications. The design, data, and visualizations of Phylo all aspire to these standards. Beginning with a researched background of social connections in philosophy, the project incorporates social network analysis, bibliometrics, demography, and other analytical methods to shed new light on ideas and patterns in the field. The design of the interfaces takes advantage of research and user studies in information visualization to provide visitors with a quick, intuitive grasp of high-volume, longitudinal data. Above all, the digital environment of the project provides visitors with opportunities to interact, explore, contribute, and discuss—encouraging the creation of a community around the data, a community that can then shape its own future based on a macroscopic view of itself and its past.

Bibliography

American Philosophical Association. 2003. Philosophy in America at the Turn of the Century. Charlottesville, Virginia: Philosophy Documentation Center. ISBN 9781889680330.

Andreev, Kirill. Lexis Map Viewer, Version 1.0. [Computer program.] http://www.demogr.mpg.de/books/odense/9/cd/default.htm.

Barabási, Albert-László. 2002. Linked: The New Science of Networks. Cambridge, Massachusetts: Perseus. ISBN 9780738206677.

Bechtle, Thomas C. 1978. Dissertations in Philosophy Accepted at American Universities, 1861-1975. New York: Garland Publishing. ISBN 9780824098353.

Berman, Barbara L., ed. 2001. Library of Congress Subject Headings in Philosophy: A Thesaurus. Charlottesville, Va.: Philosophy Documentation Center. ISBN 9780912632643.

Boyle, Robert. [1663] 1744. Works. Edited by Thomas Birch. 5 vols. London: A. Millar. OCLC 40814148.

Bracegirdle, Brian. 1978. A History of Microtechnique. Ithaca, New York: Cornell University Press. ISBN 9780801411175.

Buchanan, Mark. 2002. Nexus: Small Worlds and the Groundbreaking Science of Networks. New York: Norton. ISBN 9780393041538.

Byron, Jason M. 2007. “Whence Philosophy of Biology?” British Journal for the Philosophy of Science 58:409–22. ISSN 0007-0882.

Card, Stuart, Jock Mackinlay, and Ben Shneiderman, eds. 1998. Readings in Information Visualization: Using Vision to Think. San Francisco: Morgan Kaufmann. ISBN 9781558605336.

Chalmers, David, and David Bourget, eds. “PhilPapers: Online Research in Philosophy.” Last modified 1 December 2011. http://philpapers.org/.

Collins, Randall. 1998. The Sociology of Philosophies: A Global Theory of Intellectual Change. Cambridge, Massachusetts: Harvard University Press. ISBN 9780674816473.

Cronin, Blaise, Debora Shaw, and Kathryn La Barre. 2003. “A Cast of Thousands: Coauthorship and Subauthorship Collaboration in the 20th Century as Manifested in the Scholarly Journal Literature of Psychology and Philosophy.” Journal of the American Society for Information Science and Technology 54(9):855–71. ISSN 1532-2890. doi:10.1002/asi.10278.

Davidson, Cathy K. 2008. “Humanities 2.0: Promise, Perils, Predictions.” PMLA 123(3):707–17. ISSN 0030-8129.

de Rosnay, Joël. 1979. The Macroscope. New York: Harper & Row. ISBN 9780060110291. http://pespmc1.vub.ac.be/macrbook.html.

Dever, Joshua. The Philosophy Family Tree. University of Texas at Austin, https://webspace.utexas.edu/deverj/personal/philtree/philtree.html (accessed 31 May 2011).

Eakin, Emily. 2004. “Studying Literature By the Numbers.” New York Times. January 10. http://www.nytimes.com/2004/01/10/books/studying-literature-by-the-numbers.html  (accessed 31 May 2011).

Edwards, Paul, ed. 1967. The Encyclopedia of Philosophy. New York: Macmillan. ISBN 9780028949604.

Frisius, Johann Jacob. 1592. Bibliotheca philosophorum classicorum authorum chronologica. In qua veterum philosophorum origo, successio, aetas, & doctrina compendiosa, ab origine mundi, usq; ad nostram aetatem, proponitur. Tiguri: Apud Ioannem Wolphium, typis Frosch. OCLC 311644457.

Hellqvist, Björn. 2010. “Referencing in the Humanities and its Implications for Citation Analysis.” Journal of the American Society for Information Science and Technology 61(2):310–18. ISSN 1532-2890. doi:10.1002/asi.21256.

Herman, Ivan, Guy Melançon, and M. Scott Marshall. 2000. “Graph Visualization and Navigation in Information Visualization: A Survey.” IEEE Transactions on Visualization and Computer Graphics 6(1):24–43. doi:10.1109/2945.841119.

Hooke, Robert. 1665. Micrographia; or, Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses. London: J. Martyn and J. Allestry. OCLC 5390502.

Hunnex, Milton D. [1961] 1983. Chronological and Thematic Charts of Philosophies and Philosophers. Grand Rapids, Michigan: Academie Books. ISBN 9780310462811.

Jasenas, Michael. 1973. A History of the Bibliography of Philosophy. New York: Georg Olms Verlag Hildesheim. ISBN 9783487046662.

Lancho-Barrantes, Bárbara S., Vicente P. Guerrero-Bote, and Félix Moya-Anegón. 2010. “What Lies Behind the Averages and Significance of Citation Indicators in Different Disciplines?” Journal of Information Science 36:371–82. ISSN 0002-8231.

Leiter, Brian, ed. “The Philosophical Gourmet Report.” Last modified 2009. http://www.philosophicalgourmet.com.

Lexis, Wilhelm. 1875. Einleitung in die Theorie der Bevölkerungsstatistik. Straßburg: Karl Trübner. OCLC 27127671.

Leydesdorff, Loet, Björn Hammarfelt, and Almila Salah. 2011. “The Structure of the Arts & Humanities Citation Index: A Mapping on the Basis of Aggregated Citations Among 1,157 Journals.” Journal of the American Society for Information Science and Technology 62(12):2414–26. ISSN 1532-2890. doi:10.1002/asi.21636.

McLuhan, Marshall. [1964] 1994. Understanding Media: The Extensions of Man. Cambridge, Massachusetts: MIT Press. ISBN 9780262631594.

Moretti, Franco. 2005. Graphs, Maps, Trees: Abstract Models for a Literary History. Brooklyn, New York: Verso. ISBN 9781844670260.

Morrow, David R., and Chris Alen Sula. Phylo. http://phylo.info (accessed 31 May 2011).

———. 2011. “Naturalized Metaphilosophy.” Synthese 182(2):297–313. ISSN 1573-0964. doi:10.1007/s11229-009-9662-1.

Newman, Mark, Albert-László Barabási, and Duncan J. Watts. 2006. The Structure and Dynamics of Networks. Princeton, New Jersey: Princeton University Press. ISBN 9780691113579.

Nichols, Shaun. 2007. “The Rise of Compatibilism: A Case Study in the Quantitative History of Philosophy.” Midwest Studies in Philosophy 31(1):260–70. ISSN 1475-4975. doi:10.1111/j.1475-4975.2007.00152.x.

Pepe, Alberto. 2011. “The Relationship Between Acquaintanceship and Coauthorship in Scientific Collaboration Networks.” Journal of the American Society for Information Science and Technology 62(11):2121–32. ISSN 1532-2890. doi:10.1002/asi.21629.

Plaisant, Catherine. 2004. “The Challenge of Information Visualization Evaluation.” In Proceedings of the Working Conference on Advanced Visual Interfaces (AVI ’04), 109–16. New York: ACM. ISBN 1-58113-867-9. doi:10.1145/989863.989880.

Rand, Benjamin. [1905] 1965. Bibliography of Philosophy, Psychology, and Cognate Subjects. In Dictionary of Philosophy and Psychology, Vol. III. Edited by James Mark Baldwin. New York: Macmillan.

Singer, Charles. 1970. A Short History of Scientific Ideas to 1900. London: Oxford University Press. ISBN 9780198810490.

Soames, Scott. 2003. Philosophical Analysis in the Twentieth Century. Princeton, New Jersey: Princeton University Press. ISBN 9780691115733.

Stroll, Avrum. 2001. Twentieth-Century Analytic Philosophy. New York: Columbia University Press. ISBN 9780231500401.

Watts, Duncan J. 2003. Six Degrees: The Science of a Connected Age. New York: Norton. ISBN 9780393041422.

White, Howard, Barry Wellman, and Nancy Nazer. 2004. “Does Citation Reflect Social Structure? Longitudinal Evidence From the ‘Globenet’ Interdisciplinary Research Group.” Journal of the American Society for Information Science and Technology 55(2):111–26. ISSN 1532-2890. doi:10.1002/asi.10369.

Wilson, Catherine. 1995. The Invisible World: Early Modern Philosophy and the Invention of the Microscope. Princeton, New Jersey: Princeton University Press. ISBN 9780691034188.

Zalta, Edward N., ed. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/ (accessed 31 May 2011).

About the Author

Chris Alen Sula is an Assistant Professor at the School of Information & Library Science at Pratt Institute. He teaches courses in digital humanities, information visualization, knowledge organization, and theory of information. He co-founded Phylo in 2006 with David R. Morrow.

Notes

  1. Acknowledgements: I am greatly indebted to David Morrow for extensive feedback and conversation on the ideas presented here, as well as his work in co-founding and developing Phylo with me over the past six years. I am also grateful to Steve Brier, members of the New Media Lab, the article reviewers, and the journal editors for prompting further clarifications and urging me to reflect more critically on the representational and technical aspects of Phylo.
  2. As Jasenas reports, Varet himself wanted to “determine the constant issues which run through the history of philosophy” to determine the “relevance” of a work—hardly an unbiased decision (1973, 108).
  3. As of January 31, 2012, several of these diagrams are publicly accessible via Google Books. For an example, see http://books.google.com/books?id=2HS1DOZ35EgC&lpg=PA1004&dq=collins%20sociology%20of%20philosophie&pg=PA96#v=onepage&q&f=true
  4. Brian Leiter’s “Philosophical Gourmet Report” “ranks graduate programs primarily on the basis of the quality of faculty.” The latest report (2009) is based on an online survey sent to 450 philosophers throughout the English-speaking world in December 2008 and January 2009; about 300 responded and completed some or all of the surveys.
  5. An alternative visualization was considered in which the total number of articles by cohort c in year y was weighted against the total number of authors, philosophers, or graduates during y, providing a relative sense of scholarship during that year. Doing so, however, would distort the unique contribution of each cohort during that year by expressing the number of its articles in terms of all the cohorts’ published articles. While aggregate measures have their uses, the purpose of Lexis diagrams is to highlight individual cohort contributions, so this alternative version was rejected in favor of figure 7.

Introduction

We are pleased to introduce the first issue of The Journal of Interactive Technology and Pedagogy (JITP), an open access and peer-reviewed forum for interdisciplinary collaboration, pedagogic study, and innovative computational research. In this, our inaugural issue, we wanted to describe for you the origins of the journal, the content you are likely to find on our site, our peer review policies, and the editorial decisions we have made to make JITP a progressive model for alternative modes of scholarly communication.

About the Journal

JITP has its origins in the Interactive Technology and Pedagogy Certificate Program at the Graduate Center of the City University of New York. Founded in 2002 by Steve Brier, the program enables faculty and doctoral students from different disciplines to engage with questions of how the increased availability of interactive technologies is changing pedagogical and research practice. By coupling the study of the history and theory of technology and pedagogy with practical experimentation with new tools and methods, the program has sought to promote and improve discourse surrounding teaching, learning, and scholarship in the digital age.

JITP launches under the ITP program’s aegis and with the added awareness of the changing landscape of scholarly communication. The certificate program has long been invested in the merits of open access scholarship and open source code, two important impetuses for change in contemporary academia. Our journal will continue this critical engagement with openness and hopes to follow the important strides made by open access scholarly journals such as Kairos and Computational Culture, organizations such as HASTAC and the National Endowment for the Humanities’ Office of Digital Humanities, and creative academic movements such as THATCamps. As such, all materials are available free of charge, and submissions to the journal will pass through an open peer review process in which the names of both submitters and reviewers are transparent at all times.

Along with this drive toward openness, another important impetus behind many of the recent changes in scholarly publication has been the desire by academics to consider new modes of scholarship and new forms of peer review in the digital age. JITP was created with these questions very much in mind. As a result, we strive to redefine the traditional practices of scholarly journals. At the core of our practices is the “Editorial Collective,” a fourteen-member interdisciplinary group of faculty, staff, and students. The collective works together to handle the daily tasks of maintaining and producing the journal and, through deliberation and consensus-building, determines what types of content will be included and what shape and feel the journal will take. Furthermore, each issue is co-edited by a two-person team consisting of a faculty (or staff) member and a graduate student. This structure mirrors the certificate program’s co-taught classes, which encourage an interdisciplinary approach to material, and provides the graduate students in the collective with a chance to work on a journal in more than an administrative role.

Through this framework JITP endeavors to better represent different voices from across the academic spectrum. The journal will publish a broad range of multimedia formats, including videos, Prezis, and interactive media, while providing a number of different platforms for different written formats and lengths of contributions. Each issue of JITP features articles that have passed through open peer review by one member of our editorial collective and one member of our review board. In addition, JITP will continuously publish submissions in our Book Reviews, Tool Tips, Teaching Fails, and Assignments sections. Submissions to these categories are rapidly approved or rejected by the category editors; if approved, minor corrections are suggested and then the submissions are quickly published on the journal’s site. Importantly, we still consider these submissions as being under the purview of peer review. But, instead of the double-blind review of traditional journals or the open peer review that our issue articles go through, these works will go through a post-publication peer review process. This model makes materials available to the larger scholarly community first and then leaves the review process in the hands of our readers, who will participate by providing feedback through comments in the journal’s blog-style environment. This open dialogue will be important in developing healthy online discourse and encouraging revisions by submission authors that take into consideration continually developing themes and trends. We believe strongly in the role of this post-publication peer review model in the future of scholarly communication and are enthusiastic about the impact it will have on the quality of work published in our journal.

About the Issue

As we put together the inaugural issue of JITP, we were reminded that computational research, tool development, and pedagogy are emergent and continually expanding fields. The articles in this issue cover a broad range of disciplines, utilizing a wide range of methodological approaches. While all of them share technology as a critical tool in their projects, each author has put that technology to use differently, revealing the breadth of work being done to interrogate and innovate pedagogical theory and practice.

Throughout the editorial process it has been good to see that collaboration is at the core of the work of our authors, and it is well represented in the articles in this issue. All six articles describe projects that integrate or require collaboration: one reports on collaborative research and writing projects in the undergraduate sociology classroom; another discusses the results of a collaborative philosophy project; two describe tools and platforms developed by pedagogues working with programmers; and the final two describe teachers collaborating on strategies that were developed and deployed across multiple courses. Collaboration, then, emerges as a source of strength in the technology and pedagogy field, an important development as projects become more complicated, requiring a multiplicity of skill sets and knowledge bases.

One of the main reasons for much of this collaboration is the challenge of developing new software that creates new opportunities in learning environments. In “MyDante: An Online Environment for Collaborative and Contemplative Reading,” Frank Ambrosio, William Garr, Eddie Maloney, and Theresa Schlafly introduce their development of a collaborative reading platform, Ellipsis, that allows readers to annotate a text with their thoughts, scholarly references, or multimedia. They discuss early iterations of such a platform in supporting a philosophy course at Georgetown University and review student feedback in planning for future use. Bridget Draxler, Haowei Hsieh, Nikki Dudley, and Jon Winet’s “‘City of Lit’:  Collaborative Research in Literature and New Media” examines the development of an interactive phone app and website at the University of Iowa. Undergraduate students create and compile multimedia that engages users with Iowa City’s rich literary history.

This issue also shows how pedagogues can come together to develop new experiences for their students through the use of preexisting technologies. In “Let’s Go Crazy:  Lenz v. Universal,” xtine burrough and Emily Erickson review how they used YouTube video responses to discuss the Lenz v. Universal copyright infringement case and standards of fair use in a Media Law class and a Communications class. The article offers examples of fair use videos made by burrough’s students in response to Stephanie Lenz’s YouTube video of her child dancing to Prince’s music. The authors advocate for widespread education on copyright issues and align the fair use of media with the First Amendment right of free expression. In “Talking with Students through Screencasting:  Experimentations with Video Feedback to Improve Student Learning,” Riki Thompson and Meredith J. Lee recommend the use of Jing screen capture software to create veedback, that is, video feedback in the form of screencasts of students’ essays with added audio of the professor’s commentary. They argue that video feedback can help personalize a professor’s responses and set an encouraging tone, especially in the teaching of online courses. Student surveys are used to discuss ways in which veedback might be improved.

Our two single-authored articles highlight the fact that most successful exercises in computational research or pedagogy require a detailed consideration of the practice of technological implementation and its impact on the construction of knowledge. In “Steps, Stumbles, and Successes: Reflections on Integrating Web 2.0 Technology for Collaborative Learning in a Research Methods Course,” Kate B. Pok-Carabalona recounts a semester-long experiment in using online tools to enable collaborative research and writing in an introductory sociology classroom. Pok-Carabalona self-critically explains her own methods in choosing each tool and addresses in detail the implications and drawbacks of using such tools in sociology classrooms. Chris Alen Sula’s “Philosophy through the Macroscope:  Technologies, Representations, and the History of the Profession,” on the other hand, discusses the development of a tool that will enable a “distant” view of how the field of philosophy has been shaped over time. Sula’s (and his collaborator David Morrow’s) Phylo project aims to remediate some of the weaknesses and biases apparent in a variety of classic representations of their academic field through visualizations that more inclusively and objectively represent how philosophical knowledge is constructed and disseminated.

In an era of such exhilarating, but sometimes overwhelming and exhausting, dynamism in both the academy and the realm of technology, change comes fast and furiously, and it often takes a village to accomplish goals and reach expectations. Like many of the projects presented in this issue, the initiation of an academic journal requires coordination and collaboration. As we mentioned above, the editorial collective worked together to accomplish tasks such as copyediting, web design, site management, communications, and citation management, based on the backgrounds and expertise of its fourteen members. We are also very thankful for the efforts and contributions of our Editorial and Review Boards. To that end, we look forward to the final piece of collaboration on this issue: online comments from our readers. We believe you’ll find the articles in this issue of JITP of interest. When you do, please engage with those articles through your own commentary. In this way the conversations can continue, and we can help the authors and one another find new and better ways of incorporating technology into pedagogy.

          Kimon Keramidas (Bard Graduate Center), Issue Editor

         Sarah Ruth Jacobs (Baruch College/CUNY Graduate Center), Issue Editor

About the Authors

Kimon Keramidas is Assistant Director for the Digital Media Lab at the Bard Graduate Center, where he is responsible for the development and implementation of digital media practices across academic programs. His research focuses on digital media through the lenses of political economy and sociology of culture, and he is currently working on a book project about contemporary corporate theatrical production and a gallery project on the materiality of computer interface design. Kimon received his PhD in Theatre from the CUNY Graduate Center, where he also completed the Certificate in Interactive Technology and Pedagogy.

Sarah Ruth Jacobs is a doctoral student in American literature at the Graduate Center of the City University of New York, and an English Teaching Fulbright Grantee at Mohammed V University in Rabat, Morocco.

Steps, Stumbles, and Successes: Reflections on Integrating Web 2.0 Technology for Collaborative Learning in a Research Methods Course

Kate B. Pok-Carabalona, the Graduate Center of the City University of New York

Abstract

This paper reflects on a semester-long experience of integrating several Web 2.0 technologies, including Google Groups, Google Docs, and Google Sites, into two Research Methods classes based on an active constructivist model of pedagogy. The technologies used in the course allow students the opportunity to actively engage with the central concepts in Research Methods under an apprenticeship model whereby they participate in all the steps of developing, conducting, and reporting on a research project. Students also interact with each other regularly through simultaneous collaborative writing and discussion. Despite evidence of the first and second digital divides, as well as glitches and limitations associated with some new technologies, students overwhelmingly rated the experience positively, suggesting a promising argument for employing new technologies to make the central concepts in Research Methods more accessible and transparent to students.

Introduction

In the spring of 2011, I was assigned to teach two Introduction to Research Methods classes in the Sociology Department at Hunter College-CUNY. As I planned the course, I again faced the struggle of how to engage undergraduates in a topic that at times can seem fairly dry and abstract. Most students are accustomed to writing some kind of term research paper, usually requiring a visit to the library (or increasingly the Internet) to gather existing literature that they synthesize into new papers. But most of my students had never engaged in the kind of inquiry a research methods class addresses. Of course, term research papers as described above continue to be highly relevant and offer students practice analyzing, evaluating, and synthesizing literature. In short, such assignments strengthen students’ information literacy skills—an aptitude of particular importance in an age when the problem is more often too much rather than too little information. However, such assignments highlight only a small fraction of the skills emphasized in a research methods class where the focus is much more on practice—the framing, design, and implementation of actual research projects, the methodologies of conducting social science research.

I had taught Introduction to Research Methods several times before and each time I had learned better techniques to make the subject more relevant and “real” to undergraduate students. Lecture-only classes that depended primarily on textbook readings had morphed into classes that included substantial focus on analyzing the methods employed by researchers in existing journal articles. These classes had given way to ones that included more group work, more assignment scaffolding, and more technology to facilitate sharing student work and peer review. In short, I had steadily moved away from a top-down teaching style to one that was more experiential and constructivist, a style that seemed more fitting for the applied nature of research methods courses.

My previous attempts to integrate technology into a research methods course took the form of students using wikis to create ongoing “portfolios” of their semester-long work. Each portfolio could be reviewed by all students, making individual students’ research process, writing, and peer-review remarkably transparent. At the time, I worried that students might be nervous about making their work so publicly accessible to each other. Instead, many reported satisfaction that their work could be viewed by a larger audience. In fact, students seemed to find the process particularly informative and often incorporated each other’s comments as well as mine into revisions of their own work. They also seemed to take a certain pride in the idea that their work was being developed in a pseudo-published environment. In a perhaps controversial (but ultimately well-received) practice, students were able to see my comments not only on their own work but also on the work of their peers. Again, I worried that some students might be embarrassed by public critical comments, and indeed they expressed trepidation with this format. However, they later reported that they found this transparency refreshing; it seems that while the experience of being critiqued may be unpleasant, seeing just as many corrections on a peer’s work was liberating! Most students indicated that seeing the same error repeated by a peer helped them remember not to commit the same error in future assignments.1

Despite these positive reviews, this format still clung to a top-down approach to teaching and learning. Moreover, students still seemed to struggle with the substantive content of a research methods course. Students gained a nebulous and indistinct understanding of the course, but full comprehension of core concepts such as designing and carrying out research projects remained just out of their reach. I hypothesized that part of the difficulty resulted from the stronger focus on procedure and practice in Research Methods compared to other sociology courses.

Introductory research methods courses may include considerations of theoretical paradigms, but at their core, they introduce students to how social science research projects are designed and carried out and how data is collected—concepts that are fundamentally process-oriented. While “hard science” courses such as biology or chemistry usually include lab components to help students understand the basic foundations of data collection in these fields, a similar practicum is often absent in social science research methods courses.2 This conspicuous absence is not a casual oversight on the part of social scientists; social science projects tend to be remarkably complex and unwieldy to implement. Unlike recording data in laboratory experiments or even collecting data on Drosophila flies to understand genetic mutation, sociological research projects rarely take place inside controlled environments and often require outside interaction with respondents (and usually quite a few of them).3 Moreover, many social science projects are fairly large undertakings, often carried out collaboratively by teams of researchers, and can take years to complete. As such, the experience of carrying out a social science research project can be difficult to replicate in only one semester. In the absence of research-based practicums, we ask students to read about how research is conducted and to imagine themselves into this process. Even assignments that ask students to write up proposals or draft surveys do not fully address how research projects are implemented or how data collection and analysis might take place. It’s these aspects of research methods that I felt students had such a hard time fully grasping.

Meanwhile, Web-based technology had changed rapidly even in the short time since I first implemented student portfolios, and I wondered if the nature of Web 2.0 architecture, with its emphasis on communication, scalability, web applications, and collaboration might offer some solutions. I was particularly interested in web applications, such as collaboration and survey platforms, that might allow me to implement an apprenticeship model of learning. This type of platform would allow students to actively learn about research methods by designing, implementing, and deploying a class-based research project. In fact, I had recently collaborated with researchers in a similar manner using Web 2.0 survey and publishing tools, so why not do the same with students on a class project? With these considerations in mind, I set out to integrate some Web 2.0 tools into my two Introduction to Research Methods classes.4

Now that I have completed teaching these two classes (henceforth Class A and Class B), I can reflect on my semester-long experiment with integrating Web 2.0 technologies into Introduction to Research Methods courses. I should make clear that when classes began I had not planned to write a paper about the experience. Perhaps somewhat ironically, then, this paper is not a methodologically planned, pre-crafted assessment of the effectiveness of technological innovations in teaching research methods. Rather, class outcomes and the seeds germinated by continuous student feedback and discussion throughout the semester resulted in my conclusion that this semester-long experience was worthy of a sustained post-mortem reflection on the steps, stumbles, and successes of integrating technology in research methods classes and pedagogy more generally.

Changing Pedagogy, Changing Technology

The literature on technology and pedagogy has grown steadily since the 1970s, but rose exponentially in the late 1990s as Web access became more pervasive. As late as the mid-1990s, only about 35 percent of American public schools were connected to the Internet; that percentage is now 100 percent (Greenhow, Robelia, and Hughes 2009). This explosive growth has been accompanied by a fast-paced rise in the construction of technology-based instructional classrooms and computer labs, a process that has steadily lowered student-to-computer ratios from 12:1 a decade ago to the current 3.8:1 (Greenhow et al. 2009). Penetration of Internet access outside of schools has been no less pervasive or fast-paced; Smith (2010) reports that as of May 2010, as many as two-thirds of American adults report having a broadband connection at home, up from 55 percent in May 2008.

Early Web architecture ushered in the transition to online library catalogues and facilitated access to academic information that might otherwise have been sheltered in imposing buildings, accessible only to elites or through cumbersome processes. However, it did not radically change the locus of the production or construction of knowledge. Information management on the Web continued to be restricted to “gatekeepers” who understood and had facility in its language (HTML) or who had the resources to hire programmers and designers to create their websites. The early Web, then, dovetailed well with forms of hierarchical knowledge construction, which remained largely the exclusive provenance of elites, specialists, or those with sufficient resources.

Given the relatively restrictive structure of this early Web architecture, it’s not surprising that academic interest and research into the nexus of technology and pedagogy also showed a marked weakness, both in the integration of technological innovations by instructors and in the ability of those innovations to reinforce constructivist forms of classroom pedagogy. Studies of technological integration in classes at that time often reported that such attempts were fraught with the problem of simultaneously teaching course content and the more complex protocols of the technology, including HTML coding (Nicaise and Crane 1999). Attempts to integrate technology into classrooms often translated into a struggle between making the technology itself accessible to students and focusing on substantive course content. Predictably, these outcomes tempered academics’ attempts to integrate technology into classrooms, and arguably laid the groundwork for the continuing discourse over the limitations of technological innovation in classrooms.

While the Web 2.0 environment does not do away with these tensions altogether, it may lessen them, as Web 2.0 tools make it easier to contribute content to the Internet. Web 2.0 environments and architecture are characterized by a “read-write” model whereby users go beyond “surfing” the Internet to actually adding content through a multitude of venues and media, such as YouTube videos, comments, blogs, and Twitter. These environments have allowed the Internet to become an increasingly participatory medium, making it easier for users to become creators rather than just consumers of the Web by enabling them to add sophisticated content that is readily available for comment or revision. In these ways, Web 2.0 inherently embodies some of the main tenets of constructivist pedagogy, which similarly encourage students to be active participants in their learning through creation and discovery.

That being said, technology alone is not a cure-all for problems in the classroom, and the implementation of Web 2.0 technologies is unlikely to alleviate the “digital divide” (differential access to computers and the Internet) or the “second digital divide” (differences in technology use). These problems, which are rooted in socioeconomic disparities and are strongly correlated with race and sometimes even gender (Attewell 2000; Scott, Cole, and Engel 1992), may in fact be exacerbated by the use of Web 2.0 technologies in classrooms, by favoring those students who have had access to and experience with these new technologies. Moreover, the second digital divide is not a problem relegated only to students; there is evidence to suggest that wide disparities exist among instructors with respect to their knowledge and familiarity with newer technologies (Anderson 1976; Mumtaz 2000; Becker and Riel 2000; Wynn 2009). Finally, the ease with which information can be shared and re-shared comes with its own set of problems. As Jianwei Zhang (2009) cogently points out, Web 2.0 architecture may offer the promise of “generative social interactions, adaptability, interactivity, dynamic updating, [and] public accessibility,” but its unstructured and changing sociotechnical spaces can also challenge and even resist interpretation and synthesis—factors that limit the possibility of truly sustained knowledge creation.

Folk Pedagogy & Instructor Objectives

Before discussing the structure of the classes and how I employed technologies within them, it is important to clarify my own pedagogical preferences about the in-class use of technology since such orientations tend to be closely correlated with the ways in which technology is actually used (Gobbo and Girardi 2002). Like most instructors, one of my primary goals was to impart some greater understanding of the substantive content of the course. Here, my goal was primarily to help students develop a sophisticated understanding of research methods, as well as fluency in the working vocabulary and concepts of the course. Ideally, an illumination and demystification of the research process would allow students to develop the skills and knowledge necessary to evaluate existing research and conduct their own research projects. Because I had such end-goals in mind, my pedagogy does not fit strictly defined constructivist pedagogy whereby students locate and expand their own interests. However, neither do I fully embrace a top-down model of teaching and learning. Instead, I believe that students’ active participation in process-oriented practice enables them to explore and truly build a sophisticated understanding of the central concepts in any course. Thus, my pedagogy is a constructivist one insofar as I view my role more as that of a facilitator and guide than as a disseminator of knowledge.

This pedagogical orientation has also informed my conception of technology itself and its incorporation in the classroom. As the Internet, smartphones, tablets, and e-readers have become virtually ubiquitous, it is increasingly difficult to live outside the realm of technological influence. Therefore, I believe that: 1) learning unfamiliar technologies better prepares students to adapt to future innovations; and 2) active engagement with technology may help to bring a wider group of students into the world of online knowledge communities. Going through the process of mastering an unfamiliar technology not only allows students to develop competency in its use but also teaches them new ways of thinking and problem-solving that they can use in future encounters with unfamiliar tools. This is especially true for learning newer Web-based platforms, such as Google Docs, wikis, or blogs, which provide a different usage paradigm than desktop programs such as Microsoft Word, and are often public and therefore speak to a different orientation, intention, and purpose of creation. In contrast to closed content, which is either private or limited to a few readers, the public nature of many blogs and wikis opens the creative paradigm to immediate public engagement, forcing blog and wiki creators to consider and reconsider for whom the content is created and for what purpose, and to address the impact of discussions and issues of re-use. Moreover, the collaborative nature of wikis raises issues of individual authorship and ownership.

To provide scaffolding upon which students can develop such proficiency, I combine active participatory learning with the generative promise and possibilities of more constructive learning tools as afforded by these technologies. Students are not given entirely free rein to decide on projects and follow their own learning paths, because I have specific goals and objectives relating to research methods that I wish to impart to students. But I do apply some models of constructivist pedagogy by encouraging active student participation through conversation and feedback. My objectives for the class include the following:

  1. students should gain a better understanding of the substantive concepts of research methods;
  2. students should develop skills for critically analyzing existing research;
  3. students should gain a better understanding of planning, organizing, and conducting their own research project;
  4. students should gain an understanding of new technologies and apply the learning process to other unfamiliar forms of technological innovation;
  5. technology should be a pedagogical tool, not a primary objective;
  6. through understanding substantive concepts and the practice of knowledge creation, students should be able to think more critically and reflexively about the general concepts of sociology as well as communities of knowledge production;
  7. students should improve research writing skills.

In this way, I am attempting to harness some of the promise of Web 2.0 environments to enhance my students’ experience of the class and its content and to further my course goals.

Class Organization and Technology Components

The overarching structure for the research methods course was relatively straightforward: each class would select a semester-long project and collaboratively design and implement it, culminating in a collaboratively written report. The classes did not collaborate with each other, but the students within each class collaborated with each other on significant assignments. In short, each class was structured along the lines of a collaborative community charged with the goal of choosing, developing, implementing, and reporting on a semester-long research project. In fact, I welcomed students on the first day by congratulating them on their new jobs as “Research Associates” for an as-yet-to-be-determined research project. Assignments, in-class group work, and homework were sequenced throughout the semester to assist students at each step. At first glance, this structure may not be altogether unusual in a research methods class, but present technological innovations made it possible to take these demands a step further—namely, each class was also charged with fully implementing most of this project online and collaboratively. Each class would work collaboratively to write and revise a research proposal, conduct a literature search, annotate and share a literature review, develop a survey, collect data, analyze the results, and write a final report. It is worth repeating that while a great deal of research was implemented online, much of the students’ work consisted of in-class activities including workshops in computer classrooms, small group and whole-class discussions, and short writing assignments.

The core tools used in the class were Google-based, including Google Groups, Sites, and Docs.5 All are freely available and allow users relatively good control over permissions and privacy; while initially it may seem like these constitute a significant number of technology tools, they are the minimum required to operate effectively in this environment. Google Groups served as our discussion forum and listserv and was the linchpin for all the Google services, providing the means to easily share them with a large group. The threaded organization of Google Groups allowed students (and me) to regularly view discussions contextually, making conversations that took place outside the classroom far more robust. The main public class website as well as two private class websites were created using Google Sites. The private individual class websites offered a space where students could post their own pictures and profiles, and collaboratively create content. Together, these two tools were intended to foster a greater sense of community, connectivity, and dialogue. Finally, we used Google Docs as the primary writing platform to allow students to collaboratively create their bibliography, survey, and final report. A feature of Google Docs called Forms allowed students’ class surveys to be turned into online surveys to ease data collection and analysis.

In addition to these Google tools, I also incorporated a blog developed on WordPress.com. Although Google has its own blog platform in Blogger.com, I elected to use WordPress because I felt that it offered greater flexibility. Both classes shared the same blog and all students were added as authors to the blog (www.thinkingsociology.wordpress.com).6 I incorporated and created the blog in this manner for several reasons. First, I thought it could be used to address the more theoretical orientation of research methods courses by requiring that blog posts be analyses of students’ daily encounters with research methodologies and studies. In other words, I intended this blog to be an online version of students’ assessments of the research methods used in existing research.7 I thought such an innovation would encourage students to think through the types of research that get undertaken. I also wanted to reinforce the idea of more diverse knowledge communities. While their class projects could be seen as contributions to and extensions of a very academically structured literature, I wanted to show that such formal research projects were not the only avenues of knowledge construction. Finally, since an increasing number of students rely on the Internet for their research, I hoped that as they began contributing to this information ether, they would become more critical consumers of this material.

However, I quickly realized that this framing of blog posts made the blog rather difficult for students, especially early in the semester. This owed to the fact that this type of analysis is actually quite difficult to make unless one already has a fairly good understanding of the weaknesses and strengths of different research methodologies. Thus, the blog as I had structured it actually limited rather than clarified the central concepts of Research Methods.

Upon reflection, the disconnect between the blog component and the research project was clearly a creative failure on my part. I had not considered alternative ways of organizing the blog and my requirements for blog posts still made it a very outcome-oriented enterprise. If the blog had been more process oriented, it could have been much better intertwined with the class project. Moreover, I did not make the blog a required component of the class since I felt the course already made heavy demands on students. Instead, students were able to earn as much as three percent extra credit if they elected to participate by making posts or comments, but none were penalized for non-participation.8 I learned a valuable lesson in structure and pedagogy when students reported in their exit survey that they would have been happy to contribute to the blog had it been a requirement. In fact, when asked what suggestion they had to improve the class, this suggestion came up repeatedly.

Discussion: The Good, the Bad, the Ugly, and the Not So Ugly

In order to come to a better understanding of students’ perceptions of their technological skills, I began the semester with a survey created and deployed using the Forms feature in Google Docs. The survey comprised questions moving from very broad to more exacting measures of technological skill and fluency. Questions included “On a scale of 1-5, how would you rate your technological skills?”, “On a scale of 1-5, how familiar are you with discussion listservs? Blogs? Google Docs?” and so on. After students had completed the survey, I used Google Docs’ basic data summaries and charts to report results on each variable to both classes.
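Google Docs produced these per-question summaries automatically. For readers who wish to replicate this kind of per-variable tabulation outside of Google Docs, the following is a minimal sketch in Python; the file name and column names are hypothetical illustrations, not the actual class data.

    # Minimal sketch: tabulating 1-5 scale responses per survey question.
    # "entry_survey.csv" and the column names below are hypothetical.
    import pandas as pd

    responses = pd.read_csv("entry_survey.csv")  # one row per student
    likert_columns = ["overall_skill", "listservs", "blogs", "google_docs"]

    for column in likert_columns:
        counts = responses[column].value_counts().sort_index()   # responses 1-5
        percents = (counts / counts.sum() * 100).round(1)
        print(f"\n{column}")
        print(pd.DataFrame({"count": counts, "percent": percents}))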

Not surprisingly, when technology was defined broadly, the large majority of students in both classes rated their own level of technology skills relatively highly (3-5 on a scale of 5). However, as these skills became increasingly defined by specific products, including discussion groups (Google Groups), Google Docs, Google Sites, wikis, and blog platforms such as WordPress or Blogger, students’ reported level of familiarity dropped precipitously (see Figures 1 and 2).

Figure 1: Familiarity with Specific Technology – Class A

Figure 2: Familiarity with Specific Technology – Class B

Both classes indicated the least familiarity with blog platforms such as WordPress and Blogger. It’s clear from these results that the largest percentage of students in both classes was either “Somewhat” or “Not At All” familiar with most of these technological tools. The outlier is Class B’s reported familiarity with Google Sites—a whopping 15 of 28 students (52 percent) in Class B reported that they were “Very Familiar” with Google Sites. However, in subsequent class discussions, it became clear that my formulation of the question had led to some confusion, and few students in this class had used or knew very much about Google Sites. There were more mixed results for Google Docs. While as many as 15 of 31 (48 percent) of Class A and 18 of 28 (62 percent) of Class B indicated that they were “Somewhat” familiar with Google Docs, this description turned out to be rather broad. Class discussions about these technologies revealed that few students had actually used Google Docs. Instead, students reporting this level of familiarity took it to mean that they remembered hearing about Google Docs or knew of its existence. In short, all the Web 2.0 tools planned for this course were relatively new to students in both classes.

This survey served several purposes. First, it gave me a rough gauge of students’ skills and perceptions of their own skills as they pertain to the technology tools I had chosen to use for the course. More importantly for the students, this survey offered them a first glimpse into one technological tool that they would soon be learning to use and also introduced them to a few of the central concepts in Research Methods—questionnaire design and the operationalization of variables. This exercise gave students their first chance to analyze and consider the importance of survey questions and how their construction affects and determines the actual data collected. Many students were startled to see how a broad and relatively unspecific term such as “technology skills” could yield radically different results once it was more rigorously defined, reinforcing the importance of clarity and detail in questionnaire construction. I hoped that this experience would raise students’ curiosity about and interest in research methods as well as the format of the course.

Once the necessary steps had been taken and the structure set up for collaborative work, students were ready to begin their new “jobs” as “research associates,” which included choosing a project, crafting a research proposal, developing a literature review, constructing a survey, gathering and analyzing data, and ultimately writing a final research paper. In short, students were ready to engage in all the activities that researchers undertake in genuine research projects.

Selecting a Topic and First Discussions

The students’ first assignment to select a research project immediately offered valuable insight into how Web 2.0 technology might be particularly useful. I initially assigned several in-class group discussions to encourage students to think through their nascent research proposals. I also encouraged them to use their discussion forum (Google Groups) to continue discussing their ideas. I was surprised by their dedication and readiness in adapting to an online discussion. Many of the online debates were as impassioned and sometimes as acrimonious as their in-class discussions. Moreover, ideas arising in online discussions were usually ported back to in-class discussions.

These discussions were also important because they made evident at least two problems that affected a relatively small number of students, but which I had not thoroughly considered. Those who were not accustomed to participating in online discussions found the extensive commenting somewhat disorienting. Many reported that since they receive email directly on their smartphones, they found the numerous posts a disruptive experience and were likely to just stop reading. The result was that these students might come to class unaware that several points or topics had already been discussed and debated by a large proportion of the students in the class. Consequently, students who did not participate were likely to feel as if their input was being ignored or disregarded while students who participated more readily in online discussions found it frustrating to have to repeat material that they felt had already been settled. I tried to rectify this issue by suggesting that students change their email notification settings in Google Groups to stop push notifications to their email and smartphones and participate in class discussion through the online portal. However, I also pointed out that their active participation in these online discussions was expected and a part of the course.

The second issue was more problematic and required more delicate handling. In some cases, differential participation in the online discussion resulted not only from a technological divide, but from a more old-fashioned delimiter between students: language ability and fluency. A small number of students for whom English was not a native language or who were uncomfortable with either their speaking or writing ability were much more likely to refrain from contributing to the discussion forum. Although I had sequenced an in-class discussion to prepare them for the online discussion, I had not required that they prepare anything written, believing that a pre-written statement would lead to less organic conversations online. In response, I instituted a low-stakes writing policy whereby students were given time to write down their ideas, better preparing more reticent students so that they would at least have a base upon which to build their online comments. Moreover, as the semester progressed, extensive in-class group work helped foster greater familiarity and camaraderie, and I found all students became far less reticent about participating in either in-class or online discussions.

In order to assess the extent to which the technologies used in the course contributed to clarifying the processes of and concepts related to conducting a research project, I asked students to complete a non-anonymous exit survey focused on the course format. Since the exit survey was extra credit, not all students participated in it; of the 56 students, 48 took the exit survey and eight did not.9 The results from the exit survey confirm that students found the Google Group discussion board invaluable to their work in this class.  As much as 90 percent of students (43 students out of the 48 who took the survey) reported that Google Groups was useful.10

Student A:      I find it useful yes. I found receiving all the e mails as rather annoying, but overall feel the listserv was very useful for contacting the professor and other students. i feel it is was very useful in regards to the the communal aspect of the class and research project. For the project to be successful, i feel it was important for us all to be on contact.

Student B:       It made it easy to communicate with my fellow classmates. I found myself communicating more with classmates in this class than others.

Even students who viewed it negatively found it to be a useful tool for communication:

Student C:       Too many emails and it was frustrating at times. Although it was an easy way to communicate with group members it was a bit annoying to recieve 10 eamils within 5 mins.

Despite these issues, I believe that this exercise teaches a crucial lesson in a research methods class; these debates offered students the chance to experience the kinds of discussions, compromises, and considerations that often influence the choice and realization of actual research projects. In short, students’ own debates often mirrored the same processes among practicing researchers. I was thoroughly impressed with the level of these discussions and student engagement in them. Even students who initially expressed discontent with the technology or who were shy about participating in online discussions were eager to offer their opinions and defend their positions during class discussions. Moreover, since the selection of a research project took place over the course of a couple of weeks, students soon found a system that worked for them, and all students began to participate more regularly in online discussions. In fact, perhaps the toughest part of this assignment was forcing students to narrow their choices and “settle” on only one topic. In a telling moment, Class A initially selected a research project centered on technology use and access. They even collaboratively drafted a proposal for this project. But after a growing feeling of injustice over what they perceived to be a social stratification system within their own college, they actually elected to change their project completely, including revising and resubmitting a new draft proposal.11 Admittedly, I was thrilled that they had the chance to experience how the perception of injustice could inform their choice of a research project. Class A ultimately decided on a project that compared Honors College students with non-Honors College students. This project assessed the differences between these two groups of students, and attempted to gauge the extent to which the general student body was aware of the specialized Honors College program. Class B, perhaps influenced by the heavy integration of technology into this class, chose a descriptive project on New Yorkers’ use of Internet-enabled devices. Students wanted to research the demographic profile of who uses these devices, how they use them, and whether technologies such as email and text messages had supplanted traditional communication such as phone calls. Both of these projects were challenging and sophisticated undertakings that essentially asked new questions requiring data collection. I believe that the level of sophistication owes much to collaborative revisions and in-class and online discussions.

Collaborating on a Proposal

Having successfully chosen a research project, students were required to collaboratively create and edit a page, titled Research Proposal, on their private Google Sites class website. I assigned them to do it on their wiki site because I thought they would enjoy seeing their class website evolve to reflect their semester’s work as each stage of their project contributed to building their website. Moreover, since Google Sites is a wiki platform, I envisioned the slow evolution and incremental changes so typical of Wikipedia articles. Instead, this assignment offered me and the students our first lesson in the incongruity between our expectations and outcomes when it comes to technology. Rather than slow, incremental changes to the assignment page, students were more apt to wait until the evening before the assignment was due to add their revisions. The result was a sudden influx and bottleneck of students competing with each other to add and delete comments. Worse, wiki platforms are not designed for simultaneous editing. Although a page is “locked” when it is being edited in Google Sites, contributors also have the chance to “break” the lock, and students did so with abandon, often causing each other’s contributions to be entirely lost. Their fear that they would not have contributed to the assignment prior to its being due easily overrode any sense of loyalty to each other.

In the class meeting following this debacle, I reassured them this farce was entirely a failing on my part. I clearly had failed to take into account the problems that might ensue from a mismatch between how students tend to work and the chosen technology. The wide-scale edits that take place in Wikipedia are radically different in scale and form than those that take place in smaller wikis with instituted deadlines. After this fiasco, we moved permanently to Google Docs for all future collaborative work and retained their class website primarily as a space for them to add more personal content such as pictures and descriptions of themselves— a makeshift Facebook, if you will. I do not mean to suggest that wikis do not have a place as a pedagogical tool; I merely point out that Google Docs offered a better fit for the purposes and assignments of this class. And although we largely abandoned their wiki as the primary space for the writing, I believe that its continued use as a more personal space for students encouraged the kind of camaraderie that eventually developed among the students. Many commented on each other’s photos and personal pages. Eventually, we also embedded various completed assignments into their website to give them a greater sense of progression, which they appreciated.

Understandably, this wiki assignment was particularly frustrating, and students eyed the rest of the technological tools in the course somewhat warily. Given the problems in execution, I was surprised to see that students nevertheless rated this assignment quite highly in terms of usefulness, as reported in the exit survey:

Figure 3: Usefulness of Research Proposal

I believe such high ratings reflect the importance of giving students the chance to discuss, revise, and make mistakes. Even if the assignment was far from perfect, it did demonstrate how committed students were to contributing to it. Moreover, these repeated efforts to develop the assignment helped them to refine their ideas, ultimately resulting in a better proposal. It’s difficult to imagine a similar experience arising from a more traditional approach.

Sharing Knowledge through Collaborative References

The next assignment, a shared annotated bibliography, required each student to first locate ten journal articles that might be relevant to the class research project. In addition to sharing bibliographic references, each student also shared his or her search terms and strategies for locating articles. Thus, this exercise addressed individual skills-building as well as peer-teaching and development. Each student then annotated two articles and shared the annotations with his or her class. In this way, the students gained individual practice reading and assessing articles while participating in knowledge construction by creating a shared repository of knowledge. These assignments also received high ratings in the exit survey:

Figure 4: Usefulness of Shared References

Figure 5: Usefulness of Shared Annotated Bibliography

Constructing a Survey Together

Perhaps the most enriching assignment, and arguably the one that took fullest advantage of Web 2.0 technologies, was the students’ collaboratively crafted project survey. After spending two to three weeks in small, in-class group activities developing broad themes (derived from their literature review) to include in their survey, students collaboratively created a questionnaire for the targeted population of their research project.12 The premise here was that through collaborative writing and ongoing discussions, students could develop a more rigorous and detailed survey than they could by working alone. By having students work in Google Docs, I could also view, monitor, and comment on their work in an ongoing manner; I could see not only how students’ ideas progressed, but also which concepts they struggled with the most. In short, I could provide continuous feedback to students as they developed their assignment. And while I never outright suggested survey questions, I did often highlight some of their questions or prod them to clarify them.

Figure 6: Usefulness of Survey Design

It was impressive to witness how often students commented on each other’s work and how often they made revisions based on other students’ comments. As students in each class struggled to create their collaborative survey, it was not unusual for individuals to make comments such as “Can we phrase this better?” “Let’s expand on this,” or “Shouldn’t we ask the exact age, rather than categories?” Even more impressive was how often my suggestions were ignored! This assignment also received very high ratings from students in terms of how well it contributed to clarifying concepts in Research Methods.

Of the 48 students who took the exit survey, only one indicated that this assignment was merely “Somewhat Useful.” The rest found it at least “Useful,” with a large majority reporting that it was “Very Useful.” Of course, such an assignment does not require using tools such as Google Docs, but students’ comments about using Google Docs contextualize the value of employing a Web 2.0 technology for such an assignment:

Student A:      It was cool being able to see other classmates viewing/editing the document at the same time and have a real-time discussion about our document.

Student B:       I started using Google Docs for my own personal uses after you introduced it to me in class. I thought the editing simultaneously part was the best!

Student C:       Despite the times it froze and was inaccessible due to high volume of student usage, Google Docs was amazing because it allowed us to work [our] assignments simultaneous and provided the means of critical feedback and necessary editing.

Clearly, students felt that the simultaneous collaborative possibilities of Google Docs in combination with its integrated discussion component made the tool particularly useful for collaborative group work. The technology component made it possible for students to engage in an ongoing dialectic of discussion and creation, leading to the development of far richer and more detailed questions on their survey.

At this stage, in a more conventionally structured research methods course, copies of the survey might be printed and students assigned to go out and gather data. Such a process might entail developing and readying a database structure to receive the data, entering the data, and learning the chosen data analysis software. Given the time-consuming nature of the process, this step is often skipped over. However, advances in online survey tools—particularly the ease with which surveys can be created and deployed—offer an ideal opportunity for students to attempt to answer their research questions by actually fielding the surveys they worked so hard to create. I initially planned for students to turn their survey document into an online questionnaire using Google Forms. I altered the assignment when I determined that while such a requirement would teach students how to use another tool, the skill itself would contribute little to clarifying the concepts of research methods. Instead, I used Google Forms to turn their questionnaires into online surveys that they could then deploy to collect data. As a final quality check, students completed the surveys themselves before they were finalized.13 This exercise immediately made it clear to them which questions required additional revision.

The next step included recruiting respondents and entering the data. At least two lessons learned during this process point to the importance of integrating survey technologies into research methods courses. The first lesson came in the form of problems students encountered during recruitment. At first, many students assumed that friends, family, and acquaintances would readily answer their calls for help in completing a class assignment. Instead, students reported that it was often difficult to get even these familiar associates to complete the survey. Students were forced to find methods for locating additional participants, thereby learning a valuable lesson about the complexities and difficulties involved in recruiting respondents. Again, I doubt that students would have had the same opportunity to learn such lessons without the chance to collect data afforded by the integration of these survey tools.

A second telling moment came during the data analysis portion. Students could collect data in two ways: 1) conduct face-to-face interviews with respondents and enter the data themselves through the online link; or 2) send the link to the online survey directly to a respondent and allow for self-reporting. Although students elected the latter because it was seemingly more convenient, they quickly learned that the easy route was fraught with pitfalls. When at least one respondent entered “Klingon” as his or her race, they realized the cost of exchanging face-to-face interviews for the online survey and were immensely relieved that not all the participants had done the same. Once again, students learned a valuable lesson about the unintended difficulties of data collection, a lesson that could be conveyed more cogently because of the affordances of the Web 2.0 tools used to collect and process the data.

Ultimately, each class recruited nearly 300 respondents (8-10 respondents per student), and while the data collection process alone was irreplaceable, the fact that Google Docs also offered basic data summary tables and charts made this an even more valuable exercise for students. Students expressed pride and even awe in seeing their work summarized into colorful bar and pie charts. I wanted to show them how their data might answer their questions, so we spent one class entirely devoted to statistical analysis. Students developed questions that they wanted the data to answer, and I ran the analyses for them in SPSS.14 Again, they expressed pride in the types of answers they were able to draw from their data.
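
For instructors who would like to reproduce this kind of in-class summary work outside of Google Docs, the sketch below shows one possible version of the same step in Python using the pandas library. It is a minimal sketch only: it assumes the responses have been downloaded from the Google Forms spreadsheet as a CSV file, and the file name and column names (“age_group,” “device_owner”) are hypothetical illustrations, not the actual variables from the class survey.

```python
# Minimal sketch: frequency tables and a crosstab from exported survey responses.
# Assumes a hypothetical "responses.csv" exported from the Google Forms spreadsheet,
# with hypothetical column names "age_group" and "device_owner".
import pandas as pd

responses = pd.read_csv("responses.csv")

# Frequency table for a single question: counts and percentages.
counts = responses["device_owner"].value_counts()
percents = responses["device_owner"].value_counts(normalize=True).mul(100).round(1)
print(pd.DataFrame({"count": counts, "percent": percents}))

# A simple crosstab, e.g., device ownership by age group, with row/column totals.
print(pd.crosstab(responses["age_group"], responses["device_owner"], margins=True))
```

A spreadsheet program or SPSS would produce equivalent tables; the point of the sketch is simply that the exported data can be summarized in a few reproducible steps once it leaves the online form.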

Even when students are provided with summary tables and sophisticated data analyses, turning them into coherent narratives still requires a certain amount of skill and training; it was here that students ran into the most trouble. While they could report the data summaries and even the results of the crosstabs or inferential statistics that I provided for them, they found it more difficult to craft sophisticated answers to their research question using this output. As previously noted, this outcome is not surprising given that the goals of an introductory research methods class do not usually include data analysis; most students go on to take an introductory statistics course, which addresses this goal more directly. However, even without a clear understanding of the data analysis, students seemed to recognize that the process of data collection gave them a better understanding of the strengths and weaknesses of how data is gathered, as reflected in their high ratings of this exercise:

Figure 7: Usefulness of Data Collection

In fact, the Survey Design and Data Collection assignments received the highest approval ratings of all the assignments.

A Research Paper Beckons

This semester-long work culminated in a collaborative final paper written in Google Docs. Given the amount of work most students had already put into their project, I debated whether to assign a final paper; I ultimately elected to do so because I believed that it would address one of the concerns I had about relying too heavily on process as product. The strength of Web 2.0 architecture, according to Zhang, is the ease with which it is “generative [of] social interactions and sharing, adaptability, interactivity, dynamic updating, information, and public accessibility” (quoted in Greenhow et al. 2009). However, like Zhang (2009), I believe that these same strengths can too easily work against a sustained and steady evolution of ideas (Greenhow et al. 2009). In other words, the ease with which Web 2.0 architecture allows for sharing and commenting all too often leads to mere commenting or opinion-making, falling far short of the requirements of structured, formal education. For better or worse, the ability to write a coherent, cohesive paper is still at the forefront of educational standards—I know of no educational institution that rewards graduate students for merely speaking about their doctoral research. Discussion clearly helps to clarify concepts, but the ability to synthesize, interpret, and think critically about information (whether it arises from discussion, collaboration, or elsewhere) is equally important in academia and beyond. Nonetheless, I hoped that by making this assignment a collaborative one, such a goal-oriented assignment could also be combined with independent learning and constructivist models of knowledge creation.

The usefulness of this last assignment received slightly more mixed reviews. While a majority of students rated this assignment “Very Useful,” at least a few rated it “Not At All Useful” or only “Somewhat Useful.”

Figure 8: Usefulness of Final Paper

This final assignment was the least popular of all the assignments, and it easily garnered the greatest contention and loudest grumbles. Tellingly, student resistance was not necessarily to a final paper or report. Rather, it centered on the assignment of a single, collaborative final paper. Despite an entire semester of working together collaboratively and often successfully producing complex and critical work, many students continued to worry about “freeloaders” and perceived disparities in writing ability and even critical thinking (I did offer students the option to write their own research paper if they preferred, but only two took me up on this offer).

A more general complaint about this assignment was directed at the technology itself. Although Google Docs claims to allow simultaneous collaborative editing by up to fifty users at a time and sharing with as many as 200 users, students repeatedly remarked on how slow the document could become when too many students were editing it at the same time, a factor that frustrated them and dampened their enthusiasm for the technology.15 It was clearly a difficult process to have thirty students collaborating on one document, even if it was only being edited by two to seven students at any given time.

Despite these laments, each class’s final paper represents some of the most sophisticated undergraduate research work I’ve seen. I believe that the imperfections they exhibit result from the nature of diffuse, collaborative work rather than from a lack of comprehension on the part of students. In fact, the ideas and concepts elaborated in each paper were relatively sound and creative. That being said, I tend to agree with the students’ assessment—thirty students working on one single paper was difficult to manage. A few alternatives to this situation come to mind: 1) have students work in smaller groups of four to five students to write several papers; 2) have smaller groups of students simultaneously edit different sections of the same paper so that all students still have the opportunity to work on all parts of the paper; 3) assign different sections of the paper to smaller groups of students. Admittedly, the first and second options are the most appealing to me. In the first, all the papers would be on the same topic (the class project), but that does not preclude the submission of several papers; in fact, such an approach is likely to allow for greater creativity and more nuanced detail, as each group may focus on different aspects of the topic. Meanwhile, the strength of the second option is that all students work on all parts of the paper, ensuring that they develop similar levels of competency in all areas of the research project. By contrast, the third option is the least attractive: it may still lead to some of the same issues involved in writing a single paper and, given the diverse voices, may actually result in a more discordant final report. Moreover, assigning different sections to small groups of students may limit students’ competency to their assigned section.

Grading such collaborative work was another challenge of this project, since collaboration was such a central component of the course and also a primary concern for students. I tried to balance collaborative work with individual work by sequencing all larger assignments and offering individual credit for each smaller assignment. For example, students earned individual credit for participating in class discussions, locating and annotating references for the collaborative bibliography, creating survey questions for the class survey, collecting data, and so on. While few students complained about this individual grading policy, the grading policy for the collaborative pieces was far more contentious, since all students theoretically received the same grade on these components (the proposal, survey, and final report). My solution was to offer feedback at least two or three times while students were writing each assignment, in hopes of improving their final work. Once the assignment had been turned in, I graded it. Then I used Google Docs’ revision history feature to track student contributions. So long as students contributed to the work and made improvements to it, they received full credit (the grade that I had given the assignment). If students made only minor changes, such as adding a period or indenting a paragraph, they received slightly lower grades. Students who made significantly more contributions than others received slightly higher grades. I also considered students’ comments, suggestions, and replies on the document as measures of participation, but with the stipulation that these could not be their only contribution to these assignments. I hoped that by making individual student assignments a relatively large percentage of their final grades, this would balance out any perceived injustice in grades on their collaborative assignments. Admittedly, this system was far from perfect–in particular, Google Docs’ revision history feature is not nearly as robust as I would like–and grading was a stressful task.
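
Because the grading rule above is described only in prose, the short Python sketch below restates it as a worked example. It is purely illustrative: the base grade, the contribution categories, the point adjustments, and the student labels are all hypothetical stand-ins for the judgments I made by hand from the revision history, not figures actually used in the course.

```python
# Hypothetical illustration of the grading rule described above:
# start from the grade assigned to the collaborative document, then adjust
# each student's grade according to the substance of their contributions
# (as judged from the revision history and document comments).

ASSIGNMENT_GRADE = 88  # hypothetical grade given to the collaborative document itself

# Hypothetical per-student judgments after reviewing the revision history.
contributions = {
    "Student 1": "substantial",    # meaningful additions and revisions -> full credit
    "Student 2": "minor",          # e.g., only adding a period or indenting a paragraph
    "Student 3": "exceptional",    # significantly more contributions than others
    "Student 4": "comments_only",  # suggestions and replies, but no edits of their own
}

ADJUSTMENT = {
    "substantial": 0,       # full credit
    "exceptional": +3,      # slightly higher
    "minor": -5,            # slightly lower
    "comments_only": -5,    # comments alone cannot be the only contribution
}

for student, level in contributions.items():
    print(f"{student}: {ASSIGNMENT_GRADE + ADJUSTMENT[level]}")
```

In practice these adjustments were qualitative rather than formulaic; the sketch simply makes the logic of the rule easier to see at a glance.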

The literature on grading collaborative work suggests asking students to grade or rank contributions made by themselves and others. However, this suggestion would have been virtually impossible to implement given how I had structured some of these collaborative assignments. Since each class worked as a unit, how could thirty students grade or rank each other’s contributions? This struggle alone suggests revamping some of these collaborative assignments to be based on smaller groups and is something I’m seriously considering.

Final Remarks & Further Considerations

Figure 9: Class Format

Figure 10: Technology Use in Class

In conclusion, the experience of integrating Web 2.0 technologies into these Introduction to Research Methods courses was overwhelmingly positive for both the students and myself. In the exit survey, students gave high scores to the class format and the use of technology in the course. When asked to rate the format of the class on a scale of 1 to 5, where 1 was “Hated it” and 5 was “Loved it,” 36 of 48 students (75 percent) gave a rating of four or higher. Another ten students (21 percent) rated the course format a three, and only two students reported disliking the class format.
Students reported similar ratings for the integration of technologies such as Google Groups, Google Docs, and Google Sites, with even more students giving the use of technology in the class the highest possible score of five.

Figure 11: Comprehension

One final question in the survey asked students to rate their own understanding of research methods as a way to provide some kind of baseline for reading these results. Although this was purely self-assessment, it is worth reporting that—at least by their own gauge—an overwhelming majority of the students felt that they now have a “Good” to “Very Good” understanding of research methods, and as many as four students think of themselves as experts.
Perhaps more valuable than how students felt about the course format, the use of technology in the course, or even their own understanding of research methods were the questions that specifically asked students to consider the extent to which they felt that the assignments and class activities helped to clarify the concepts and processes involved in conducting research. As Figures 3-8 (above) indicate, students’ responses to how well the assignments helped them understand the central concepts of research methods were overwhelmingly positive. And while a few students did indeed repeatedly report that they did not find these assignments useful, a much larger percentage of students regularly gave the assignments high marks in terms of their usefulness in clarifying concepts of the course. Moreover, the vast majority of students consistently gave the highest possible rating, “Very Useful,” to all of these assignments. Of course, in the absence of other, more conventional assignments in this course, it is perhaps not too surprising that students would rate these assignments so highly. However, since each assignment was grounded in and designed to take full advantage of Web 2.0 technologies, I believe that a significant portion of their valuation can be attributed to this relationship.

In many ways, this format allowed students to gain a very nuanced, rich, and applied understanding about the central processes of conducting social research: survey design, implementation, and data collection. Together, these steps demystified the process of quantitative data collection and allowed students to directly address many of the most central issues and concepts in research methods, including ethics, respondent anonymity and confidentiality, cost, population sampling and design, and questionnaire design and construction. By structuring the class in this way, the course effectively became a problem-solving exercise—how would students structure and organize their inquiry to best answer the research question they had chosen? Such an exercise can easily be translated beyond the confines of a research methods course.

The incorporation of Web 2.0 technology tools allows research methods courses to be conducted in an apprenticeship model, giving students the opportunity to learn about research methods and methodologies by conducting research. Could such an apprenticeship model be conducted without the use of technologies such as Google Docs, Google Groups, and Google Sites? Yes and no. Certainly students could be asked to work in groups to collaboratively create a survey, but this method would result in several surveys that would have to be re-synthesized into a single survey, a process that would require significant time either inside or outside the classroom. Another option is for students to email documents back and forth to each other as researchers have traditionally done. However, part of the strength and richness of students’ surveys owed much to the fact that the entire class collaboratively contributed to a single document, creating, adding, or revising questions and expanding on concepts. Moreover, I continue to believe that the ease with which the survey could be deployed and data collected and summarized makes this a unique addition to research methods courses. In the absence of such tools, the data collection and analysis portions of this assignment would have required significant class time devoted to explaining the use of quantitative software such as SPSS.16

However, far from being a panacea that smoothed over differences between students, the integration of these digital tools and the format of the class also introduced their own set of problems. By far the most prominent was the persistence of both the first and second digital divides: students who had more access to computers at home and expressed greater comfort using newer technologies—those who were more likely to say that they followed blogs, had contributed to blogs, or saw themselves as particularly comfortable in online environments—were also more likely to be committed to the format of the class and to learning the technologies used in the class. Students who were more comfortable in a traditionally structured classroom found the class somewhat confusing and reported feeling uncomfortable navigating the numerous technological resources used in the class.

Student D:      Google documents was useful, it let the class work together and we could see what each other was working on, so it helped. However, there were issues with working together on google documents that made it difficult to fully appreciate it.

Student E:       I was a bit weary about google docs because I’ve never used them before. I got used to working with google docs towards the end of the class but I would say to have an extra class showing up more about how to use google docs.

Student F:       It was better than blackboard but I’ll never love online work. It’s a bit confusing.

Student G:      I learned alot about using Google document… it would be nice if you taught us how to use before we had to figure it out ourselves and it took me days to learn which i lost motivation …

For this group of students, rather than making it easier to collaborate, online tools made the process more intimidating and confusing, and they often expressed a preference for traditional group work where students meet outside the classroom in small groups to work on an assignment. Moreover, even for students comfortable with technological innovations, it was clear that the promise of these new technologies as described in much of the literature remained theoretical. Students often stumbled in working with these technologies, and many still required considerable guidance to critically assess and use these online tools. To address this issue from a pedagogical standpoint, it is crucial that instructors continuously monitor and provide sufficient training on all technological tools, as well as make sure that the use of these tools does not become the central part of any grading scheme. Additionally, steps should be taken to ensure that students have equality of access to hardware—if a student does not have home access to computers, laptop lending or other similar programs should certainly be considered.

Despite these concerns, the fact that so few students provided consistently negative feedback about either the course structure or content leads me to believe that the vast majority of students felt that the class structure contributed to their understanding of course content. Further, while the second digital divide was evident among students, exposing students to these technologies and training them in their use gives them practice in learning unfamiliar tools, practice that may help them master new tools in the future and thereby narrow that divide.

It remains to be seen how applicable or useful Web 2.0 technology is to all courses. Generally, “real-world” research projects are collaborative enterprises, and a fair number of researchers have learned about research methods through experience and practice. Thus, the technologies integrated into this course fit particularly well with the demands and paradigms associated with research methods. However, that does not mean that these technologies are equally applicable or suited to all courses. My misstep with my initial implementation of the class blog demonstrates how important it is for instructors to think carefully about how they wish to structure the tools they plan to integrate. While my initial structuring of the class blog led to very little participation, after I opened up the discussion to any topic students could relate to larger sociological concepts, students immediately began adding content. Thus it was not the technology itself that was ineffectual, but how I had designed the assignment.

Their few posts immediately demonstrated that at least one strength of blog platforms lies in the opportunities they offer for students to reflect critically on broader issues and themes from other courses. The posts also hinted at an interesting pedagogical issue. Just as students sometimes ignored my comments on their collaborative work, preferring to pay attention to the comments of their peers, their few blog posts made quite plain the diffuse nature of information and knowledge construction. That many of the students posted on topics of which I was either unaware or which were of no particular concern to me, but which nevertheless seemed quite valuable to them, speaks to a more complete decoupling of the instructor from the dissemination of information. The more such free-form tools are integrated into teaching, the more diffuse and multiple the nodes of knowledge become. Just as specific Web 2.0 technologies were used to fulfill a constructivist pedagogy in this Introduction to Research Methods course, the integration of a blog platform in more theoretical courses may better fit the demands of constructivist learning in them.17

A more general problem with these technologies is that despite their advances and functionality, they have not quite caught up with the demands and requirements of an academic environment, probably because they were not initially designed for academia. There is no unified digital platform linking knowledge creation, bibliographic management, and project management. Because Google Sites does not allow single pages to be locked,18 I juggled a main website alongside two class websites which were kept private and accessible only to students; this made the flow of information somewhat disjointed. In addition, because bibliographic and annotated reference management was somewhat makeshift and clumsy (we used the spreadsheet option in Google Docs for this purpose), their papers were written in Google Docs, and their blog was hosted on an entirely different online service (WordPress), students found the workflow awkward to navigate. In the words of one student, it was actually “information overkill.”19

A final problem, related more to the constructivist and somewhat self-directed format of the class than to the technology, should be noted: some students respond better than others to different styles of teaching. A constructivist model demands a great deal of self-motivation and self-discipline on the part of students, and it is paramount that instructors remember that in many ways this requirement may actually be an added burden for both teacher and students. To my great surprise, quite a few students (although a relatively small minority) requested more and greater instructor oversight in the form of regular quizzes (to ensure that they read the assigned readings) and mandatory assignments (including the blog). While I worried that my presence on their discussion board or my ability to review their work in Google Docs was too intrusive or dictatorial, many students reported preferring that I provide even more feedback, especially in the form of PowerPoint presentations and even lecture notes, to make sure they grasped the most important aspects of the course. In short, many students reported preferring a more top-down approach to discipline and control. And although a relatively small number of students indicated such preferences, it does suggest that, given different learning styles, some students may find the more free-form constructivist model of pedagogy more difficult. However, such demands may need to be balanced with the goal of fostering greater self-discipline in students.

What has emerged from this experience is that despite some problems, Web 2.0 technologies offer a promising and tantalizing possibility for making the concepts of research methods courses more accessible and transparent for many students. In addition to making it possible for students to carry out fairly complex projects, these tools helped to foster a sense of community among students and helped them to see themselves as active knowledge creators and contributors. Clearly, instructors must continue to be aware of the way students actually work to ensure a proper match between technological tools and assignments; since many students are likely to wait until the last minute to complete an assignment, this fact may well determine how to structure an assignment. And while we must continue to consider differential levels of technology use among students as well as cultural contexts (Bruce 2005), I firmly believe that the technological tools employed as described offer students the chance to engage with the central concepts of research methods in a manner that would not have been possible in a more conventionally structured course. Moreover, the collaborative nature of these tools bolstered student confidence, helping to foster a sense of ownership and pride in their own contributions to the overall project, which could be readily identified by students and myself; students often proudly pointed out which questions were “hers” or “his” at various times throughout the data collection and analysis phase of the project.

Additionally, the sense that educational institutions continue to play technological catch-up persists, as an overwhelming number of students preferred even these ad hoc digital methods of course organization over those of Blackboard. When asked if they would have preferred to work more in Blackboard, only eight students (17 percent) answered “Yes” while forty (83 percent) reported “No.” Those who preferred Blackboard usually did so for the sake of convenience, noting that it would have been easier if this course could be found in the same place as their other courses. Finally, it is clear from student comments that they found learning the technology itself a valuable experience. There is little higher praise than the fact that quite a few students indicated that they intended to continue using Google Docs or had already begun doing so. One student even reported that when another course required group work, he took the initiative to introduce Google Docs to his group as an additional means of collaboration!

References

Alexander, Bryan. 2008. “Web 2.0 and Emergent Multiliteracies.” Theory Into Practice  47, no. 2:150-160. ISSN 00405841. doi: 10.1080/00405840801992371.

Attewell, Paul. 2001. “The First and Second Digital Divides.” Sociology of Education 74, no. 3: 252-259. ISSN 0038-0407.

Becker, Henry J. and Margaret M. Riel. 2000. “Teacher Professional Engagement and Constructivist-compatible Computer Use.” Teaching, Learning, and Computing: 1998 National Survey, Report #7. Centre for Research on Information Technology and Organisations, University of California, Irvine. http://ed-web3.educ.msu.edu/digitaladvisor/Research/Articles/becker2000.pdf.

Benson, Denzel E., Wava Haney, Tracy E. Ore, et al. 2002. “Digital Technologies and the Scholarship of Teaching and Learning in Sociology.” Teaching Sociology 30, no. 2:140-157. ISSN 0092-055X.

Bruce, Bertram C. 2005. Review of Crossing the Digital Divide: Race, Writing, and Technology in the Classroom, by Barbara Monroe. Journal of Adolescent & Adult Literacy 49, no. 1: 84-85. http://www.jstor.org/stable/40009281. ISSN 1936-2706.

Greenhow, Christine, Beth Robelia, and Joan E. Hughes. 2009. “Learning, Teaching, and Scholarship in a Digital Age: Web 2.0 and Classroom Research: What Path Should We Take Now?” Educational Researcher 38, no. 4: 246-259. http://edr.sagepub.com/cgi/doi/10.3102/0013189X09336671. ISSN 0013-189X.

Greenhow, Christine, Beth Robelia, and Joan E. Hughes. 2009. “Response to Comments: Research on Learning and Teaching With Web 2.0: Bridging Conversations.” Educational Researcher 38, no. 4: 280-283. http://edr.sagepub.com/cgi/doi/10.3102/0013189X09336675. ISSN 0013-189X.

Nicaise, Molly, and Michael Crane. 1999. “Knowledge Constructing through Hypermedia Authoring.” Educational Technology Research and Development 47, no. 1: 29-50. http://www.springerlink.com/index/10.1007/BF02299475. ISSN 1042-1629.

Scardamalia, Marlene, and Carl Bereiter. 2006. “Knowledge Building: Theory, Pedagogy, and Technology.” In Cambridge Handbook of the Learning Sciences, edited by R. Keith Sawyer. New York: Cambridge University Press, 97-118. ISBN 9780521607773.

Scott, Tony, Michael Cole, and Martin Engel. 1992. “Computers and Education: A Cultural Constructivist Perspective.” Review of Research in Education 18:191. http://www.jstor.org/stable/1167300. ISSN 0091-732X.

Smith, Aaron. 2010. “Home Broadband 2010.” Washington, DC: Pew Internet & American Life Project. http://pewinternet.org/Reports/2010/Home-Broadband-2010.aspx.

Wynn, Jonathan R. 2009. “Digital Sociology: Emergent Technologies in the Field and the Classroom.” Sociological Forum 24, no. 2:448-456.  doi: 10.1111/j.1573-7861.2009.01109.x. ISSN 0884-8971.

Zhang, Jianwei. 2009. “Comments on Greenhow, Robelia, and Hughes: Toward a Creative Social Web for Learners and Teachers.” Educational Researcher 38, no. 4: 274-279. doi: 10.3102/0013189X09336674. ISSN 0013-189X.

 

About the Author

Kate B. Pok-Carabalona is a doctoral candidate in sociology at the Graduate Center, City University of New York (CUNY). Her interests include immigration, race and ethnicity, comparative urban contexts, and the changing role and impact of technology; her dissertation uses in-depth interviews to examine how contexts structure the integration and subjective experiences of Chinese second generation immigrants living in Paris and New York. She has received funding from a Social Science Research Council (SSRC) summer grant, CUNY Doctoral Dissertation Fellowship, and a CUNY Writing Across the Curriculum (WAC) Fellowship. Kate holds a BA in History with concentrations in Classical History and Women’s Studies from Cornell University and an M.Phil. from the Graduate Center.

 

Notes

  1. I did not provide overly extensive feedback on student papers online. Instead, my public criticism tended to be restricted to errors common to many students, such as suggestions for strengthening thesis statements, grammar and syntax corrections, and citation mistakes. Since such errors were committed by most of the students, few felt embarrassed by them—in fact, many expressed relief that they were not alone in their errors. In addition, I also made public suggestions about how a research project might be conducted and pointed out when a research question needed to be revised or restructured because it could not be answered as currently framed. These last two types of public comments directly address one of the central goals of research methods courses—to teach students how to formulate and frame research questions so that they can be answered using common social science research methodologies. Students tended to find such comments among the most useful; in effect, this exercise gave them practice considering the weaknesses and strengths of many research questions rather than just their own. Nevertheless, I also elected to send the most detailed and critical comments to students individually to alleviate any embarrassment.
  2. Although scant research exists on how many research methods classes implement the kind of semester-long project I describe in this paper, I believe that I am correct in my assertion that few classes do. I came to this conclusion based on reviewing numerous research methods syllabi as well as on personal networks of faculty and graduate students who teach research methods at CUNY. This is not to say that no classes require students to gather data or complete group projects, but the scale and form of these projects tend to be smaller or limited to unobtrusive observation. I have never heard of a class that collaboratively created a survey or carried out research projects like the ones described in this paper.
  3. The exception of course is research projects using secondary or existing data and artifacts.
  4. According to McManus (2005), the term “Web 2.0” was first coined in 2004.
  5. A Google Calendar was used in lieu of a traditional syllabus, but students did not actually use this tool in the sense that they did not add to it or manipulate it in any way. Since I used it as a syllabus to add readings, due dates, and reminders, students primarily used it to keep track of assignments.
  6. I’ve continued to use the blog in more recent classes so it has grown quite a bit.
  7. For example, an article in the New York Times reporting statistics on unemployment should lead students to consider the strengths and weaknesses of the methodology employed in the data collection. Such assignments are not entirely novel in research methods courses.
  8. There were also other opportunities to earn extra credit so that students who might be less enamored with technology would still have the opportunity to improve their grades.
  9. The grade split for the eight students was three As, two Bs, and three Cs.
  10. Grammar and syntax errors for students’ quotes have been retained.
  11. The Honors College program is a specialized program instituted throughout CUNY’s four-year colleges. The stated purpose of the program is to attract high-achieving students and to “raise the academic quality” throughout CUNY. By and large, the Honors College does attract very talented students; many of the students in the Honors College have competitive GPAs and SAT scores. However, the program has also been controversial in that students in the Honors College program also receive benefits such as free tuition, room and board in some cases, laptop computers, and educational stipends, to name only a few. That these students tend to come from wealthier backgrounds than their non-Honors College peers has led to charges that such benefits and attention are being unfairly allocated to those students least likely to need them, at the expense of the broader student body.
  12. At this juncture, we also transitioned to using Google Docs almost exclusively to avoid the problems associated with wiki mediums. Unlike wikis, Google Docs allows for simultaneous editing with as many as 50 people at the same time and can be shared with as many as 200 people.
  13. Again, although they did not actually create the physical online survey, the design, format, questions, etc. were essentially their work. I did little more than translate their questions into a digital survey form.
  14. Another advantage of using Google Forms is that the data is collected in a spreadsheet that can be relatively easily exported to SPSS for further analysis.
  15. See http://docs.google.com/support/bin/answer.py?hl=en&answer=44680
  16. In many cases, SPSS is taught in conjunction with Introduction to Statistics classes where students learn in greater detail to analyze quantitative data. For this class, students formulated questions and relationships between variables that they thought might exist, and if it was possible to run such an analysis, I ran the analysis for them in class.
  17. In my current course, the blog is much better integrated and students’ critical thinking as well as the diffuse nature of knowledge is even more evident.
  18. As of August 2011, Google has started to offer page-level permissions, see: http://googledocs.blogspot.com/2011/08/better-control-in-google-sites-with.html and https://support.google.com/sites/bin/answer.py?hl=en&answer=1387384&topic=1387383&rd=1
  19. Some might suggest that the Blackboard (Bb) course management system may be a better alternative, but I have seen little evidence of this. Bb may have wiki, discussion forum, and even blogging functionality, but compared to the existing services described in this paper, these functionalities are poorly implemented and in many ways less robust. Moreover, academic work in a content-management system (CMS) service such as Bb is fundamentally different from creating work in a public environment. For a more detailed discussion of these differences, see Web 2.0 and Emergent Multiliteracies (Alexander 2008). When asked, my students overwhelmingly report a preference for the digital structure that I developed over that of Bb.

Talking with Students through Screencasting: Experimentations with Video Feedback to Improve Student Learning

Riki Thompson, University of Washington Tacoma
Meredith J. Lee, Leeward Community College

Abstract

Changing digital technology has allowed instructors to capitalize on digital tools to provide audiovisual feedback. As universities move increasingly toward hybrid classrooms and online learning, consequently making investments in classroom management tools and communicative technologies, communication with students about their work is also transforming. Instructors in all fields are experimenting with a variety of tools to deliver information, present lectures, conference with students, and provide feedback on written and visual projects. Experimentation with screencasting technologies in traditional and online classes has yielded fresh approaches to engage students, improve the revision process, and harness the power of multimedia tools to enhance student learning (Davis and McGrail 2009, Liou and Peng 2009). Screencasts are digital recordings of the activity on one’s computer screen, accompanied by voiceover narration, and can be used in any class where assignments are submitted in some sort of electronic format. We argue that screencast video feedback serves as a better vehicle than traditional written comments for in-depth explanatory feedback that creates rapport and a sense of support for the writer.

 

“I can’t tell you how many times I’ve gotten a paper back with underlines and marks that I can’t figure out the meaning of.”

–Freshman Composition Student1

Introduction

The frustration experienced by students after receiving feedback on assignments is not unique to the student voice represented here. Studies on written feedback have shown that students often have difficulty deciphering and interpreting margin comments and therefore fail to apply such feedback to successfully implement revisions (Clements 2006, Nurmukhamedov and Kim 2010). A number of years ago one of us participated in a study about student perceptions of instructor feedback. The researcher interviewed several students, asking how they interpreted her feedback and what sorts of changes they made in response to the feedback (Clements 2006). Students reported that some comments were indecipherable, others made little sense to them, and some were disregarded altogether.2

Clements (2006) suggests that the disconnect between feedback and revision is complicated by a number of factors, including the legibility of handwriting and editing symbols, which sometimes read more like chicken scratch than a clear message. Students usually did their best to interpret the comments rather than ask for clarification. Other times, students made revision decisions based on a formula that weighed the amount of effort required against the grade they would receive. In other words, feedback that was easier to address gained priority, and feedback that required deep thinking and a great deal of cognitive work was dismissed. Sometimes these decisions were made out of sheer laziness. Other times students’ lack of engagement with feedback was a strategic triage move to balance the priorities of school, work, and home life. These findings motivated us to find more effective ways to provide feedback that students could understand and apply to improve their work.

We both rely upon a combination of written comments and conferences to provide feedback and guidance on student work-in-progress, but we find that written comments make it too easy to mark every element that needs work rather than highlight a few key points for the student to focus on. We often struggled to limit our comments to avoid overloading our students and making feedback ineffective, as research in composition studies shows that students get overwhelmed by extensive comments (White 2006). After years of using primarily written comments to respond to student papers, we were often frustrated by the limits presented by this form of feedback.

Wanting to intellectually connect with students and explore ideas collaboratively while reading a paper, we are often having a conversation in our own heads, engaging the text and asking questions. We experience moments of excitement when we read something that engages us deeply. We think, “Wow! I love this sentence!” or “Yes! I completely agree with the argument you’re making,” or “I hadn’t thought of it that way before.” We also ask questions: “What were you thinking here?” or “Why did you start a new paragraph here?” in hopes that the answers appear in the next draft. Unfortunately, written comments often compress these reactions into concise, complex explanations that students find difficult to unpack. That is, the necessary supplemental explanation that students require for meaning-making remains largely in our heads rather than appearing on student papers. Thus, we wanted to make the feedback process more conversational, less confusing, and less intimidating for students, especially in online classes.

In both of our teaching philosophies, our primary motivation as writing teachers is to help our students improve upon their own ideas by revising their writing and utilizing feedback from us and their classmates. Thus, we recognize that our feedback needs to be personalized and conversational in nature. We don’t want our feedback to be perceived as a directive, which we know results in students focusing all their energy on low-priority errors rather than considering global issues. Instead, we want our feedback to inspire students to think about what they’ve written and how they might write it in a way that is more persuasive, clearer, or more nuanced for their intended audience. Moreover, we want their writing to be intentional; we don’t want students to think writing should be merely a robotic answer to an assignment prompt. With goals such as these, it’s no surprise that we found traditional feedback methods insufficient. We teach students that argumentation is about responding to a rhetorical situation—joining the conversation, so to speak—and yet our written feedback was not effectively serving that purpose.

To remedy this problem, we experimented with screencasting technology as a tool to provide students with conversational feedback about their work-in-progress. Screencasts are digital recordings of the activity on one’s computer screen, accompanied by voiceover narration. Screencasting can be used by professors in any class to respond to any assignment that is submitted in an electronic format, be it a Word document, text file, PowerPoint presentation, Excel spreadsheet, Web site, or video. In exploring screen capture software (SCS), we found that screencasting has most commonly been used pedagogically to create tutorials that extend classroom lectures.

Screencasting has been used as a teaching tool in a variety of fields, with mostly positive results reported, specifically in relation to providing students with information and creating additional avenues of access to teaching and materials. In the field of chemical engineering, screencasting has served as an effective supplement to class time and textbooks (Falconer, deGrazia, Medlin, and Holmberg 2009). A study of student perceptions and test scores in an embryology course that used screencasting to present lectures demonstrated enhanced learning and a positive effect on student outcomes (Evans 2011). Asynchronous access to learning materials—both to make up for missed classes as well as to review materials covered in class—is another benefit of screencasting in the classroom (Vondracek 2011, Yee and Hargis 2010). An obvious advantage for online and hybrid classrooms, this type of access to materials also creates greater access for brick-and-mortar universities, especially those that serve nonresidential and place-bound student populations. Research on screencasting in the classroom is limited, but so far it points to this technology as a powerful learning tool.

While most of the research on screencasting shows positive results for learning, such studies focus on how this digital technology serves primarily as a tool to supplement classroom instruction; no research has yet shown how it can be used as a feedback tool that improves learning (and writing) through digitally mediated social interaction. This study examines the use of, and student reactions to, what we call veedback, or video feedback, as a means of providing guidance on a variety of assignments. We argue that screencast video feedback serves as a better vehicle than traditional written comments for in-depth explanatory feedback that creates rapport and a sense of support for the writer.

Literature Review

Best practices in writing studies suggest that feedback goes beyond the simple task of evaluating errors and prompting surface-level editing. The National Council of Teachers of English (NCTE) position statement on teaching composition argues that students “need guidance and support throughout the writing process, not merely comments on the written product,” and that “effective comments do not focus on pointing out errors, but go on to the more productive task of encouraging revision” (CCCC 2004). In this way, feedback serves as a pedagogical tool to improve learning by motivating students to rethink and rework their ideas rather than simply proofread and edit for errors. At the 2011 Conference on College Composition and Communication, Chris Anson (2011) presented findings from a study of oral- versus print-based feedback, arguing that talking to students about their writing provides them with more information than written comments.

The task of providing comments that students can engage with remains a challenge, especially when feedback is intended to help students learn from their mistakes and make meaningful revisions. Not only for composition instructors but also for any instructor who requires written assignments, providing students with truly effective feedback has long been a challenge both in terms of quality and quantity. Notar, Wilson, and Ross (2002) stress the importance of using feedback as a tool to provide guidance through formative commentary, stating that “feedback should focus on improving the skills needed for the construction of end products more than on the end products themselves” (qtd in Ertmer et al. 2007, 414). Even when it provides an adequate discussion of the strategies of construction, written feedback can often become overwhelming.

Written comments usually consist of a coded system of some sort, varying in style from teacher to teacher. Research about response styles has shown that instructors tend to provide feedback in categorical ways, with the most common response style focused primarily on marking surface features and taking an authoritative tone to objectively assess right and wrong in comments (Anson 1989). Writing teachers, for example, tend to use a standard set of editing terms and abbreviations, although phrases, questions, and idiosyncratic marks are also common. According to Anson (1989), other teachers used feedback to play the role of a representative reader within the discourse community, commenting on a broad range of issues, asking questions, expressing preferences, and making suggestions for revision. Comments can be both explicit–telling students when an error is made and recommending a plan of action–and indirect, implying that something went well or something is wrong. In this way, indirect feedback seems a bit like giving students a hint, similar to the ways in which adults give children hints about where difficult-to-find Easter eggs might be hidden in the yard. Although the Easter egg hunt is intended to challenge children to solve a puzzle of where colorful eggs might be hidden from view, adults provide clues when children seem unable to figure out the riddle. In other words, adults give guidance when children seem lost, similar to the ways instructors give guidance to students who seem to have veered off track.

Written feedback tends to be targeted and focused, with writers filtering out the extraneous elements of natural speech that may further inform the reader/listener. All communication—whether it be written or spoken—is intrinsically flawed and problematic (Coupland et al. 1991), such that the potential for miscommunication is present in all communicative exchanges. Thurlow et al. (2004, 49) argue that nonverbal cues such as tone of voice usually “communicate a range of social and emotional information.” Everyday speech is filled with hesitations, false starts, repetitions, afterthoughts, and sounds that provide additional information to the listener (Georgakopolou 2004). Video feedback allows instructors to model a reader response, with the addition of cues that have the potential to help students take in feedback as part of an ongoing conversation about their work instead of a personal criticism. We recognize that this claim assumes that an instructor’s verbal delivery is able to mitigate the negativity that a student may interpret from written comments and that the instructor models best practices for feedback regardless of medium.

Serving as a medium that allows instructors to perform a reader’s response for students, digital technology can be an effective tool to continue the conversation about work-in-progress. By talking to students and reading their work aloud, instructors can engage students on an interpersonal level that is absent in written comments. It’s about hearing the reader perform a response full of interest, confusion, and a desire to connect with the ideas of the writer. This type of affective engagement with student work is something that students rarely see, hear, and sense—the response from another reader that’s not their own. Veedback offers students an opportunity to get out of their heads and hear the emotional response that is more clearly conveyed through spoken words than writing.

Thus, audiovisual feedback has the potential to motivate students and increase their engagement in their own learning, rather than just to assess the merits of a written product or prompt small-scale revision. Holmes and Gardner (2006, 99) note that student motivation is multifaceted within a classroom and point to “constructive, meaningful feedback” as characteristic of a motivational environment. Changing digital technology has allowed instructors to capitalize on new or evolving digital tools in creating that motivational environment.

As universities move toward hybrid classrooms and online learning and consequently make investments in classroom management tools and communicative technologies, communication with students about their writing is also transforming. Instructors in all fields are experimenting with a variety of tools to deliver information, present lectures, conference with students, and provide feedback on written and visual projects.

Experimentation with digital technologies in traditional and online composition classes has yielded fresh approaches to engage student writers, improve the revision process, and harness the power of multimedia tools to enhance student learning (Davis and McGrail 2009, Liou and Peng 2009). By employing screencast software as a tool to talk to students about their work-in-progress, we are adding another level of interpersonal engagement—palpably humanizing the process.

Our Pedagogy

Because inquiry and dialogue are foundational to our pedagogical practice, writing workshops, teacher-student conferences, and extensive feedback in which we attempt to take on the role of a representative reader are common in our courses. Although we each work hard not to be the teacher who gives students feedback they don’t understand, we know that, more often than we would like to admit, we too sometimes use underlines and marks that make little sense to our students, as this paper with written comments demonstrates (Figure 1).

Figure 1. A student paper with written comments.

Even after we inform students of our respective coding systems, many remain confused. This example is one instructor’s chart of editing marks given to students with their first set of written feedback (Figure 2).

Figure 2. A chart of editing marks given to students with their first set of written feedback.

We know students are confused by written comments because some come to office hours and share their confusion over our statements and questions. Many confirm that they don’t really know what to do with the comments, how to improve their work, or how to transfer their learning to the next assignment or draft. Students’ difficulty in decoding comments may be based on their expectations of feedback as directive rather than collaborative and conversational. Moreover, students’ prior (learned) experiences with feedback may color the way they read and respond to comments. That is, many students expect directive feedback and believe that the appropriate response is merely to edit errors and/or delete sections that are too difficult to revise. Thus, students feel confused (and frustrated) when a comment does not yield a specific solution that fits into the paradigm of “what the teacher wants.”

Although we both require student conferences (in person, or digitally mediated via phone, Skype, or Blackboard Collaborate) as one of the most important pedagogical tools for improving student writing, we acknowledge the limitations of conferences as the primary means of giving feedback. Time is the most obvious obstacle. While allowing the most personalized instruction for each student, one-on-one student-teacher conferences are labor-intensive for the teacher. Conferences are usually held only twice in a sixteen-week semester (or ten-week quarter) and are characterized by a non-stop whirlwind of twenty-minute appointments. For those teaching at nonresidential university campuses and community colleges, requiring students to schedule a writing conference outside of class time is even more challenging, as most students are overextended with jobs and family responsibilities. The most important feature of writing conferences is their dialogic nature–the conversation about the work-in-progress and the collaborative planning about how to make improvements. Acknowledging both the effectiveness and limitations of face-to-face conferencing, we considered alternatives to the traditional writing conference.

Initially, one of us experimented with recording audio comments as a supplement to written comments and an extension of the writing conference, but was not satisfied with the results. This method requires the instructor to annotate a print-based text (which is problematic for online courses and digitally mediated assignments) in addition to creating a downloadable audio file. The separation of the annotated text from comments can create logistical problems for students finding and archiving feedback and create extra work for the instructor providing it.

When we discovered screencasting, we began to experiment with this digital tool as an alternative form of feedback. We each employed Jing screen-recording software to record five minutes of audiovisual commentary about a student’s work. This screencasting software enabled us to save the commentary as a Flash video that could be emailed or uploaded to an electronic dropbox. This screenshot shows what appears on the screen for students when they click a link to view video feedback hosted on the Dropbox site.

Opportunities and Obstacles

New methods of delivering instruction, such as in hybrid or online courses, create a need to solve the feedback dilemma in a variety of ways. We believe a key component of effective feedback is the collaborative nature of conversation built upon a rapport cultivated in “normal” classroom interaction. However, with limited (or no) face-to-face time between instructor and student (or between student and student), creating a collaborative environment conducive to writing is a challenge, as the tone of the class is often set by the “performance” of the instructor during class. In online environments, students cannot see or hear their instructors or their classmates, which can potentially stifle the creation of a positive learning community. The face-to-face experiences of the traditional classroom allow students to develop rapport with a teacher, which can mitigate the feeling of criticism associated with formative feedback.

Without these face-to-face experiences, students in online classes are more likely to disengage from course content, assignments, their instructor, and their classmates. This increased tendency to disengage is evidenced in the lower completion rates for online classes: according to a Special Report by FacultyFocus, “the failed retention rate for online courses may be 10 to 20 percent higher than for face-to-face courses.” Duvall et al. (2003) link the lack of engagement by students in online courses to the instructor’s “social presence,” stating that “social presence in distance learning is the extent that an instructor is perceived as a real, live person, rather than an electronic figurehead.” Research shows that the relationship between student and teacher is often an important factor in retention (Community College Survey of Student Engagement n.d.; NSSE n.d.); this relationship is a compelling argument for seeking socially interactive ways to respond to our students’ work.

While multimedia technology has allowed instructors to create more “face time” with students in an online class, technological savvy does not automatically translate into greater social presence. While we would agree that any use of audio/video formats in an online class contributes to creating a learning community, video lectures are no more personal than face-to-face lectures are. In providing feedback on individual students’ writing, we are engaging in a conversation with our students about their own work—a prime opportunity to personalize instruction to meet student needs (also called differentiated instruction).

Logistically, screencasting has its challenges: additional time at the computer, the need for a quiet place to record the videos, and limited storage space. But we both discovered ways to mitigate those challenges. One author found that this medium confines the instructor to a quiet space; a noise-cancelling headset allowed her to be mobile while using this feedback method. The other author ran up against limited storage capacity on her institution’s server and created alternative means of delivery and archiving by giving students the option of receiving video files via email, downloading and deleting files from the dropbox, or accessing videos via Screencast.com, which is not considered “private” by her institution.

Initially, the process was time-consuming because it was difficult to get out of the habit of working with a hard copy; we each initially wrote comments or brief notations on a paper (or digital) version as a basis for the video commentary. Keeping to the five-minute time limit was also a challenge, but the time limit also helped us to focus on the major issues in students’ writing rather than on minor problems. Perhaps most importantly, as we have become accustomed to the process, it takes us less time to record video comments than when we started using screencasting for feedback. Moreover, positive student response has encouraged us to be innovative in addressing the drawbacks.

Veedback allows instructors to move the cursor over content on the screen and highlight key elements while providing audio commentary. Two samples (a response paper [Video 1] and an essay draft [Video 2]) show how instructors can take advantage of the audiovisual aspects of screencasting to engage students in learning.

Video 1 (Click to Open Video). The instructor highlights key elements while providing audio commentary on a response paper.

Video 2 (Click to Open Video). A student essay draft.

This sample shows how, after providing commentary within a student paper, instructors can discuss overall strengths and weaknesses by pasting the evaluation rubric into the electronic version of the student essay and marking ranges (Video 3).

Video 3 (Click to Open Video: http://dl.dropbox.com/u/37665637/talking%20about%20assignment%20in%20relation%20to%20evaluation%20criteria.mp4). The instructor has pasted the evaluation rubric into the electronic version of the student essay and marked ranges.

One of the many ways in which we used screencasting was to give feedback about work-in-progress that was posted to online workspaces, such as a course blog or discussion board. In this case, students posted drafts of their thesis statements for their essays on the blog, and we responded to them in batches, linking to the feedback on the course blog (Figure 3).

Figure 3. The instructors responded to thesis statements in batches and linked to the feedback on the course blog.

This method gave students access to an archive of feedback through the course blog and allowed for an extension of in-class workshops about work-in-progress to help students focus their research essays.

This snippet from one of the ten-minute videos mentioned above shows how one of the authors uses the audiovisual medium to let students see and hear their writing simultaneously (Video 4).

Video 4 (Click to Open Video: http://dl.dropbox.com/u/37665637/Feedback%20on%20thesis%20statement%20draft%20on%20course%20blog-representing%20the%20reader.mp4). A snippet of a video response to student work.

We have also found veedback to be especially useful for presentations because screencasting software allows us to start a conversation about the impact of visual composition and to manipulate the original document to present alternatives. In this particular example, the instructor used a sample presentation for an in-class workshop and ran screencasting software to provide an archive of notes that students could access when they were ready to revise (Video 5).

Video 5 (Click to Open Video: http://dl.dropbox.com/u/37665637/Workshop%20on%20presentations.mp4). A sample veedback presentation.

Methods

Screencasting was used in five sections of college-level writing courses by two instructors. Students from two sections of one author’s research and argument course were surveyed about screencasting feedback on essay drafts and PowerPoint presentations. In the second author’s three online sections of a research and writing course, students were informally asked about the use of veedback, and one online section was surveyed. The screencasts were produced on PCs using Jing software to create individual Flash movie files that were shared and posted to the classroom management system for student access. Both instructors also used screencasting as an extension of classroom lectures by offering mini-workshops on specific aspects of writing and providing tutorials for assignments and software use. Veedback was used instead of written comments, not in addition to them; in other words, assignments that received veedback did not receive written comments—only some highlighting or strikethrough font in the file versions returned to students. We employed a color-coding system to differentiate between types of comments: for example, yellow highlighting might signal grammatical errors while green highlighting marks problems with content or interpretation, as shown in this example of veedback on an annotated bibliography assignment.

Students in the first author’s classes were asked to fill out an optional, anonymous, web-based questionnaire that would provide feedback about the course. Along with questions that asked students to reflect on their ability to meet the learning objectives for the course, the midquarter surveys also included specific questions related to particular assignments, activities, or teaching technologies that were added to the course. During that quarter, this author added an additional question, eliciting a short response of 500 words maximum, targeted to student perceptions of using videos as a method of feedback. The questionnaire prompted students to speak about their experiences with Jing videos for the two particular assignments in which it was used and to specifically address whether it was beneficial (or not) to their learning. The final question was “Please tell me about your experience getting feedback through Jing screen capture videos on a response paper and your presentation. How did it improve your learning (or not)?”

The data set for this survey is limited as it was elicited from two sections of the same 200-level writing course at one institution, with a maximum potential of 40 respondents. Thirty-two students took the online survey; 22 responded to the short answer questions, 3 of whom commented only on digital classroom tools other than Jing. An additional data set was elicited from one section of a similar 200-level course at a second institution, with a maximum potential of 16 respondents. All 11 students who participated in that survey provided short answer responses about the veedback. Six respondents also commented on the use of videos for instruction. Thus, the data used in this paper come from 30 short answer responses, which were analyzed using content analysis. A number of key themes emerged and are discussed below. Most students who responded about Jing were extremely positive and found it beneficial to their learning. A few students, including those who found it beneficial, spoke of hearing and seeing through this digital tool as enhancing their learning.

With only two out of 30 students stating a preference for comments “written down,” Jing comments received rave reviews as a form of feedback that aided student learning. Student preference for this type of feedback suggests the importance of delivering feedback through multiple modes, combining the auditory, visual, and kinesthetic. Many spoke directly to the importance of auditory feedback as a key factor that contributed to their learning, and others claimed that the auditory in combination with the visual made the difference. Many students implied that the auditory explanations, coupled with the visual representation of their essay, gave them enough information to make meaningful revisions and apply the feedback.

Students overwhelmingly included statements like “I like the Jing screen capture videos a lot” and “I think the Jing videos are very helpful.” Some students compared this video feedback to traditional written comments, focusing on the shortcomings of written comments rather than fully explaining the advantages of the new form: “It felt as if I was talking with them – a much more friendly review rather than harsh critique.” In these comments, student preferences were implied and, therefore, were analyzed for meaning.

Can You Hear Me Now? Can You See It Now?

Inherent in the student-teacher relationship is a power differential in which the teacher holds authority and the student is positioned as somehow deficient and in need of correction. Students expect correction from teachers, not dialogue about their work. Oftentimes, tone of voice is obscured in written comments, forcing students to imagine the teacher in their heads, and this imagined teacher often sounds harsh and punishing. For example, we might ask questions in our margin comments that are genuine questions. While we might be looking for further explanation or description, students might read these questions as rhetorical, not to be answered: flat statements that they made some unconscionable mistake that should not appear in a future version or assignment. Anything written in the margins is the “red marker that scolds” (White 2006). Using one’s actual voice makes tone of voice apparent. Audio feedback erases the red pen and replaces it with the sound of a human conveying “genuine” interest in the ideas presented. By giving veedback, we are able to use a conversational tone to talk about writing with students. We are able to share how their writing sounds and offer a variety of options.

Students overwhelmingly pointed to auditory feedback as beneficial to their learning. “Hearing” what the teacher was saying was the most important reason that screencasting was found to be such a successful feedback tool, with many students stating a preference for hearing someone’s voice going through their paper.

“Being able to hear your explanations was very helpful.”

“The fact that you are hearing somebody’s voice instead of reading words on a piece of paper.”

“Instead of just writing comments it helps hearing the feedback. It helps a lot with knowing what specific things to work on.”

The feedback may be perceived as friendly because students can hear tone of voice, recognizing that we as teachers are encouraging them and not criticizing them. We surmise that students may be gaining a way into the conversation because they hear us talking with them about writing, not preaching or using teacherly discourse.

In commenting on veedback, students pointed to more than just the audio component as valuable to learning; for some, it was the combination of hearing feedback while simultaneously seeing the site where ideas may be re-imagined. These comments pointed to the importance of learning through multiple modes of delivery simultaneously, specifically audio and visual.

“I liked being able to hear you and see my paper at the same time.”

“It’s great to be able to get the feedback while watching it being addressed on the essay itself.”

“It’s one thing to just read your instructors feedback but to be able to see it and understand what you are talking about really helps!”

“I can see and follow the instructor as she reads through my writing with the audio commentaries. It helps me to pin-point exactly what areas need to be corrected, what is hard to understand, which areas I did well on, and which areas could be improved.”

Some students showed metacognition about learning preferences, judging the tool as beneficial to them specifically because they believed themselves to be visual learners who benefited from “seeing” what was being discussed. Reproducing discourse about learning styles, these students took on the identity of self-aware learners.

“This way seemed to be very good for visual learners like myself.”

“I like the capture videos. I’m a visual person.”

Making Connections

A number of students described their confusion and frustration after receiving feedback through traditional methods, demonstrating the challenges of making connections between feedback and learning goals. They contrasted these negative experiences with written feedback with their positive responses to veedback.

“I can’t tell you how many times I’ve gotten a paper back with underlines and marks that I can’t figure out the meaning of.”

“Sometimes when you receive a paper back half of it is decoding what the teacher said before even seeing what was commented on.”

Despite the fact that (written) feedback is intended to communicate important information to students, the end result is often quite the opposite; students feel frustrated, disempowered, and unable to take the necessary steps to apply the comments.

Students noted that veedback simulates the student-teacher conference. Although this form of feedback conveys only one side of the conference, from teacher to student, its conversational nature is clear. Students picked up on this intention, calling veedback an “interactive” form of feedback that, unlike a one-to-one writing conference with the teacher, is available beyond office hours (24/7).

“it’s like a student-teacher one-on-one conference whenever I can get the time. Very helpful.”

“I really love the Jing screen capture videos that you have given as feedback. It’s very interactive and has helped me a lot. Thank you.”

“It helped my learning by answering questions I had about my writing.”

“Video feedback helped me to better improve my work because it was almost like a classroom setting that allowed the teacher to fill in the interaction gaps without actually having an in-class setting. Not only that, the information could be replayed repetitively, allowing me to review them and reflect on them once I need help with my work.”

While veedback does not allow students to ask questions as they would in a face-to-face, phone, or video conference, hearing the voice of the teacher going through the paper does give students the sense that they can ask more questions because it establishes a personal connection and rapport, creating a sense of availability.

Veedback does more than allow teachers to create a more personal mentoring connection with students; it allows us to take advantage of digital technologies–often thought to dehumanize interaction–to personalize instruction beyond the classroom. Many students noted that, unlike written comments that rely upon brief descriptions, video comments improved their learning primarily because teachers provided deeper explanations.

“I think the Jing videos are excellent because they help me understand a lot better as to what I need to revise. They are a lot better and more helpful than regular comments.”

“It did help my learning, i was able to understand what i was doing wrong, and how to fix it.”

“I think I received more detailed feedback than I might have from written comments.”

“I like it better than normal comments because I can hear your thought process when you are making a comment so it is easier to understand what you’re trying to say.”

Students stated that explanations within video feedback made the thought process of the reader visible, allowing them to identify problems. Thus, veedback provided students with greater guidance about how to improve. One of the second author’s online students stated that veedback “felt like you were explaining it to me,” not just pointing out mistakes. In this way, veedback engages the student in ongoing learning rather than grade justification. Moreover, it invites a response and frames revision as re-vision (seeing again), not as merely changing a text to suit whatever the teacher wants.

It is important to clarify here that it is the audio part of veedback that allows students to hear tone, a skill many students find difficult. Moreover, the medium of audio comments encourages students to think of feedback as a conversation. Inexperienced or less experienced (student) writers tend to conflate medium with tone, register, purpose, and so on. That is, students often perceive written comments as directive—even when these comments are phrased as questions to consider or presented as guidance for revision. Veedback allows instructors to convey tone in both what they say and how they say it, thereby increasing the likelihood that students will understand our comments to be part of an ongoing negotiation between the meaning-making a reader enacts and the intended meaning writers attempt to create. While it is possible to transcribe spoken comments into written form, we posit that it is in hearing our voices that students are engaged in the conversation.

Accessing and Applying Veedback

The problem with traditional margin comments isn’t necessarily the marks themselves but the disconnect between what teachers communicate and how students interpret that feedback. Teachers comment on assignments in hopes of reaching students by providing feedback about what worked well (if a student is lucky) and what went wrong. Feedback is frequently given merely as a form of assessment–justification for a grade. Regularly, feedback is provided after an assignment is completed, with the expectation that a student will be able to transfer knowledge about what went wrong and what needs to be done right the next time. Students are expected to fill in the gaps of their own knowledge. If students are lucky, feedback is given on early attempts (practice activities or essay drafts) to provide guidance, helping those who have lost their way to find their way back to the path.

Although most reviews of screencasting in the classroom have been positive, a recent study in the field of computer science found no significant effect of screencasts on learning (Lee, Pradhan, and Dalgarno 2008), and another uncovered pedagogical challenges of integrating screencasting (Palaigeorgiou and Despotakis 2010). These critical reviews help us to see that this technology is not a panacea. As with other learning technologies, many of us are quick to see the benefits without fully assessing the problems they present for learners. Many of the problems faced by the computer science students in the first study, such as access, speed, and uncertainty about how to use the tool, were also experienced by our writing students.

Although there are increasing expectations—for instructors as well as students—to use digital tools, sometimes there are additional obstacles based on students’ lack of digital literacy in new media that go beyond typical social networking and entertainment-based tools. The free version of Jing creates SWF files that require a Flash player to open and often requires students to specify which program to open the file with. For the click-and-open generation, this has proved to be a challenge. Alternative software programs include options to save video files in the MP4 format, which can be more easily opened or played on other media devices (such as iPods). However, MP4 files are larger than SWF files, which presents other problems for downloading and/or uploading.

Technological difficulties were among the primary obstacles to using video feedback. Students participating in the survey overwhelmingly liked veedback, but some complained of difficulties accessing and/or using the technology. Despite written instructions and campus resources providing students with help using academic technologies, two of the nineteen respondents said that they didn’t even know how to access the videos. Because the survey was anonymous, the instructor had no way of knowing which students had problems with access.

“Jing feedback videos and [Dropbox] comments still do not work on my end. I have talked with tech guys and they can’t figure it out. I can’t find out how I did and ways to improve my writing.”

“I like the videos but they were really hard to get them to work.”

“Sometimes it’s hard to open the videos.”

“I have no clue what Jing Feedback Video is and if I got a comment back it may have not opened because I tried to open some of the comments you left, but they would not open for me.”

“I think all the tech we use in class is great, but I have to teach myself how to use it :)”

The technological problems faced by these students resemble the difficulties faced by students unable to decipher the comments on the written page. That is, the technology acted as a barrier between our students and the conversation we tried to enact, in the same way that written comments—both marginal and end comments—are themselves a barrier to the rich conversation they are meant to convey. Until they asked for help or clarification, both groups of students–those with technological problems and those puzzling over written comments–remained in the dark, unable to access the feedback in any useful way.

After assessing video feedback in our early classes, we were surprised to learn that technological issues were not always the obstacle to learning. At times, the obstacle was instead students’ difficulty understanding how to utilize the feedback in their revision process. Although only two respondents stated a preference for written feedback, their complaints bring to light important issues of how students access and apply feedback to make improvements to their work.

“personally i don’t like the jing videos. i’d rather have the comments written down so that I can quickly access the notes and not have to keep track of just where in the video a certain comment is.”

“Written feedback helps more because I get to see the description and review it again if I need to. It is more easier for me to see it written out than video”

These complaints about video feedback can be compared to the specific problems described in the studies of computer science courses (Lee et al. 2008, Palaigeorgiou and Despotakis 2010). It is apparent from the two comments above that these students’ revision practices operate within a print-based culture. That is, written feedback is a norm within education, and students have the background knowledge and repertoire for working with this mode of feedback, which consequently creates a perception that working with written feedback is easier (even if it is not). Some students, therefore, feel frustrated by unfamiliar modes of feedback and resist new revision practices that require learning new strategies for engaging with feedback. While it is not uncommon for new technologies to be resisted when they require some adaptation, students in other contexts show a propensity to develop strategies to overcome these challenges. Thus, continuing research needs to evaluate whether the potential difficulties of implementing veedback outweigh the benefits for learning.

Through this study, we found that students need instruction on strategies for interacting with written and digitally mediated forms of feedback before they can deeply engage in the revision process. Proposed solutions to improve student learning with video feedback include teaching students how to read and apply feedback, not unlike the ways in which we teach them to interpret the comments we put on a paper. We suggest that teachers encourage students to take advantage of the video format by re-watching sections and pausing when necessary to “digest” comments. We also recommend creating tutorials that demonstrate how to annotate a “hard copy” of the draft while watching the video: highlighting and circling key points, time-stamping the draft to correspond with important places in the video, interpreting video feedback, and paraphrasing teacher comments in the margins. When students write their own comments, they do so in terms they understand and use writing to make sense of their own ideas through the act of rephrasing, reworking, and revising. Students already do the translation work of digesting feedback during class, in student-teacher conferences, and when they sit down to revise their work. What is valuable about students’ comments on their own work is that, in that moment, students are actively engaging in the process of revision (and learning).

Conclusion

Even when students understand what we are saying in our comments, they often don’t know how to reconceive the structure of their writing and change it (that is, they don’t understand how to reconfigure their ideas in their own voice). Many students continue to use templates and try to fill in the blanks, rather than see the model and then use the comments to make decisions about the types of revisions that can be made. In the service of learning, finding richer ways to teach students to engage with the work is of the utmost importance. It is our contention that students should be taught how to apply feedback to improve their work. Feedback that engages multiple learning styles while providing deeper explanation offers the possibility of increased student learning in a variety of higher education contexts.

Screencasting allows instructors to provide students with in-depth feedback and/or evaluation. With response papers and short written assignments, veedback allows the teacher to zoom in and highlight portions for discussion while scrolling through the document. With visually oriented work (e.g., artwork, Web sites, and PowerPoint presentations), instructors can use the mouse to point at key elements while talking about the impact of the student’s choices. We suggest that instructors be mindful of time and create multiple videos if there is a need for extensive feedback; students can then view each video at a different time, even on a different day. How long web-based videos should be is debatable (Agarwal 2011, Scott 2009, SkyworksMarketing 2010), but the need for concision and clarity remains vital for both the student and the instructor. We also recommend that instructors inform students if only certain types of issues will be discussed in a particular feedback session.

Based on our pilot study, the majority of students perceived that they understood video comments in a more meaningful way than written comments. Veedback can be used to perform the “confused reader” instead of the “finger-wagging critical teacher.” A margin comment that says “this is awkward” is different from hearing the sentence read aloud by a real reader. The audio portion of veedback allows for communication that is conversational; in other words, teachers can speak the student’s language with veedback in ways that are absent in written comments. When teaching multilingual speakers, teachers may find that reading sentences aloud models Standard English and possible alternative forms that are commonly spoken. Another way to use veedback is to give students a sense of a reader’s experience, presenting alternatives through visual imagery and analogies.

Video feedback appears effective in engaging students in the revision process: we have noticed that students responding to video feedback attended to big-picture issues, making global revisions rather than merely editing surface-level errors. With video feedback, students hear what is confusing about a sentence (rather than seeing just a phrase identifying the error type) and are therefore more willing to attempt revision. Video feedback provides an opportunity to elaborate on problems in writing assignments, which gives students more direct guidance about how to solve the communication problem.

Although students have responded positively to this multimodal teaching tool, additional studies comparing revisions made in response to written feedback and to video feedback are needed to investigate what specifically makes veedback so compelling. Student interaction with and application of veedback requires further investigation. Furthermore, the assumption that the current generation is more audio/visually oriented (a claim that has yet to be proven) may create external pressure for teachers to incorporate digital media into their teaching before research proves its effectiveness. Debates about pedagogy and technology are intricately tied to these assumptions, which must be interrogated. The question remains whether veedback is in fact more effective in improving student performance, or whether students merely perceive it as such because “it’s not your grandfather’s Oldsmobile.” That is, not only are we not using the scolding red pen, we are also not using any of the traditional feedback methods with which students may have had prior negative experiences.

While redesigning e-learning pedagogy should yield improved student learning, the question of how to measure outcomes will likely remain a source of debate. Although studies have found subtle differences in the impact of technology on student learning, variation in study types and research methodologies continues to leave more questions than answers about the effectiveness of digitally mediated modes of instruction (Wallace 2004), with alternate modes of instructional delivery showing “no significant difference” in student outcomes (Russell 2010). Rather than assessing the effectiveness of e-learning tools like veedback as measured by improved grades, drawing upon the Seven Principles for Good Practice in Undergraduate Education to examine “time on task” (Chickering and Ehrmann 1996) would provide a better indicator of student engagement. We propose that further research utilizing digital tools like Google shared docs would provide an avenue for reviewing writers’ revision histories, allowing researchers to examine the types of revisions students produce in response to different feedback modes during time on task and to garner information about how students engage in revision.

We argue that assessing video feedback solely in terms of performance or the most effective mode of delivery would miss the most important point our research attempts to make. It seems most useful to ask whether it is fruitful to deconstruct the idea of “engagement in the revision process” by discussing “engagement” and the “revision process” on their own terms. Although there are other ways to assess engagement in the revision process, we believe that students’ attitudes about engaging with feedback provide a wealth of data about affective engagement, which gets us closer to understanding what makes our students motivated and, thus, invested in their own learning. While scholars continue to debate effective ways to motivate students, we propose that veedback can be an effective way to address the affective component of motivation. That is, students who are invested in the interpersonal relationship with their instructor/reader are likely to engage in more extensive and/or intensive revision and, consequently, to learn at deeper levels.

One of the shortcomings of our study–the fact that our data on student attitudes cannot be compared to writing samples because the survey tool elicited anonymous responses–highlights the challenges of assessing the impact of video feedback on student learning. In our case, the use of anonymous surveys to elicit honest responses conflicted with a desire to triangulate data, leaving us with more answers about students’ perceptions of their own engagement with feedback than proof that students who claimed that veedback improved their learning did in fact make improvements.

In courses that teach skills acquisition through a cumulative drafting process, a number of variables further trouble the ability to assess the effectiveness of this particular tool. In writing-intensive courses, for example, we might question whether improvement from an early draft to a later draft is a product of the feedback method specifically; when assessing improvement in a class that aims to build skills over a term, supplemental instruction during class (or through online tutorials) and the cumulative effect of skills and knowledge gained between drafts are likely to skew the results. In addition, improvement in the final product (in the form of a revised draft) can differ widely across the data sample, in terms of both classroom dynamics and individual students’ motivation, background knowledge, ability, and commitment to the course.

Future research that attempts to mitigate some of these variables and triangulate the data may provide a more satisfying answer about the effectiveness of veedback. For example, one option that would allow for a comparison between feedback forms within a single class is to use both forms to respond to the same type of assignment (e.g., summaries of two different articles). While this method may eliminate one variable by using the same students, other problems may arise, such as whether the form used later in the quarter provides better results on account of cumulative learning, or whether one of the assignments produces inferior results on account of its content. To compare across classes, researchers may want to use written feedback first in one class and video feedback first in another. While this may allow researchers to compare across classes and mitigate the problem presented by the order of the feedback forms, other variables remain.

While it may be tempting to ask only whether video feedback is superior to traditional modes, we suggest that instructors also consider how this method supplements written feedback through an integration of technology in educational environments (Basu Conger 2005). Because student response to veedback was overwhelmingly positive (despite technological issues, students preferred this form of engagement to traditional written comments), we intend to continue to evaluate how veedback may improve student learning and enrich teaching. The following student comment reminds us that taking the time to innovate with digital teaching technologies is valuable to student learning and doesn’t fall on deaf ears: “It was a very unique feedback process that helped considerably. I know it’s time consuming but more of this on other assignments would be great!”

Bibliography

Agarwal, Amit. 2011. “What’s the Optimum Length of an Online Video.” Digital Inspiration, February 17. http://www.labnol.org/.

Anson, Chris M. 1989. Writing and Response: Theory, Practice, and Research. Urbana, IL: National Council of Teachers of English. ISBN 9780814158746.

———. 2011. “Giving Voice: Reflections on Oral Response to Student Writing.” Paper presented at the Conference on College Composition and Communication, Atlanta, GA.

Basu Conger, Sharmila. 2005. “If There Is No Significant Difference, Why Should We Care?” The Journal of Educators Online 2 (2). http://www.thejeo.com/Archives/Volume2Number2/CongerFinal.pdf.

CCCC. 2004. “CCCC Position Statement on Teaching, Learning, and Assessing Writing in Digital Environments.” http://www.ncte.org/cccc/resources/positions/digitalenvironments.

Chickering, Arthur W., and Stephen C. Ehrmann. 1996. “Implementing the Seven Principles: Technology as Lever.” AAHE Bulletin 49 (2): 3-6. ISSN 0162-7910.

Clements, Peter. 2006. Teachers’ Feedback in Context: A Longitudinal Study of L2 Writing Classrooms. PhD diss., University of Washington. https://digital.lib.washington.edu/researchworks/handle/1773/9322.

Community College Survey of Student Engagement (CCSSE). n.d. http://www.ccsse.org/.

Coupland, Nikolas, Howard Giles, and John M. Wiemann. 1991. “Miscommunication” and Problematic Talk. Newberry Park, CA: Sage Publications. ISBN 9780803940321.

Davis, Anne, and Ewa McGrail. 2009. “‘Proof-revising’ with Podcasting: Keeping Readers in Mind as Students Listen to and Rethink Their Writing.” Reading Teacher 62 (6): 522-529. ISSN 0034-0561.

Duvall, Annette, Ann Brooks, and Linda Foster-Turpen. 2003. “Facilitating Learning Through the Development of Online Communities.” Presented at the Teaching in the Community Colleges Online Conference.

Ertmer, Peggy A., Jennifer C. Richardson, Brian Belland, Denise Camin, Patrick Connolly, Glen Coulthard, Kimfong Lei, and Christopher Mong. 2007. “Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study.” Journal of Computer-Mediated Communication 12 (2): 412-433. http://jcmc.indiana.edu/vol12/issue2/ertmer.html.

Evans, Darrell J. R. 2011. “Using Embryology Screencasts: A Useful Addition to the Student Learning Experience?” Anatomical Sciences Education 4 (2): 57-63. ISSN 1935-9772.

Falconer, John L., Janet deGrazia, J. Will Medlin, and Michael P. Holmberg. 2009. “Using Screencasts in ChE Courses.” Chemical Engineering Education 43 (4): 302-305. ISSN 0009-2479.

Furman, Rich, Carol L. Langer, and Debra K. Anderson. 2006. “The Poet/Practitioner: A New Paradigm for the Profession.” Journal of Sociology and Social Welfare 33 (3): 29-50. ISSN 0191-5096.

Georgakopolou, Alexandra, and Dionysis Goutsos. 2004. Discourse Analysis: An Introduction. Edinburgh: Edinburgh University Press. ISBN 9780748620456.

Holmes, Bryn, and John Gardner. 2006. E-Learning: Concepts and Practice. Sage Publications Ltd. ISBN 9781412911108.

Lee, Mark J. W., Sunam Pradhan, and Barney Dalgarno. 2008. “The Effectiveness of Screencasts and Cognitive Tools as Scaffolding for Novice Object-Oriented Programmers.” Journal of Information Technology Education 7: 61-80. ISSN 1547-9714.

Liou, H.-C., and Z.-Y. Peng. 2009. “Training Effects on Computer-Mediated Peer Review.” System 37 (3): 514-525. doi:10.1016/j.system.2009.01.005.

NSSE Home. n.d. http://nsse.iub.edu/.

Notar, C. E., J. D. Wilson, and K. G. Ross. 2002. “Distant Learning for the Development of Higher-Level Cognitive Skills.” Education 122: 642-650. ISSN 0013-1172.

Nurmukhamedov, Ulugbek, and Soo Hyon Kim. 2010. “‘Would You Perhaps Consider …’: Hedged Comments in ESL Writing.” ELT Journal: English Language Teachers Journal 64 (3): 272-282. doi:10.1093/elt/ccp063.

Palaigeorgiou, George, and Theofanis Despotakis. 2010. “Known and Unknown Weaknesses in Software Animated Demonstrations (Screencasts): A Study in Self-Paced Learning Settings.” Journal of Information Technology Education 9: 81-98. ISSN 1547-9714.

Russell, Thomas L. 2010. “The No Significant Difference Phenomenon.” NSD: No Significant Difference. http://www.nosignificantdifference.org/.

Scott, Jeremy. 2009. “Online Video Continues Ridiculous Trajectory.” ReelSEO: The Online Video Business Guide. http://www.reelseo.com/online-video-continues-ridiculous-trajectory/.

SkyworksMarketing. 2010. “What’s the best length for an internet video?” SkyworksMarketing.com, February 2. http://skyworksmarketing.com/right-video-length/.

Thurlow, Crispin, Laura B. Lengel, and Alice Tomic. 2004. Computer Mediated Communication: Social Interaction and the Internet. Thousand Oaks, CA: Sage. ISBN 9780761949534.

Vondracek, Mark. 2011. “Screencasts for Physics Students.” Physics Teacher 49 (2): 84-85. ISSN 0031-921X.

Wallace, Patricia M. 2004. The Internet in the Workplace: How New Technology Is Transforming Work. New York: Cambridge University Press. ISBN 9780521809313.

White, Edward M. 2006. Assigning, Responding, Evaluating: A Writing Teacher’s Guide. Boston: Bedford/St. Martin’s. ISBN 9780312439309.

Yee, Kevin, and Jace Hargis. 2010. “Screencasts.” Turkish Online Journal of Distance Education 11 (1): 9-12. ISSN 1302-6488.

 

About the Authors

Riki Thompson is an Assistant Professor of Rhetoric and Composition at the University of Washington Tacoma. Her research takes an interdisciplinary approach to explore the intersections of the self, stories, sociality, and self-improvement. Her scholarship on teaching and learning draws upon discourse, narrative, new media, and composition studies to reflect upon, assess, and improve methods for using digital technology in the classroom.

Meredith J. Lee is currently a Lecturer at Leeward Community College in Pearl City, HI. Her pedagogy and scholarship draws upon discourse, rhetorical genre studies, composition studies, sociolinguistics, and developmental education. Dr. Lee’s work also reflects her commitment to open access education.

 

Notes

  1. Data for this study comes from in-class surveys about assessing learning through written and video feedback. Student comments were provided anonymously through a web-based survey tool. In compliance with Human Subjects review, the web-based surveys anonymized responses.
  2. Acknowledgements:  We would like to thank the community of scholars whose constructive feedback made this article richer: reviewers George H. Williams and Joseph Ugoretz, editors Kimon Keramidas and Sarah Ruth Jacobs, as well as Colleen Carmean for her thoughts on the complexities of measuring meaningful outcomes when integrating technology with teaching.
