Issue Three

Establishing a New Paradigm: the Call to Reform the Tenure and Promotion Standards for Digital Media Faculty

James Richardson, LaGuardia Community College

Abstract

The challenges facing tenure-track faculty in the areas of digital technology are unique. The relative infancy of web and multimedia technology has created an unexpected quandary for digital scholars teaching within academia. In many cases, these teachers are the vanguard of the movement to educate students and faculty across disciplines in how best to utilize new technology in the academic, artistic, and economic sectors of society. Until now, professors teaching in the area of digital technology have traditionally been judged by the liberal arts definition of scholarship. In the case of new and evolving fields of study, however, there are alternative criteria better suited to the digital disciplines that would serve as a more accurate assessment of the quality of faculty scholarship as these scholars march toward tenure, promotion, and reappointment. Under the current system there are numerous institutional biases and obstructions that unnecessarily complicate the pathway to tenure and promotion for faculty working with technology. If digital scholars are going to advance within the academy, the existing tenure and promotion system must be redefined and expanded to include a more modern definition of intellectual excellence.
 

 

There is a growing risk that the academy will begin to seem irrelevant if it continues to underestimate the cultural and technological shifts taking place all around us. The sharp divide in academia over the nature of what constitutes tenure-worthy digital scholarship cannot be universally defined without updating the current peer-review system. In “Tenure in a Time of Confusion,” historian Paula Petrik states that the most pressing questions involving digital scholarship are “who will review digital projects, what criteria should be used to evaluate multimodal scholarship, and what skills and qualifications should the reviewers of digital research possess?” (Cheverie, Boettcher, and Buschman 2009, 224).

The key dilemma in assessing digital scholarship is that many academics, who have direct responsibility for setting standards under which digital practitioners are judged, are not technically conversant with and remain largely unaware of the distinctive training and discipline-specific research that is required to effectively excel in these fields (Cross 2008, 2). In fact, most committees responsible for evaluating candidates for tenure and promotion have historically been populated with senior faculty members from traditional non-technical disciplines. As a result, it can be difficult for some conventional scholars to appraise the academic merit of work from disciplines that did not exist twenty years ago (Jaschik 2009). More to the point, in many cases we are asking those tasked with setting standards for multimedia-based research to create fair and impartial rubrics to assess the quality of non-traditional faculty scholarship when they do not adequately understand the technologies and the industries from which these digital professionals have originated. Even in cases where committee members may have a background in digital fields, the predominant attitude in the academy is that digital projects are inferior to publications in peer-reviewed scholarly journals, and should be viewed with some skepticism as to their merit as scholarship (Cheverie et al. 2009, 220).

In the most recent attempt to address the need to establish standards for digital scholarship, the Modern Language Association (MLA) takes the explicit position in its 2012 Guidelines for Evaluating Work in the Digital Humanities and Digital Media that institutions of higher education that “recruit or review scholars working in digital media or digital humanities must give full regard to their work when evaluating them for reappointment, tenure, and promotion” (MLA 2012, 3). In addition to this basic request for broadening the definition of what constitutes digital scholarship, the new MLA guidelines highlight several areas in which institutions and digital practitioners can more effectively set the stage for fair and equitable assessment of multimodal scholarship. Or, to state it more succinctly, institutions of higher education need to create more open standards for evaluating scholarship that blends diverse forms of media and relies on evolving technological methods of delivery.

Many of the MLA’s proposed solutions call for institutions to improve communications between governing bodies and digital practitioners by clearly documenting scholarship expectations at the beginning of the hiring process and crafting discipline-specific guidelines for faculty producing digital scholarship so that they “can be adequately and fairly evaluated and rewarded” (MLA 2012, 1). The recommendations also call for engaging other digital experts, internal and external to the institution, at key review intervals to define what constitutes exceptional digital work, and to respect that work by viewing and assessing it within the medium for which it was created. Essentially the MLA is advocating that tenure and promotion committees should be discouraged from evaluating multimedia and web-based work by asking the faculty member to reproduce it in printed format. They are requesting that digital artifacts be reviewed only within their original digital media.

In the case of individual digital practitioners entering the tenure system, the MLA advocates that scholars be mindful of the emerging nature of their fields and negotiate at the start of their service the methods by which they will be assessed (MLA 2012). In short, the MLA is asking digital scholars to become fully engaged in their careers by taking a proactive stance in negotiating their responsibilities and methods of assessment, documenting their successes in regard to the impact that their work has on the furtherance of multimodal studies, and making use of all available institutional supports to maximize the opportunities for fair scholarly evaluation.

While the MLA should be commended for being one of the few professional organizations bold enough to buck the traditional academic evaluation system and to take on the task of laying the groundwork for the future assessment of digital scholarship, the guidelines it proposes stop short of offering specific criteria that can immediately be applied to improve the tenure and promotion process for digital scholars currently in the system. Many of the suggestions put forth by the MLA speak to the future while ignoring the present challenges facing digital practitioners. While it is understandable that the fast-moving and fluid landscape of digital disciplines makes it difficult for any organization to craft guidelines to cover all contingencies, one underlying problem facing the academy is that unless immediate changes are made, there is a strong possibility that many of the current generation of digital educators could leave traditional institutions of higher education and not return.

The New Paradigm: Immediate Solutions

Until now, faculty members teaching in digital disciplines such as web development and interactive design have traditionally been judged by the liberal-arts definition of scholarship. In most cases, this definition has been limited to whether or not candidates for tenure or promotion have published articles in double-blind peer-reviewed journals. There is a significant limitation in applying the academy’s existing reliance on peer-reviewed journals to digital media: compared to other long-established fields, there are far fewer refereed publications dedicated to the field (Ippolito, Blais, Smith, Evans, and Stormer 2009). The limited number of digital media–specific journals can also present a professional stumbling block for digital faculty looking to advance through the ranks of the professoriate. The lack of appropriate publishing venues for their work can compel digital academics to seek opportunities to publish in journals external to their fields of expertise, placing them in direct competition, and at a great disadvantage, with authors from non-digital disciplines.

To address this need for change in the current tenure and promotion process, there are several benchmarks that can be immediately applied to provide a more balanced approach toward evaluating the multimodal scholarship of digital practitioners. The suggestions offered in this article to improve the system of tenure, reappointment, and promotion could easily be implemented for digital practitioners currently in the tenure pipeline.

Step 1: acknowledge the distinctions between production and research degrees

Part of the reason that digital practitioners find themselves in difficulty during the tenure and promotion process is that they have not successfully advocated for greater latitude in what constitutes scholarly activities. This includes being clear about the various and often subtle subcategories within multimodal studies, most notably the distinction between digital humanities and digital media. The academic delineation between these two new fields is usually lost on many contemporaries from non-technical academic fields. To further complicate matters, the cross-disciplinary nature of these new academic fields has made the lines of demarcation between them significantly less precise. Since both fields are, at their core, driven by, or at least defined by, developing technologies, it can be easy for traditional academic colleagues to confuse the two. Nevertheless, the differences between the areas of expertise are real and can require different methods of assessment in regard to scholarly production by faculty in each field.

Faculty members teaching in the digital humanities are generally from the liberal arts and social sciences, where they study the theoretical effects of technology from a cultural and pedagogical standpoint. These instructors are less involved with the technical inner workings of systems, software, and hardware and are more focused on how the technology can be, and is, used in society and in the classroom. As an example, consider a history professor in the digital humanities who, without being able to program an interactive application about the steam-powered train engine, can utilize existing software to create a multimedia presentation illustrating how the development of the railroad helped to transform the early American economy. While there are a growing number of academics, like University of Nebraska scholar Stephen Ramsay, who believe that digital humanists (“DHers”) must be able to code and build multimodal artifacts (Gold 2012, 3), there are many DHers who are not required to have the system design skills necessary to educate students on the impact and educational uses of new technologies. Over time, as digital convergence continues to blur the lines between theory and production, this will become less and less true. At present, however, the academic responsibilities of digital humanists and media technologists can be somewhat different.

Educators within the field of digital media frequently originate in the visual arts or computer information systems disciplines. They are called upon to teach students how to create and implement systems, software, or hardware from the design phase all the way through the physical realization of new technology. Consequently, it is absolutely essential that these educators are experienced in building digital artifacts and systems in the course of their duties. As opposed to digital humanists, digital media technologists are usually less concerned with the cultural and pedagogical impact of their discipline and mainly concentrate their efforts on assessing which technologies and creative processes offer the greatest opportunities for long-term high-tech innovation. These practitioners are generally more focused on educating students in the specific creative and technical skills necessary to plan and develop the next set of digital tools.

As we begin to discuss methods of evaluation, in the case of the digital humanities—where many professors have research-centric PhDs in traditional fields such as English, the social sciences, and even economics—the peer-reviewed article may be an appropriate base from which to begin to assess their academic scholarship. The work of organizations like the MLA has had a profound impact in prompting traditional publishing venues to recognize the increasing value of digital technologies in influencing humanistic inquiry. This work has helped to redefine the methods of scholarship for future digital humanists by fostering a number of new journals, such as Digital Humanities Now and the Journal of Digital Humanities, that recognize scholarly work beyond the traditional research article. These online and open-access peer-reviewed publications have aided educators who study the effects of technology from a cultural, economic, and pedagogical standpoint in presenting their research in true multimodal fashion. The web-based nature of these new journals has created an environment where the work of digital humanists can move beyond the purely textual to a more visual and technologically dynamic presentation.

Conversely, the educators teaching in the field of digital media commonly hold terminal degrees with a more production-centric focus, such as the Master of Fine Arts (MFA). The degrees earned by these academics generally concentrate less on the traditional ability to research and write and place greater emphasis on the capacity to design or build new creations. For these practitioners, instead of using the peer-reviewed article or monograph as the evaluation standard, academic institutions could adopt measures more closely resembling the criteria utilized in evaluating professors in the visual and performing arts, in which educators are required to develop and maintain professional portfolios of their work. That approach would allow a digital media scholar’s academic excellence and scholarly achievement to be determined by peers within their field through exhibition and critical portfolio evaluation. Since the professional portfolio has long been a foundation of the creative fields, embracing this process to demonstrate digital practitioners’ command of their production discipline would be a natural extension and a more effective base from which to begin to evaluate their scholarship.

At LaGuardia Community College, where I am a faculty member in the humanities department, differing standards are applied to the creative and the more traditional academic disciplines in regard to tenure, promotion, and reappointment. Faculty members from the creative and performing disciplines, such as theater and fine arts, have far more latitude in what constitutes scholarly achievement within their areas. They are not required to publish in refereed journals, but must instead provide scholarly evidence of their work through recognized gallery exhibitions and artistic reviews of their portfolio creations in appropriate publications. On the other hand, faculty members in more traditional academic disciplines within the humanities, such as philosophy, are strongly encouraged to follow the customary path of academic publishing in order to successfully move through the ranks of the professoriate. Another institution within the City University of New York system, Hostos Community College, has taken this process a step further by adopting clearly defined written guidelines that are specific to the departments and disciplines in which faculty members are being assessed. In its 2010 Guidelines for Faculty Evaluation, Reappointment and Tenure, posted on its website, the college not only outlines rubrics for judging faculty but also establishes the use of a portfolio in the overall process.

While the specific methods of scholarly evaluation outlined above may be appropriate for digital humanists and digital media technologists, these approaches should merely be a starting point for assessment and not the only, or even primary, system for measuring academic achievement. The digital convergence taking place in information technology has affected institutions of higher education. Depending upon the university in question, faculty members in digital programs can be drawn together from multiple disciplines, both technical and non-technical, to constitute new digital media departments or programs. It is not uncommon for instructors with backgrounds in fields such as fine arts, film, theater, graphic design, information technology, photography, computer science, English literature, business, and law to comprise the core teaching staff of a digital media program (Ippolito et al. 2009, 72). As a result, a department consisting of faculty members drawn from such diverse fields can pose difficulties for tenure and promotion committees accustomed to determining the scholarly quality of work produced within a single academic discipline. In many instances, the qualifications for success within the various subfields of digital media can be so varied that applying a single assessment standard to digital scholarship becomes impractical. The predicament then facing academics engaging in digital media is that the cross-disciplinary nature of their work necessitates that they advocate for themselves to develop and frame the context of their creative work in a manner acceptable to tenure and promotion committees (Jaschik 2009).

Step 2: give greater recognition to professional development via industry certifications

In order to stay current with the technical advances that are driving so many contemporary social and economic changes in the academy, educators in the digital disciplines are required to spend a great deal of their time updating and mastering new technologies. Ongoing training is necessary to enable digital media faculty to bring evolving information into the classroom and to incorporate it into their traditional research and production work. This can be especially true for educators responsible for teaching production-centric courses that require a firm grasp of current versions of software, hardware, and technical procedures. However, under the current tenure and promotion system much of this ongoing research and training work to maintain competence with digital technologies is unfairly regarded as merely supplemental activity. While some may argue that evolution and changing standards occur in nearly every discipline, the rapid progression of technical innovation is especially concentrated within the digital disciplines, as entirely new software, languages, and methods of development are adopted rapidly and repeatedly. These constant technical changes present a significant challenge for practitioners in the digital fields.

An effective way to address this disparity and credit digital practitioners for the constant technical preparation that is essential to their professional success would be to more fully credit the attainment of well-established industry certifications in the tenure and promotion process. Obtaining qualified certifications from established and valued organizations is a rigorous process in which educators must demonstrate both practical and theoretical expertise. Industry certifications from leading companies such as Microsoft, Apple, and Adobe can also provide external professional validation of faculty expertise in both the technical and creative disciplines. In addition, these companies offer specialized teacher certifications such as the Microsoft Certified Trainer (MCT), the Apple Certified Trainer (ACT), and the Adobe Certified Instructor (ACI) distinctions that not only evaluate a candidate’s mastery of the material but also appraise an educator’s ability to teach technical and creative subjects.

Companies offering accreditations have established their own sets of rubrics for software and system development that identify the critical information candidates must master before they can be granted the designation of Certified Trainer or Instructor. In the case of the Adobe Certified Instructor, candidates must demonstrate expertise not only with various creative software packages but also with distance learning and presentation software such as Adobe Connect and Presenter, both of which facilitate the development of e-learning content for digital distribution. Attaining these certifications can greatly benefit faculty members teaching production courses, as they are introduced to vendor-specific workflows that can be passed on to students to facilitate greater efficiency in producing digital content.

The Mozilla Foundation has started a recent trend in online certification and skill representation that has begun to gain traction in many educational and professional circles. The Open Badges initiative is an open-source standard that organizations can adopt to issue digital badges as a means of verifying educational achievement or competency. The badges would be issued and backed by an organization or school and would serve as a graphical certification of accomplishment that could be displayed on websites, social media networks, or traditional offline venues such as resumes. If the badges are supported by well-developed rubrics to substantiate effective instruction in the subject matter, these symbols could function as a powerful endorsement of technical or educational proficiency. Institutions such as Codecademy, Peer to Peer University (P2PU), and the Carnegie Mellon Robotics Academy have already developed, or are currently developing, open badges as a way to acknowledge technical achievement.
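
As a rough illustration of the kind of machine-readable credential this model envisions, the sketch below assembles a simplified badge record in Python. The field names, issuer, and rubric URL are illustrative assumptions for this article only; they do not reproduce the official Open Badges assertion schema, which the specification itself defines.

```python
from datetime import date
import json

def issue_badge(recipient_email, badge_name, criteria_url, issuer_name):
    """Assemble a simplified, illustrative badge record.

    The field names below are assumptions for demonstration only;
    the actual Open Badges specification defines its own assertion schema.
    """
    return {
        "recipient": recipient_email,
        "badge": {
            "name": badge_name,
            "criteria": criteria_url,   # rubric that substantiates the achievement
            "issuer": issuer_name,      # organization or school backing the badge
        },
        "issued_on": date.today().isoformat(),
    }

# Hypothetical example: a school certifies completion of an HTML5 curriculum.
badge = issue_badge(
    recipient_email="student@example.edu",
    badge_name="HTML5 Fundamentals",
    criteria_url="https://example.edu/badges/html5/rubric",
    issuer_name="Example Community College",
)
print(json.dumps(badge, indent=2))
```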

For many years, tenure and promotion committees have struggled to evaluate intellectual work from disciplines outside of their areas of expertise. It is for this very reason that publishing in peer-reviewed journals has been the default method for determining the academic worth of a candidate for tenure or promotion. Laura Mandell, a professor of English literature and chair of the MLA Information Technology Committee, has suggested that “a big part of the problem is that for the past 50 years, what people have done on promotion and tenure committees is to say ‘OK, this was accepted by Cambridge University Press. I don’t need to read it because I know it’s quality’” (Jaschik 2009, 1).

Committees have typically been able to “outsource” tenure and promotion decisions to peer-reviewed journals and rely on that process to vet the competence of fellow academics (Harley and Acord 2011). Unfortunately, this practice of evaluating by proxy can only be successful if there are established peer-reviewed journals within the field in question, or failing that, qualified authorities on tenure and promotion committees who can assess the work. What happens to this process when the scholarship that needs to be evaluated originates from a field like digital media where there are few peer-reviewed journals? Or in the case of the digital humanities where standards for publications are only just beginning to evolve to include multimodal artifacts? How can tenure and promotion committees be expected to serve the best interests of their institutions, as well as fairly evaluate faculty in digital disciplines, without the benefit of this specialized expertise? The answer is simply that they cannot. By expanding the number of external sources for evaluating technological excellence to include select industry certifications, tenure and promotion committees would be presented with additional and appropriate measures through which to vet candidates for advancement.

Step 3: give greater recognition to curriculum design and development

Just as instructors within digital humanities and digital media must maintain their skills through ongoing professional development, designing and updating course materials in a rapidly evolving technical field is a time-consuming process that requires constant research and revision. Under the current tenure and promotion system, curriculum design is unfairly regarded as a supplemental activity of lesser value than the traditional printed article.

Faculty members who are designing innovative online courses in various disciplines are at the forefront of an entirely new method of student instruction. Hybrid and online courses, because of the asynchronous interaction between teacher and student, require a different level of preparation and engagement by instructors. The interpersonal dynamics of the face-to-face classroom are radically altered when the interaction between student and teacher takes place in a virtual environment. New forms of online education, most notably MOOCs (Massive Open Online Courses), are being adopted at a startling pace within both private-sector and traditional academic circles; by essentially relegating this new area of curriculum development to auxiliary status in the tenure and promotion process, many institutions are setting a precedent that may cause future complications. The secondary status given to curriculum development, regardless of innovation, will help ensure that only senior faculty with tenure will risk engaging in this new area of teaching and scholarship.

According to data released by the research firm Ambient Insight, the number of post-secondary students in the United States who will take some or all of their classes online is expected to climb sharply, to more than 22 million by the year 2014. The CEO of Ambient Insight, Tyson Greer, has suggested that “the rate of growth in the academic segments is due in part to the success and proliferation of the for-profit online schools” (Nagel 2009, 2). Until recently there has been very little serious competition in the higher education arena for traditional academic institutions. As in many other industries, the introduction of technology, in this case online instruction, presents an opportunity for significant digital disruption of higher education, especially if the curriculum development work of digital practitioners is not adequately recognized in their assessments by tenure and promotion committees and if the academy fails to provide incentives for academic curriculum designers to respond to the competitive threat from the private sector.

Step 4: give greater recognition to service supporting innovative uses of technology

Faculty with digital media expertise are in a unique position to educate students as well as faculty members in other disciplines in how best to utilize new technology in the academic, artistic, and economic sectors of society. As a result, colleges and universities are increasingly asking these educators to consult on and lead large-scale initiatives that benefit the institution. In many cases these educators are asked to serve in these highly specialized roles at a fraction of the price that an outside consultant would be paid. Even in cases where faculty members are helping to build, support, and promote pedagogical initiatives that enhance the reputation of the institution and numerous disciplines, the valuable services that they provide are rarely viewed as scholarship.

A perfect example of the type of service that should be recognized can be found in the recent launch of the City University of New York’s Academic Commons project (https://commons.gc.cuny.edu). The CUNY Academic Commons was the brainchild of a small number of non-tenured faculty and staff who had the pioneering idea to create an online academic social network exclusively for use by the university’s faculty, staff, and graduate students. Built entirely on a foundation of open-source software, the online network was designed to create an environment for communication and collaboration among the scholars teaching within the 24 units that make up the CUNY system. Since the launch of the online network in 2009 the project has expanded to include the “Commons in a Box” initiative, an open-source venture that will enable other academic institutions to create and customize their own virtual spaces for academic research and collaboration. However, in discussions with Matthew Gold, the project leader for the initiative and the only key person on the project in a traditional tenure-track academic role, I learned that his contribution to the creation and expansion of the Commons was defined as “service to the university,” and thus not given the same weight as a traditional refereed publication would have been in his faculty evaluation for tenure and promotion. Despite the fact that the project has brought a considerable amount of attention to CUNY as organizations such as the MLA sign up to utilize the Commons in a Box software to support their own academic initiatives and institutional purposes (Roscorla 2011), Gold felt compelled to publish an article on his experience to have the project counted as true scholarship. Gold’s article, co-written with George Otte and entitled “The CUNY Academic Commons: Fostering Faculty Use of the Social Web” (Gold and Otte 2011), was a case study on the implementation of the Commons project, detailing the creation and impact of this new academically focused social network.

As a faculty member and digital practitioner, Gold’s experience is not unusual. Sean Takats, a history professor and director of research projects at the Roy Rosenzweig Center for History and New Media, details similar challenges in his blog post “A Digital Humanities Tenure Case: Part 2: Letters and Committees” (Takats 2013). Takats takes the bold step of pulling back the curtain and discussing in great detail the challenges he faced as a digital humanist on the tenure track. Takats was a project lead and co-director for Zotero, a digital software platform designed to assist academics in organizing and sharing research. The software he helped to bring to fruition has been widely recognized and adopted as an excellent resource within the digital humanities and communities well beyond. However, many of the digitally inspired accomplishments achieved by Takats were met with resistance by members of his college-wide tenure committee because “some on the committee questioned to what degree Dr. Takats’ [sic] involvement in these activities constitutes actual research (as opposed to project management). Hence, some determined that projects like Zotero et al. while highly valuable, should be considered as major service activity instead” (Takats 2013, 1).

As technology continues to digitally disrupt the established methods of operating inside the academy, it will be imperative for institutions of higher education to take advantage of innovative ideas developed by the multimodal “thought leaders” within our midst. In the coming years, projects like the CUNY Academic Commons and Zotero, which sit at the intersection of cutting-edge technology, ground-breaking pedagogy, and academic collaboration, will become increasingly prevalent. If these endeavors are to be successful they will require expert stewardship that can usually only come from leaders familiar with both the technology and the pedagogy. Unless the academy starts to recognize in the tenure and promotion process the contributions of faculty with the capabilities to shepherd these types of digital initiatives, institutions may find it increasingly difficult to get non-tenured educators to play active roles in the future.

Step 5: create discipline-specific communities of digital innovators and thought leaders

Anvil Academic, a new joint project of the National Institute for Technology in Liberal Education (NITLE) and the Council on Library and Information Resources (CLIR), is seeking to fill the void in objectively judging digital scholarship by offering a new virtual ecosystem where non-traditional scholarly work can be evaluated under the direction of traditional university presses and publishing outlets. The founders of the Anvil project hope to provide a true multimodal publishing platform that would enable all forms of digital media to be presented, reviewed, and sanctioned by well-established academic associations possessing the gravitas to substantiate the quality of digital scholarship (Kolowich 2012).

The Anvil project and similar initiatives, such as the CUNY Academic Commons, can help to provide answers to many of these problems by fostering virtual communities in which multimodal scholars can collaborate and create more efficient methods of peer-to-peer communication specific to the digital disciplines. For example, the CUNY Games Network, a group on the CUNY Academic Commons dedicated to the study and pedagogical uses of interactive simulations and games, is helping to connect digital practitioners from across the CUNY system. These educators, many of whom may rarely have been able to interact with their colleagues on other CUNY campuses, are now collaborating on research, sharing curricular material, and engaging in ongoing discussions surrounding all aspects of gaming. The Academic Commons, and similar projects, can help to establish essential enclaves within the ranks of the digital disciplines to promote reform and respond to the concerns that tenure and promotion committees may have on the topics of digital scholarship and peer review. For example, digital practitioners within the tenure review process could use similar online systems to establish portfolios displaying their interactive creations and have them assessed by qualified peers in the larger academic community to ascertain the quality of the scholarship. These online portfolios could also provide snapshots with which to assess a candidate’s professional growth over the period of time they are on the path toward tenure.

The underlying fears surrounding the establishment of discipline-specific communities invariably revolve around whether or not sufficient peer review would occur in such environments. In Planned Obsolescence, Kathleen Fitzpatrick explains how the open-source blogging system CommentPress, integrated into the larger MediaCommons academic network, was used to enable peers from within the digital humanities to provide an ongoing critique of her latest manuscript throughout various stages of the reviewing and publishing process. The asynchronous, communal, and open peer-to-peer review that took place within these digital confines would have been difficult to replicate in a traditional print setting. Fitzpatrick suggests that communal learning systems like CommentPress can become “useful tools not just for quickly and engagingly publishing a text, and for seeking feedback while a text is in draft form, but for facilitating an open mode of review” (Fitzpatrick 2011, 115) of digital publications. The open nature of these communal learning systems, where commenters do not reply in the manner associated with standard double-blind peer review, can produce a higher level of trust in the critiques offered because the reviewers are not anonymous and have placed their opinions and academic reputations out in public.

Step 6: broaden the definition of publications to include multimodal productions

The definition of academic publishing should and must be expanded to include new multimodal outlets that are poised to overtake print-based media. Paula Petrik notes that academics are traditionally “people of the book” and will have to adapt to a new digital paradigm in order to fairly evaluate “non-traditional forms and formats of scholarship” (Cheverie et al. 2009, 224). These “people of the book” will continue to have their perceptions of scholarship challenged as academics integrate larger amounts of technical, visual, audio, and web-based elements into their scholarly pursuits. For example, in the same way that high-impact sites like the Huffington Post have supplanted conventional printed newspapers and magazines, the rapid adoption of tablet computers and smartphones will redefine the ways students and educators consume and process information in the coming years. This transformation is already underway. Apple released its iBooks Author application in early 2012, which was designed to enable educators to produce and distribute content that previously required traditional publishing venues. In addition to conventional text, multimodal scholars will now be able to combine videos of speeches, slideshow presentations, music and spoken audio, animated 2-D and 3-D illustrations, and interactive applications, all within a digital format that will run on a tablet device running Apple iOS. And Apple isn’t the only company banking heavily on the future of fully interactive digital publications. The applications within Adobe’s Digital Publishing Suite offer functionality similar to that of iBooks Author, with the added benefit of being able to create content for alternative tablet devices from Microsoft, Google (Android), and others.

Now that faculty members have access to these alternative production applications, they will be fully able to design customized textbooks to better support the specific curricular needs of their classes and programs. The impact that will be felt on a curriculum-design level will be nothing short of revolutionary for digital practitioners innovative enough to incorporate these tools into their scholarly practice.

Professor Stephen Nichols of Johns Hopkins University, in a discussion of academic peer review, believes that the continuing use of the term “publications” as the primary seal of approval for tenure and promotion will discourage younger faculty members from engaging in digital scholarship, since it is viewed as of significantly lesser value than print-based, peer-reviewed journals (Cheverie et al. 2009). The bias against digital scholarship that Nichols describes creates a climate of fear inhibiting experimentation, which is detrimental not only to scholars in the digital disciplines but to the entire academy. Fear of testing the limits of academic and technical innovation runs contrary to everything that the educational system should aspire to achieve, and it has a negative impact on the evolution of pedagogical practice.

Ken Norman, a professor of psychology at the University of Maryland, agrees with Nichols. Based on research that he conducted on university models for tenure and promotion, Norman concludes that junior faculty members generally “wait to get tenure before they become cyberized” because “positive tenure and promotion decisions are based on grants and publications in top-tier journals” (Cheverie et al. 2009, 227-28). While delaying the integration of technical innovation into their scholarship may not constitute a burden for faculty in liberal arts and science departments, it can be a substantial professional barrier for digital practitioners. The speed at which technological advances occur in digital disciplines creates a finite window of time to study and implement digital research. Any delay in assimilating new developments into their scholarship places digital scholars at risk of having their research become obsolete before it can ever be published. It is precisely for this reason that it is imperative for the academy to recognize that educators are no longer limited to the printed word in order to participate in deep and meaningful scholarly production. If this position is adopted by academic tenure and promotion committees they will be forced to take the appropriate steps to acknowledge these educational trends, and reward them accordingly.

Conclusion

The definition of scholarship can take many forms and will vary greatly based upon the academic discipline. One of the fundamental goals of scholarship is to create intellectual work that advances the field of study in which the academic endeavor originates. The holy trinity for tenure and promotion—encompassing publishing, service, and teaching—has always been skewed more heavily toward publishing. The impediments to scholarly acceptance of digital media educators closely mirror the challenges that faced earlier academic pioneers of ethnic, Black, and women’s studies during the 1960s and 1970s (Jaschik 2009). It can be said that very little has changed since that time. The academy is an institution bound by tradition, and when new fields of study are developed, it often responds with hesitation and skepticism to emerging disciplines.

Under the current system there are numerous institutional biases and obstructions that unnecessarily complicate the pathways to tenure and promotion for digital faculty. Key among these barriers is the traditional peer-review system that has essentially contracted out the decision-making process for tenure candidates to a select group of academic journals and presses. Because most tenure and promotion committees lack the expertise to critique every discipline, especially in fields that span several areas of study, this aging paradigm is not practical for the emerging digital disciplines. Just as other industries outside of the academy have been altered by major economic and technical changes, higher education may experience a similar transformation unless the academy begins to adapt (Pearce, Weller, Scanlon, and Kinsley 2010). Without modifications many of these digital scholars, in order to validate their own definition of intellectual excellence, will leave the academy in favor of the higher salaries that they can command in the private sector.

Looking back on my own academic career, I am amazed at the naiveté with which I negotiated my academic contract and the methods by which my scholarship would be assessed. As the sole full-time faculty member in a new discipline established by my college, I was completely unaware of the territory that would have to be traversed to fashion appropriate standards for my scholarly evaluation. While my educational and professional experience had equipped me to teach in the digital disciplines, I was ill-prepared as a digital media faculty member to navigate the terrain of the academic tenure and promotion process. If any of the recommendations from the MLA had been in place when I was hired to help establish a new digital technology major at my college, my journey through the tenure process might have been a more balanced and constructive experience.

I transitioned to the university from the private sector more than a decade ago, and I have found that my experience is not unique among educators working within the digital humanities and digital media fields. The tenure and promotion system should embrace expanded definitions of acceptable scholarly venues to advance the practice of multimodal scholarship, not only to attract and retain the next generation of digital professionals, but also in order not to discourage new or established faculty members from engaging in technology-based pedagogy and scholarship.

References

Cheverie, Joan F., Jennifer Boettcher, and John Buschman. 2009. “Digital Scholarship in the University Tenure and Promotion Process: A Report on the Sixth Scholarly Communication Symposium at Georgetown University Library.” Journal of Scholarly Publishing 40:210-30. OCLC 360067692.

Cross, Jeanne Glaubitz. 2008. “Reviewing Digital Scholarship: The Need for Discipline-based Peer Review.” Journal of Web Librarianship 2:1-29. OCLC 652131661.

Fitzpatrick, Kathleen. 2011. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: New York University Press. Kindle edition. OCLC 710019002.

Gold, Matthew K., and George Otte. 2011. “The CUNY Academic Commons: Fostering Faculty Use of the Social Web.” On the Horizon 19:24-32. OCLC 701118378.

Gold, Matthew K., ed. 2012. “The Digital Humanities Moment.” Debates in the Digital Humanities. Minneapolis, MN: University of Minnesota Press. Kindle edition. OCLC 784886612.

Harley, Diane, and Sophia Krzys Acord. 2011. “Peer Review in Academic Promotion and Publishing: Its Meaning, Locus, and Future.” Center for Studies in Higher Education (CSHE):1-117. OCLC 709559995. Accessed February 14, 2013: http://escholarship.org/uc/item/1xv148c8#page-1.

Ippolito, Jon, Joline Blais, Owen Smith, Steve Evans, and Nate Stormer. 2009. “New Criteria for New Media.” Leonardo 42:71-5. OCLC 4893498214.

Jaschik, Scott. 2009. “Tenure in a Digital Era.” Inside Higher Ed. Accessed February 14, 2013: http://www.insidehighered.com/news/2009/05/26/digital.

Kolowich, Steve. 2012. “New Seal of Approval.” Inside Higher Ed. Accessed April 17, 2012: http://www.insidehighered.com/news/2012/02/13/anvil-academic-aims-provide-platform-digital-scholarship.

Modern Language Association (MLA). 2012. “Guidelines for Evaluating Work in Digital Humanities and Digital Media.” Accessed February 14, 2013: http://www.mla.org/guidelines_evaluation_digital.

Nagel, David. 2009. “Most College Students to Take Classes Online by 2014.” Campus Technology. Accessed February 14, 2013: http://campustechnology.com/articles/2009/10/28/most-college-students-to-take-classes-online-by-2014.aspx.

Pearce, Nick, Martin Weller, Eileen Scanlon, and Sam Kinsley. 2010. “Digital Scholarship Considered: How New Technologies Could Transform Academic Work.” In Education, 16. OCLC 728081434. Accessed February 14, 2013: http://www.ineducation.ca/article/digital-scholarship-considered-how-new-technologies-could-transform-academic-work.

Roscorla, Tanya. 2011. “CUNY Plans to Share Social Network Tools That Break Down Silos.” Accessed February 14, 2013: http://www.convergemag.com/infrastructure/CUNY-Social-Network-Tools.html.

Takats, Sean. 2013. “A Digital Humanities Tenure Case, Part 2: Letters and Committees.” Accessed February 14, 2013: http://quintessenceofham.org/2013/02/07/a-digital-humanities-tenure-case-part-2-letters-and-committees/.

 

About the Author

James Richardson holds an M.P.S. in Interactive Telecommunications from New York University’s Tisch School of the Arts and has served as a project manager and consultant for numerous Fortune 500 companies.

During his career he has managed the deployment of multimedia and telecommunication initiatives for companies such as MetLife, Century 21, ADP, Bankers Trust, Suze Orman Inc., and the City University of New York.

Professor Richardson is well versed in Internet technology, game design, digital audio and video production, e-commerce strategy, animation, and web development. His latest project involves creating an interactive iPad application to motivate at-risk youth to find their voice in the information age.

Incorporating the Virtual into the Physical Classroom: Online Mastery Quizzes as a Blended Assessment Strategy

Kyle Beidler, Chatham University
Lauren Panton, Chatham University

Abstract

An increasing volume of research has supported the assumption that pre-lecture, online, and mastery quizzes can be a beneficial pedagogical strategy. However, there has been limited documentation of attempts to combine these pedagogical tools as an assessment of individual course lectures. This paper presents a “blended” instructional approach, which combines an online mastery quiz format with traditional face-to-face meetings within the context of a small graduate course. Preliminary findings suggest that online mastery quizzes that are incorporated into traditional classroom instruction are a useful means of evaluating weekly course lectures and also provide a catalyst for classroom discussion.
 

 

Indexing

Landscape Architecture; Pedagogy; Assessment; Mastery Quizzes

Introduction

Course quizzes represent a common assessment strategy and teaching technique that have been used by instructors for generations. Quiz formats have increasingly varied with the advent of digital technologies. There are now pre-lecture, out-of-class, and mastery quiz formats that have been implemented using both traditional and digital media. In addition to this growing range of quiz typologies, quizzes have produced somewhat mixed findings when studied from an educational perspective.

Paper-based quizzes given at the start of a class period have been used as a means of encouraging students to be both punctual and prepared for scheduled class meetings. Pre-lecture quizzes are also a common tool used to assess the students’ current understanding of the course material. Generally, such quizzes are believed to increase student engagement. However, research findings have varied in terms of student performance.

For example, Narloch and his colleagues found that students who received pre-lecture quizzes, as compared to no quiz, performed better on exam questions (Narloch, Garbin, and Turnage 2006, 111). This study also suggested that simple objective or low-level questions (e.g., fill-in-the-blank, matching) improved student performance on higher-level assessments such as essay questions (ibid., 112). These findings are similar to those of an additional study, which suggested that low-level quiz questions can increase student exam performance. However, this same study contradicts the proposed correlation between low-level questions and higher-order cognitive skills such as deductive exploration (Haigh 2007).

In contrast, others have suggested that pre-lecture quizzes do not automatically lead to increases in student performance as indicated by final grades. A comparative study found that exam scores were not significantly improved in sections of a biology course that included weekly quizzes comprised of fill-in-the-blank questions (Haberyan 2003). Connor-Greene found that daily essay quizzes can be a catalyst for thinking within the classroom. However, the author cautioned that the relationship between quizzing and actual learning warrants further study (Connor-Greene 2000).

With the increase of computer technology in higher education, much research has also analyzed the perceived benefits of computerized and online quizzes. Early findings suggested that computerized quizzes can improve exam performance if students use the quizzes to test their knowledge rather than to learn the material (Brothen and Wambach 2001, 293). Others have suggested that online quizzing is as effective as in-class quizzing only after reducing the possibility of cheating by adjusting the question bank and available time (Daniel and Broida 2004).

Additional studies of online quizzing found that students who elected to use online quizzes performed better on summative exams (Kibble 2007). Kibble’s online quizzes were voluntary, however, and thus better-performing students were more likely to use them to improve their performance. A later study was able to control for this selection bias, as well as a number of confounding factors, by using a retrospective regression methodology. Its findings remained consistent with the majority of the literature and suggested that exposure to regular (low-mark) online quizzes has a significant and positive effect on student learning (Angus and Watson 2009, 271).

A study of online, out-of-class quizzes within the context of a small course found that digital quizzing could only be significantly related to student engagement and perceptions toward learning, as opposed to student performance (Urtel et al. 2006). Despite the lack of support regarding academic performance, the authors still concluded that the unintended benefits of the online format outweighed traditional in-class quizzing. This suggests that additional or secondary benefits may alone justify the use of online quizzes within the context of small courses.

A third quizzing format has been studied in both traditional and virtual contexts. Commonly referred to as “mastery” quizzes, the definition and application of this assessment strategy are not consistent within the literature. The distinguishing feature shared among mastery quiz formats is that students have multiple attempts to take any given quiz. Typically, in a virtual context, each mastery quiz randomly selects from a pool of previously prepared questions on a designated topic. The random selection of questions fosters a more dynamic interface because it is unlikely that multiple attempts are identical, assuming a sufficiently large question-bank.
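
To make concrete why identical attempts become unlikely once the question bank is sufficiently large, the following minimal Python sketch (an assumed, generic implementation, not the Moodle system described later in this study) draws each quiz attempt at random from a pool of previously prepared questions.

```python
import random
from math import comb

def draw_quiz(question_bank, quiz_length, rng=random):
    """Draw one quiz attempt as a random sample (without replacement)
    from a bank of prepared questions on a designated topic."""
    return rng.sample(question_bank, quiz_length)

# Hypothetical bank of 40 prepared questions on one topic, 10 questions per attempt.
bank = [f"Question {i}" for i in range(1, 41)]
attempt_1 = draw_quiz(bank, quiz_length=10)
attempt_2 = draw_quiz(bank, quiz_length=10)

# Number of distinct 10-question sets that can be drawn from a 40-question bank,
# which is why two attempts are very unlikely to contain identical questions.
print(comb(40, 10))
print(attempt_1 == attempt_2)  # almost certainly False
```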

In an early study of digital mastery quizzes, this pedagogical tool was used as an instructional supplement to an online course (Maki and Maki 2001). Students were required to pass a web-based mastery quiz prior to a set deadline. Students were allowed to repeat the quiz and earned course points for passing up to four mastery quizzes. The researchers found that performance on the mastery quizzes was correlated with the student’s performance on exams given in a physical classroom setting (212).

Additional studies have also supported the correlation between online mastery quizzes and exam performance (Johnson and Kiviniemi 2009). Johnson and Kiviniemi’s mastery quiz format required students to take an electronic quiz based on the weekly assigned readings. Administered by a web-based system, the software randomized both questions and answer choices to prevent students from memorizing response-options. However, students were not given a time limit, and there were no apparent controls to limit the potential for cheating in this study. Brothen and Wambach (2004) have suggested that online-quiz time limits are associated with better exam performance because they reduce the opportunity to look up answers in lieu of learning the material.

Other studies have defined a mastery quiz as an “unannounced spot quiz that is presented twice during class, once at the beginning of the lecture period and then again at the end” (Nevid and Mahon 2009, 29). This pre-lecture and post-lecture application of the mastery quiz concept allows students to acquire knowledge on the tested concepts and focuses their attention during the lecture period. The authors of the study found that students showed significant improvements assessed by pre-lecture and post-lecture comparisons. Credits earned on mastery quizzes also predicted exam performance on concepts covered by the mastery quizzes (Nevid and Mahon 2009).

Collectively, this body of literature largely suggests that quizzing is a beneficial pedagogical strategy, but warns that its relationship with student performance has been somewhat inconsistent. This begins to imply that quizzes may offer greater benefits for assessment than for teaching and learning outcomes. However, none of the studies reviewed have focused on the use of mastery quizzes as a means of assessing an instructor’s classroom activities. Therefore, this study highlights the application and lessons learned from a “blended” quizzing approach that incorporated web-based, pre-lecture and post-lecture mastery quizzes within a physical classroom setting as a means of assessing the effectiveness of face-to-face lectures.1

Methods and Procedure

Data was collected during a single semester in a landscape architecture construction course at a small East Coast university. The program is offered only at the graduate level, and thus the course was composed of a small set of graduate students (N = 11, 73% women). The class generally reflected the graduate school’s ethnic (75% White) and age (mean age = 29.3) composition. Permission from the university’s institutional review board was received to use course and survey data to analyze the effects of mastery quizzes implemented in the course.

Previous pedagogical studies of landscape architecture construction studios have suggested that there are significant differences between the learning preferences of undergraduate and graduate students. In a 2003-04 survey, online lectures were found to be highly preferred by graduate students as compared to undergraduate students (Li 2007). This finding was supported by a 2011-12 survey, which reported that undergraduate students significantly preferred in-class lectures using PowerPoint slides (Kim, Kim, and Li 2013). The authors of this multiyear study concluded that undergraduate landscape architecture students are “more likely to rely on the help from instructors or classmates rather than to prefer individual or independent learning” (ibid., 95).

Differences in learning styles between undergraduate and graduate cohorts have also been reported in the context of “e-learning” outside of the landscape architectural discipline. Novice undergraduate e-learners significantly differed from graduate e-learners in two indexed learning style domains, including information perception and information understanding (Willems 2011).

Given the context of the research within a graduate program, it was impossible for this study to make similar comparisons across learner cohorts. However, it is important to note that the course and its materials have been developed within a context of a first-professional curriculum. Therefore, the materials, concepts, and topics covered by this course do not dramatically vary whether it is offered on an undergraduate or graduate level. At either level, the learning objectives are largely dictated by accreditation standards and professional expectations.

For this study, we administered a total of 24 digital mastery quizzes throughout a single semester in a pre-lecture/post-lecture format. Specifically, online quizzes were developed using the university’s learning management system (Moodle). Each quiz was composed of low-level objective questions (true/false and multiple choice) and higher-level graphic problems (short-answer). The short-answer questions required students to solve a given problem presented in a graphic image. Thus, short-answer questions are classified here as requiring higher-level thinking skills because they required the students to “apply” concepts covered in previous lectures. In comparison, the lower-level questions simply asked students to “recall” new concepts presented in the week’s assigned reading.2

Six pre-lecture and six post-lecture quizzes were given at the start and end of each class prior to the midterm examination. An additional six pre-lecture and six post-lecture quizzes covered the second half of the term and the material leading up to the final exam. In total, the quizzes accounted for 10% of each student’s final grade. All quizzes were announced prior to each lecture.

To limit cheating and manage the classroom schedule, a time limit was set on each quiz. The online quizzes were taken by the students in the physical classroom at the start and end of each lecture, and the questions were randomly selected from a weekly question bank. Each class period was scheduled for three hours per week, which allowed ample time to implement the quiz format. Students were required to bring laptops to every class meeting.
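As a rough illustration of this selection logic only (the learning management system handles the random draw and timing internally; the question bank below is entirely hypothetical), a quiz for a given week could be assembled along these lines:

```python
import random

# Hypothetical weekly question bank mixing lower-level "recall" items
# (true/false, multiple choice) with higher-level "applied" graphic problems
# (short answer). Identifiers are illustrative only.
week_bank = [
    {"id": "tf_01", "type": "true_false", "level": "recall"},
    {"id": "mc_04", "type": "multiple_choice", "level": "recall"},
    {"id": "mc_07", "type": "multiple_choice", "level": "recall"},
    {"id": "sa_02", "type": "short_answer", "level": "applied"},
    {"id": "sa_03", "type": "short_answer", "level": "applied"},
]

def assemble_quiz(bank, n_questions=4, time_limit_minutes=10):
    """Draw a random subset of questions from the weekly bank and attach a time limit."""
    questions = random.sample(bank, k=min(n_questions, len(bank)))
    return {"time_limit_minutes": time_limit_minutes, "questions": questions}

print(assemble_quiz(week_bank))
```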

Results

Given the limited sample size (N = 11), it was not meaningful to test for a correlation between quiz and exam performance. However, as depicted in Figure 1, the descriptive statistics reveal a consistently higher post-lecture average quiz score. All quizzes were scored out of 10 possible points. The average pre-lecture mastery quiz score for the semester was 71.8%; in comparison, the post-lecture average score was 88.9%.

| Quiz/Week Number | Pre-lecture Mean Score (SD) | Post-lecture Mean Score (SD) | Exam Mean (SD) |
|------------------|-----------------------------|------------------------------|----------------|
| 2                | 8.32 (1.01)                 | 9.22 (1.02)                  |                |
| 3                | 6.26 (2.14)                 | 8.35 (1.37)                  |                |
| 4                | 6.50 (2.06)                 | 9.12 (1.34)                  |                |
| 5                | 6.45 (2.35)                 | 8.45 (2.10)                  |                |
| 6                | 6.80 (2.34)                 | 9.32 (0.83)                  |                |
| 7                | 7.88 (1.64)                 | 9.54 (0.89)                  |                |
| Midterm Exam     |                             |                              | 84.73 (14.33)  |
| 10               | 8.36 (2.01)                 | 9.82 (0.57)                  |                |
| 11               | 7.36 (1.49)                 | 8.09 (1.73)                  |                |
| 12               | 8.09 (1.73)                 | 10.00 (0.00)                 |                |
| 13               | 4.27 (2.56)                 | 6.32 (2.83)                  |                |
| 14               | 8.55 (1.08)                 | 9.45 (0.78)                  |                |
| 15               | 7.30 (2.44)                 | 9.06 (0.91)                  |                |
| Final Exam       |                             |                              | 87.73 (5.06)   |

Figure 1. Pre-lecture and post-lecture quiz averages compared to exam scores.

All quiz scores are out of a possible 10 points. All exam scores are out of a possible 100 points.

Using an analysis of variance (ANOVA), we found that the mean pre-lecture and mean post-lecture scores differed significantly (F = 66.086, p < .001). Furthermore, the week of the semester did not predict the difference between pre- and post-lecture scores (F = 0.899, p > .05). Given the controlled testing environment, these results begin to suggest a positive outcome in terms of the students’ understanding of the material. This finding is supported by the results of the course evaluation, which indicated that all respondents (n = 9) believed the quizzes had aided their learning of the course material. The majority of respondents also agreed that the mastery quizzes aided in their identification of new topics. In addition, students believed that the quizzes aided in their review of course topics and encouraged good reading habits.
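As a rough sketch of how such summary figures and a comparable test could be computed (the weekly means below are transcribed from Figure 1; an ANOVA run on weekly means rather than on the underlying per-student scores will not reproduce the reported F value), one could use SciPy as follows:

```python
from scipy import stats

# Weekly mean quiz scores transcribed from Figure 1 (10 points possible per quiz).
pre_means = [8.32, 6.26, 6.50, 6.45, 6.80, 7.88, 8.36, 7.36, 8.09, 4.27, 8.55, 7.30]
post_means = [9.22, 8.35, 9.12, 8.45, 9.32, 9.54, 9.82, 8.09, 10.00, 6.32, 9.45, 9.06]

# Semester averages: roughly 71.8% pre-lecture and 88.9% post-lecture, as reported above.
pre_avg = sum(pre_means) / len(pre_means)
post_avg = sum(post_means) / len(post_means)
print(f"Pre-lecture semester average:  {pre_avg:.2f} / 10")
print(f"Post-lecture semester average: {post_avg:.2f} / 10")

# One-way ANOVA comparing the pre- and post-lecture score distributions.
f_stat, p_value = stats.f_oneway(pre_means, post_means)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```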

While the findings are not generalizable, this preliminary data suggests that positive learning outcomes could be measured between the pre- and post-lecture average quiz scores. The question of whether weekly mastery quizzes actually increase learning cannot be answered with this contextual data. As others have pointed out, many factors influence test scores, including the wording and formatting of individual questions (Urtel et al. 2006). Therefore, a more appropriate question for this type of data is: “How can weekly mastery-quiz results inform classroom instruction?”

Assessing the Effectiveness of Individual Lectures

As alluded to previously, the course data generated by the mastery quiz format can also be used to gauge teaching effectiveness. By graphically charting and comparing each weekly mean, it is possible to visualize the relative effectiveness of each course lecture (see Figure 2). This technique is especially useful in smaller courses with limited enrollment where more robust statistical analysis is not possible.

Figure 2. Graphic comparison of pre-lecture and post-lecture mean scores.

Figure 2 displays the consistent improvement suggested by the data in the previous table. As expected, the post-lecture average scores are greater than the pre-lecture averages throughout the semester. More importantly, the distance between the charted lines begins to depict the relative effectiveness of each face-to-face lecture. In short, the degree of student improvement, and arguably the effectiveness of a given week’s lecture, material, and planned activities, is revealed in the space between the charted averages.
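For instructors who want to generate a comparable chart from their own quiz data, a minimal matplotlib sketch (using the weekly means transcribed from Figure 1; week numbers follow the table) might look like this:

```python
import matplotlib.pyplot as plt

# Weekly mean quiz scores transcribed from Figure 1 (10 points possible per quiz).
weeks = [2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15]
pre_means = [8.32, 6.26, 6.50, 6.45, 6.80, 7.88, 8.36, 7.36, 8.09, 4.27, 8.55, 7.30]
post_means = [9.22, 8.35, 9.12, 8.45, 9.32, 9.54, 9.82, 8.09, 10.00, 6.32, 9.45, 9.06]

# The vertical distance between the two lines suggests how much each week's
# lecture, materials, and activities improved quiz performance.
fig, ax = plt.subplots()
ax.plot(weeks, pre_means, marker="o", label="Pre-lecture mean")
ax.plot(weeks, post_means, marker="o", label="Post-lecture mean")
ax.set_xlabel("Week of semester")
ax.set_ylabel("Mean quiz score (out of 10)")
ax.set_title("Pre-lecture vs. post-lecture weekly quiz means")
ax.legend()
plt.show()
```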

From a theoretical perspective, this simple interpretation of the descriptive statistics allows us to more closely assess the quality of the instruction as opposed to student performance. We would argue that this chart begins to identify which specific weeks of instruction need the greatest improvement. This concept can be more clearly depicted by charting the difference between pre-lecture and post-lecture scores against the semester average improvement of the mean scores (see Figure 3).

Figure 3. Average weekly improvement in post-lecture scores as compared to the semester’s average improvement on weekly quizzes.

On average throughout the semester, students scored 1.72 points higher on a post-lecture quiz than on the corresponding pre-lecture quiz. Figure 3 shows which weeks fall below this average. Thus, this analysis helps the instructor identify the specific weeks in the lesson plan that should be targeted for improvement. The efficiency of this course assessment strategy is mirrored by the promptness with which students receive feedback from the digital quiz format.
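A small companion sketch (again reusing the weekly means transcribed from Figure 1) shows how below-average weeks could be flagged programmatically:

```python
# Weekly mean quiz scores transcribed from Figure 1.
weeks = [2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15]
pre_means = [8.32, 6.26, 6.50, 6.45, 6.80, 7.88, 8.36, 7.36, 8.09, 4.27, 8.55, 7.30]
post_means = [9.22, 8.35, 9.12, 8.45, 9.32, 9.54, 9.82, 8.09, 10.00, 6.32, 9.45, 9.06]

# Improvement from pre- to post-lecture quiz for each week.
improvements = [post - pre for pre, post in zip(pre_means, post_means)]
semester_avg = sum(improvements) / len(improvements)  # roughly 1.72 points

# Weeks whose improvement falls below the semester average are candidates for
# revised lectures, materials, or planned activities.
flagged = [wk for wk, gain in zip(weeks, improvements) if gain < semester_avg]
print(f"Average weekly improvement: {semester_avg:.2f} points")
print(f"Weeks below the semester average: {flagged}")
```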

Based on our experiences with this technique, we would argue that the digital mastery quiz format is a useful course assessment strategy that can guide instructional efforts. In addition, the efficiency of the digital format and the speed with which feedback is generated outweigh, for us, any remaining concerns about the statistical relationship between quiz results and student exam performance. The following section therefore highlights additional software techniques that aid in interpreting the data generated by digital mastery quizzes.

Google Motion Charts

As noted earlier, both pre-lecture and post-lecture quiz scores were recorded in the university’s learning management system and charted as a series of averages. While this proved to be a convenient way to track quiz scores, it created several challenges for analyzing individual student progress over time. In addition, given the relatively small sample size, it became important to consider the data from multiple perspectives in order to make full use of it. With these two issues in mind, a search for another analysis tool proved necessary. After experimenting with several different visualization tools, Google Motion Chart was selected based on its ability to provide animation and multi-dimensional analysis in an interactive, easy-to-understand way.

In addition, the Google Motion Chart was a freely available gadget within Google Docs (now Drive), making it an easy and viable tool for us, and others, to use. In 2007, Google acquired Trendalyzer, the visualization software developed by Hans Rosling’s Gapminder Foundation, and incorporated it as a Google Gadget that can be inserted into any Google Spreadsheet.3 In essence, the motion chart is a Flash-based chart used to explore several indicators over time. This made it an ideal tool for our purposes, as it provides up to four dimensions for analysis. As illustrated in Figures 4 and 5, the parameters we used for analysis were pre-lecture quiz scores (x-axis), post-lecture quiz scores (y-axis), and the difference between the pre- and post-lecture scores for individual students (color).

Figure 4. Data formatted in Google Spreadsheet and converted to a Google Motion Chart.
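Because the motion chart expects one row per student per week, the quiz scores must be reshaped into a long-format table before charting. The pandas sketch below illustrates that layout; the student names, scores, and column labels are invented for illustration and are not the authors’ actual spreadsheet.

```python
import pandas as pd

# Illustrative records only: one row per student per week (names and scores are made up).
records = [
    {"student": "Student A", "week": 2, "pre_score": 8.0, "post_score": 9.0},
    {"student": "Student A", "week": 3, "pre_score": 6.0, "post_score": 8.5},
    {"student": "Student B", "week": 2, "pre_score": 7.5, "post_score": 9.5},
    {"student": "Student B", "week": 3, "pre_score": 5.0, "post_score": 8.0},
]

df = pd.DataFrame(records)
# The pre/post difference drives the color dimension of the motion chart.
df["difference"] = df["post_score"] - df["pre_score"]

# Order the columns with the charted entity first and the time variable second,
# followed by the values plotted on the x-axis, y-axis, and color.
df = df[["student", "week", "pre_score", "post_score", "difference"]]
print(df)
```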

Once the data is converted to a Google Motion Chart, a “play” button appears in the lower left of the chart. When clicked, this button sets the data in motion. An optional “trails” feature draws lines that assist in tracking individual student progress over time (see Figure 5). The Google Motion Chart allows these variables to be quickly modified as needed by choosing a different variable from the drop-down list provided. Once the chart is set in motion, it becomes easy to focus on one aspect of the data set.

Figure 5. A Google Motion Chart illustrating a student’s scores over the course of the semester.

In our example, we focused on the difference between the pre-lecture and post-lecture scores, represented by color and charted by the gadget over time. The more blues and greens displayed, the smaller the difference between pre- and post-lecture scores (see Figure 6); the more yellows and reds, the greater the difference (see Figure 7). This quickly provides a way for instructors to gain a general sense of the size of these gaps, and thus of student performance on the quizzes.

[vimeo]https://vimeo.com/65057284[/vimeo]

Figure 6. A Google Motion Chart (captured as a movie) illustrating less positive learning outcomes, as evidenced by the cooler colors.

[vimeo]https://vimeo.com/65057283[/vimeo]

Figure 7. A Google Motion Chart (captured as a movie) illustrating more positive learning outcomes, as evidenced by the warmer colors.

Discussion

Anecdotally, we found that the mastery quizzes did encourage regular, punctual attendance. All quizzes were electronically “opened” and “closed” to students based on the precise timing of the physical class meetings. The learning management software does allow the instructor to restrict access based on an IP address; thus in larger courses these settings could potentially increase attendance. However, the question of whether the quizzes actually promoted the completion of reading assignments prior to class warrants further investigation.

The consistent improvement in post-lecture averages suggests that the mastery quiz format guided the students’ understanding of the material by signaling important concepts. Students overwhelmingly expressed favorable attitudes towards the mastery quizzes in their evaluation of the technique. These results seem to suggest positive learning outcomes. However, the findings are not generalizable and outcomes between different learner types should be considered in future studies.

Our experience highlights the usefulness of the digital mastery quiz format as a course assessment strategy. The efficiency and clarity with which digital quizzes provide feedback to the instructor regarding his or her relative success in the classroom present compelling justification for implementing this strategy in other courses. The benefits of the digital quiz format are further enhanced by the ability of current web-based software to aid in the visualization and analysis of the data.

The selection of the Google Motion Chart as our visualization tool not only provided a unique opportunity to see changes in both individual and class performance over time but, more importantly, allowed the instructor to monitor quiz results concurrently for each week of the semester. As additional data is added to the Google Spreadsheet, the motion chart should provide the instructor with a means for quick analysis of student progress over time, making it a useful retrospective tool to help inform teaching decisions. This is just one simple interpretation of the data; we feel, however, that there is value in being able to visualize data in this manner, because it gives instructors the ability to see classroom trends and patterns over time. However, we do not want these assessment benefits to overshadow the perceived pedagogical value of the quizzing format as a teaching technique.

The digital mastery quiz format also presented equally important instructional opportunities. The pre-lecture quizzes were deliberately designed not to provide detailed feedback to the students; they only indicated whether a given question had been answered correctly. This aspect of the quiz design was implemented in an attempt to focus the students’ attention on specific content they apparently did not understand. Anecdotally, this design detail seemed to noticeably increase the number of questions asked at the start of the lecture and the overall engagement of the students during class as compared to previous semesters in which the course was taught.

From this perspective, the digital mastery quizzes presented a valuable catalyst for class discussion. As Connor-Greene (2000) suggests, assessment and testing can become a dynamic process rather than a static measure of student knowledge if they are used to generate classroom conversation. Therefore, we believe that the blended nature of the digital mastery quiz format, as it was implemented in our study, was critical in meeting our educational objectives. Specifically, the pre-lecture quiz administered at the start of each class combined the efficiency of online quizzing with the opportunity for immediate and collaborative discussion in the physical classroom. This approach to quizzing seemingly encouraged students to “test their knowledge” and then use the scheduled class period as an opportunity to follow up with questions in a more interactive and personal forum.

Finally, daily quizzes can also be a catalyst for multiple levels of thinking if more robust question types are included in the quiz design. We included both “recall” questions and “applied” short-answer questions within our quiz design. In hopes of further encouraging higher-order or critical thinking, future development of the mastery quiz format should focus on the quality and depth of thinking required by distinct question types. Assessment strategies and techniques must be consistent with the level of thinking an instructor is attempting to encourage in the classroom. In terms of digital and online quizzes, new electronic question types such as “drag-and-drop” responses increasingly allow instructors to develop higher-level assessments. Therefore, all educators could benefit if future research increases our understanding of the relationship between digital question types, quiz outcomes, and Bloom’s (1956) Taxonomy.

Bibliography

Angus, Simon, and Judith Watson. 2009. “Does regular online testing enhance student learning in the numerical sciences? Robust evidence from a large data set.” British Journal of Educational Technology no. 40 (2):255-272.

Bloom, Benjamin. 1956. Taxonomy of educational objectives, Handbook I: The cognitive domain. New York: David McKay.

Brothen, Thomas, and Cathrine Wambach. 2001. “Effective student use of computerized quizzes.” Teaching of Psychology no. 28 (4):292-294.

———. 2004. “The Value of Time Limits on Internet Quizzes.” Teaching of Psychology no. 31 (1):62-64.

Connor-Greene, Patricia. 2000. “Assessing and promoting student learning: Blurring the line between teaching and testing.” Teaching of Psychology no. 27 (2):84-88.

Daniel, David, and John Broida. 2004. “Using web-based quizzing to improve exam performance; Lessons learned.” Teaching of Psychology no. 31 (3):207-208.

Haberyan, Kurt. 2003. “Do weekly quizzes improve student performance on general biology exams.” The American Biology Teacher no. 65 (2):110-114.

Haigh, Martin. 2007. “Sustaining learning through assessment: An evaluation of the value of a weekly class quiz.” Assessment & Evaluation in Higher Education no. 32 (4):457-474.

Johnson, Bethany, and Marc Kiviniemi. 2009. “The effect of online chapter quizzes on exam performance in an undergraduate social psychology course.” Teaching of Psychology no. 36:33-37.

Kibble, Jonathan. 2007. “Use of unsupervised online quizzes as formative assessment in a medical physiology course: Effects of incentives on student participation and performance.” Advances in Physiology Education no. 31:253-260.

Kim, Young-Jae, Jun-Hyun Kim, and Ming-Han Li. 2013. Learning vehicle preferences and web-enhanced teaching in landscape architecture construction studios. Paper read at Council of Educators in Landscape Architecture Conference: Space, Time/Place, Duration, March 27-30, 2013 at Austin, Texas.

Li, Ming-Han. 2007. “Lessons learned from web-enhanced teaching in landscape architecture studios.” International Journal on E-Learning no. 6 (2):205-212.

Maki, William, and Ruth Maki. 2001. “Mastery quizzes on the web: Results from a web-based introductory psychology course.” Behavior Research Methods, Instruments, & Computers no. 33 (2):212-216.

Narloch, Rodger, Calvin Garbin, and Kimberly Turnage. 2006. “Benefits of prelecture quizzes.” Teaching of Psychology no. 33 (2):109-112.

NCAT, The National Center for Academic Transformation. 2012. Program in course redesign; The supplemental model 2012 [cited October 17, 2012]. Available from http://www.thencat.org/PCR/model_supp.htm.

Nevid, Jeffrey, and Katie Mahon. 2009. “Mastery quizzing as a signaling device to cue attention to lecture material.” Teaching of Psychology no. 36:29-32.

Urtel, Mark, Rafael Bahamonde, Alan Mikesky, Eileen Udry, and Jeff Vessely. 2006. “On-line quizzing and its effect on student engagement and academic performance.” Journal of Scholarship of Teaching and Learning no. 6 (2):84-92.

Willems, Julie. 2011. “Using learning styles data to inform e-learning design: A study comparing undergraduates, postgraduates and e-educators.” Australasian Journal of Educational Technology no. 27 (6):863-880.

 

About the Authors

Kyle Beidler is an Assistant Professor at Chatham University in the Landscape Architecture Program. His research and teaching interests include design education, neighborhood planning, sustainable site engineering practices and the integration of digital technologies with design communication. Kyle received his PhD in Environmental Design and Planning from Virginia Tech and recently completed Chatham’s Faculty Technology Fellows Program from which this project and article originated.

Lauren Panton is the Manager of Instructional Technology and Media Services for Chatham University. She leads the Faculty Technology Fellow Program, which supports faculty with technology-enhanced projects in teaching, learning, and scholarship. Her academic interests include the scholarship of teaching, as well as technologies related to data visualization, multiple modalities and blended learning.

  1. In the context of this paper, a supplemental model of blended learning is conceptualized as a pedagogical strategy that retains the “basic structure of the traditional course and uses technology resources to supplement traditional lectures and textbooks” (NCAT 2012).
  2. For a complete discussion regarding the relationship between quiz questions and Bloom’s Taxonomy of Educational Objectives, please see Connor-Greene (2000).
  3. Google has announced that it will deprecate Gadgets in Google Spreadsheets in 2013; however, the motion chart type will be incorporated as a regular chart option (to insert one of these charts, select Chart from the Insert menu). No specific date has been announced; please check the Google Drive support site for additional information.
