Addressing the “2 Sigma Problem”: A Review of Bill Ferster’s Teaching Machines

Gardner Campbell, Virginia Commonwealth University

Review of Teaching Machines: Learning from the Intersection of Education and Technology by Bill Ferster (Baltimore: Johns Hopkins University Press, 2014). $34.95 hardcover, $33.20 e-book.

In Teaching Machines: Learning from the Intersection of Education and Technology (Johns Hopkins University Press, 2014), author Bill Ferster has given us a book that is informative, useful, and charming. Informative, in that even old hands in educational technology may have some gaps in their knowledge of the history of the subject (quick: how much do you know about Sidney Pressey, the “grandfather of the teaching machine movement”?); useful, in that it sparks new thoughts by extending the definition of “technology” back to early textbooks and their use of a quasi-catechistical style that might substitute for a live teacher; and charming in that many of the stories are told with a winning, unassuming flair, including the story of the author’s infancy in a Skinner box (his father was a colleague of B. F. Skinner). Ferster uses six “lenses” to view key historical moments in educational technology, mostly to identify why they have “consistently failed to live up to the promises of their promoters”: personal, historical, theoretical, economic and business, political, and technological. He uses these lenses in six chapters ranging from horn-books and quill pens to cloud-based MOOCs, an admirably comprehensive and thoughtful design.

Yet for all its artful thoroughness, Ferster’s book often fails to recognize or examine the ironies of its own analyses and conclusions. Ferster’s definition of teaching machines is partly to blame, as it repeats the inimical division between “content” and “pedagogy” that quickly snarls both categories in reductive tangles, leaving the idea of understanding to one side for both teacher and student. There is also a spectre haunting this book’s pages and the conceptual frameworks it proposes. The same spectre haunts contemporary education generally. This spectre is what Benjamin Bloom called “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring.” In this 1984 article, which Ferster quotes several times throughout his book, Bloom (yes, the Bloom’s Taxonomy Bloom) presents findings of research on three learning methodologies: a typical content delivery and testing model with a teacher-to-student ratio of 1:30, a “mastery learning” model with the same teacher-to-student ratio but in which tests are formative and students do not progress until they can demonstrate they have learned the material, and a one-to-one tutorial with “good tutors” (not defined) that proceeds along this same “mastery” model (although Bloom notes that the need for corrective work under such tutoring is “very small” [Bloom, 4]).

Bloom calls the results of these experiments “striking.” It’s hard not to agree:

[I]t was typically found that the average student under tutoring was about two standard deviations above the average of the control class (the average tutored student was above 98% of the students in the control class)….  The variation of the students’ achievement also changed under these learning conditions such that about 90% of the tutored students and 70% of the mastery learning students attained the level of summative achievement reached by only the highest 20% of the students under conventional instructional conditions….   [U]nder the best learning conditions we can devise (tutoring), the average student is 2 sigma [i.e., two standard deviations] above the average control student taught under conventional group methods of instruction. The tutoring process demonstrates that most of the students do have the potential to reach this high level of learning [emphasis Bloom]. (4)

These experiments demonstrated not only that tutoring yielded better learning (at least in the kind of “learning” involved in content-and-testing situations), but that it greatly diminished the gap between learners of varying abilities. Indeed, the graphs in Bloom’s article depicting conventional, mastery, and tutorial learning go from a shallow bell shape to a very steep parabola far to the right of the bell’s midpoint. If similar results were demonstrated for cancer drugs, manufacturing quality control, or successful financial investments, they would be characterized as breakthroughs. Bloom himself comes near to such a proclamation: such results, he writes, “would be an educational contribution of the greatest magnitude. It would change popular notions about human potential and would have significant effects on what the schools can and should do with the educational years each society requires of its young people” (5).
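A quick gloss on the statistical shorthand, for readers who have not kept their statistics fresh: if achievement scores in the control class are roughly normally distributed (an assumption implicit in Bloom’s summary, not something argued in the article or in this review), then a tutored average two standard deviations above the control mean sits at about the 98th percentile of the control distribution:

$$
Z = \frac{X - \mu}{\sigma}, \qquad P(Z \le 2) = \Phi(2) \approx 0.9772 \approx 98\%,
$$

which is where the “above 98% of the students in the control class” in Bloom’s parenthesis comes from.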

But there will likely be no such educational contribution, no such change of popular notions of human potential, no such significant effects on what schools can and should do. Why? Because the social will to effect such change does not exist. In a world where courses are “modularized,” faculty are interchangeable “facilitators,” and branding trumps mission, our societies appear unwilling to make such an investment in education, whether for one-to-one or even one-to-three tutorials. Bloom notes, as a simple matter of fact, that “an important task of research and instruction is to seek ways of accomplishing this [educational success] under more practical and realistic conditions than the one-to-one tutoring, which is too costly for most societies to bear on a large scale” (5). Without a single note of regret, Bloom substitutes “practical and realistic conditions” for essential social aspirations, aspirations like those on which, for example, the idea of democracy depends. Moreover, Bloom implicitly puts his trust in methodologies that appear inevitably to lead to automation of one kind or another, the same philosophy of mass production that in the first three decades of the 20th century aimed to put a Model T Ford in every garage. Where are the societies that can bear the costs of raising students of varied abilities to the height of their potential? Wouldn’t we want our children to be educated there?

Teaching Machines offers detailed and sometimes surprising narratives of approaches to the “2 Sigma Problem.” Two chapters are particularly notable in this respect: “Step By Step,” which examines programmed learning as well as other behaviorist approaches clustering around the work of B. F. Skinner, and “Byte By Byte,” which narrates computerized and typically networked approaches to both mechanistic educational paradigms and the more constructivist attempts to scale education along paradigms favored by Dewey, Vygotsky, Montessori, and Papert. In both cases, however, the problem remains. Mechanistic paradigms reduce learning to linear relationships within concepts that are complexly related or even formulated along multiple dimensions, while constructivist paradigms are rejected for failing to meet reductive, tautological criteria for easily measured “learning.” The “Logo” experiment, pioneered by Seymour Papert, is an interesting case in point of the latter outcome. Papert believed that students could learn mathematical abstractions more easily and thoroughly if those abstractions were made concrete and observable by means of a turtle whose motions were programmed by students manipulating mathematical relations (a minimal illustration appears below). In essence, Papert brought Maria Montessori’s idea of “manipulables” into the world of computers. Both idea and execution were brilliant and effective, but as Papert observed later in The Children’s Machine, “before the computer could change school, school changed the computer.” As Ferster writes, “Papert’s Logo had a spark of interest from educators in the 1980s, but its constructivist/constructionist nature, which makes it a great tool for inquiry and learning, also makes it slower to teach with the content specificity required in an ever-increasingly assessment-driven school environment.” Ferster’s logic is genuinely hard to follow here. Does he mean to criticize that environment for neglecting inquiry and learning, or to criticize Papert’s ambitions for Logo because it falls short of content specificity? Even more seriously, does Ferster mean to suggest that “content specificity” has meaning or value outside a context of inquiry and learning? For that matter, what can teaching and assessment possibly mean outside that context?
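To make Papert’s idea concrete for readers who have never encountered turtle geometry, here is a minimal sketch in Python, whose standard turtle module descends directly from Logo. It is illustrative only, not code from Ferster’s book or from Logo itself; the mathematical relation the learner manipulates is the exterior angle 360/n of a regular polygon, and the turtle makes the consequences of varying n immediately visible.

```python
# Illustrative sketch only: a Logo-style "turtle geometry" exercise using Python's
# standard turtle module (which is modeled on Logo's turtle), not code from the
# book under review. The relation being manipulated is the exterior angle 360/n.
import turtle

def polygon(t, sides, length):
    """Draw a regular polygon by turning through 360/sides degrees at each vertex."""
    for _ in range(sides):
        t.forward(length)
        t.left(360 / sides)

if __name__ == "__main__":
    t = turtle.Turtle()
    t.speed("fast")
    for n in (3, 4, 5, 6, 8, 12):  # more sides brings the figure closer to a circle
        polygon(t, n, 60)
    turtle.done()
```

Change the tuple of side counts and the drawing changes with it; that immediate feedback between a manipulated relation and a visible result is precisely the “manipulable” quality Papert prized.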

While Teaching Machines tells an engaging story of everything from correspondence schools to constructivist MOOCs (what Ferster calls, somewhat misleadingly, “Canadian MOOCs,” as opposed to “California MOOCs” like Udacity and Coursera) and the Khan Academy, the chapters examining pedagogical philosophies underlying these approaches fail to engage with the abuses and absurdities born of mechanistic paradigms. As with Bloom’s analysis, Ferster stops short of deeper and more self-aware examinations of the assumptions underlying the directions pursued by the swarm of venture-capital-infused “educational technologists” who meet universities’ business needs by providing commodified, Skinner-based programmed learning under the guise of “personalized” or “adaptive” computer-aided instruction. Only at the end, in the chapter “Making Sense of Teaching Machines,” does Ferster mount a compelling analysis of the politics and economics of the often monstrous teaching-machine inventions of our day. And even here, Ferster’s welcome insights regarding the special interests served by each wave of mechanized educational “innovation” appear alongside conclusions that seem to back away from the serious social failings at the heart of these attempts, whether labor issues (the low-paid workers who marked student work in correspondence courses) or even more insidious issues of social control by means of “workforce preparation” and so-called “competency-based learning” judged by direct assessment, i.e., tests. The book draws to a close by noting the rapid, indeed exponential “pace of change in technology,” and then immediately concludes that, “[b]ecause of this multiplicative rate of change, the use of technology is likely to provide useful solutions to previously intractable problems” (175-176). Yet no rate of technological change, by itself, can address fundamental questions of meaning and value. Such questions require careful, detailed examination of conceptual frameworks and the assumptions they reflect—and we must always ask as well, cui bono?

For every Bret Victor, Maria Montessori, Seymour Papert, or Jerome Bruner (sadly, absent from this book) who thinks deeply about human potential and the distinctive character of human cognition as they might be more beneficially addressed within what Bruner calls “the culture of education,” there is a B. F. Skinner, Sebastian Thrun, Rob Abel, or Carol Twigg (of the “National Center for Academic Transformation,” also missing from this book) who advocates often brutally reductive approaches to “learning” that simply redefine that word to fit the “practical and realistic” conditions they choose not to examine or attempt to change. Thus the “2 Sigma Problem” is addressed not in terms of the problem, but only in terms of what social engineers find convenient to manage. It is as if we had redefined flying as a certain kind of rapid walking, and thus declared our yearning for flight a “mission accomplished.” Will we finally learn from what Clay Shirky aptly describes as the Internet’s unprecedented many-to-many capabilities, and include those affordances (woefully underexamined in this book as “discussion forums”) in the way we think about the culture of education? Will we think seriously about learner-centered education within what John Seely Brown (who does appear in this book, albeit briefly) calls the “hyperexponential” interactions of network effects within a community of learners? Or will we persist in our rush to redefine learning in terms of what we can easily do, easily measure, and easily replicate?

Although Ferster’s book has much to recommend it, especially when Ferster is telling a story, one is left wishing for a deeper engagement with not just a series of events and inventions, but with the faulty assumptions and self-deceptions—and worse—that also haunt these technologies and the institutions that support them. One must reluctantly conclude that Ferster’s analytical and conceptual frameworks are not robust enough to answer the questions his narratives raise. If they were, we might have seen Alan Kay and Adele Goldberg in the index along with Andrew Ng and Peter Norvig. We might have seen Ted Nelson there, whose Computer Lib / Dream Machines tirelessly imagines a world in which “computer-aided instruction” is about learners who are eager to explore and inquire, and in which “instructional design” is not about modules but about “thinkertoys” that enable rich encounters with sophisticated representations of complexity.

Bloom’s article concludes with the unexamined contradiction at the heart of this book, one that sadly eludes Bloom as well as Ferster, and one that may mean an end to the Deweyan vision of education as a means of democratic soulcraft: “[I]f the research on the 2 sigma problem yields practical methods (methods that the average teacher or school faculty can learn in a brief period of time and use with little more cost or time than conventional instruction), it would be an educational contribution of the greatest magnitude” (5). No contribution of the “greatest magnitude” to any aspect of human existence has come easily or cheaply, yet Bloom imagines that some kind of methodology can effect substantial, beneficial change without enormous commitments of time, energy, or determination. He seems to imagine the same kind of “practical” learning for teachers that the long and often dismal history of teaching machines continues to promise for students. But garbage in, garbage out. In the wake of higher education’s wholesale adoption of so-called “learning management systems” as they morph into “next-generation digital learning environments” and other totalizing “solutions,” it is difficult to see how we can avoid the logical consequences of such social impoverishment. The rich will continue to solve their 2 sigma problems through their access to the complex and complexly effective models of deeper learning, while everyone else gets the teaching machines and is thus, in turn, taught exactly that: to be a machine, happily perpetuating an increasingly unequal and unjust world.

Bibliography

Bloom, Benjamin S. “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring.” Educational Researcher, Vol. 13, No. 6 (Jun. – Jul., 1984), 4-16.

Ferster, Bill. Teaching Machines: Learning from the Intersection of Education and Technology. Baltimore: Johns Hopkins University Press, 2014.

Kay, Alan, and Adele Goldberg. “Personal Dynamic Media.” Computer, Vol. 10, No. 3 (March 1977), 31-41. Reprinted in The New Media Reader, ed. Noah Wardrip-Fruin and Nick Montfort. Cambridge, Massachusetts: MIT Press, 2003, 393-404.

Nelson, Ted. Computer Lib / Dream Machines. Self-published, 1974.

About the Author

Gardner Campbell is Associate Professor of English at Virginia Commonwealth University.


