Future Tense

MOOCs Need to Go Back to Their Roots

They were supposed to be educational communities, not hypertextbooks.

This back-to-school season has also brought a wide range of developments in the online education space known as MOOCs: massive open online courses. While MOOCs vary in the details, most are free, taught by professors, and solely for the edification of the student—not for credit. In recent weeks, we’ve seen announcements for the Open Education Alliance, a partnership between the state of California, Udacity, and a host of major tech companies, and for Google combining its Course Builder software with the Ivy League MOOC consortium edX, making it easier for top-notch professors to use the curriculum development equivalent of Gmail or Blogger.

But announcements are not results, and MOOCs, hailed as the saviors of higher education when they burst into public awareness in 2012, have had trouble living up to the hype. San Jose State’s experiment with MOOCs for introductory and remedial classes was suspended after two semesters, with pass rates roughly half those of the same classes taught traditionally. In one sense, that sort of retreat is part of the natural life cycle of new technologies: the trough of disillusionment. But in another, it’s due to a deeper weakness in the design of MOOCs—the choice to substitute buzzword-based disruption for an actual model of what an online, open, and user-driven education should look like. To fix MOOCs, we have to go back to the beginning.

There’s a dirty little secret at the heart of education: We don’t really know what learning is, how people best do it, or how to measure it. Many theories have been developed over time, and they can be separated into two major categories. The standard approach is cognitive-behaviorism, which argues that knowledge can be measured by giving students an identically administered test. Cognitive-behaviorist approaches developed with the rise of public schooling and the accompanying challenges of allocating resources efficiently in a diverse and changing industrial economy. The problem with cognitive-behaviorism is that, as anyone who has ever wielded a No. 2 pencil knows, it results in boring drills and anxiety-provoking tests. Perhaps even more worryingly, the design and development of such assessments is far from an exact science: Done poorly, they can lead to irrelevant educational practices in the name of teaching to the test. And finally, to produce meaningful rankings of students, this perspective demands failures.

The main alternative is constructivism, which holds that learning is essentially ineffable but occurs when a student incorporates a new idea into her mental toolbox. Constructivist approaches to education emphasize dialogue, creativity, and independence. Constructivism makes intuitive sense as an explanation of how complex skills are mastered, and it accords with the experiences of teachers and students. The problem is that constructivist education is inherently individualized and requires painstaking effort from all parties involved, with commensurate financial costs. Finally, the instructor’s word is the only proof that the student has learned anything, making the whole system dependent on trust in the skill and probity of teachers—and impossible to scale up.

All this theoretical incoherence means that there’s plenty of room for experimentation. Online education is one such grand experiment, replacing all the complex infrastructure of classrooms and scheduling with a few inexpensive and reliable servers. Unfortunately, most of today’s MOOCs are poorly conceived from both constructivist and cognitive-behaviorist perspectives. Traditional assessments that measure recall or proficiency with clearly defined concepts are irrelevant when students can freely consult books, notes, Google, and their friends while completing homework and tests. MOOC providers can try to lock down their tests, buying a measure of security by randomizing questions and setting up proctoring centers in strip malls, but the fact remains that any assignment that can be graded by a computer can also be easily solved by a student with a computer—and it encourages hunting for the right search terms rather than working out an original solution.

While great teachers can deliver lectures to tens of thousands of students worldwide via video, there’s no way for them to have a conversation with all those students. And while platforms like Coursera have made it easier for professors to put together online classes, the end result is a hypertextbook, not a virtual classroom that builds discipline. No wonder MOOCs have an average completion rate of just 7 percent. By and large the material is no more compelling than a textbook, and certificates of completion aside, there’s no reward for finishing the class. The virtual classroom has to compete for attention with Facebook, Netflix, and the real world. Interaction between teachers and students in MOOCs is so minimal that two professors from Duke teaching a class on reason and argument using Coursera promised to shave their heads on camera for a pass rate above 25 percent, providing a practical example of an appeal to pity. The cheap tricks of edutainment substitute for the hard work of learning.

Cyberspace plays by different rules than the real world, so we need to start designing online classes from the ground up instead of trying to cut and mangle the internet so that it looks like a 20th-century classroom. Contrary to popular belief, MOOCs didn’t originate with Sebastian Thrun and Peter Norvig’s heralded 2011 class on artificial intelligence, which developed into the startup Udacity. Rather, Stephen Downes and George Siemens, a pair of Canadian academics, developed the MOOC in 2008 as a proof of concept of their connectivist theory of education. Drawing from neuroscience and computer networking, connectivism postulates that knowledge is distributed across human and nonhuman nodes in a network. Downes and Siemens argue that in the 21st century, education is the ability to navigate this network, link disparate fields, and contribute to the understanding of other people.

Connectivism is a somewhat flaky utopian idea, a technological metaphor more than a practical method, but it works with the strengths of digital technology rather than against them—and MOOC designers should hew as closely as possible to the original model. Rather than pouring effort into making thousands of glossy but ultimately stagnant hypertextbook “classes,” MOOC developers should be designing platforms that work for traditional scholarly fields and the new skills of the global economy. Twelve-week courses, video lectures, and mostly empty discussion boards should be replaced with an ongoing discussion that encourages participants to share what they know with one another rather than perform for some distant grader. Professors would set the broad terms of the discussion and subtly guide it toward productive and interesting topics, instead of presenting a fixed curriculum. The hardest part of MOOC design, and the one that deserves the most attention, is making a space for engaged education that rewards helping others as a prelude to learning, not one that replicates the most tedious parts of today’s classrooms.

The current strategy of MOOC developers is one of downmarket disruption, using technology to replace the costly capital and labor of education. This is the standard Silicon Valley game plan, and historically it works even when the product is markedly inferior. Half of a traditional education at 10 percent of the cost would be a bargain, albeit one at the expense of an educated society. A representative argument of college faculty against MOOCs is that “real” constructivist-style learning is impossible in an online environment. To that I can only reply with Clarke’s First Law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.