This article is part of Future Tense, which is a partnership of Slate, the New America Foundation, and Arizona State University. On Wednesday, April 30, Future Tense will host an event in Washington, D.C., on technology and the future of higher education. For more information and to RSVP, visit the New America Foundation website.
With characteristically good timing, I started preparing the lectures for my first-ever MOOC in early December of last year—a few days before the Washington Post ran a piece titled “Are MOOCs Already Over?”
Here is what the Post reported:
New data from a University of Pennsylvania Graduate School of Education study raises big questions about the future of MOOCs. The study, which looked at the MOOC behavior of 1 million people who signed up for courses offered by the university on the Coursera platform from June 2012 to June 2013, found that only 4 percent completed the classes and that “engagement” of students falls dramatically in the first few weeks of a course.
Now that my MOOC, titled Buddhism and Modern Psychology, is wrapping up, I can report that the impending death of MOOCs—massive open online courses—is greatly exaggerated.
Not that I’m predicting they’ll revolutionize education. I’m not qualified to opine on that, in part because no one expects a course like Buddhism and Modern Psychology to revolutionize education in the first place. The great expectations are mainly about courses that impart knowledge of greater vocational value than, say, the Buddhist idea that the self doesn’t exist. You know: courses in computer science or math or accounting—courses that give poor kids in Africa and South Asia a chance to become anecdotes in a Thomas Friedman column.
No, my aim here is just to make one simple point: that the much-lamented and undeniably high “attrition rates” of MOOCs don’t really matter at all.
I don’t deny that, for the first few weeks of my six-week course, looking at my stats was depressing. Each lecture consisted of three or four segments, and the viewership for each segment was lower than that for the previous segment. So the bar graph I was seeing at midterm looked like this: down, down, down, down, down, down … and so on.
But then I realized: One reason it’s depressing to see a graph go down is that we like our graphs to go up—and this particular graph can’t go up. After all, lectures are viewed sequentially. A student starts with Segment 1 of Lecture 1 and goes from there. So even if there were zero attrition, the most you could hope for would be a flat line. And since some attrition is bound to happen, a downward slope is inevitable.
The fact that sequentially presented content pretty much always sees a declining participation rate is a grim truth that we’re in some contexts shielded from. I love to reflect on the fact that some of my books have sold in six figures, but one thing I’ve never seen is a chapter-by-chapter graph of actual readership. And I think I’d rather not, thanks. In 1985 Mike Kinsley, who later founded Slate, did an experiment to test the hypothesis that “much-discussed” books in Washington, D.C., don’t actually get read. He had an assistant visit local bookstores and insert, about three-fourths of the way through various books, a card with Kinsley’s phone number and the promise of a cash reward to anyone who called him. No money changed hands.
Of course, with an academic course there are ways you can limit the drop in participation. Here’s one: Make people pay a ton of money to attend your college and tell them they have to complete a given number of courses to graduate—which means that if they drop out of a course after five weeks, that’s five weeks of extra work they’ll have to do at some point before graduating. This is an effective incentive, and most college professors are familiar with one result: students in your class who would rather not be in your class. Looking at a downward-sloping bar graph may be disconcerting, but no more so than looking at those students.