The Problem With the School of One
Can technology make education too customized for the student?
For decades, I resisted the lure of video games. Then I had a son. When Sam was 6, he enjoyed a game called Pajama Sam, and watching him play encouraged me to explore the world of video games for adults. At first, I was amazed that people paid good money for this degree of difficulty in the name of entertainment. But after much frustration and persistence, I came to love playing titles like Half-Life, Deus Ex, The Elder Scrolls, Rise of Nations, Halo, Grand Theft Auto, Chibi-Robo, and From Dust. Such games are problem-solving spaces, I eventually realized. As such, they must do a good job of teaching the player to master the problem-solving skills necessary to play and win the game. More importantly here, such video games are designed to challenge players and make them work hard to succeed. This realization prompted me to begin researching how video games can be used to create good learning.
One problem video game designers face is a tendency, seemingly inborn in human beings, to optimize one's chances of success. Gamers will often seek all possible advantages and use any tactics they can to win. They will, for example, engage in what gamers call “cheats”—pieces of code or hacks that can make the game easier or advantage the player in some way. The problem is this: Gamers will often optimize up to the point where they undermine the game’s design and even ruin it by making it too easy.
Good game designers encourage optimization up to a point, as a creative and proactive activity of the gamer. However, they must forestall it from undermining the game and ruining the player’s experience. It is a tricky balance and part of the art of good game design.
This human urge to optimize is, of course, old, and it applies much more widely than just to video games. Faced with significant challenges in the “state of nature,” humans who survived were good optimizers. They did all they could to increase their chances of success (survival) and lower the level of difficulty they faced. Those who did not optimize in this way were selected out of the gene pool for good Darwinian reasons. In the state of nature, one could optimize only so far. The level of difficulty always remained high. One could not cheat death. Ultimately, every human “lost” the game.
Modern technologies give the human urge to optimize and to lower the level of challenge free rein and near-endless application. In modern times, the human urge to optimize takes the form of customization. Modern technologies increasingly allow each of us, if we wish, to customize many things to fit our skills, styles, desires, and beliefs in such a way as to leave us less challenged and feeling more “successful.” This process goes ever forward with each new technological advance.
For example, today there are adaptive, artificial (computer-based) tutors to teach algebra. Based on how the learner is faring, these tutors (which do quite well) customize presentation, problems, and the order of problems to each individual learner. They can also be equipped with sensors that tell the system when the learner is bored, confused, or frustrated and adapt instruction accordingly. Each learner proceeds based on his or her favored style of learning in a way that lowers the level of frustration as far as possible. Artificial tutors do not care where you start, how long you take to finish, or how smart or stupid your initial answers are. They are far more tolerant than most humans.
There is nothing wrong with, and lots right about, such artificial tutors. They are just one device among many that seek to transform education into “a school of one.” But they represent a perfecting of the human urge to optimize that can go too far and end with bad consequences. People who never confront challenge and frustration, who never acquire new styles of learning, and who never face failure squarely may in the end become impoverished humans. They may become forever stuck with who they are now, never growing and transforming, because they never face new experiences that have not been customized to their current needs and desires.
School of One learns about the specific academic needs of every student and then accesses a large bank of carefully reviewed educational resources, using sophisticated technology to find the best matches among students, teachers, and resources.
School of One’s learning algorithm helps to ensure each student is learning in his or her educational “sweet spot.” As it collects data, it learns more about the students and becomes more effective at predicting the playlist that will be most effective for each.
The same data-gathering revolution that has led to Google’s personalization of the news or Amazon’s customized book recommendations is leading to a revolution in individually tailored education. Advances in artificial intelligence have helped here, but so has the ability to mine massive data on learners of math, for example, so as to predict which trajectory of learning will work best for different individuals based on what similar learners have done and how they have fared under various conditions. At the same time, a revolution in sensors means that we can know when learners are bored or confused and quickly adapt to the problem. An artificial tutor can gently lead us down our paths of least resistance. All of this can be good, of course, but have you noticed that after you have bought lots of books on Amazon, the titles it recommends all come to sound pretty much alike?
As a gamer I want a video game to hold my hand when I begin, but I do not want it to customize the boss battle for me. I want the boss battle to test me, and I want to feel a sense of growth and accomplishment when I slay him. In the real world and in our lives, the “bosses” (e.g., global warming, growing inequality, bad jobs, transformative change, and worldwide poverty) are not going to adapt to us. We must enter the fray and let the battle make us better than we were before.
Success in the 21st century at work and in life requires collaboration, collective intelligence, and smart teams using smart tools. In our fast-changing world, a world that faces many serious crises, being able to cope with challenge, to persist past failure, to learn in new ways, and to adapt one’s skills and style to other team members are all 21st-century skills. Yet new technologies and the Internet allow us to enter our own customized echo chambers and identity niches where we can comfort ourselves with what we are and do not have to confront ourselves with what we can be and, indeed, must become as fellow citizens in a diverse and complex global world. This is particularly dangerous for students.
What happens when people have to learn, solve problems, and collaborate with others whose “sweet spots” differ from their own, as people so often must in modern workplaces? I wonder what would happen should, God forbid, children run into learning situations in the world that cannot be optimized for them individually. What if the world changes and the problems that arise just do not afford solutions that fit their sweet spot? What if their sweet spot is just no good for certain types of learning and problem solving?
Adapted from The Anti-Education Era by James Paul Gee. Copyright © 2013 by the author and reprinted by permission of Palgrave Macmillan, a division of Macmillan Publishers Ltd.
James Paul Gee is the Mary Lou Fulton Presidential Professor of Literacy Studies at Arizona State University. He is the author of What Video Games Have to Teach Us About Learning and Literacy and more recently, The Anti-Education Era, both from Palgrave/Macmillan.