Why Johnny Can’t Add Without a Calculator

June 25, 2012, 6:00 AM

Technology is doing to math education what industrial agriculture did to food: making it efficient, monotonous, and low-quality.

Math and science can be hard to learn—and that’s OK. The proper job of a teacher is not to make it easy, but to guide students through the difficulty by getting them to practice and persevere. “Some of the best basketball players on Earth will stand at that foul line and shoot foul shots for hours and be bored out of their minds,” says Williams. Math students, too, need to practice foul shots: adding fractions, factoring polynomials. And whether or not the students are bright, “once they buy into the idea that hard work leads to cool results,” Williams says, you can work with them.

Educational researchers often present a false dichotomy between fluency and conceptual reasoning. But as in basketball, where shooting foul shots helps you learn how to take a fancier shot, computational fluency is the path to conceptual understanding. There is no way around it.

The fight between those who seek a way around hard work (a “royal road to geometry,” in Euclid’s famous phrase) and those who realize that earned fluency is the only road to understanding goes back millennia, and it became particularly acrimonious in America over the last half-century in the so-called math wars. On one side are education researchers like Constance Kamii, at the University of Alabama, who argues that teaching children to add and subtract is harmful. This camp says it has insights into the way children learn that warrant a departure from traditional ways of teaching math. On the other side is the consensus of working scientists and mathematicians, as well as teachers like Williams, who notes that it took very smart adults thousands of years to develop modern mathematics, so it makes sense to teach it to students rather than get them to “discover” it themselves.

What is new to this fight is the totalizing power of technology. A 2007 congressionally mandated study by the National Center for Education Evaluation and Regional Assistance found that 16 of the best reading and mathematics learning software packages, selected by experts from 160 submissions, did not have a measurable effect on test scores. But despite this finding, the onslaught of technology in education has continued. Maine was the first state to buy laptops for all of its students in grades seven through 12, spending tens of millions of dollars to do so, starting with middle schoolers in 2002 and expanding to high schools in 2009.

The nation is not far behind. Though no well-implemented study has ever found technology to be effective, many poorly designed studies have, and that questionable body of research is influencing decision-makers. Researchers with a financial stake in the success of computer software are free to design studies that are biased in favor of their products. (I’m sure this bias is, as often as not, unintentional.) What is presented as peer-reviewed research is fundamentally marketing literature: studies done by people selling the software they are evaluating.

For instance, a meta-analysis from Empirical Education Inc. on the effectiveness of graphing calculators reports a “strong effect of the technology on algebra achievement.” But the meta-analysis includes results from a paper in which “no significant differences were found between the graphing-approach and traditional classes either on a final examination of traditional algebra skills or on an assessment of mathematics aptitude.” In that same paper, calculators were marginally helpful only on a test designed specifically for the study. The meta-analysis counted the results from that specially made test but not the negative results from the traditional exam.

Take this gem from researchers at SRI International. They say that standardized tests don’t capture the “conceptual depth” students develop by using their software, so the “research team decided to build its own assessments.” Unsurprisingly, students did relatively well on the assessments the researchers designed for their own product. Another example: A recent study by the Education Development Center compared students who took an online algebra 1 class with students who took nonalgebra eighth-grade math.* The online students did better than those who didn’t study algebra at all (not exactly surprising). But the online students weren’t compared with those who took a regular algebra class.

Despite the lack of empirical evidence, the National Council of Teachers of Mathematics takes the beneficial effects of technology as dogma. A simple shell game goes on: Although there is no evidence that technology has been useful in teaching kids math in the past, anyone can come up with a new product and claim that this time it is effective.

I tried using one such product, Cognitive Tutor from Carnegie Learning, which claims to be “intelligent mathematics software that adapts to meet the needs of ALL students.” One problem asked me to calculate the width of a doorframe, given the frame’s height and a diagonal measurement of the door: a one-line application of the Pythagorean theorem. After 30 seconds’ work with pen and paper, I submitted my answer: 93.7 cm. But Cognitive Tutor wouldn’t accept it. It wanted me to go through an elaborate and cumbersome series of steps to get its answer: 93.723. This isn’t teaching math; it’s teaching how to use a particular software package. The supposed “real-world applications” don’t even reflect the real world. Show me a tape measure that allows you to measure to one-hundredth of a millimeter.
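(For readers who want to check the arithmetic: the width is the square root of the diagonal squared minus the height squared. The sketch below, in Python, uses hypothetical measurements, since the column doesn’t reproduce the numbers from Cognitive Tutor’s problem; the method and the rounding are the point, not the particular values.)

    # Doorframe width from its height and the door's diagonal, via the
    # Pythagorean theorem: width = sqrt(diagonal^2 - height^2).
    # The measurements below are hypothetical; they are not the ones from
    # the Cognitive Tutor problem.
    import math

    height_cm = 203.0    # hypothetical frame height
    diagonal_cm = 223.6  # hypothetical diagonal of the door

    width_cm = math.sqrt(diagonal_cm**2 - height_cm**2)

    print(round(width_cm, 1))  # 93.7   -- precision a tape measure supports
    print(round(width_cm, 3))  # 93.744 -- thousandths of a centimeter, i.e.
                               #           hundredths of a millimeter

Both printouts come from the same mathematics; the difference between the one-decimal and the three-decimal answer is a choice about precision, not a difference in understanding.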
