Teaching Computers To Predict Whether a Tumor Is Malignant

Nov. 1 2011 3:09 PM

Blogging the Stanford Machine Learning Class

Teaching computers to predict whether a tumor is malignant.

IBM's Watson artificial intelligence computing system, whose advances are now being used for medical diagnosis.

Ben Hider/Getty Images.

Last week, I complained that Stanford’s machine learning class, which I’m taking online with thousands of others, is training me to be an actuary, not the robot overlord I aspire to be. So far, the course has dealt primarily with problems where we have a bunch of data—mostly housing prices—and our job is to find an algorithm that “fits” the data as well as possible. Are you bored yet? I was a little bored.


Most of what we had been doing was fitting lines to data. It was the sort of stuff that, as I said, I did in Mrs. Burke’s Algebra II class in high school. (Thanks for now following me on Twitter, Mrs. Burke!)

Mercifully, though, this week we shifted to a different kind of problem. No more predicting house prices—we are now teaching machines to predict one of two discrete outcomes: Is a tumor malignant, given its size? Will a student pass a class, based on incomplete grading information? These are the sorts of questions computers can be “taught” to answer with great accuracy, if done correctly.

I put “taught” in quotation marks because, to this point, it’s still not entirely clear to me why we describe this process as teaching computers to learn, as opposed to just solving problems with their great processing power. We’re only on Week 3, so I don’t expect this to be entirely obvious yet, but it already feels not so different from the way humans learn when all we have is a bunch of examples of something from which to formulate a theory: We use trial and error to home in on the correct answer to a problem. In some of the programming exercises we’ve been doing, we’ve allowed the computer 1,500 attempts to get the very best possible values for an algorithm, making sure that it’s getting a little closer each time.
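The class exercises use Octave, but that loop of repeated attempts is easy to sketch in Python. Here is a minimal version of gradient descent fitting a line to made-up housing data: 1,500 passes, each nudging the line's two parameters so the fit gets a little better. The data, learning rate, and iteration count are invented for illustration.

```python
import numpy as np

# Toy data: house sizes (in 1,000s of sq. ft.) and prices (in $1,000s).
# These numbers are made up; the true relationship is price = 50 + 100 * size.
X = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([150.0, 200.0, 250.0, 300.0, 350.0])

theta0, theta1 = 0.0, 0.0  # parameters of the line: price = theta0 + theta1 * size
alpha = 0.1                # learning rate: how big a step each attempt takes
m = len(X)

def cost(t0, t1):
    """Half the mean squared error between predicted and actual prices."""
    return ((t0 + t1 * X - y) ** 2).mean() / 2

prev = cost(theta0, theta1)
for _ in range(1500):  # the 1,500 attempts mentioned above
    err = theta0 + theta1 * X - y
    theta0 -= alpha * err.mean()          # step downhill on the cost surface
    theta1 -= alpha * (err * X).mean()
    assert cost(theta0, theta1) <= prev + 1e-12  # a little closer each time
    prev = cost(theta0, theta1)

print(round(theta0), round(theta1))  # prints: 50 100
```

After 1,500 steps the parameters land almost exactly on the line hidden in the data; the assertion inside the loop is the "make sure it's getting a little closer each time" check.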

As it turns out, when you have data that can be classified as true or false—pass/fail, malignant/benign, or even one of several discrete states—then straight lines frequently don’t do much for you. You have to start getting into fancier algorithms, with advanced concepts like squaring a variable, to find a good way to predict future cases based on the historical data we already have.
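A tiny sketch of why "squaring a variable" helps, using invented one-dimensional data: the positive cases sit at both ends of the number line, so no single straight-line threshold on x separates them, but a threshold on x² does. This uses logistic regression, this week's topic; the data and learning rate are made up.

```python
import numpy as np

# Toy 1-D data: the "positive" class sits on both ends of the number line,
# so no single threshold on x separates it -- but a threshold on x**2 does.
x = np.array([-3.0, -2.5, -2.0, -0.5, 0.0, 0.5, 2.0, 2.5, 3.0])
y = np.array([1,    1,    1,    0,   0,   0,   1,   1,   1])

def sigmoid(z):
    """Squash any score into a value between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-z))

# Features: a constant term plus the squared variable.
F = np.column_stack([np.ones_like(x), x ** 2])
theta = np.zeros(2)

for _ in range(5000):  # plain gradient descent on the logistic cost
    p = sigmoid(F @ theta)
    theta -= 0.1 * F.T @ (p - y) / len(y)

# Classify each point as positive when the model's probability passes 0.5.
preds = (sigmoid(F @ theta) >= 0.5).astype(int)
print(preds.tolist() == y.tolist())  # the curved boundary gets all nine right
```

A model that only saw x would be stuck; adding the x² feature lets the same line-fitting machinery draw a boundary that isn't a straight line.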

In the case of breast tumors, Stanford researchers have trained computers to take in a variety of information about a newly discovered tumor and predict, very precisely, whether it is malignant. In the first class we thought about the problem just in terms of the tumor’s size, but in reality a tremendous number of factors go into any such prediction. The resulting algorithm, if it’s good, can guess not only whether a tumor is bad but also the probability that it is.
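As a sketch of what such a classifier's output looks like: a logistic model combines a tumor's features into a single score, then squashes that score into a probability between 0 and 1. The feature names and weights below are entirely invented for illustration; a real system would learn them from thousands of labeled cases.

```python
import math

# Hypothetical model: these feature names and weights are made up.
# A real classifier would learn its weights from labeled tumor data.
weights = {"bias": -8.0, "size_mm": 0.35, "irregularity": 2.0}

def malignancy_probability(size_mm, irregularity):
    """Return the model's estimated probability that a tumor is malignant."""
    z = (weights["bias"]
         + weights["size_mm"] * size_mm
         + weights["irregularity"] * irregularity)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps the score into (0, 1)

# A small, regular tumor vs. a large, irregular one (made-up inputs).
print(f"{malignancy_probability(5, 0.2):.2f}")   # near 0: almost surely benign
print(f"{malignancy_probability(30, 0.9):.2f}")  # near 1: very likely malignant
```

The point is that the output isn't a bare yes/no: it's a degree of confidence a doctor can weigh.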

We want computers making these calculations so that humans—doctors, in this case—can make informed decisions about how to act, free of whatever bias or faulty intuition they may bring to the table. In the case of Watson, everyone’s favorite robotic Jeopardy! champion, this idea that algorithms can predict an outcome with a specific degree of certainty was one of the biggest advances that project made in artificial intelligence. Watson didn’t just have a guess. He knew how confident he was in that guess as well. In fact, this is exactly what Watson’s brain is now being repurposed to do: produce diagnostic information for physicians, complete with degrees of confidence in the diagnosis.

I am only three weeks into the discipline. But to me, this seems to be the great promise of learning machines: not that they will guide our lives, but that they will act as consiglieri in a world that operates on knowing the odds.

Grades:

Logistic regression: 4.5/5

Regularization (a way to handle huge numbers of variables): 5/5, but -20 percent for being late again.

Coding grade for linear regression: 114/100, thanks to some extra credit questions.
