
To Save Everything, Click Here: What Farhad Manjoo gets wrong about my book.

Google headquarters in Mountain View, Calif., in 2012

Photo by Paul Sakuma/AP

Dear Farhad:

I’m glad there was something—my combative attitude!—that you liked about the book. I’ll try not to disappoint with this entry.

I’m also sorry to hear that you don’t recognize the Silicon Valley of your columns in my work. Thankfully, I have no complaints of this sort: the Silicon Valley described in my book—oh, I see it so clearly in your first post! Could both of us be right? I suspect not, but then you would probably disagree: In Silicon Valley—or so I am told, now that I’ve left sunny Palo Alto for chilly Cambridge, Mass.—they prefer win-win situations.

Take your opening salvo—the idea that “we have very different visceral reactions to technology.” Nonsense, I say! I don’t have a coherent attitude to technology—nay, Technology, with a capital T, which is what you mean—and, most likely, neither do you. Moreover, this is a good thing, as trying to find—and actually require—consistency in our attitudes toward very different technologies is a recipe for ethical, moral, and practical disaster. We have to work out how to treat them on their own terms, not based on some general attitude toward Technology. Can we please bury this Technology idea, if only to prevent the likes of Kevin Kelly from writing deeply confused books with titles like What Technology Wants? There’s no “Technology,” and therefore it can want nothing.

Please let me linger on this seemingly obscure linguistic detail, for I think it reveals something of a philosophical chasm between us. As I hope I make clear in the book, I reject the idea that “Technology” adds up to anything other than a confused analytical concept—this is an argument well-made in one of my favorite essays, Leo Marx’s “Technology: The Emergence of a Hazardous Concept.” What I hate about such “Technology talk” is that it sustains the technophobe-technophile poles of the current debate, making it much harder to engage in substantial critiques of individual technologies—if only for the fear of being labeled a Luddite. I find those poles suffocating, for, as I know too well from my own case, any deviation from blind worship of each and every technology almost automatically confines me to the “technophobe” pole. “Technology” as a thought category suppresses complex feelings toward individual technologies; in this sense, I do hate it—but I hate the ambiguous label, not every single artifact that it refers to.

Of course, I know why Silicon Valley—or, perhaps, “Silicon Valley” (let’s just put quotation marks around everything to remember how tentative all these generalizations are!)—would prefer the debate conducted on such terms. But why would so many smart technology reporters fall into this trap? More specifically, how did you come to believe that securing my router cable and iPhone in a safe with a timer is an act of defiance against Technology, a cardinal sin that could get me into some serious trouble in Silicon Valley? (Perhaps I’ll be required to watch TED talks until my last breath?) But I hold my ground: In using my safe, I’ve committed no grand sins against Technology—because there are no such sins once you let go of this big idea of Technology.

First of all, my safe is also a technology, but, unlike you, I don’t think it’s ironic that I rely on one technology to mitigate the effects of another—I find it perfectly normal. Once you don’t believe that there’s a logic to technology or that our reactions to it should be guided by some grand ideology of technophobia or technophilia, this is perfectly OK. Second, what you neglect to mention is that I’m hiding neither my Kindle Touch nor my iPad. In fact, with my router cable and my iPhone jailed, I enjoy uninterrupted hours, occasionally days, of reading on my iPad (which, thankfully, doesn’t have a data package).

My opposition to “technology” is similar to my opposition to “the Internet”: I find the idea that either of them contains coherent answers to the many challenges they pose to be bunk. Resolving those challenges will require a lot of deep, contextual, philosophical thinking about individual technologies (and the stuff of which “the Internet” is made) that has nothing to do with some abstract truths about “Technology” or “the Internet.” That’s why I like the safe idea so much: It traps so many otherwise smart people into thinking that I’m a technophobe!

Now, as you courageously disclose, you are one of the many people whom I go after in the book. I wish you had pointed out why—it directly connects to your criticism that I can’t see through all the “vaporware” of Silicon Valley. In the book, I mention you several times, but my most substantial critique has to do with a certain fatalism that you propose as our appropriate reaction to technology. More specifically, in one of your columns on facial recognition technology, you write:

“technology marches forward and ordinary people—people who will be stalked, thrown in jail, or otherwise harassed on the basis of a facial identifi[cation]—will be collateral damage. … Soon, though, we’ll learn to live with it. … It’s too late to turn back now: If your face and your name are online today, you’ve already made yourself searchable.”

One of the dominant narratives that I’ve identified in tech debates is this constant tendency to assume that some technology (in this case, facial recognition) is already here, its further spread is inevitable, and all we can do is accept it and adjust our norms—a rhetorical approach that I also describe as “technological defeatism.” Alas, I don’t find “technological defeatism” an appealing option, on either historical or moral grounds.

In the specific case of facial recognition technology—which I studied at quite some length—most of the projects and initiatives that are currently presented to us as fully functional and effective were some kind of “proof of concept” or “vaporware” just 10 years ago. I don’t recall many efforts to oppose such projects in the technology press (which, for my money, has never bothered to develop a strong normative dimension to its own work, resigning itself to the boring and almost stenographic function of what I call “trend spotting”—more on that later on). But there was nothing inevitable about all these efforts achieving the kinds of functionality and buy-in that we ascribe to them today; in fact, much of that functionality was achieved with public funding bestowed upon them by the Department of Defense fighting the global war on terror—while the CEOs of big facial-recognition firms were making lots of unsubstantiated claims on TV and in the newspapers. God, how I wish someone had engaged such “vaporware” while it was still “vaporware”!

So you criticize me for engaging with projects that are only at a prototype stage—but then, once the projects are out of that stage and become ubiquitous, you say that, hey, these gadgets are here to stay and we can’t do anything. What’s a good time to scrutinize them, then? Many of the projects that I like—those caterpillar-shaped extension cords—are also at the prototype stage. But so what? One of the reasons for getting involved in these debates is precisely to make certain prototypes—those whose underlying philosophies I share—more likely, and some such prototypes—those whose philosophies I reject—less likely. If it’s all up to the market (or, as you put it, the “users”), there’d be no point in technology criticism, which, on my very ambitious reading, can amount to so much more than its current function: gadget reviews.

The alternative position that you articulate—that we should just stand back and wait for these projects to fail on their own—is morally irresponsible. Trend spotting as a dominant form of analysis and critique has not served us well. It’s time to get reporters to take it easy with stenography and start dabbling in philosophy and ethics. If it means paying more attention to the moral and ethical implications of some projects that may or may not succeed, so be it.

Maybe my bar for finding things interesting is much lower than yours, but I think that understanding how someone could even think of the smart fork—the gadget telling you when you are eating too fast—is an intellectual project worth pursuing on its own terms. Many of these prototypes become “thinkable” only once certain other assumptions—about human nature, about politics, about morality—are in place, and I see it as my task to understand and scrutinize those assumptions.

Of course, I already anticipate your objection: Why worry about smart trash bins and smart kitchens if they are not likely to take over? First of all, as you well know, that augmented-kitchen paper that I quote is a drop in the ocean; as you have documented in your own column earlier this year, the Consumer Electronics Show in Las Vegas is full of similar trivia, from smart washing machines to smart fridges. One reaction to such stuff—I think it describes your attitude—is to simply say: “Ah, but ‘users’ are never going to want that!” Where this idea of “users” comes from is an interesting philosophical question in itself. I suspect you believe that they—we?—are trained to see bullshit for bullshit and just move on.

I, on the other hand, view “users,” along with their wants/needs/aspirations, as far more plastic. Given enough persuasion, they may well come to accept the idea that some of the devices that are currently marketed as “smart” actually are smart.

And the reason why I believe in this great plasticity (perhaps even malleability) of “users” is that such changes happen all the time. If in 1996 I had told you that by 2006 you’d surrender control over your email, calendar, and hard drive to one company and would no longer be running any of those things on your desktop, you’d probably have thought that I was crazy. But the nice and rosy banner of “cloud computing” somehow made it seem sensible and appealing. In the trade-off between efficiency and control, we chose the former—but this may not have been the best choice.

Given that technology companies usually have millions to spend on marketing; that most tech reporters have consigned themselves to trend spotting, gadget reviewing, or some other form of eyeball hunting; and that our debate about technology is drowning in meaningless terms like “openness” and “disruption,” I think we can’t just expect that the best ideas will prevail on their own—not least because it’s mostly the industry (through TED talks, SXSW, uncritical press coverage, and tech blogs that double as newsletters for venture capital firms) that sets the terms of the debate. So if your critique boils down to “users are smart and they will see through these mountains of bullshit,” well, I’m far less optimistic. And I don’t see why, every now and then, we shouldn’t try to reduce the amount of bullshit rather than continue adding to the pile.

Now, many of these “smart” devices are selling us much more than convenience—they are also selling us all sorts of aspirational visions of ourselves as responsible citizens. Take the smart trash bin I discuss at the start of the book. Its main selling point is that it could make you a better citizen by allowing you to recycle more effectively. This is not just a matter of users-consumers voting with their feet and wallets; there’s a strong political and moral argument to be made as to why using such a smart trash bin is better than using a dumb one. In fact, I can easily imagine our policymakers subsidizing such bins or mounting some campaign to promote them in the near future.

In the particular example I describe, I’m opposed to the idea—but not because it opens up new avenues for moral engagement where none were available before. That’s one aspect of the bin that I like. I oppose it because it relies on the consumer register of gamification to get us to recycle without even bothering to inquire into the political and moral costs of using gamification as a strategy. It’s these sorts of cases where there are good, legitimate pressures to embrace such devices—where the temptation is really great—that I’m most worried about.

I know there’s a fair amount of narcissism in Silicon Valley—not to mention the tech press that covers it!—but I think you are mistaken if you think that my goal was to produce a comprehensive portrait of Silicon Valley or to indict it in its entirety. That project I find quite boring. So I don’t buy your “summary” of my “central” thesis.

It’s not from some abstract theorizing about Silicon Valley that I start—I leave the job of bashing it to Andrew Keen and Jaron Lanier—but from the giant new infrastructure for problem-solving that Silicon Valley has built in the last decade or so. By this “infrastructure,” I primarily mean the proliferation of sensors and technologically mediated interfaces (think Google Glass) and the ability to carry your entire social network with you anywhere you go, so your friends can be invoked to put peer pressure on you or to engage you in games or competitions that were impossible before.

These two developments—new sensor-based forms of technological mediation and the advent of the “social”—have given us (and especially our policymakers) an unprecedented ability to solve problems in new ways. Take something like binge eating and the obesity that it might cause. Someone who’s binge eating while wearing Google Glass connected to his social networking account is a very different political subject than the same person in his pre-cyborg version. For one, Google Glass can recognize the food he is eating. It can recognize what he’s been eating all day. It can send alerts and notifications to stop eating. It can make his behavior more visible to his peer network and try to get him to eat less.

The kinds of questions that I’m interested in stem from the sudden appearance of this problem-solving infrastructure. Here are some of them:

a) What problems are worth tackling, and what problems look like problems only because we have technologies for solving them?

b) Which of the “real” problems should be solved by governments and which ones by technology companies—given that it’s technology companies that run much of this new infrastructure? What do we lose or gain once it’s private technology companies that are tackling problems like climate change or obesity?

c) If we do decide that, at least for some problems, technological fixes are OK and that private companies can be allowed to help with solving them, what should be the principles and values guiding problem-solving? Tech companies, like all companies, have a bias toward efficiency, but efficiency may not be the best value to optimize in attempts to solve important public problems.

My present book would make for a terrible TED talk (I must confess this delights me), and I’m not sure I actually give satisfying answers to the three questions that I raise. But at least I raise the questions and hint at the kinds of dimensions that our inquiry into these issues should pursue. It might be true that, for presentation purposes, I’ve cut too many corners and invoked a term like “Silicon Valley” the way some people invoke “Wall Street.” Frankly, this doesn’t strike me as a sin worth worrying about. If you start from where I start—i.e., you assume that our thinking about technology and technology companies is very impoverished and, structurally, the deck is stacked against our getting a clearer understanding of the concepts/ideas/ideologies at play—getting people more worked up about “Silicon Valley” is not really such a bad thing.

But I can battle you on individual data points, too. So what if Foursquare moves away from gamification, as you point out? Some mayors (that is, mayors in the real world, not on Foursquare!) are very excited about embracing it. Now, Foursquare doesn’t bother me at all, but the embrace of gamification by municipalities does bother me. This is precisely the kind of public-private partnership built around a shoddy concept that I think we need to be far more critical of.

Or take your remark about Apple. Again, if you think that my thesis is to come up with some kind of indictment of Silicon Valley, I can’t be bothered with that. My real interest is in figuring out how/why/if public institutions should delegate problem-solving to Silicon Valley. This is an option that will look increasingly tempting as a) privately run technological infrastructure for problem-solving gets even better; b) problems like obesity or climate change get worse; c) governments have less and less money for problem-solving; and d) behavioral economics, nudging, etc., become even more popular and appealing to policymakers.

Stop reading tech news for a moment and glance at the political pages of the newspapers. (But then, I forgot: You only read this stuff electronically!) Here’s one item to consider: A U.K. think tank that’s close to the Conservative Party has recently proposed cutting health benefits for obese and other unhealthy patients who don’t exercise at the gym. How will the administrators find out? By requiring us to use some kind of “smart card” at the entrance to the gym. I can easily see how a similar scheme could be built to use smartphones as the primary monitoring device. Whether users will need to “buy” into this idea is irrelevant, as this will be dictated by cost-conscious governments acting in a time of austerity. Occasionally, we can vote—but no one is going to hold a referendum over such stuff.

So this is the historical and political background to the contemporary rise of “solutionism”; I don’t think we can have a meaningful conversation about whether it exists (or whether it presents any problems worth worrying about) if we don’t examine it as a product of these particular socio-political and historical circumstances. All of this is to say that while I don’t mind continuing to talk about Silicon Valley (and Silicon Valley, as I’ve noticed, never tires of talking about itself), my argument is hard to understand or even summarize without mentioning some of that non–Silicon Valley background.

And as for Apple itself, I made my case against it in a 10,000-plus-word piece last year. Briefly, here again we have profound philosophical and methodological differences. You think that Apple is not solutionist because, well, it’s not in its DNA. I say that Apple is not solutionist in part because it spends a lot of time marrying technology and liberal arts, through things like Apple University.

That Apple makes a conscious effort not to fall into solutionist traps doesn’t mean that solutionism is not a problem—it means only that Apple does a good job at resisting its many shortcuts and temptations. There are certain rhetorical and political moves in Apple’s approach to invisible and ubiquitous computing—for example, its desire to provide experiences that are “automatic, effortless, and seamless”—that I find problematic.

As for Jeff Jarvis, he is certainly a fool, but alas, his is not an isolated case, so you can’t just throw him off the ship and claim that now that he’s gone, everything is OK! Everything is not OK. Our thinking about technology and “the Internet” is rotten—Jarvis is just one of the many symptoms, and acknowledging that he is a fool doesn’t do anything for my critique of “solutionism,” as my primary problem with Jarvis is Internet-centrism. I’m actually offended that you claim that I “focus” on Jeff Jarvis, as the book also features attacks—some much longer than my attack on Jarvis—on so many other people! Here’s a partial list: Clay Shirky, Ethan Zuckerman, Clay Johnson, Jane and Kelly McGonigal, Farhad Manjoo (!), Don Tapscott, Lawrence Lessig, Tim Wu, Jonathan Zittrain, Peter Diamandis, Peter Thiel, Esther Dyson, Kevin Kelly, Wael Ghonim, Alec Ross, Reid Hoffman, Steven Johnson, Andrew Keen, Nicholas Carr, Ray Kurzweil, Steven Levy, Marissa Mayer, Eric Schmidt, Beth Noveck, John Naughton, Eli Pariser, David Post, Gordon Bell, Gabe Zichermann, Katie Stanton, Cass Sunstein, David Weinberger, Gary Wolf, and Mark Zuckerberg.

Sorry, Farhad: I’m an equal opportunity offender—I attack everyone equally. Or I try to, anyway.

Evgeny Morozov will be discussing To Save Everything, Click Here on Monday, April 15, at a Future Tense event in Washington, D.C. For more information and to RSVP, visit the New America Foundation’s website.