Future Tense

Small Dictators, Big Bots

Yahoo didn’t mean to censor emails about Wall Street protests. The truth is much more insidious.

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate.

It turns out Silicon Valley wasn’t protecting Wall Street

When I tried to use my Yahoo account Tuesday to send an email mentioning the website OccupyWallSt.org, an ominous, bright yellow notice warned me that the “message was not sent” and that my account had been “temporarily blocked.” I tried other URLs, including “ilovewallstreet.org,” which zipped along unmolested. It seemed that Yahoo was blocking only messages that mentioned the URL OccupyWallSt.org.

“Censorship! Silicon Valley protecting Wall Street,” cried indignant people on my Twitter timeline. The uproar caught the attention of blogs like ThinkProgress. Finally, Silicon Valley was caught with its fingers in the Wall Street cookie jar! Silicon police state!

My tweeps are wrong. The problem isn’t censorship; it’s much worse. A myopic business model is threatening the health of the Internet. Ironically, it’s not even that good for business.

In truth, Yahoo probably isn’t out to block this or that political URL (the red-faced company quickly apologized). Wall Street is part of the problem—not because it imposes content censorship, but because it imposes a financial logic based more on cutting costs in the very short term than on making money in the mid- to long term.

Ads on the Web are not very effective. It’s easy to block or ignore most of them. Unfortunately for Internet businesses, advertisers know this and do not pay as much for Web ads. Couple that with the “Sooo, what’re yer numbers this quarter?” brand of penny-wise, pound-foolish economics imposed by Wall Street, and most Web businesses have turned to a “cut costs to the bone” model. And that means fewer human employees, more automation, more algorithmic rigidity, less judgment, less flexibility, and slower response times.

We are ruled, in effect, by small dictators and big bots. And this unelected, inefficient, and sometimes-petty tyranny is throttling the growth of a vibrant, healthy Internet and fueling problems ranging from inane “real name” policies on sites like Google+—where people can be asked for official proof of identity if their account is flagged as a nickname—to major disruptions in connectivity. This is terrible because the Internet is not just any widget—it’s increasingly the heart of our networked commons. Dominance of a bad business model on the Internet doesn’t just result in bad products; it results in unhealthy social dynamics.

Take the Yahoo hiccup: What likely happened is that the offending URL, https://occupywallst.org, was added to a spam-filter master file. (Maybe this was a politically motivated act. More likely, it was just a random error. In either case, it demonstrates the problem.) Most platforms rely on huge, centralized databases of spam originators that are heavily automated and run by very small numbers of actual people. A long time can pass before an error is caught and a human is tasked to intervene and recalibrate the system.
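
To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python. The names (BLOCKLIST, should_block) are invented, and this is not Yahoo’s actual system; it only shows how a centralized URL blocklist can silently reject every message that mentions a flagged domain once a single bad entry lands in the master file.

```python
# Purely illustrative sketch of a centralized URL-blocklist spam filter.
# Invented names and logic; not Yahoo's actual system.

import re

# One shared "master file" of flagged domains, maintained mostly by automation.
BLOCKLIST = {
    "knownspammer.example",
    "occupywallst.org",  # a single erroneous (or malicious) entry added here...
}

# Matches anything that looks like a domain, with or without http(s)://
DOMAIN_PATTERN = re.compile(r"(?:https?://)?((?:[\w-]+\.)+[a-z]{2,})", re.IGNORECASE)

def mentioned_domains(message_body: str) -> set:
    """Extract every domain-like token from the message text."""
    return {m.group(1).lower() for m in DOMAIN_PATTERN.finditer(message_body)}

def should_block(message_body: str) -> bool:
    """Reject the message if it mentions any blocklisted domain.

    Note what is missing: any human review step. Every sender on the
    platform is affected the instant the entry appears in the master file.
    """
    return bool(mentioned_domains(message_body) & BLOCKLIST)

print(should_block("Check out https://OccupyWallSt.org today"))  # True: silently blocked
print(should_block("Check out https://ilovewallstreet.org"))     # False: sails through
```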

Take another crazy example. My previous university, the University of Maryland-Baltimore County, is hardly without resources. Yet for one long, aggravating period I could not use my university account to communicate with anyone who had a Hotmail.com, Live.com, or MSN address—which happens to be the world’s largest Web-based email ecosystem, with hundreds of millions of users. Why not? A single computer was briefly compromised and used as a spam bot. In response, Microsoft automatically blocked all email from umbc.edu. Despite frantic efforts, I.T. people at UMBC were unable to find a human at Microsoft to fix the error.

A similar dynamic dominates the policies of social networking platforms—and you only need to look at the employee-to-user numbers to understand that it could not be any other way. Facebook has about 2,000 employees to 750 million users; Twitter has about 600 employees to 100 million users. That’s only about three human employees per million users for Facebook and six for Twitter. Google is larger, with about 30,000 employees, and an enormous portion of the 2 billion netizens use many of its services every day; that still works out to roughly 50,000 users per employee, albeit across a broader range of services.
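
For concreteness, the arithmetic behind those ratios works out roughly as follows. The staff and user counts are the same ballpark, circa-2011 figures cited above; the assumption that about 1.5 billion of the 2 billion netizens touch a Google service is mine, chosen only to make the 50,000-users-per-employee estimate explicit.

```python
# Back-of-the-envelope ratios from the ballpark figures cited above (circa 2011).
# The 1.5 billion Google-user figure is an illustrative assumption, not a reported number.
platforms = {
    "Facebook": {"employees": 2_000,  "users": 750_000_000},
    "Twitter":  {"employees": 600,    "users": 100_000_000},
    "Google":   {"employees": 30_000, "users": 1_500_000_000},
}

for name, p in platforms.items():
    per_million_users = p["employees"] / (p["users"] / 1_000_000)
    users_per_employee = p["users"] / p["employees"]
    print(f"{name}: ~{per_million_users:.0f} employees per million users, "
          f"~{users_per_employee:,.0f} users per employee")

# Facebook: ~3 employees per million users, ~375,000 users per employee
# Twitter: ~6 employees per million users, ~166,667 users per employee
# Google: ~20 employees per million users, ~50,000 users per employee
```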

With such ratios, the business model becomes to push work onto the user—for example, have users flag or report what they consider inappropriate content—and then automate the rest. Community and user moderation with appropriate tools can be healthy and desirable, but without sufficient human oversight, and with so little recourse in case of problems, it can and does degenerate into unhealthy scenarios. There are many reports of dissident Web pages being taken down and activist accounts being deactivated, often at key moments in protest cycles. In Egypt, for example, the election-monitoring Facebook page run by ElBaradei supporters, as well as the “We Are Khaled Said” page operated anonymously by Wael Ghonim, the Google executive turned revolutionary, were taken down on election day for using pseudonymous administrators. While those high-profile pages were eventually restored, smaller cases that never receive international attention can linger unresolved. Even Roger Ebert was not immune: His Facebook page was deactivated, which Ebert believed was due to targeted flagging by Ryan Dunn fans angry that he had written disapprovingly of Dunn’s drunk-driving accident and death.
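
The dynamic behind these takedowns is easy to sketch. When user flags feed straight into an automated threshold, a coordinated group needs only enough accounts to clear that threshold in order to knock any page offline. The snippet below is a hypothetical illustration in Python (the names FLAG_THRESHOLD, Page, and report are invented); it is not any platform’s real moderation code.

```python
# Hypothetical sketch of flag-driven, fully automated moderation.
# Invented names and threshold; not any real platform's code.

from dataclasses import dataclass

FLAG_THRESHOLD = 50  # flags needed before the bot acts; no human in the loop

@dataclass
class Page:
    name: str
    flags: int = 0
    active: bool = True

def report(page: Page) -> None:
    """A user clicks 'report as inappropriate'; automation handles the rest."""
    page.flags += 1
    if page.flags >= FLAG_THRESHOLD:
        page.active = False  # taken down instantly, regardless of the content's merit

# A coordinated group of 50 accounts is enough to silence a page,
# and they can time it for exactly the moment it matters most.
target = Page("a dissident page")
for _ in range(FLAG_THRESHOLD):
    report(target)

print(target.active)  # False: gone until (and unless) a human intervenes
```

Routing flagged pages into a human review queue instead of an instant takedown would change only one line of a sketch like this, but that one line is precisely the human oversight, and the short-term payroll cost, discussed below.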

The drive toward “real name” policies by social networking platforms and others is often a by-product of this push to automate, automate, automate and to cut overhead—i.e., employees. But these policies not only confuse the goal of ensuring that those engaging a platform are “real people” with forcing them to use “driver’s license” names (the former can indeed help online communities become healthier; the latter is counterproductive); they also overlook alternative methods. A better way of dealing with trolls, spam, and other offensive material would be to provide better community-moderation tools plus more human judgment and oversight by the platforms’ employees. That will cost money in the short term, but it will likely pay off by fostering more significant engagement by people.

Indeed, a common misunderstanding is that Internet users are unconcerned about privacy or control over content, that the “digital natives” have it all under control, and that non-users are simply Luddites or old people who’ll never get it. In my own research on college youth, I often encounter significant reservations about these platforms and confusion over their policies, even as students remain subscribed for fear of social exclusion. And while older adults are flocking to social media, research shows that this population remains guarded against the downsides of the Internet like trolls, spam, and scams—the very issues that better community tools and human oversight could help alleviate without throwing babies out with the bathwater, as primarily automated systems are apt to do.

All this likely requires financing models beyond the relatively anemic flow of ad dollars. Corporations with good cash reserves, like Google and Microsoft, can afford to dip into their significant resources and lead the way. But users will also have to pay—and I think they will eventually accept it, even embrace it. We have been induced into shelling out $3 for a cup of coffee and $1.99 for a bottle of fizzy brown sugary water. (It’s also just not true that ad-supported platforms are “free”—the ad budget is baked into the price of that $1.99 bottle.) We’ll pay for good Web platforms once the benefits are clear and the pricing follows a smart, well-designed “freemium” model—maybe some content for free, some behind a paywall, à la the New York Times. Success won’t depend only on the pricing scheme, though; people are not passive consumers of online social platforms but their co-creators, and models that engage them as participants are more likely to gain traction. This is a win-win: It can liberate Internet-based businesses from a shortsighted crunch and save our public sphere from crude algorithms that cripple functionality.

In 1988, Shoshana Zuboff wrote the prescient book In the Age of the Smart Machine, in which she argued that we face two roads with information technology: We can “informate” or we can “automate.” The high road is to “informate”: to use information technologies to better understand a particular process (for example, how to diagnose an illness, or how to tell fake reviews from real ones), and then to use that knowledge to support more complex applications rather than to push human roles and judgment out of the process. The low road, to “automate,” is to replace human work and skill with machines so that narrow, well-defined tasks are carried out at lower cost and with more control, but at the expense of nuance, flexibility, and judgment.

Unfortunately, thoughtless automation is carrying the day. If we don’t get off this train, it might produce the same results it has in other sectors of the economy: an unsustainable economy with high unemployment—and a lot of cheap, plastic crap.