Webhead

Why Is the Internet So Infuriatingly Slow?

Plus, two horrible things your Internet service provider wants to do to make it speedier.

Everyone hates their Internet service provider. And with good cause: In the age of ubiquitous Internet access, Web service in America is still often frustratingly slow. Tired of being the villain, telecom companies have assigned blame for this problem to a new bad guy. He’s called the “bandwidth hog,” and it’s his fault that streaming video on your computer looks more like a slide show than a movie. The major ISPs all tell a similar story: A mere 5 percent of their customers are using around 50 percent of the bandwidth—sometimes more during peak hours. While these “power users” are sharing three-gig movies and playing online games, poor granny is twiddling her thumbs waiting for Ancestry.com to load.

The ISPs are certainly correct that there’s a problem: The current network in the United States struggles to accommodate everyone, and the barbarians at the gate—voice-over-IP telephony, live video streams, high-def movies—threaten to drown the grid. (This Deloitte report has a good treatment of that eventuality.) It’s less clear that the telecom companies, fixated as they are on the bandwidth hogs, are doing a good job of managing the problem and planning for the future. In recent months, the ISPs have put forward two big ideas about how to fix our bandwidth crisis. We can arrange these plans into two categories: horrible now and horrible later.

Plan One: Feed the meter. Category: Horrible now. In January, Time Warner announced it was rolling out an experimental plan in Beaumont, Texas, that charged users by the gigabyte. Thirty dollars would get you 5 gigabytes a month, while a $55 plan would get you 40. Each extra gigabyte over the limit would cost a buck. In the months since, this data-capping idea has caught on. Comcast recently announced that it’s drawing the line at 250 gigabytes per user per month. Once you’ve used that much bandwidth, Comcast can suspend your account.
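
For the arithmetically inclined, here is a rough sketch, in Python, of how a bill under the Beaumont trial would add up. The tier prices come from the plan described above; the function itself is just an illustration, not anything Time Warner publishes.

    def monthly_bill(gigabytes_used, plan="basic"):
        """Estimate a bill under the Beaumont trial pricing described above."""
        tiers = {"basic": (30, 5), "premium": (55, 40)}  # (monthly price, included GB)
        base_price, included_gb = tiers[plan]
        overage_gb = max(0, gigabytes_used - included_gb)
        return base_price + 1 * overage_gb  # each extra gigabyte costs a buck

    # A household that streams a handful of movies blows past the basic tier fast:
    print(monthly_bill(25, "basic"))    # $30 plus 20 extra gigabytes = $50
    print(monthly_bill(25, "premium"))  # still under the 40-gigabyte ceiling = $55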

A limit of 250 gigs a month is more than enough for most of us, at least for now. Silicon Alley Insider has a nice rundown of what it would take to hit that limit, to the tune of two HD movies a day and a lot of gaming on the side. But that assumes your connection is speedy enough to stream high-quality video in the first place. It’s a chicken-and-egg problem: People use less bandwidth when their connection is crawling from congestion.
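
As a back-of-envelope check on that rundown (the per-movie and per-game figures below are my own assumptions, not Silicon Alley Insider’s numbers):

    # Rough math: what does it take to hit a 250 GB monthly cap?
    # The sizes below are assumptions for illustration only.
    HD_MOVIE_GB = 4          # one compressed HD movie stream, give or take
    GAMING_GB_PER_DAY = 0.1  # online gaming moves surprisingly few bytes

    daily_gb = 2 * HD_MOVIE_GB + GAMING_GB_PER_DAY  # two HD movies a day, plus games
    monthly_gb = 30 * daily_gb
    print(monthly_gb)  # about 243 GB, right up against Comcast's 250 GB line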

A reasonable argument can be made that this is a sound way to clear up congestion. It is rather unfair that people who barely use the Web have to pay the same or similar rates as people who use BitTorrent all day. The “meterists”—and there are a few of them out there—think systems like Time Warner’s are inherently fairer, as they end the practice of forcing light users to subsidize heavy users. The rosiest scenarios even suppose that a pay-as-you-go Internet could give telecoms the financial incentive to expand their networks.

The criticism is easy to condense: No one joyrides in a taxi. A plan like this, as its many opponents have noted, will cramp the freewheeling, inventive nature of the Internet. The Internet owes its success to two pillars of human activity: masturbation and procrastination. (Seriously: We have the porn companies to thank for pioneering all sorts of technologies, from VHS to secure credit-card transactions online.) Is the Internet really the Internet if people don’t use it to waste time?

Widespread deployment of capped or metered plans would also cripple businesses that have invested in high-bandwidth products, like videoconferencing. And if people start pinching bytes, it could also pose problems for security—if you hear the meter ticking, you’ll probably be less eager to install large operating-system updates and new virus-definition files.

Beyond that, capping data transfer is simply a crude way to get people to curb their data appetites. Imposing limits on gigabytes per month is as sensible as replacing speed limits with a total number of miles you can drive in a given day. A more reasonable scenario—though one that’s still decidedly unfun—would be to charge for Internet access as we charge for cell phones, running the meter during peak hours and letting people surf and download for free on nights and weekends, when there’s far less competition for bandwidth.
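
Here is a toy sketch of that cell-phone-style alternative; the peak window and the per-gigabyte rate are invented for illustration, not anything an ISP has actually proposed.

    from datetime import datetime

    # Hypothetical peak-hours-only metering: weekday evenings are billed per
    # gigabyte, nights and weekends are free. The window and rate are made up.
    PEAK_HOURS = range(17, 23)   # 5 p.m. to 11 p.m.
    PEAK_RATE_PER_GB = 0.50      # dollars per gigabyte

    def transfer_charge(timestamp: datetime, gigabytes: float) -> float:
        """Charge only for data moved during weekday peak hours."""
        is_weekday = timestamp.weekday() < 5
        is_peak = timestamp.hour in PEAK_HOURS
        return gigabytes * PEAK_RATE_PER_GB if (is_weekday and is_peak) else 0.0

    print(transfer_charge(datetime(2008, 9, 10, 20, 0), 3.0))  # Wednesday evening: $1.50
    print(transfer_charge(datetime(2008, 9, 13, 20, 0), 3.0))  # Saturday evening: free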

Plan Two: Blame BitTorrent. Category: Horrible later. In addition to capping data transfer, Comcast is taking a second anti-hog initiative. Rather than charging more, the company plans to slow or cut off peer-to-peer traffic during peak times. Last October, the Associated Press caught Comcast deprioritizing traffic from BitTorrent and other file-sharing protocols. The company received a slap from the FCC for singling out a specific type of traffic, which violates the FCC’s policy statement on network management. Comcast now says it will pursue a more compliant strategy that slows the connections of power users during peak times without singling out specific types of traffic. This tactic is similar to the more general practice of “traffic shaping”: prioritizing data packets for applications like video that shouldn’t lag at the expense of something like e-mail, which can wait in line an extra few seconds without anyone noticing—except that it’s deprioritizing users, not data packets. (People who hate the concept of traffic shaping prefer to call this “throttling” or “choking.”)
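
To make the distinction concrete, here is a minimal sketch in Python of the two approaches. The priority table, the per-user threshold, and the data layout are all assumptions for illustration; neither function describes Comcast’s actual gear.

    # Two simplified scheduling strategies; nothing here reflects any ISP's real
    # implementation. Each packet is a dict like {"user": "alice", "app": "video"}.

    APP_PRIORITY = {"voip": 0, "video": 1, "web": 2, "email": 3}  # lower = sent sooner

    def shape_by_packet_type(packets):
        """Classic traffic shaping: latency-sensitive packets jump the queue."""
        return sorted(packets, key=lambda p: APP_PRIORITY[p["app"]])

    def throttle_by_user(packets, bytes_used_this_hour, heavy_threshold=10**9):
        """Protocol-agnostic throttling: heavy users wait, whatever they're sending."""
        light = [p for p in packets if bytes_used_this_hour[p["user"]] < heavy_threshold]
        heavy = [p for p in packets if bytes_used_this_hour[p["user"]] >= heavy_threshold]
        return light + heavy  # heavy users' packets go to the back of the line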

This plan is “horrible later” because it fails to account for the natural evolution of the Web toward larger file sizes and higher-bandwidth activities. While it isn’t a God-given right to download pirated DVDs all day long, the ISPs should not adopt a long-term strategy that penalizes high-bandwidth activity. As FCC commissioner Robert M. McDowell pointed out in the Washington Post a few weeks ago, this is not the first time we’ve reached a crisis level of congestion. If Time Warner and Comcast had structured their networks around anti-bandwidth-hogging policies, say, 20 years ago, revolutionary services like YouTube and BitTorrent might not even exist.

Now let’s take a step back and sympathize with the ISPs. On the one hand, power users and Web entrepreneurs brand them as anti-innovation for going after bandwidth hogs with regressive tactics. On the other, there are oodles of home users who get infuriated when it takes forever for a page to load in their browser. On top of that, they have to deal with net-neutrality advocates who often seem more interested in policing the ISPs than in proposing ways to fix our bandwidth crunch (though Columbia law professor and Slate contributor Tim Wu runs down some good possible fixes in this New York Times op-ed). So let’s help the ISPs out and look at a few promising technologies that could help us all surf quickly and happily.

The high-fiber diet. If bandwidth demands do continue to scale, we could get to the point where anyone who wants a decent connection to watch a 100-gigabyte holographic movie—or whatever we’re watching five years from now—will have to get a fiber-optic cable directly to their home. Verizon has bet on this solution with its FiOS service. These “fiber to the premises” connections are still very expensive and aren’t yet widely deployed—and the commercials also make you want to retrofit your entire neighborhood with copper, just out of spite—but it looks as if they’re only getting more necessary. (Some researchers believe that the same technology that may someday lead to invisibility cloaks might also be deployed to route fiber-optic signals through today’s existing networks. That effort is fairly nascent.)

Cold, hard cache. Shortly before the start of the 2008 Olympics, some commentators feared the global network wouldn’t be able to handle all the demand for streaming Web video. The fact that the Internet didn’t “melt,” as one ZDNet author feared, set tongues wagging about NBC’s use of third-party “content-delivery networks.” To deliver nonlive content, these companies store popular files on many different servers around the country—a method of ensuring that data packets don’t have to travel as far to reach their destination. In general, your machine will retrieve information much faster from a “nearby” server on the network than from one across the globe. If a copy of the movie you want is stored by your ISP on a local server, you’ll both get it faster and hold up fewer people in the process. Just as NBC did, companies may need to turn to these content-delivery companies—essentially, large private networks—to help distribute both cached and live content. Still, the approach feels a little defeatist; taking traffic off the public Internet is great for reducing congestion, but needing to do so is a symptom of a problem we should fix head-on, not work around.
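
A stripped-down sketch of the idea, with invented server names and distances; a real content-delivery network measures latency and load rather than consulting a hand-written table:

    # Toy model of a content-delivery network: fetch from the nearest server that
    # has a cached copy, and fall back to the faraway origin otherwise.
    SERVERS = {
        "origin-nyc":   {"distance_ms": 90, "cache": set()},
        "edge-chicago": {"distance_ms": 25, "cache": {"olympics_highlights.mp4"}},
        "edge-dallas":  {"distance_ms": 10, "cache": {"olympics_highlights.mp4", "sitcom_pilot.mp4"}},
    }

    def fetch_from(filename):
        """Return the closest server holding the file, or the origin on a cache miss."""
        cached = [(s["distance_ms"], name) for name, s in SERVERS.items()
                  if filename in s["cache"]]
        return min(cached)[1] if cached else "origin-nyc"

    print(fetch_from("olympics_highlights.mp4"))  # edge-dallas: the nearby copy wins
    print(fetch_from("obscure_documentary.mp4"))  # origin-nyc: nobody cached it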