Future Tense
The Citizen's Guide to the Future

May 12 2016 6:23 PM

Tiny Flecks of Debris Can Do Real Damage to the International Space Station

The International Space Station isn't going to fall apart because of this one little chip, but it is an impressive crater considering what made it. The European Space Agency estimates that the quarter-inch divot came from an impact with a tiny fleck of space debris.

British astronaut Tim Peake took the photo above inside the ISS's Cupola, an observatory and workspace for astronauts that juts out into space. The Cupola has extra-strong fused-silica and borosilicate-glass windows, but that nick was caused by something minuscule, "possibly a paint flake or small metal fragment no bigger than a few thousandths of a millimetre across," ESA writes.

Debris is a constant threat to satellites, and space agencies have to take extensive precautions to protect their equipment while also trying to avoid adding more detritus. The agency writes, "An object up to 1 cm in size could disable an instrument or a critical flight system on a satellite. Anything above 1 cm could penetrate the shields of the Station’s crew modules, and anything larger than 10 cm could shatter a satellite or spacecraft into pieces."

Good thing that glass is specially reinforced.

May 12 2016 2:54 PM

Instagram Bans Hashtags to Hide Pornography. So Why Did It Go After #Easter and #Kansas?

If you’re anything like me, you mostly use Instagram to follow the ordinary adventures of friends and family. Scrolling through my feed, I find lovely photos of foreign destinations, adorable cats, and, of course, elegantly composed plates of food. All that food porn aside, I don’t see much explicit content, but that doesn’t mean it’s not out there. Indeed, Instagram is reportedly full of pornography, despite the site’s best efforts to censor it.

If you want to examine Instagram’s naughty bits (and you’re not willing to actually follow its naughtier users) your best bet may be to look up certain hashtags. The trouble is that those search terms are constantly on the move, partly because Instagram itself is constantly blocking those hashtags from its results. Instagram’s reasons for taking this approach are clear enough: Because it can’t individually review every image posted, it has no way to guarantee that some problematic content won’t find its way to the network. Instead of censoring individual posts, then, it tries to make sure that the most troubling ones are hard to see in the first place.

Refused access to a more explicit vocabulary, many users have reportedly resorted to suggestive emoji. Others have embraced other languages, leading to the temporary pornographic appropriation of the Arabic word for movies. Some, meanwhile, still hew to a more conventional vocabulary, though they’ve occasionally shifted away from clearly objectionable words with bizarre results.

And so we come to the strange state of #kansas. Nick Drewe of The Data Pack recently delved into Instagram’s API to round up the latest batch of hashtags that Instagram has obscured from its search results. Many of the terms have been hidden outright from search results, while others, as Drewe explains, reveal only a limited set of “Top Posts,” presumably to keep tackier content from showing up.

Some that fall into the latter group—including #swole, #snap, and #rack—might well have benign uses, even if they also have more euphemistic qualities. But others will be puzzling to all but the initiated. Good luck, for example, trying to celebrate #easter, #newyears, or #valentinesday on Instagram. Likewise, know that you’ll wind up frustrated if you’re looking for help with #publicrelations via the app. And while it makes sense that they might cover up #asiandick, banning #asia outright seems a little extreme.
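
Drewe hasn’t published his exact methodology, but the underlying check is easy to imagine: query each hashtag and see whether Instagram returns any recent posts for it. The sketch below is purely illustrative and not Drewe’s code; it assumes Instagram’s old v1 REST API exposed a per-tag “media/recent” endpoint, that a hidden tag comes back empty or with an error, and that you have a valid access token (the one here is a placeholder).

```python
# Illustrative sketch only: the endpoint shape and the behavior for hidden tags
# are assumptions about Instagram's since-retired v1 API, not documented fact.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; a real token would be required
HASHTAGS = ["easter", "newyears", "valentinesday", "publicrelations", "kansas"]

def looks_blocked(tag):
    url = f"https://api.instagram.com/v1/tags/{tag}/media/recent"
    resp = requests.get(url, params={"access_token": ACCESS_TOKEN, "count": 20})
    if resp.status_code != 200:
        return True  # the API refuses to serve the tag at all
    return len(resp.json().get("data", [])) == 0  # tag resolves but shows no posts

for tag in HASHTAGS:
    print(f"#{tag}: {'possibly hidden' if looks_blocked(tag) else 'visible'}")
```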

And then there’s #kansas, arguably the most peculiar of the bunch. Little in the limited array of accessible top posts for the hashtag explains why the vast majority of other images have been hidden. Instead, as with many of the censored terms that Drewe identifies, we’re treated to a brief note reading, “Recent posts from #kansas are currently hidden because the community has reported some content that may not meet Instagram’s community guidelines.” As Drewe notes, this “message implies that the censorship of these hashtags is only temporary,” but it’s no less mysterious for it.

This is, of course, neither the first time Instagram has censored content nor the only way it has done so. In the past, Drewe’s investigations have revealed a handful of predictably banned terms, including #birthdaysex, #jailbait, and that most explicit indicator of explicit content, #pornography. Even then, a few oddballs were scattered in the list, including #iphone and #jamesmotherfuckingfranco, suggesting that there’s nothing new about this trend.

Users and journalists have also identified arbitrary patterns of what does and does not count as problematic by Instagram’s standards. The dynamics are sometimes strangely gendered, and even when they’re not, they can be frustrating. (See, for example, Instagram’s removal of The Game’s dick pic, which led Slate’s Christina Cauterucci to name the app one of the greatest villains of 2015.) Porn purveyors have even colonized attempts to push back against the app’s structural sexism on at least one occasion, as Yahoo’s Nora Crotty explains, turning the socially conscious #FreeTheNipple hashtag into a repository for “so much more than some emancipated areolae.” Through it all, it’s rarely evident why Instagram censors what it censors.

All of that is to say that this is hardly the first time Instagram has made confusing choices about what is and isn’t acceptable. It’s possible, of course, that “Kansas” is the name of some depraved sex act or that the ban has nothing to do with sex at all, but you’d never know that from the evidence that the site proffers now. Like Facebook, its corporate parent, Instagram’s probably most interested in maintaining its status as a social network by ensuring that the best user-submitted content stands out. But, again like Facebook, Instagram could stand to be clearer about how and why it has its finger on the scales.

May 12 2016 12:30 PM

This Video Proves That Apple Actually Is Deleting Music Files Off Some People’s Hard Drives

Earlier this week, I wrote about the nightmare that is iTunes and Apple Music. My cri de coeur was prompted by a widely circulated blog post by a guy named James Pinkstone who thought Apple Music had systematically and deliberately deleted 20 years’ worth of his MP3s and replaced them with proprietary streams. It was a chilling story, but a hard one to evaluate because Pinkstone didn’t provide quite enough details about his situation. My conclusion, after talking to Apple Music expert Serenity Caldwell, was that insofar as Pinkstone's music files had been removed from his hard drive, it wasn’t by design, and that most likely, he had inadvertently caused the problem himself as a result of the insanely hard-to-understand iTunes interface.

I’m no longer confident in that conclusion. Thanks to a startup founder named Robert Etropolsky who emailed me after reading my story—and some intense detective work from Caldwell—I now believe Apple Music actually is deleting some people’s MP3s, most likely because of a bug.

Etropolsky used to have 90 gigabytes of music on his hard drive; he now has just 30, and it’s not because he deleted anything himself. As you can see in this extremely clear and persuasive video he posted on YouTube, there are many songs in his collection that iTunes has “matched” to tracks that exist in its cloud-based Apple Music library. This would be more or less OK if Etropolsky could still listen to the MP3s that, in many cases, he imported to his computer years before Apple Music even existed. But as Etropolsky demonstrates, those files are not on his hard drive anymore: Using the Time Machine feature on his computer, he makes it clear that this was not the case just a few months ago.

In February 2016, Etropolsky points out, he had four full-length Portishead albums on his hard drive. Those albums—which he is using as an example of the larger problem—remain accessible to him through his iTunes library, but only as streams from Apple Music. When he looks in the folders where the Portishead albums used to be, he finds that they are mostly empty.

The big problem here is that if Etropolsky ever decides to stop paying for Apple Music, he will lose access to the Portishead albums altogether, because the files Apple has provided to him in lieu of his originals—the ones he bought on CD years ago and ripped onto his hard drive—will disappear as soon as he ceases to be an Apple Music subscriber. The bottom line: Etropolsky used to have some things; Apple took them away and gave him different things, which it will take away from him if he stops giving it money.

Why did this happen? And could it happen to the rest of us?

Some Apple diehards will undoubtedly respond “it didn’t” and “no, it won’t.” Surely Etropolsky screwed up in some way, they will say, and the terrible thing that happened to him is his own damn fault. To them I say, watch the video! If you can figure out what he did wrong, I’m sure he’d love to hear from you, since he himself is an Apple fan who keeps a picture of Steve Jobs as his desktop wallpaper. As he told me in an email, he doesn’t want this to be true.

Another person who doesn’t want it to be true: Serenity Caldwell, the author of an unofficial manual to Apple Music and the managing editor of the Apple news site iMore.com, who responded to last week’s alarming blog post with an article entitled “No, Apple Music is not deleting tracks off your hard drive—unless you tell it to.”

Caldwell was initially skeptical when she heard Etropolsky’s story, but after asking him a series of detailed questions and looking through a stack of screenshots he had provided for her perusal, she was forced to conclude that much of Etropolsky’s MP3 collection had indeed gone up in smoke, and the cause was not human error. Why this took place, she could not initially determine, but as she explains in a new blog post that just went up on her website, she now believes that a bug is probably to blame. Caldwell writes:

Based on several Apple Support threads, it appears that the most recent version of iTunes 12.3.3 contains a database error that affects a small number of users, and can potentially wipe out their music collection after the update. The error has been mentioned a few times, primarily on the Windows side, in the weeks since the 12.3.3 update, but appears to be rare enough that it hasn't previously received major press. Apple did put out a support document shortly after the 12.3.3 update that walks you through some fixes if you find that your local copies of music are missing.

I can't state for certain that Etropolsky and Pinkstone fell victim to this bug, but based on their descriptions and screenshots, it seems likely that [they did.]

After running through her theory of the "database error" that resulted in the files being deleted without the help of any clueless user, Caldwell adds:

I don't want to incite mass panic, here: This bug appears to have affected a very small number of users, and if you didn't have local files disappear after updating to iTunes 12.3.3, your library is likely just fine. You can check to see if your library is locally-stored by turning on the iCloud Status and iCloud Download icons; if you've been affected, I suggest restoring from a backup or following Apple's Support document.

At the end of my piece from earlier this week, I expressed some tentative optimism about learning to start trusting iTunes again after years of feeling betrayed by it. Reflecting on that now, my mind goes to a Russian saying that I was taught as a child: "We thought maybe it would be better this time, but it turned out the way it always does."

A spokeswoman for Apple did not respond to a request for comment.

May 12 2016 9:39 AM

Pornhub Will Pay You $25,000 to Report Bugs When You’re Not Watching Other Stuff

Pornhub had a quality April Fools’ prank this year, but it’s also had some trouble with malware. It’s the plight of most websites, whether they offer porn or banking services. Pornhub is taking steps to protect itself, though, by creating a bug bounty program that will pay out up to $25,000 to people who discover and submit information about site vulnerabilities.

Bug bounty programs have become popular among tech companies, with many digital services happily taking any help they can get in defending themselves. Pornhub is offering its bug-vetting and reward program through HackerOne, a startup founded in 2012 to create a community of ethical hackers and the companies they might help. Apple is known for resisting the bug bounty trend, and some companies like Facebook have had tense interactions with volunteer hackers at times. But the programs are largely positive.

In March, during the height of the Apple/FBI fight, then-HackerOne chief policy officer Katie Moussouris told the New York Times, “Especially with the stakes being as high as they are, if Apple wants to continue to compete in the modern world, they have to modernize their approach” to cybersecurity. (Moussouris is currently a cybersecurity fellow at New America, which is a partner with Slate and Arizona State University in Future Tense.)

Pornhub, which is owned by Canadian company MindGeek, seems to be making an earnest effort to lay out its expectations. The company writes on HackerOne, "Our bug bounty program is limited strictly to technical security vulnerabilities of Pornhub services listed in the scope. Any activity that would disrupt, damage or adversely affect any third-party data or account is not allowed." And one more thing. "Thank you for helping keep Pornhub safe!"

Millions of people visit Pornhub every day, and many of them may not want to admit that they were there. That is all the more reason to keep the site safe and stable. It may not seem high-stakes, but after what happened with Ashley Madison, it’s clear that the danger is real.

May 11 2016 2:00 PM

This Arcane Rule Change Would Give U.S. Law Enforcement New Power to Hack People Worldwide

Imagine a bank in Manhattan receives a number of strange online requests for access to its accounts. After investigating, the bank’s security team suspects an attack by a botnet originating in Eastern Europe. The FBI then seeks a single warrant from a U.S. judge to hack into the devices of victims of the botnet wherever those devices are located. It turns out there are 100,000 computers in the botnet, and one of them is yours.

So the victims become targets again—this time of the U.S. government, which at this very moment is granting itself the unprecedented power to hack into your computer without your knowledge or consent. Your personal data may be accessed and stored, or your computers might be rendered unusable. And all of this will likely make you even more vulnerable to further attacks.

In late April, the Supreme Court approved a rule change that will allow U.S. law enforcement to get a warrant to hack into users’ computers and phones anywhere around the world. This is just the latest effort by the U.S. government to expand its hacking operations. The Department of Justice tried to use the courts to force Apple to undermine the security of its own devices, before paying a hacker a $1.3 million bounty to do its dirty work instead. The National Security Agency plans to eliminate the firewall between its hacking and defensive operations, creating a powerful cyber-surveillance behemoth. These heavy-handed efforts don’t even include the Obama administration’s nearly $20 billion budget for cybersecurity in 2016. And now the Supreme Court has just stepped into this minefield.

The change concerns Rule 41 of the “Federal Rules of Criminal Procedure,” which govern how the U.S. government pursues an investigation. The update to the arcane rule has three parts. The first allows a judge to approve an order for hacking to extend to any jurisdiction regardless of the location of the device, so long as the end user has attempted to obfuscate that location. This would include people who use a virtual private network, or VPN, to protect their data and people who use the Tor browser. Some criminals do use these sorts of tools to hide their location—but so too do human rights defenders and marginalized people who seek to protect themselves from harm. The second part allows a single order to issue for an entire network of computers, such as devices belonging to victims infected with botnet malware. Finally, the change modifies the notice requirements for court orders.

This update not only tacitly blesses the government’s ability to hack into devices, but permits operations that reach thousands or millions of computers with a single court order. The change allows the U.S. government to hack into machines without geographic restriction, which means that it could inevitably affect hundreds of millions of innocent users outside of U.S. borders.

One of the biggest problems with the changes to Rule 41 is that it is difficult to anticipate how methods to infiltrate user devices will perform in the real world. To understand the unpredictable nature of government malware, you need only look at the wildfire spread of Stuxnet (and its spawn) to an untold number of non-target devices. Documents received by Wired under the Freedom of Information Act further demonstrate this fact—the documents show that in various investigations, the FBI was confused by the behavior of its own software.

At Access Now, the digital rights organization that I co-founded, we’ve seen hacking powers frequently misused around the globe when repressive governments utilize sophisticated intrusion tools, such as those offered by Blue Coat Systems and Hacking Team. And we know from our free 24-hour Digital Security Helpline that the most frequent victims of such hacks are often users at risk—LGBTQ people, journalists, and marginalized communities.

Government hacking also broadly undermines the security of the global internet. Many forms of hacking rely upon vulnerabilities in commonly used commercial software. State-sponsored hacking—especially of the type that could result from this rule change—discourages a government from disclosing a discovered vulnerability to someone who can patch it. Patched vulnerabilities keep users’ data secure against data breaches or other unauthorized access, but aren’t useful to governments looking to break into user systems. We already know the NSA likely undermined basic encryption standards used in developing secure software in order to maintain its own hacking capabilities. It’s not unlikely that the FBI would make the same decisions.

Now that the Supreme Court has approved the rule change, the only way to stop it is for Congress to pass a law to amend or render it invalid before Dec. 1, 2016. Otherwise, the rule takes full force and effect on that date, with massive implications for human rights around the world.

The rule change comes at a time when there needs to be a discussion about what sort of authority we give to our governments to hack their citizens, and the citizens of other nations. Hacking comes with unique risks, just like other forms of surveillance that Congress has limited with additional safeguards. Yet Congress has never spoken on the issue of government hacking.

That should soon change. Sen. Ron Wyden announced at RightsCon earlier this year that he will fight this change; he plans to introduce legislation to knock it back. Meanwhile, governments around the world—including the British and Dutch governments—are debating how to authorize government hacking. The United States should be setting an example by initiating a public debate in Congress—not by quietly slipping major changes through procedural documents.

May 11 2016 1:09 PM

Why It’s So Funny That Republicans Are Upset With Facebook for “Censoring” News

America’s right wing is in a froth this week following allegations that Facebook has tweaked its “trending news” feed to reduce the visibility of conservative news sites. Maybe it’s true, maybe not. As of now, this report from Gizmodo, which is owned by Gawker Media, is based on anonymous sources, which makes it impossible to trust.

Nonetheless, conservatives and Republicans in Congress have seized on the report as only the latest evidence of overall liberal bias against their cause. Sen. John Thune, the Republican chairman of the Committee on Commerce, Science, and Transportation, has demanded answers from Facebook and, no doubt, will invite Mark Zuckerberg and/or his minions to explain themselves.

But the deeper issue is undeniably real: Facebook is the dominant member of a small number of giant entities—corporate and governmental—that are gaining control over the flow of news, freedom of expression in general, and a lot more in our digital lives. Yet the conservatives who dominate the Republican Congress and big-business groups have done their best to thwart policies that would encourage the kind of competition we need to challenge that increasingly centralized control.

Most relevant to the current uproar, almost no one wants to address that Facebook is becoming a monopoly in the antitrust sense of the word. No, it doesn’t control all conversation, at least in the United States. But Facebook is by far the most widely used venue for these conversations, and its power grows daily. Along with Google, it dominates online advertising; Facebook is especially dominant on mobile devices, which are the way many people now connect to the internet. If you offer news and information online, you have almost no choice but to play on Facebook’s field, because so much of your audience is there. (In some parts of the world, Facebook essentially is the internet, because mobile devices are pretty much the sole means of online access and in some cases the company has made deals with local telecommunications companies and/or governments.)

Facebook has been buying everything that presents even a whiff of competition: Instagram, WhatsApp, Oculus, among others. This is smart—no one can dispute that Zuckerberg et al are brilliant technologists and strategists—but it’s also a red flag. As Zuckerberg famously said several years ago, he wants Facebook to be “like electricity” in terms of ubiquity and people’s needs. Well, electricity is a utility. And we regulate utilities.

Monopolies and cozy oligopolies—think internet service providers for the latter—never turn out well in the long run for anyone but the monopolists or cartel members. They end up controlling markets and do their best to thwart genuine competition. It’s their nature.

Which is why capitalism, plainly the best system when it’s working right, needs rules to promote competition. It’s why we have antitrust laws and other processes, including regulation, designed to blunt the dominant companies’ normal predations. Yes, the dominant players tend to capture the regulators, but that’s a failure of function, not of pro-competition theory.

Yet Republicans in general think the government should play little to no role in promoting competition. They consider antitrust inquiry and enforcement to be counterproductive, at best—except, of course, when a powerful constituent (a corporation, usually) is in danger from predatory behavior.

That attitude accounts for the GOP’s cheerleading for corporate dominance of internet access, which is the key to controlling the internet overall. Republicans in general are fine with the idea that one or two companies should control access in most communities, and utterly opposed to a remedy—what we call network neutrality—to ensure that people at the edges of networks, not dominant internet service providers, should decide what information they want and at what priority.

I don’t want the government to tell Facebook what it can publish and don’t look forward to much more than posturing from Thune and his compatriots. But I do want the government to start paying extremely close attention to the way the company is becoming a monopoly, and what it means for freedom of expression when a single company has so much power over what people say online. I want government to use antitrust and other pro-competition laws to ensure that Facebook doesn’t abuse its dominance in a business sense. I want government(s) to work in broad ways to promote open technology and communications, and fierce competition at every level.

And I also want everyone to wake up to the threat Facebook poses to freedom of expression. If you use the service, its terms of service, not the First Amendment, determine what you can say. If it decides to downplay speech it doesn’t like, it has the right to do so.

So I’m glad that conservatives are concerned, even if the allegations prove wrong in this case. (On Tuesday, Facebook modified its outright denial from Monday to a “we’re looking into it” stance; stay tuned.)  I’d be even more glad if conservatives realized that government does have a role in promoting genuine competition—and that we’re in uncharted information-freedom territory under the new control freaks of Silicon Valley.

May 11 2016 11:23 AM

Future Tense Newsletter: Secret Identities and Public Drones

Greetings, Future Tensers,

As we’ve already seen in this month’s Futurography course, drones are unnerving in part because we tend to think of them primarily in military terms. That may be because nonviolent applications are surprisingly new. A study conducted by Austin Choi-Fitzpatrick shows that nonviolent experimentation with drones began to spike massively in 2012. We’ll likely see public opinion continue to change as these devices become a more and more central part of our lives.

Though Choi-Fitzpatrick acknowledges that this uptick in drone usage creates new privacy concerns, you may not have to worry as much as you might think. Faine Greenwood touches on that point in an article on common misconceptions about drones, noting, “A consumer drone shooting still photographs or video with a DSLR camera still isn’t capable of collecting aerial information much more detailed than that freely available in Google Earth–hosted satellite and Street View images.” She also explores a variety of other important issues, including the surprising lack of concrete, nationwide regulation surrounding unmanned aircraft.

Meanwhile, the internet thrummed last week with conversations about the real-world identity of Satoshi Nakamoto, the pseudonymous creator of bitcoin. Surveying those conversations—and the longstanding debates that prompted them—Future Tense’s Lily Hay Newman argues that unmasking Nakamoto would do a disservice to bitcoin itself. “The whole point of the technologies Nakamoto created is to be transparent and iterative, allowing them to transform beyond their original forms and original creators,” Newman writes. If we’re going to delve into the lives of mysterious inventors, we’re probably better off sticking with dead ones like Leonardo da Vinci, whose genome researchers are now trying to sequence.

Here are some of the other stories we found through services the CIA isn’t allowed to use:

  • Invasive species: In Nepal, a non-native plant has been put to productive ends, suggesting new strategies for dealing with the realities of climate change.
  • Forecasting: Panasonic has developed a weather model it claims is the world’s best. Why won’t it let others make use of this technology?
  • Cyberhygiene: We’ve heard of hospitals struggling against malware, but a new report indicates that a medical device actually shut down during surgery because of anti-malware software.
  • Broadband: The government is trying to bring subsidized internet to rural areas, but the services provided actually fall well below Federal Communications Commission standards for broadband.

Jacob Brogan

for Future Tense

May 10 2016 5:43 PM

Group of Senators Says Subsidized Broadband Program Doesn't Actually Offer Broadband

There's been controversy about whether it's a good idea to subsidize internet connectivity for disenfranchised Americans, but President Obama made the White House's stance clear in July when he announced the ConnectHome program to bring affordable broadband access to rural communities and public housing. Now, though, four senators are raising doubts about the quality of the internet access that these types of programs provide.

The U.S. Department of Agriculture runs the Community Connect Grants program and other programs to bring broadband to the rural U.S., but Sens. Shelley Moore Capito, R-W.V.; Kirsten Gillibrand, D-N.Y.; Angus King, I-Maine; and Jeanne Shaheen, D-N.H., wrote a critical letter to the agency last week.*

Spotted by Ars Technica, the letter particularly calls out the download and upload speed standards defined by Community Connect. There isn't one central definition of "broadband," but the Federal Communications Commission started defining broadband as 25Mbps download speed and 3Mbps upload speed in January 2015. Community Connect offers 4Mbps download as broadband (this number matched the FCC's definition of broadband before 2015), while USDA's Rural Broadband Access Loan program just upgraded its "broadband" to 10Mbps download.*

Given the demands of Web browsing in 2016—videos, multimedia-rich websites, etc.—4Mbps down is dicey for one person, much less a family. The senators wrote in their letter:

Federal policymakers must ensure that taxpayer-supported infrastructure is sufficiently robust to handle demand. It is not only a matter of fairness that rural Americans can fully utilize broadband-enabled resources, but also a matter of ensuring that taxpayers are receiving the full economic development return on their investments.

By contrast, Comcast Internet Essentials offered 5Mbps down in 2014 and now seems to offer 10Mbps. The FCC also offers a program called Lifeline that provides 10Mbps down for low-income Americans. Though these options don't exactly offer blazing fast speeds either, 10 is definitely better than 4.
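
For a rough sense of what those tiers mean in practice, here is a quick back-of-the-envelope comparison. The 1 GB file size is just an illustrative stand-in for, say, a long HD video or a large software update; the speeds are the download figures cited above.

```python
# Back-of-the-envelope: how long a 1 GB download takes at each "broadband" tier.
# The file size is illustrative; real-world throughput would vary.
speeds_mbps = {
    "USDA Community Connect": 4,
    "Comcast Internet Essentials / FCC Lifeline": 10,
    "FCC broadband definition (2015)": 25,
}
file_megabits = 1.0 * 8 * 1000  # 1 GB is roughly 8,000 megabits

for name, mbps in speeds_mbps.items():
    minutes = file_megabits / mbps / 60
    print(f"{name}: {mbps} Mbps -> roughly {minutes:.0f} minutes")
```

At 4 Mbps that single download ties up the connection for more than half an hour; at the FCC's 25 Mbps floor it takes about five minutes.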

Part of the problem is that internet service providers resist increasing the threshold for calling something "broadband" so they can report higher proliferation numbers. Additionally, most government internet subsidies allocate $9.25 per household for broadband, which doesn't leave a big cushion for increasing speeds. Still, the senators make a good point. If taxpayers are investing in these subsidies, recipients should at least get adequate connectivity out of them.

Correction, May 11, 2016: This post originally misstated that the ConnectHome broadband subsidy program is run by the U.S. Department of Agriculture. It is run by the U.S. Department of Housing and Urban Development in collaboration with the White House. USDA runs the Community Connect Grants program.

May 10 2016 9:15 AM

One of the TAs in an Artificial Intelligence Class Was Actually an A.I.

IBM’s Watson has long served as a splashy representative for the venerable computing company, starring in commercials with the likes of Carrie Fisher and trouncing human stars at Jeopardy! Now, however, A.I. researchers at Georgia Tech have found a subtler application for Watson’s capabilities, using them to simulate a teaching assistant for a large master’s-level course. What’s more, they did so without alerting the students in the class.

Fittingly, the class in question was Ashok Goel’s Knowledge Based Artificial Intelligence, which aims to help students learn “to build AI agents capable of human-level intelligence and gain insights into human cognition.” As Melissa Korn reports in the Wall Street Journal, the class’ 300-some students “typically post 10,000 messages a semester” to an online message board—more than the course’s human TAs can respond to.

But as Goel notes, while “the number of questions increases if you have more students … the number of different questions doesn’t really go up.” The vast majority of those requests deal with straightforward questions of course logistics, which means that they have simple, objective answers, but they can still take time to answer. So Goel and his team set out to create a system that could respond to the sort of queries that cropped up over and over again, and then they released it onto the message board. Though Goel and his collaborators didn’t alert students to the artificial nature of their ninth teaching assistant, they did give it a name that winks at its origins: Jill Watson.
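
To make the idea concrete, here is a minimal sketch of how a bot can field recurring logistics questions by matching each new post against ones that have already been answered. This is not Goel's system, which was built on IBM's Watson platform; the TF-IDF matching, the sample questions, and the confidence threshold are all illustrative assumptions.

```python
# Minimal FAQ-style responder: answer a new question by reusing the answer to
# the most similar previously answered question, if the match is strong enough.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical seed data: past forum questions and the answers human TAs gave.
answered = {
    "When is assignment 3 due?": "Assignment 3 is due Sunday at 11:59 p.m. ET.",
    "Where can I find the project rubric?": "The rubric is posted under Files > Projects.",
    "Is the final exam open book?": "Yes, the final is open book and open notes.",
}

questions = list(answered)
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def reply(new_question, threshold=0.5):
    """Reuse a past answer if the new question closely matches an old one."""
    scores = cosine_similarity(vectorizer.transform([new_question]), question_vectors)[0]
    best = scores.argmax()
    if scores[best] >= threshold:
        return answered[questions[best]]
    return None  # not confident enough; leave it for a human TA

print(reply("when is assignment 3 due"))  # reuses the canned due-date answer
print(reply("Could you explain partial-order planning in more depth?"))  # None
```

The key design point is the confidence threshold: routine questions get instant canned answers, while anything novel falls through to a human, which is roughly the division of labor Goel describes.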

By the end of the semester, “Jill” was reportedly answering questions with a 97 percent success rate, having learned to parse the context of queries and reply to them accurately. As Korn writes, students apparently hadn’t suspected anything was unusual about the helpful interlocutor, and at least one claims he was “flabbergasted” when he learned its true nature. (Another tells Korn he had “wanted to nominate Jill Watson as an outstanding TA,” which may well be a joke.) Goel apparently plans to repeat the performance next semester and says that he’ll give his creation a new name to extend the fun (though presumably his next batch of students will have an easier time distinguishing real from fake).

Gizmodo’s George Dvorsky observes that Goel’s stunt is disconcerting, since “it’s upsetting to hear that bots could replace yet another job.” But as Dvorsky also writes, for now Jill and her kin are mostly working to eliminate tedious labor—answering questions that Goel’s human TAs were too busy to field. As Korn notes, by automating some of the work of teaching, Watson may actually give teachers the opportunity “to tackle more complex technical or philosophical inquiries” that they’d otherwise lack the time to engage with at length.

In this sense, Jill Watson may actually represent the practical future of artificial intelligence. While there are legitimate concerns about A.I. and job loss, computers are unlikely to truly replace workers in most industries. Instead, as Madeleine Clare Elish has argued in Future Tense, they’re more likely to function as helpmates, whispering advice into our ears or eliminating annoying responsibilities. It sounds like a bright future to me: Having wasted countless hours during my own years as a TA answering emails about due dates and exam topics, I know that there are only so many times you can type, “It’s on the syllabus” without going a little crazy.

May 9 2016 7:19 PM

Google Might Turn Its Search Results From Blue to Black and People Are Scandalized

Things change quickly on the internet and apparently nothing is sacred. The Telegraph noticed Monday that Google is testing black search result links instead of the traditional blue. Not surprisingly, internet denizens have ... thoughts.

That’s right, there’s already a hashtag, #bringbacktheblue, and active discussion on Google’s forums about how people experiencing the black links can switch back to blue. Google is known for design obsession, so the link color experiment isn’t very surprising. Neither is the company’s coy statement given to Engadget and other outlets: “We’re always running many small-scale experiments with the design of the results page. We’re not quite sure that black is the new blue.”

Google is presumably tracking how the color change affects user interaction with search results, especially click-through rates. If black links subtly motivate people to click on more results, they could ultimately earn Google more money in ad revenue.

Of course, Google is far from the only site to use blue for links. Hyperlinks have been blue since the early days of the internet. A popular theory says that web founding father Sir Tim Berners-Lee chose this color because it was the darkest option at the time aside from black, and he wanted links to stand out from all the gray backgrounds and black text in early browsers.* Lance Ulanoff notes on Mashable, however, that an old FAQ Berners-Lee did on the World Wide Web Consortium’s site doesn’t totally agree with this account.

There is no reason why one should use color, or blue, to signify links: it is just a default. I think the first WWW client (WorldWideWeb I wrote for the NeXT) used just underline to represent link, as it was a spare emphasis form which isn’t used much in real documents. Blue came in as browsers went color—I don’t remember which was the first to use blue. ...
My guess is that blue is the darkest color and so threatens the legibility least. I used green whenever I could in the early WWW design, for nature and because it is supposed to be relaxing. Robert Cailliau made the WWW icon in many colors but chose green as he had always seen W in his head as green.

Even if Google eventually forsakes blue, it won’t necessarily go with black. Maybe instead it will go with green—saluting both nature and money.

Update, May 10, 2016: In this sentence, the word ​"internet"​ has been changed to ​"web"​ to clarify that Sir Tim Berners-Lee's contribution to the development of the modern internet was creation of the World Wide Web.
