Future Tense
The Citizen's Guide to the Future

May 20 2015 4:02 PM

Internet Providers Said Net Neutrality Rules Would Ruin Everything. Let’s Check in on That.

The telecom industry has long maintained that it supports a free and open Internet and wants to protect net neutrality. But when President Obama came out in support of reclassifying broadband as a utility under Title II of the Communications Act and the FCC planned a vote on proposed reclassification rules, Internet service providers (ISPs) had to draw the line.

“We do not support reclassification of broadband as a telecommunications service under Title II,” Comcast wrote in a November 2014 statement. “Doing so would harm future innovation and investment in broadband.” That month Time Warner said, “Regulating broadband service under Title II ... will create unnecessary uncertainty, lead to years of litigation and threaten the continued growth and development of the Internet.” And a July 2014 statement from AT&T said that Title II reclassification “would actually impose barriers to broadband infrastructure investment.”

Notice any common threads?

It’s been almost three months since the FCC voted in favor of Title II reclassification, so things should be falling apart by now, right? But the new rules don't seem to have been the deterrent ISPs claimed they would be. BGR points to a CNBC interview AT&T CEO Randall Stephenson gave on Monday. He said, “We’re going to invest around $18 billion this year. That will allow us to deploy a wireless broadband solution to 13 million homes around the U.S.” Yeah, sounds brutal.

Stephenson wasn’t completely ignoring the Title II debate, though. He said AT&T is confident that the courts will strike down the new regulation, and that’s why the company is comfortable moving forward with infrastructure investment. AT&T did cut infrastructure spending in November 2014 after Obama’s net neutrality statement. But the company is less adversarial now, either because its confidence in litigation is well-founded or because it secretly knows that Title II won’t be the end of its profits. After all, any company that also offers landline phone service (like AT&T) is already familiar with operating under Title II regulation.

Verizon CFO Francis Shammo addressed this, perhaps with more candor than he intended, at the UBS Annual Global Media and Communications Conference in December 2014. “This does not influence the way we invest. We’re going to continue to invest in our networks and our platforms, both in Wireless and Wireline FiOS where we need to,” Shammo said. “I mean if you think about it ... we were born out of a highly regulated company, so we know how this operates.” The man makes a good point!

When Verizon announced first-quarter earnings in April, it said that it was working to migrate customers from the copper network to fiber (presumably for both DSL Internet and phone service). Forty-seven thousand users switched during the quarter, and the 2015 goal is to transition a total of 200,000 customers. Verizon is also purchasing AOL for about $4.4 billion. Though the deal has more to do with digital content than physical infrastructure, it suggests that Verizon isn't anticipating a lean year.

Meanwhile, Comcast announced in April that it is moving forward with rolling out 2-gigabit-per-second connectivity for 1.5 million customers in Atlanta. The Title II debate certainly isn’t over, but things pretty much seem like business as usual since the FCC reclassification.


May 20 2015 10:30 AM

Should the Internet Trust You? This Browser Extension Will Be the Judge.


Three years ago, in a TEDGlobal talk, sharing-economy guru Rachel Botsman shared her vision of a “reputation dashboard”—a kind of credit report that tracks your online behavior across services like Airbnb, TaskRabbit, and DogVacay and compiles it into a portable measurement of your trustworthiness. Amassing that data, Botsman proposed, would make reputation into a kind of currency. “In the 21st century,” she predicted, “new trust networks and the reputation capital they generate will reinvent the way we think about wealth, markets, power and personal identity in ways we can’t yet even imagine.”

It’s a compelling vision, but so far it hasn’t been realized. That’s because, as I noted last year, the companies that have amassed the most reputation data aren’t eager to share it. “We’re in an early and competitive stage,” Monroe Labouisse, Airbnb’s director of customer service, told me at the time. “That asset—the trust, the data, the reputations that people are building—is hugely valuable. So I’m not sure why a company would give that up.”

A new company is trying to do an end-run around that intransigence by scraping publicly available information from various sharing-economy services and compiling it into a trust score between 0 and 100. Called Karma, it works as a browser extension—any time you pull up a supported site (which currently includes Airbnb, Craigslist, DogVacay, eBay, Etsy, RelayRides, and Vayable) a pop-up window will ask if you want to link your account to your Karma score. That score is calculated by looking at the reviews you’ve received—both the quantitative ratings (the number of stars, for instance) and a textual analysis of written comments. Different services are weighted differently; intimate interactions like those powered by Airbnb and DogVacay are deemed more relevant than relatively anonymous eBay sales, and more recent reviews are also weighted more heavily. The more services you link, the higher your potential score. (Of course, if you’ve misbehaved on one service, your score could fall—but then, you would probably choose not to link it in the first place.) When you peruse a supported service, you’ll see every user’s Karma score superimposed over their listings. It’s a little bit like the sharing economy’s answer to Klout, that notorious Q rating for social media.
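Karma hasn’t published its formula, but the mechanics described above—per-service weights, recency decay, a blend of star ratings and text sentiment—are easy to sketch. Here is a minimal, hypothetical Python version; every weight, threshold, and name below is invented for illustration and is not Karma’s actual algorithm:

    # Toy sketch of a Karma-style trust score. All weights and formulas
    # are invented for illustration; Karma's real algorithm isn't public.

    # Hypothetical per-service weights: "intimate" services count for
    # more than relatively anonymous ones.
    SERVICE_WEIGHTS = {"airbnb": 1.0, "dogvacay": 1.0, "etsy": 0.5, "ebay": 0.4}

    def review_value(stars, sentiment, age_days, half_life_days=365):
        """Blend a star rating (1-5) with a text-sentiment score (0-1),
        discounting older reviews exponentially."""
        recency = 0.5 ** (age_days / half_life_days)
        return (0.6 * stars / 5.0 + 0.4 * sentiment) * recency

    def karma_score(linked_reviews):
        """linked_reviews: iterable of (service, stars, sentiment, age_days)
        tuples from the accounts a user has chosen to link. Returns a
        0-100 score; linking more services raises the achievable maximum."""
        weighted_sum = total_weight = 0.0
        services = set()
        for service, stars, sentiment, age_days in linked_reviews:
            weight = SERVICE_WEIGHTS.get(service, 0.3)
            weighted_sum += weight * review_value(stars, sentiment, age_days)
            total_weight += weight
            services.add(service)
        if total_weight == 0:
            return 0
        quality = weighted_sum / total_weight    # average review quality, 0-1
        breadth = min(len(services) / 4.0, 1.0)  # bonus for linking services
        return round(100 * quality * (0.7 + 0.3 * breadth))

    # Example: a well-reviewed Airbnb host who also sells a bit on eBay.
    print(karma_score([("airbnb", 5, 0.9, 30), ("ebay", 4, 0.7, 400)]))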

Zach Schiff-Abrams, Karma’s co-founder and CEO, says the company has not contacted companies like Airbnb or DogVacay. But he thinks they will welcome his service, because it will make it easier for new hosts to attract guests, instead of grinding through the first few months as they attempt to build up a bank of positive reviews. “TaskRabbit’s biggest frustration point is the on-boarding process for new Rabbits,” Schiff-Abrams says. “We think that Karma can act as an arbiter to help these people begin to build a reputation much sooner.”

There’s something compelling and simple about this. It ignores all the complicated behind-the-scenes algorithms and processes that companies like Airbnb use to establish reputation and just collects the publicly available result of those processes. According to Schiff-Abrams, this simple hack—going directly to the users rather than through the enterprise—is what convinced VCs like Great Oaks to back the company.

These platforms now represent billions of dollars in commerce, and as such they must be extremely wary of bad actors trying to game the system. And Karma’s system, at first blush, looks pretty gameable. Going through the browser makes it easier for Karma to reach users directly, but it also makes it harder to confirm a Karma user’s true identity; if I’m using a friend’s browser, it would be pretty easy to link his or her Karma score to one of my accounts.

There’s also a weakest-link problem here. It’s easy to imagine using Karma as a Trojan horse—building up a high reputation score on a more easily gameable system and then importing that score into the fortress of Airbnb.

It’s precisely to avoid that kind of scenario that Airbnb has invested so much money in its trust and safety division, which maintains an intricate and detailed set of algorithms to sniff out sketchy behavior. (If a new listing is getting a lot of positive reviews from the same account, for instance, the algorithm will flag it.) Of course those algorithms shape what gets posted on Airbnb, so in some ways Karma is freeloading off Airbnb’s expensive and painstaking security infrastructure.
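To give a flavor of the kind of rule involved, here is a toy Python version of that same-reviewer check—the data shape and thresholds are invented for illustration and are certainly not Airbnb’s actual system:

    from collections import Counter

    def flag_listing(reviews, max_share=0.4, min_positive=5):
        """reviews: list of (reviewer_id, rating) pairs for one listing.
        Flags the listing when too large a share of its positive reviews
        comes from a single account. Thresholds are invented; Airbnb's
        real checks are far more elaborate."""
        positive = [rid for rid, rating in reviews if rating >= 4]
        if len(positive) < min_positive:
            return False  # too few positive reviews to judge
        _, top_count = Counter(positive).most_common(1)[0]
        return top_count / len(positive) > max_share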

Right now, Airbnb insures its hosts up to $1 million, in part because it trusts its algorithm to guard against the most egregious forms of fraud or malfeasance. But if someone is using a Karma score to determine who to rent to, that means that Airbnb is suddenly assuming the risk for a different company’s security mechanisms. It’s hard to imagine Airbnb will go for that, and I’d expect them to insert some language saying that anyone who uses Karma is no longer eligible for the insurance coverage—which would probably be enough to do serious damage to Karma. (Airbnb declined to comment on a product they haven’t had a chance to use yet.)

If the sharing economy were at an earlier stage, Karma might have a bit more time to work all this out. Certainly that’s what happened with Airbnb—as it grew, the company realized it had to get more serious about security, and it had plenty of missteps along the way. But this is now a mature market, and I’m afraid it’s a little late for this kind of experimentation. Karma is a clever approach to a big problem, and it’s frustrating that competitive pressures are preventing that problem from being solved. But ultimately I’m afraid that the problems of identity and trust are too complicated and fraught to be solved with a simple browser extension.


May 19 2015 6:48 PM

Microsoft Solitaire Is 25. Join the Tournament!

The best part of encountering an old PC—whether it's your ancient IBM ThinkPad or your great-aunt's Gateway desktop—is playing Solitaire on a long-obsolete version of Windows. Change the card art to the spooky castle and go nuts. And for the beloved classic's 25th birthday, Microsoft is launching two tournaments to identify the ultimate Solitaire addicts.

Microsoft says that the first competition will be internal at the company this month. Then in June it will publicly release the same challenges it gives its employees for an Internet-wide showdown. As Slate's Josh Levin wrote in 2008, "Though on its face it might seem trivial, pointless, a terrible way to waste a beautiful afternoon, etc., solitaire has unquestionably transformed the way we live and work."

Microsoft offers a whole Solitaire Collection for download now, but there's nothing like the original that first awakened pure digital procrastination in each of us. And by the way, if you haven't played FreeCell in a while, it's still a nightmare. 

May 19 2015 3:12 PM

Innocence of Muslims Can Go Back on YouTube. Good.

On Monday the 9th Circuit Court of Appeals reversed an earlier ruling that had forced YouTube to take down Innocence of Muslims, an inflammatory anti-Islam film that may have helped spark the Benghazi attack. Because this is America, the decision did not deal directly with blasphemy—a constitutionally protected form of expression—but with copyright and intellectual property. Yet lurking just beneath the court’s opinion lay a vigorous defense of free speech, individual liberty, and the right to disseminate even hateful, noxious ideas.

The strange case arose after Cindy Lee Garcia accepted $500 to appear briefly in what she believed was an action-adventure thriller set in ancient Arabia. Garcia’s only line was “Is George crazy? Our daughter is but a child?” In postproduction, however, producers overdubbed her line with the words, “Is your Mohammed a child molester?”

In the final cut of the film, Garcia appeared on screen for five seconds. But after the film premiered and spurred riots in the Middle East—and a fatwa against its actors in Egypt—Garcia sued YouTube and its parent company, Google, demanding they take down the film. Initially, Garcia asserted that the film was hate speech and violated her right to privacy. Eventually she settled on the copyright claim, insisting that she held a copyright over her five-second appearance, which gave her the right to force Web hosts to remove the film.

As the 9th Circuit acknowledged on Monday, Garcia’s copyright claim was, in short, ridiculous. The “author” of a film is usually its director, perhaps jointly with its producer and screenwriter. Individual actors can’t “author” a film for copyright purposes; otherwise, every actor would hold a copyright over her individual scenes, creating what Google called a “Swiss cheese of copyrights.”

It gets worse for Garcia. The Copyright Office registers movies as a single “work” and refuses to splinter every film into smaller copyrightable bits. Pragmatism dictates such a rule—otherwise, the court says, each of the estimated 20,000 extras in Lord of the Rings might assert copyright ownership of their individual scenes. And oddly, Garcia’s copyright claim is even weaker than a Lord of the Rings extra’s: While Frightened Hobbit No. 2 might have actually spoken his lines, Garcia’s one line was overdubbed, meaning she didn’t utter a single word in the final film. By manipulating her role, the movie’s director became the indisputable author of even Garcia’s five-second cameo.

All of this stuff is good law, well applied. But luckily the court recognized that there’s more going on here than just a dry intellectual property dispute. At the outset the majority wrote that the appeal “teaches a simple lesson—a weak copyright claim cannot justify censorship in the guise of authorship.” Later on it reprimanded a panel of judges who had previously ordered YouTube and Google to remove the video:

The takedown order was unwarranted and incorrect as a matter of law, as we have explained above. It also gave short shrift to the First Amendment values at stake. The mandatory injunction censored and suppressed a politically significant film—based upon a dubious and unprecedented theory of copyright. In so doing, the panel deprived the public of the ability to view firsthand, and judge for themselves, a film at the center of an international uproar.

In a separate opinion the 9th Circuit’s liberal lion Judge Stephen Reinhardt benchslapped the panel once again, sternly noting, “This is a case in which our court not only tolerated the infringement of fundamental First Amendment rights but was the architect of that infringement”:

[W]e issued an order that prohibited the public from seeing a highly controversial film that pertained to an ongoing global news story of immense public interest. … By suppressing protected speech in response to such a threat, we imposed a prior restraint on speech in violation of the First Amendment and undermined the free exchange of ideas that is central to our democracy and that separates us from those who condone violence in response to offensive speech.

Intellectual property experts generally agreed that the copyright ruling was correct. But you don’t have to be an IP professor to know that the Constitution does not permit courts to censor expression through the vehicle of a thinly veiled copyright claim. Innocence of Muslims may be blasphemous, hateful, and inane, but it’s also a textbook example of highly political speech on a matter of fierce public debate. Its controversy demonstrates precisely why it needs constitutional protection. Free speech is a very nice idea for a democracy. But it means nothing when judges can toss it out the window under the pretext of a laughable copyright suit.

May 18 2015 6:31 PM

Gorgeous, Algorithmically Generated Time-Lapses of the World’s Most Popular Landmarks

Time-lapse photography is fascinating because it can reveal changes that transpire too gradually to observe in real time. The problem is that, well, it takes a long time.

Researchers from Google and the University of Washington have found an elegant way around that, at least for some of the world’s most-photographed landmarks and scenes. In a paper published online, the researchers show how publicly available images shot by countless amateur photographers over a period of years can be algorithmically transformed into beautiful time-lapse videos. They call the process “time-lapse mining.”

[Time-lapse GIF: Koh Nang.]

The researchers started by gathering 86 million time-stamped images publicly uploaded by various users of photo-sharing sites such as Google’s own Picasa and Panoramio. They used image-recognition software to automatically pick out thousands of “clusters” of photographs that all showed the same landmark, such as the Salute in Venice or the Mammoth Hot Springs at Yellowstone National Park. Then they developed algorithms to warp a subset of photos in each cluster to a common viewpoint and scale, and ordered those by time stamp. 

Throw in a few image-stabilization techniques and correct for lighting differences, and voilà: an automatically generated time-lapse video of each landmark that looks almost as if it were shot with a single camera. At the top of this post is the full video that the researchers published in conjunction with their paper.
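In outline, the mining pipeline the researchers describe—cluster, sort, warp, stabilize, relight—could be sketched as follows. Every helper function here stands in for substantial computer-vision machinery in the actual paper; all the names are hypothetical:

    def mine_timelapses(photos, recognize_landmark, warp_to_reference,
                        stabilize, match_exposure):
        """Sketch of "time-lapse mining." `photos` is a list of
        (timestamp, image) pairs; the four function arguments stand in
        for the paper's landmark recognition, viewpoint warping,
        stabilization, and lighting-correction steps."""
        # 1. Cluster photos that show the same landmark.
        clusters = {}
        for timestamp, image in photos:
            landmark = recognize_landmark(image)
            if landmark is not None:
                clusters.setdefault(landmark, []).append((timestamp, image))

        timelapses = {}
        for landmark, shots in clusters.items():
            # 2. Order each cluster chronologically.
            shots.sort(key=lambda pair: pair[0])
            # 3. Warp every photo to a common viewpoint and scale.
            frames = [warp_to_reference(landmark, image) for _, image in shots]
            # 4. Stabilize the sequence and even out lighting changes.
            timelapses[landmark] = match_exposure(stabilize(frames))
        return timelapses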

[Time-lapse GIF: Lombard Street.]

“Whereas before it took months or years to create one such time-lapse, we can now almost instantly create thousands of time-lapses covering the most popular places on earth,” the researchers wrote in their paper. (Here is the PDF.) “The challenge now is to find the interesting ones, from all of the public photos in the world.”

Figuring out what’s interesting, you see, is a task that’s still beyond the ken of machine-learning algorithms. The Google and UW researchers had to go through the time-lapse videos themselves to determine which were worth highlighting in their paper. They homed in on several categories of subject, including waterfalls, seasonal changes in vegetation, geological changes, construction projects, and city scenes. Sprinkled through this post are a few of our favorites, in GIF form, including the one of Las Vegas' changing skyline below.

[Time-lapse GIF: The Las Vegas skyline.]

And here is the full video that the researchers published in conjunction with their paper, including a slew of other impressive time-lapses. It's very much worth watching.

May 18 2015 5:07 PM

United Offers Reward for Spotting Security Flaws in Its Website, Not Its Planes

United Airlines is offering 1 million reward miles to hackers who report vulnerabilities in the company's website or apps. The company claims it's the first cybersecurity incentive program in the industry. But notably, the "bug bounty program" does not apply to "bugs on onboard Wi-Fi, entertainment systems or avionics." Basically, the company doesn't want independent researchers vetting the systems that actually make planes fly.

The program debuted last week, less than a month after security researcher Chris Roberts was banned from flying on United after he tweeted about onboard Wi-Fi security vulnerabilities while on one of the company's flights. Roberts implied in a tweet that he could access navigation systems and control passenger oxygen masks. In response, the FBI met him at the gate in Syracuse, New York, when he landed. Later, in conjunction with the TSA, the agency issued a warning to airlines to be on alert for hackers.

The bug bounty program could have been in the works before this incident, but either way, it certainly speaks to the importance of engaging security researchers, sometimes called white-hat hackers, instead of alienating them. As Dan Gillmor wrote on Slate, "If United and the aviation industry as a whole want to earn customers’ confidence in this situation, they should put Roberts and a bunch of other white-hat hackers on retainer."

That's not exactly what United is doing, though. The company has made good security updates to its website, and a beta version that includes default page encryption ("https" at the beginning of the URLs) launched last week. Testing this and other new security features is important, but if bug hunters are discouraged from testing their ability to access flight systems, the bounty won't help with the most crucial (and dangerous) vulnerabilities.

To be fair, it's probably not safe for security researchers to mess around with critical systems while flying to win miles. You can see how that could lead to a tragic accident. Maybe airlines should run a few controlled hacking flights every year and voluntarily give researchers the opportunity to look for dangerous bugs. We all know how much companies love scrutiny of their proprietary systems!

May 18 2015 12:31 PM

Obama Gets His Very Own Twitter Account

The official (verified!) Barack Obama Twitter account has more than 59 million followers. The handle is run by Obama's aides, so tweets personally written by the president are signed "-bo." It seems like a reasonable arrangement for someone who's, well, pretty busy. But Obama apparently wanted an account all for himself, and with @POTUS, which sent its first tweet this morning, he just got one.

Something so public can't just be personal, though. The account bio warns that "Tweets may be archived: http://wh.gov/privacy," referring to the White House's data privacy policy and, presumably, the scandal over Hillary Clinton's use of a personal email account while secretary of state without adequate public records archiving. The White House wrote in a blog post that "President Obama is committed to making his Administration the most open and participatory in history."

It's not clear yet whether Obama gets to keep @POTUS for himself after leaving office or whether it will be handed down to his successors, like the official Pope account, @Pontifex.

Since the account is new, there was a brief period when the Twitter-famous could honestly boast that they had more followers than the leader of the free world.

But the account is rapidly gaining followers. Duh.

It seems that it wasn't until the home stretch of Obama's presidency that anyone (perhaps himself included) trusted him with a personal account. I think I speak for everyone when I say: What could possibly go wrong?

Update, May 18, 2015, 1:45 p.m.: It seems that Obama sent his first @POTUS tweet from an iPhone, which is weird, because he's always been vocally frustrated about having to use BlackBerrys as part of the White House's cybersecurity approach.

May 15 2015 12:47 PM

Google’s Fully Driverless Cars Are Ready for the Road. Well, Some Roads.

Google has been testing self-driving car technology on American roads for six years now, with mostly encouraging results. Until now, however, it has used specially outfitted versions of mass-production cars like the Toyota Prius and Lexus RX 450h.

On Friday, the company announced that it is ready to put its own custom-built, fully self-driving cars on public streets for the first time. Yes, those cute little Koala-mobiles are apparently road-ready, just a year after Google introduced them to the world. They’ll start out by tooling around the intimately familiar Mountain View, California, roadways that Google’s self-driving Lexuses have been cruising—and painstakingly mapping—for the past couple of years.

Google’s prototypes are designed to be not just self-driving but fully driverless: They have no steering wheels, brake pedals, or accelerators, just buttons you push to start the ride. As you can see in the video below, they’re designed to be capable of completing their journey without anyone in the driver’s seat at all.

Unfortunately for Google, California’s self-driving car law doesn’t allow that. It requires all autonomous vehicles to be street legal and to have a human behind the wheel. So for the time being, Google says it will outfit its robotic Totoros-on-wheels with removable steering wheels, accelerators, and brake pedals so that the person in the driver’s seat can take over at a moment’s notice. It will also cap their speed at 25 mph, making it unlikely that anyone will die even on the off chance that they accidentally mow someone down. (For the record, while Google’s self-driving cars have been involved in a handful of minor accidents over the years, Google says not once has its autonomous driving system been at fault.)

What’s interesting about all this is that Google doesn’t really believe that putting a human behind the wheel makes its self-driving cars safer. It’s happy to comply with California’s law for now, of course, while it’s still developing the technology and mapping the terrain. But in the long run the Googlers behind the self-driving car project are convinced that driving will be safer once humans are removed from the equation altogether.

That’s a view not widely shared—or, at least, not widely voiced—among mainstream automakers, which is part of why Google had to build its own steering wheel–free prototypes in the first place. But at least a few manufacturers may be starting to come around: At the Consumer Electronics Show in Las Vegas in January, for instance, Mercedes rolled out a futuristic concept car in which four passengers face each other rather than the road ahead. Whether we’ll see anything like that on public streets in our lifetimes will depend in part on the success of the types of tests that Google is doing today. It certainly won’t happen anytime soon.

One other thing that isn’t likely to happen anytime soon: Sighting a Google self-driving car prototype on a random street in Anytown, U.S.A. As the Atlantic’s Alexis Madrigal explained last year, near-perfect information about a given roadway has been crucial to self-driving cars’ success so far. Google has turned parts of Mountain View into a virtual test track by mapping literally every speed bump and stop sign. Google said in a statement Friday that it is starting to send some of its self-driving Lexuses into new territory, including San Francisco, where they’ll encounter fresh challenges like hills and fog. Until further notice, however, the driverless prototypes will stick to the cozy confines of Google’s hometown.


May 14 2015 5:25 PM

Reddit Decides It Might Be Time to Crack Down on Harassment

On Thursday, Reddit announced that it wants to "curb harassment" among users. The company says it has "improved our practices" and that users will be able to report problematic private messages, posts, and comments using Reddit's internal messaging system or through contact@reddit.com.

The site has been going through some changes over the last six months. In February it banned revenge porn, and co-founder Alexis Ohanian said in a statement, “We also recognize that violent personalized images are a form of harassment that we do not tolerate and we will remove them when notified.” At the time, Slate's Amanda Hess noted that though the change was positive, it was not clear how Reddit would authenticate requests or how it would address nuanced fringe cases, like photos of public nudity.

Reddit's new initiative is equally vague. It presents a definition of harassment:

Systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them.

And then it explains how people can bring unacceptable behavior to the attention of Reddit employees so they can, presumably, take action. It doesn't say anything, though, about how Reddit will approach or deal with these submissions. Of course these are hard topics to present succinctly in a blog post, but a little specificity would be useful. When asked about how complaints will be evaluated and how harassers will be reprimanded, a Reddit representative referred me to the announcement.

When Twitter expanded its attempts to combat harassment in April, the company talked about how it had created a temporary lock function for user accounts under investigation. Examples of this additional granularity for Twitter included: "An account may be locked for a pre-defined time period. A user may be asked to verify their phone number. A user may be asked to delete certain Tweets. After completing the requested actions, their account is unlocked." There are problems with Twitter's plan, as some of my Slate colleagues pointed out, but at least we know what the plan is.

In its statement, Reddit says, "One of our basic rules is 'Keep everyone safe.' Being safe from threat enables people to express very personal views and experiences—and to help inform and change other people’s views." The link goes to five "Reddit Rules," which overlap with, but are not the same as, the classic five. A Reddit representative said that the company requires users to follow both sets of rules and that the two sets complement each other.

This subtle adjustment to the rules, plus the broader approach in Thursday's announcement, seems aimed at reducing harassment without enraging a certain population of Reddit users who don't want any company intervention and prefer to rely on appointed moderators. "This change will have no immediately noticeable impact on more than 99.99% of our users," the statement says.

Reddit points to a survey it conducted last month of 15,000 users to explain its decision to make changes.* The poll "showed negative responses to comments have made people uncomfortable contributing. ... The number one reason redditors do not recommend the site—even though they use it themselves—is because they want to avoid exposing friends to hate and offensive content." The survey revealed that 20 percent of women versus 12 percent of men would describe themselves as unhappy with the Reddit community.

Reddit's example carries a lot of weight, so the decision to publicly combat harassment is certainly a positive one. The question now is just whether these changes will actually help.

*Correction, May 15, 2015: This post originally misstated the number of users Reddit surveyed. It was 15,000.

May 14 2015 4:49 PM

This AI Engine Promises to Identify Your Photos. It Often Fails Spectacularly.

Here's a riddle: When is a goat a dog?

Answer: When you run a picture of it through the Image Identification Project.

Released to the public yesterday, Wolfram Research’s ImageIdentify promises to accurately describe the contents of any picture that you show it. The setup couldn’t be simpler: Upload a picture and it’ll tell you what it sees. In a lengthy blog post, computer scientist and Wolfram Research CEO Stephen Wolfram describes this program as “a nice practical example of artificial intelligence.” He suggests that it might be used to automatically classify the contents of albums, offering “statistics on the different kinds of animals, or planes, or devices, or whatever, that appear in the photographs.”

The trouble, unsurprisingly, is that ImageIdentify appears to go wrong more often than it goes right. Wolfram acknowledges this difficulty, and gamely offers a handful of interesting errors in his post. Given an image of Indiana Jones, for example, “the system was blind to the presence of his face, and just identified the picture as a hat.” It’s certainly impressive that it recognized and correctly labeled a hat. But such mistakes would seem to constrain the project’s usefulness, at least for the time being.

Like Microsoft’s How-Old.net, ImageIdentify is most interesting when it gets things wrong in spectacular ways. As Wolfram notes, many of its errors make sense. It confused my bike with a bicycle rack, presumably because it saw the primary object correctly but assumed that it was somehow attached to the old-fashioned radiator behind it. Here, the system’s error likely derives from its effort to identify a single subject in each image, a propensity that sometimes leads it to ignore key details (as in the Indiana Jones example) and sometimes leads it to conflate distinct elements (as in the case of the bike).

[Image: A bike identified as a bicycle rack. Screen capture of ImageIdentify; photo by Jacob Brogan.]

Sometimes, however, ImageIdentify is just plain weird. When we fed it a picture of a croissant, it told us that we were looking at shellfish. Wolfram claims the system’s mistakes “mostly seem remarkably human.” But that pastry-mollusk confusion feels uncanny—more like a metaphor than an ordinary misapprehension.

ImageIdentify will no doubt improve in time—it can be trained to better understand what it’s looking at—but for now it’s at its best when it’s at its strangest. Here are a few of our favorites:

[Image: Jackson Pollock's Autumn Rhythm identified as chicory root. Screen capture of ImageIdentify; image of Autumn Rhythm by Jackson Pollock.]

[Image: A feline identified as an English setter. Screen capture of ImageIdentify; photo by Heidi Strom Moon.]

[Image: An Oscar Mayer vehicle identified as a bobsled. Screen capture of ImageIdentify; photo by June Thomas.]

[Image: A cat identified as a lemon shark. Screen capture of ImageIdentify; photo by Abby McIntyre.]

[Image: Stephen Wolfram identified simply as a person. Screen capture of ImageIdentify; photo by Joi Ito/Flickr.]
