Future Tense
The Citizen's Guide to the Future

June 28 2016 6:35 PM

Veteran Pilot Loses Simulated Dogfight to Impressive Artificial Intelligence

We've all heard that researchers are currently working to refine self-driving cars and other autonomous vehicles. The revolution is coming. It turns out, though, that they're also setting their sights on using artificial intelligence to navigate situations you may not have expected—like aerial combat in fighter jets.

Fighter pilots undergo extensive specialized training to be able to outwit opponents in battle, and that professional experience seems like it would be hard, even impossible, to replicate. But a new artificial intelligence system, ALPHA, has been besting expert pilots in combat simulations, even when the A.I. is given a handicap.


Given years of discussion about military drones, it seems like a fighter plane piloted by A.I. wouldn't be so surprising. But unmanned aerial combat vehicles are usually remote-controlled by a person, at least in part, and are used for things like attacks and reconnaissance, not one-on-one fighting. This has been changing, though. Last summer, P.W. Singer wrote in Popular Science that, "More than 80 nations already use unmanned aerial systems, or drones, and the next generation is now emerging. They will be autonomous, jet-powered, and capable of air-to-air combat."

ALPHA was developed by aerospace engineer Nick Ernest, a recent doctoral graduate of the University of Cincinnati whose company, Psibernetix, works with the Air Force Research Laboratory. ALPHA has been victorious in numerous simulated battles against top fighter pilots, including a series in October against retired United States Air Force Colonel Gene Lee, who described the experience this way:

It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed. ... Sure, you might have gotten shot down once in a while by an AI program when you, as a pilot, were trying something new, but, until now, an AI opponent simply could not keep up with anything like the real pressure and pace of combat-like scenarios.

ALPHA's prowess is impressive, but equally amazing is the tiny computer that runs it. For such a complicated set of decision-making algorithms, ALPHA requires very little processing power, running on a $35 Raspberry Pi minicomputer. ALPHA uses what are called "fuzzy logic algorithms" to form a “Genetic Fuzzy Tree” system that breaks big problems down into smaller chunks so the system can evaluate which variables are relevant to a particular decision and which of those are most important. This allows the system to work more efficiently and rapidly.
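The divide-and-conquer idea behind a fuzzy tree can be illustrated with a deliberately tiny sketch. This is not ALPHA's actual system (which evolves its rules genetically); it's just a hand-built cascade of two fuzzy sub-decisions, showing how each stage only has to weigh a few variables:

```python
# Toy sketch of a fuzzy decision cascade, loosely inspired by the
# "Genetic Fuzzy Tree" idea. Illustrative only: the membership ranges,
# rules, and action names here are all invented for the example.

def triangular(x, lo, peak, hi):
    """Triangular fuzzy membership: 0 outside [lo, hi], 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def threat_level(distance_km, closing_speed):
    """First sub-decision: how threatening is the opponent right now?"""
    near = triangular(distance_km, 0, 2, 10)
    fast = triangular(closing_speed, 0, 600, 1200)
    # Blend two simple fuzzy rules instead of enumerating every case.
    return max(min(near, fast), 0.3 * near)

def choose_action(distance_km, closing_speed, missile_ready):
    """Second sub-decision sees only the summary from the first."""
    threat = threat_level(distance_km, closing_speed)
    if threat > 0.6:
        return "evade"
    if missile_ready and threat > 0.2:
        return "engage"
    return "maneuver"

print(choose_action(distance_km=3, closing_speed=700, missile_ready=True))
# → "evade"
```

Because the top-level choice consults only the output of the threat sub-controller, no single rule base ever has to consider every variable at once, which is what keeps the evaluation cheap enough for minimal hardware.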

ALPHA still flies in a simulated world, but as the technology behind combat drones and autonomous vehicles continues to evolve, it seems increasingly likely to converge into something like a real-world version of ALPHA. It's a powerful technology, but it makes you wonder whether we as humans really want to be getting "better" at war. Hopefully these advances will mean fewer human casualties.

June 28 2016 5:24 PM

Redacting Digital Documents Is Easy. Why Do People Keep Doing It Wrong?

What should have been a public relations coup turned into a minor fiasco this week when House Democrats publicly released a cache of digital documents related to the Benghazi committee’s inconclusive investigations. Though those documents were supposed to make the GOP look bad, one instead revealed compromising information about Hillary Clinton adviser Sidney Blumenthal. The Los Angeles Times found that seemingly redacted portions of a transcript featuring Blumenthal were actually available if you copied those sections from the PDF and pasted them into another document.

As Slate’s Ben Mathis-Lilley rightly points out, it’s “Embarrassing for House Democrats because they screwed up a process that can be successfully completed with a single black marker.” But it’s also a mistake that’s more common than you might think, one that has everything to do with our fundamental confusion about an increasingly digital world. In fact, it happens because would-be censors act as if they’re using black markers, despite the very different needs of electronic documents.


Here are just a few notable incidents: The Transportation Security Administration made the same error as the House Democrats when it released a screening manual in 2009, sending out a PDF in which, according to Wired, employees “merely overlaid black rectangles on the sensitive text … instead of cutting the text itself.” Among other details, those obscured sections included information about CIA protocols for handling foreign dignitaries. In 2011, a U.S. District Court’s opinion accidentally included redacted information about Apple’s business dealings, accessible by the same copy-paste trick. Though those revelations weren’t especially compromising, that same year, the British Ministry of Defense inadvertently leaked classified details about nuclear submarines that it thought it had censored, a considerably more consequential breach. Other examples abound, especially in legal filings.

This copy-paste workaround comes down to the way that PDFs package and present data. Timothy B. Lee explains that PDFs generally work through vector-based graphics, effectively stacking multiple image layers atop one another to create the total picture you see on a given page of a document. (This is why you’ll sometimes watch as the various elements of an image gradually pop into view after you load up an especially complex file.) When you’re working in this format, drawing a black square over the text with the shape tool—much as you would hide sections of a physical document with a marker—may visually obscure information, but it doesn’t actually strip it from the document. The words are still there, even if they’re temporarily hidden when you look at the file in Acrobat or some other viewer.
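The failure mode is easy to see in a toy model of a layered document. The classes below are hypothetical stand-ins, not a real PDF library, but they mirror the essential point: a page is a stack of drawing instructions, and a black rectangle is just one more instruction painted on top of the text operators.

```python
# Minimal sketch of why "draw a black box over it" redaction fails.
# Hypothetical classes for illustration; not a real PDF library.

class Page:
    def __init__(self):
        self.layers = []  # drawing instructions, bottom-to-top

    def draw_text(self, x, y, text):
        self.layers.append(("text", x, y, text))

    def draw_rect(self, x, y, w, h, color="black"):
        self.layers.append(("rect", x, y, w, h, color))

    def extract_text(self):
        # Copy-paste in a viewer reads the text operators directly;
        # it never checks whether a shape is painted over them.
        return " ".join(l[3] for l in self.layers if l[0] == "text")

page = Page()
page.draw_text(72, 700, "SECRET: codename BLUEBIRD")
page.draw_rect(70, 690, 300, 20)  # visually covers the text on screen...

print(page.extract_text())  # → "SECRET: codename BLUEBIRD" — still there
```

True redaction has to delete the text instruction itself, not merely occlude it, which is exactly what a proper redaction tool does.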

This problem is hardly unknown, least of all to Adobe, which created the format in the first place. Rick Borstein, who maintains a law blog for the company, writes, “Simply covering up text and graphics [with] black rectangles is not redaction. It is, however, a recipe for disaster.” Wholly aware that avoiding such catastrophes is necessary, the company includes a robust redaction tool within Acrobat. Though the results are visually the same—you’ll end up with big black boxes over the things you’ve hidden—the tool removes the underlying information from the document.

There’s plenty of information about that tool available for those willing to dig around just a little. In a 2010 article—perfectly timed to prevent some of the most notorious redaction screw-ups, if only anyone had been paying attention—Borstein detailed some of its features, including redactions across multiple pages. As Lisa Needham writes on the blog Lawyerist, the Acrobat redaction tool can also remove metadata from a document—stripping it of information about, say, the computer on which it was written. And though the tool can clearly be hard to find, Borstein has even put together a post showing users how to track it down without digging through menus. All of this is to say that there’s little excuse for ineffective redaction.

There are, of course, plenty of other ways to lazily hide information without really eliminating it: A document of digital redaction guidelines from the District of Columbia Circuit Court lays out a handful of other worst practices that would-be secret sharers should avoid. For example, it genially explains, “Changing the text’s font to white will make it look as though the words disappear, but they don’t!” And there are, of course, other ways to leave unwanted information in a document: Microsoft Word’s track changes feature can, as an article in the Florida Bar Journal suggests, inadvertently convey incriminating details if you don’t delete past revisions and comments before sending a file along. It’s easy, however, to forget that such information is there if you tell Word to hide markup details, as many writers do while giving their work a final pass.

These are the sort of errors that we make when we refuse to recognize that digital documents are far more complex than their physical brethren. The House Democrats’ humiliating oversight, and other incidents like it, follow from a shared misapprehension: the belief that if it looks like paper it must also behave like paper. Think of it as a kind of aspirational skeuomorphism, a fantasy that paper’s qualities persist across the different media that imitate it.

It may be the very ease of copying from one document and pasting into another that helps us maintain this illusion. The feature is all but essential to modern computing platforms, so much so that we’re often baffled when it’s not available. As Lily Hay Newman has shown in Future Tense, however, things are rarely as simple as they seem, not least of all because “Copy and paste … doesn’t just magically interoperate between applications.” Comfortable with the relative ease of dropping, say, a tweet into an email, we forget how much is going on behind the scenes to make that transfer possible. Instead, we’d do well to remember just how remarkably complex that common feature is—and guard our secrets accordingly.

June 28 2016 11:34 AM

Facebook Thinks It Has Found the Secret to Making Bots Less Dumb

If there’s one thing we’ve learned about bots over the years, it’s that they aren’t too bright. From Eliza to Tay, the best-known chat bots generally rely on a distinctive personality to cover for their inability to understand what you’re saying. Meanwhile, business-oriented bots such as Microsoft’s Clippy, Slackbot, and Poncho tend to be inflexible, because they’re hard-coded with preset responses to specific queries. And just when you think you’ve found a bot that’s really impressive—say, Facebook’s M, or x.ai’s Amy Ingram—it turns out that there are humans behind the curtain stepping in to solve the problems the computer can’t.

This year began with a fresh wave of bot hype, which quickly petered out when users found that the new generation of artificial interrogators was only marginally more useful than the last one. Yet there is still reason to believe that the bots of tomorrow will be smarter than today’s—and, more importantly, that they’ll be able to learn and improve over time.


New research from FAIR, Facebook’s artificial intelligence research arm, might help to point the way. Last year the team introduced a new type of machine-learning model for language understanding, called “memory networks.” The idea was to combine machine learning algorithms—specifically, neural networks—with a sort of working memory, letting bots store and retrieve information in a way that’s relevant to a given conversation. Facebook demonstrated the technology by feeding its software a series of sentences that convey key plot points from Lord of the Rings, then asking it questions such as, “Where is Bilbo now?” (The system’s reply: Grey-havens.)

This month, the team posted a new paper on arXiv that generalizes the memory-networks approach so that it can better interpret unstructured data sources and published documents, such as Wikipedia pages, rather than just specifically designed “knowledge bases” that store information one fact at a time. That’s important because knowledge bases tightly constrain the information that’s available to a bot, as well as the type of questions you can ask. (Try asking Poncho about something other than the weather.) If Facebook’s algorithms can start to interpret natural language data sources such as Wikipedia in a way that makes sense in a given conversational setting, it opens the potential for bots that can answer all kinds of questions on a vast range of topics. FAIR calls the new approach “key-value memory networks.”
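The key-value split can be caricatured in a few lines of Python. In the real FAIR model, keys and values are learned neural embeddings; in this deliberately tiny sketch, the "keys" are hand-written strings matched against the question by word overlap, and the "values" are the answers to return:

```python
# Toy illustration of the key-value memory idea. Hand-built and
# illustrative only: the real model learns its representations.

MEMORY = [
    # (key: text a question is matched against, value: the answer)
    ("where bilbo now", "Grey-havens"),
    ("who carries ring", "Frodo"),
]

def answer(question):
    q_words = set(question.lower().replace("?", "").split())
    # Score each memory slot's key by word overlap with the question,
    # then read out the value of the best-matching slot.
    best = max(MEMORY, key=lambda kv: len(q_words & set(kv[0].split())))
    return best[1]

print(answer("Where is Bilbo now?"))  # → "Grey-havens"
```

The payoff of the split is that keys can be tuned to match how questions are phrased while values hold whatever should be returned, which is what lets the approach reach beyond rigid one-fact-at-a-time knowledge bases.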

Poncho is good at one thing. (Screenshot: Facebook Messenger for iOS)

So far, Facebook’s system can’t answer questions as accurately when reading from a document as it can when working from a structured knowledge base. But Facebook says its method significantly closes the accuracy gap between the two. And the memory-networks approach allows a bot to store not only the relevant source data, but the questions you’ve already asked it and the responses it has given. That way, when you ask a follow-up, it knows not to repeat the same information, or to ask you for information you’ve already given.

Facebook is already using memory networks in M, the do-it-all virtual assistant that lives inside the Messenger app (provided you’re among the handpicked group of beta testers with access to it). They come in handy when, for example, you ask M to make a restaurant reservation.

Rather than simply launching into a predefined list of questions—“What time?” “What kind of food?” “How many people?”—it can extract and store the relevant information over the course of a more natural series of questions and answers. So if you say, “I’m looking for a Mexican restaurant for five people tomorrow night,” it doesn’t have to ask you, “what kind of food?” or “how many people?” And if you suddenly get distracted and ask it, “Who is the president of the United States?” it can quickly reply, “Barack Obama,” then remind you that you still need to tell it what time you’d like to have dinner tomorrow night.
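That conversational pattern is often called slot filling. The sketch below is a hand-rolled, regex-based caricature (M's actual models are learned, and these slot names and vocabularies are invented for the example), but it shows the core loop: extract whatever details the user volunteered, then ask only about what's still missing.

```python
import re

# Illustrative slot-filling sketch. The slots, prompts, and regexes
# here are assumptions for the example, not Facebook's implementation.

SLOTS = {"cuisine": None, "party_size": None, "time": None}

def update_slots(slots, utterance):
    """Pull any recognizable details out of one user utterance."""
    text = utterance.lower()
    m = re.search(r"\b(mexican|italian|thai|sushi)\b", text)
    if m:
        slots["cuisine"] = m.group(1)
    m = re.search(r"\bfor (\d+|one|two|three|four|five) people\b", text)
    if m:
        slots["party_size"] = m.group(1)
    m = re.search(r"\b(tonight|tomorrow night|\d{1,2}(?::\d{2})?\s?[ap]m)\b", text)
    if m:
        slots["time"] = m.group(1)
    return slots

def next_question(slots):
    """Ask only about slots the user hasn't already filled."""
    prompts = {"cuisine": "What kind of food?",
               "party_size": "How many people?",
               "time": "What time?"}
    for name, value in slots.items():
        if value is None:
            return prompts[name]
    return "Booking it now."

slots = update_slots(dict(SLOTS),
    "I'm looking for a Mexican restaurant for five people tomorrow night")
print(next_question(slots))  # → "Booking it now."
```

Because all three slots were filled by that single sentence, the bot has nothing left to ask; had the user said only "dinner for five people," it would come back with "What kind of food?" instead.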

Facebook isn’t the only company that’s working to combine machine-learning algorithms with contextual memory. Google’s artificial intelligence lab, DeepMind, has developed a system that it calls the Neural Turing Machine. In an impressive demonstration, the Neural Turing Machine taught itself to use a copy-and-paste algorithm by observing a series of inputs and outputs.

Facebook Chief Technical Officer Mike Schroepfer has called memory “the missing component of A.I.” And FAIR research scholar Antoine Bordes, who co-authored the papers on memory networks, told me he believes it could hold the key to finally building bots that interact naturally, in human language. “The way people use language is very difficult for machines, because the machine lacks a lot of the context,” Bordes said. “They don’t know that much about the world, and they don’t know that much about you.” But—at last—they’re learning.

June 28 2016 9:27 AM

3-D Printing Helped This Cancer Survivor Recover Some of What He Lost to Disease

It’s increasingly easy to be cynical about 3-D printing, as a recent Newsweek story on the industry’s disappointments shows all too well. But once you look past the supposedly revolutionary promise of the technology, there are small, meaningful stories about it that are worth telling. One such story comes from Shirley Anderson, a cancer survivor who received a prosthetic jaw thanks to advances in 3-D modeling.

Anderson lost his jaw and Adam’s apple to a series of surgeries and other cancer treatments, leaving him unable to speak or eat solid food. He eventually met Travis Bellicchi, a maxillofacial prosthodontist based at Indiana University. Though Bellicchi was able to make a complex traditional prosthesis for Anderson, the final product was uncomfortable, and Anderson could wear it for only a few hours at a time.


In an attempt to find a more comfortable solution, Bellicchi turned to students at the university’s School of Information and Computing, who were able to create a model of Anderson’s face far less painfully. According to a blog post from Formlabs—which makes the printer Bellicchi and his collaborators used—the resulting prosthesis “looks more realistic and is much lighter and more breathable so that Shirley feels comfortable wearing it for a longer period of time.”

There are, of course, caveats: The 3-D printed prosthesis cannot replace what Anderson lost to his cancer treatments. He still mostly communicates by writing on a white board, for example, and it doesn’t sound like eating has gotten any easier. Nevertheless, it’s a significant reminder that 3-D printing can be a powerful resource, so long as we remember that the advancements it offers are mostly incremental.

June 27 2016 6:32 PM

Border Patrol Wants to Collect Foreign Travelers’ Social Media Account Names

For years, law enforcement groups have been collecting information on social media for investigations, and now U.S. Customs and Border Protection wants to get in on the data. On Thursday, the agency proposed an addition to certain customs forms that would ask for foreign travelers’ social media names and handles.

Spotted by the Hill, the new field would read “Please enter information associated with your online presence—Provider/Platform—Social media identifier,” and it would be optional to fill out. It would appear on forms for travelers entering and leaving the U.S. who aren’t required to have a visa. Currently, citizens of 38 countries can visit the U.S. for 90 days for business or leisure without applying for a visa.


The Department of Homeland Security says:

“Collecting social media data will enhance the existing investigative process and provide DHS greater clarity and visibility to possible nefarious activity and connections by providing an additional tool set which analysts and investigators may use to better analyze and investigate the case.”

The proposal comes as recent violent terror incidents like the Paris attacks and Orlando shooting have reinforced the importance of social media data in investigations. The proposal is now in a 60-day open comment period ending Aug. 22. As long as providing the information is optional, it may not seem like a big deal either way. The change would begin to normalize the idea of governments collecting social media data, though, for better or worse.

June 27 2016 12:07 PM

How Terrified Should We Be of This Year’s World Economic Forum Top 10 Emerging Technologies?

Take an advanced technology. Add a twist of fantasy. Stir well, and watch the action unfold.

It’s the perfect recipe for a Hollywood tech-disaster blockbuster. And clichéd as it is, it’s the scenario that we too often imagine for emerging technologies. Think superintelligent machines, lab-bred humans, the ability to redesign whole species—you get the picture.

The reality, of course, is that the real world is usually far more mundane: less “zombie apocalypse” and more “teens troll supercomputer; teach it bad habits.”

Looking through this year’s crop of Top 10 Emerging Technologies from the World Economic Forum, that’s probably a good thing.

June 27 2016 11:27 AM

Having Trouble Pasting Chrome URLs on a Mac? You’re Not Alone and a Fix Is Coming.

Last week, a group of Slate staffers started commiserating about a strange frustration: They couldn’t copy URLs from Google’s Chrome browser into other programs on their Macs. “This damn URL bug is killing me!” Slate’s managing editor Lowen Liu wrote in an email. Weird.

The problem seems to be cropping up on Macs running both OS X Yosemite and OS X El Capitan, and affects some users’ ability to paste URLs into programs like Apple Notes and the Microsoft Outlook desktop application. There are workarounds for the bug, like selecting the “Paste and Match Style” option in some applications, but they take a few extra seconds and that’s not what you want from something as basic as copying and pasting.


Other people are mad, too:

As Seguin and Patel point out, it’s a known bug and people have been talking about it in Google Help forums for a few months. Commenter Parker Johnson wrote, “So glad I stumbled upon this [forum] ... thought I was going crazy.” The bug is called Issue 618771, and Chrome developers were talking on the thread over the last few weeks about which new Chrome build to put the patch into. They originally considered adding it as an incremental update to the current version of Chrome, 51, but on Thursday concluded that it will go into Chrome version 52. A Google spokesperson confirmed that the fix is on its way and that the Chrome 52 beta will be released Wednesday (three days behind Chrome’s development calendar, which projected that 52 would be released Sunday). Issue 618771 has been marked as “Fixed.”

Copy and paste seems like a simple function, but it doesn’t just magically interoperate between applications. Updates to one application can disrupt its ability to paste into other programs. A Chrome developer wrote, “The problem with changes to copy/paste or drag and drop code is that the integration with external apps has poor test coverage.”

If you’re desperate for relief from annoying copy/paste workarounds, download Chrome version 52 beta on Wednesday. The update should roll out to all Chrome users shortly after. A lot of people around Slate will be celebrating.

June 27 2016 11:20 AM

This Program Divided the World Into 57 Trillion Squares and Gave Them Names Like usage.ample.soup

Whether you’re coordinating a delivery of medical supplies to a remote refugee camp, showing up at a prearranged rendezvous with your star-crossed lover, or just waiting for that pad Thai you ordered to arrive, communicating accurate location information is critical. But in many places—in the middle of a wilderness area, for instance, or the twisty maze-like backstreets of a North African medina—street addresses are either nonexistent or just not particularly useful.

The British startup what3words has come up with a simple new way to describe location anywhere in the world. The company’s algorithm has divided the surface of the world into 3-meter-by-3-meter (a little less than 10 feet) squares. “Small enough to be useful, but large enough so that it doesn’t give you an unfeasible number,” as the company’s chief marketing officer, Giles Rhys Jones, puts it. Each of these 57 trillion squares has been assigned a three-word address. So the table at a Washington, D.C., coffee shop where I’m writing this article is located in a square called “basis.laws.slice.” Slate’s new Brooklyn, New York, offices are located at “chairs.armed.cards.” The center of the Eiffel Tower is “graphics.dad.inched.” The system is currently available in 10 languages, with more to come. (These addresses aren’t translations between the available languages. In Swahili, my table is at “kaskazini.ncha.mchanga,” which translates to “north.tip.sand.”)
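The arithmetic behind those 57 trillion squares is easy to check, and the addressing idea can be sketched in a few lines. Note the caveats: the 40,000-word vocabulary size and the base-conversion mapping below are illustrative guesses for the example, not what3words' real word list or algorithm (which also shuffles similar words far apart).

```python
# Back-of-the-envelope sketch of the what3words grid. Earth's surface is
# roughly 510 trillion square meters, so 3 m x 3 m cells give about
# 57 trillion squares, and 40,000 words yield 40,000^3 = 64 trillion
# distinct three-word labels -- just enough to cover them all.

EARTH_SURFACE_M2 = 510e12
CELL_M2 = 3 * 3
cells = EARTH_SURFACE_M2 / CELL_M2
print(f"{cells:.1e} cells")  # ~5.7e13

WORDS = [f"word{i}" for i in range(40_000)]  # stand-in vocabulary
assert len(WORDS) ** 3 > cells  # enough three-word combinations

def cell_to_address(cell_index):
    """Map a cell index to three words, like digits in base 40,000."""
    n = len(WORDS)
    a, rest = divmod(cell_index, n * n)
    b, c = divmod(rest, n)
    return f"{WORDS[a]}.{WORDS[b]}.{WORDS[c]}"

print(cell_to_address(123_456_789))  # → "word0.word3086.word16789"
```

A plain base conversion like this would put adjacent cells at similar-sounding addresses, which is precisely what the company's real mapping avoids by design.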


The addresses assigned by the algorithm are permanent, so you don’t need Wi-Fi access to use it, though presumably you might need it to tell other people where you are. Homophones have been removed from the list of words, as have “offensive words,” which is probably wise given what’s happened with some other mapping programs. Longer, more complicated words tend to be located in more remote areas and in the ocean.

The idea came about thanks to CEO Chris Sheldrick’s former career organizing large outdoor concerts, when he found that musicians, vendors, and technical crews were constantly getting lost. At one point a band scheduled to play an outdoor show on a hillside in Rome did its soundcheck on the wrong hillside.

One solution to problems like these is just to use GPS coordinates, but as Jones says, “People aren’t generally wired to remember 18 digits.” Just a small error in one of the digits of a GPS location could send you disastrously off course. What3words, by contrast, is programmed so that similar words are spread far apart. So if I wanted to tell a friend to meet me here for coffee but accidentally wrote “slices” instead of “slice” they’d get a spot in Parsons, Kansas—an obvious error—rather than a more plausible one a few blocks away.

The system could be particularly useful in developing countries where addressing can be inconsistent. According to a report by the Global Address Data Association cited on What3Words’ website, only about 50 to 70 countries “have postal code or address databases which are kept reasonably current.” The problem has only been made more acute with record numbers of displaced people around the world—about one out of 113, according to the most recent U.N. statistics. What3words has already been used to deliver packages in Brazil’s favelas and by the United Nations in disaster reporting efforts.

Most recently, the company signed a partnership with the national post office of Mongolia, which will begin referring to locations by their three-word addresses rather than street addresses. The system makes a lot of sense for Mongolia, a largely rural country with a significant nomadic (but increasingly wired) population. Jones says the company is also working with e-commerce companies to add three-word addresses to their checkout pages. The full rollout will come once the Mongolian-language version of what3words is complete.

Of course, location confusion isn’t just a “third-world problem.” Jones notes that the United Kingdom is “one of the best-addressed countries in the world, but if you live in many rural areas it’s still, like, ‘oh sorry, wrong farmhouse. You want the next one over.’ ”  Three-word locations have been used to coordinate first aid at Britain’s Glastonbury festival, the world’s largest outdoor music event, and by ski patrol in Lake Tahoe. The system and a mapping app are free for users, though the company charges for access to its API and a development kit.

The system does have a few limitations—like vertical space. If you were trying to find an office in a Tokyo skyscraper or a family living in Caracas’ 45-story squatter community, the Tower of David, what3words’ grid wouldn’t be all that useful. Jones says that for non-ground-floor locations, “We can be used as an addition to a more traditional address, to add a level of specificity.” It also relies on the accuracy of your phone’s GPS system—mine jumped around a few times before it settled on basis.laws.slice.  This is, admittedly, a problem with basically every GPS-reliant app.

The company hopes to one day have its system integrated into major mapping systems like Google Maps and to have businesses listing their three-word locations along with street addresses. “We want to become a globally accepted standard for communicating location, so ‘word.word.word’ is recognized by your device and is recognized on a business card and everyone understands that you’re talking about location when you see that,” says Jones.

By then you should be well on your way to exploring all the world’s 57 trillion fascinating places.

June 24 2016 2:49 PM

Introducing “Warm Regards,” My New Podcast About Climate Change

For those of us who think about climate change often—like unhealthily often—there's sometimes a sense that you're missing the story. Climate change is quite possibly the most important thing humans have ever done—I mean, we're altering our planet's atmosphere perhaps at a faster rate than at any point in Earth’s entire history. Yet it can often feel remote, abstract, and lost in a sea of statistics.

To stay sane, you have to learn about the people and personalities involved behind the scenes—those who can help suss out when the latest science is truly freak-out worthy. That's why I made Warm Regards: a new podcast about climate change.


My goal here is not a little lofty: to help humanize those who are working on the climate problem.

Joining me are co-hosts Andy Revkin, a veteran environment writer for the New York Times who has covered climate change for 30 years, and Jacquelyn Gill, a paleoecologist at the University of Maine who is an actual, real-life climate scientist and flawlessly navigates climate Twitter. (If you spend any time in climate Twitter, you know that’s a rare combination.) We’ll regularly invite newsmakers and scientists and listeners and people on the front lines, too.

In our inaugural episode of Warm Regards, we tackle what it means to talk about climate change at this unique moment in human history. I hope you’ll listen and share our podcast with your friends. We’re on SoundCloud, iTunes, and Twitter, and we’ll be working our way up to new episodes each week.

June 24 2016 12:47 PM

Why Facebook Is Right to Train Its Employees on Political Bias

Facebook is not a particularly diverse company. The majority of its U.S. workers are young men, and more than 90 percent are white or Asian American—an imbalance it shares with many other Silicon Valley technology companies.

It’s a problem Facebook recognizes and has attempted to remedy not only through its recruiting and hiring practices, but also through what it calls unconscious bias training. The idea is to train the company’s employees to recognize their own prejudices and stereotypes, so that they can try to correct for them.


Recently, however, the company came under fire after a former contractor on its trending news team accused his colleagues of a different kind of bias—the political kind. Now, Facebook COO Sheryl Sandberg says the company is trying to correct for that, too.

“We have a ‘managing bias’ class that all of our leaders and a lot of our employees have taken, that I was part of helping to create,” Sandberg said Wednesday in an interview with Arthur C. Brooks, president of the conservative American Enterprise Institute. She went on:

And we’ve focused on racial bias, age bias, gender bias, national bias. And we’re going to add in a scenario now on political bias. So when we think about helping people understand different points of view, and being open to different points of view, we’re dealing with political bias as well going forward.

The video of the interview is below, and the relevant portion starts at about the 16:40 mark.

Brooks called Facebook’s move to include politics in its bias training “really encouraging.” He followed up by asking Sandberg whether Facebook’s efforts would also include trying harder to hire more political conservatives. Her answer was a deft compromise between a “yes” and a dodge.

“We think to build a product that 1.6 billion people use, you need diversity. And what you really want is cognitive diversity, which is what you’re talking about—different thoughts.” She acknowledged that “you can get that by having diversity in the population” without quite committing to anything specific. She then pivoted smoothly to other ways you can achieve diversity of thought, recounting an anecdote about CEO Mark Zuckerberg encouraging employees to speak up when they disagree with him.

So, will Facebook try to hire more political conservatives? Maybe! But somehow I doubt it will be filling its Menlo Park headquarters with Trump supporters anytime soon.

Nor should it have to. A person’s politics are a matter of choice and speak to her judgment and character, in a way that race, gender, age, and nationality are not, and do not. It’s a distinction that seems to elude some of the conservative leaders whom Glenn Beck criticized as seeking “affirmative action for conservatives.”

Yet Sandberg is right that Facebook’s success as a company depends on more than just hiring the cleverest coders and product managers. It also hinges on the ability to avoid needlessly alienating huge swaths of its user base, such as the tens of millions of Americans who are likely to cast their votes this fall for Donald Trump. It certainly didn’t help when Gizmodo reported in April that Facebook employees had voted in an internal poll to ask CEO Mark Zuckerberg what the company could be doing to prevent a Trump presidency. And Facebook didn’t do itself any favors with its initial, dismissive response to the claims of bias in its trending news section. I argued at the time that the company should own up to the reality that its employees have political biases—everyone does, really—and then take steps to address them. After several false steps, it’s finally doing that.

Facebook has no particular legal or moral imperative to hire, accommodate, or appeal to Republicans or other conservatives. But its success as a company relies on achieving and maintaining a level of ubiquity among the populace that would be impossible if it were perceived as strongly partisan. In that respect, teaching Facebook’s sheltered young employees to realize that their political views are unrepresentative would be, if nothing else, a prudent business move.
