Future Tense
The Citizen's Guide to the Future

June 29, 2016, 4:44 PM

Why Did My Period-Tracking App Send Me an Anti-Brexit Email?

Five days after Britain voted to leave the European Union, I received a message from Clue, a menstrual health app, with the subject line “Better together” followed by a peace sign emoji and a message about unity. This is the age we now live in: a time in which the app that tracks my menstrual cycle and reminds me to take my daily birth control pill wants to share its belated viewpoint on the issue with me.

“I think my period tracking app just sent me an anti-Brexit email?” I texted a friend. Apparently this is so de rigueur in 2016 that all I got was a paltry “lol” in response.

What a time to be alive, I thought, when I can record observations about my shedding uterine lining on my mobile device and in turn receive input on important matters of foreign affairs!

To be clear, I am decidedly pro–period tracking and pro-Bremain. But even I wasn’t sure how I felt about my reproductive health and views on international politics intersecting in this particular way. I grew up in a feminist family that ran an abortion clinic in Texas, and I was raised with the belief that the personal is always political—but also with a deep suspicion of the political invading the personal, especially when it comes to my health.

Although the link between Brexit and the “blood coming out of [my] wherever,” as Donald Trump likes to say, may have seemed tenuous at first, it actually makes sense that an app to track the latter would want to educate its users about the consequences of the former. Initiatives like Clue and Planned Parenthood’s new Spot On aim to give medically accurate information to a generation raised with science-optional, abstinence-only sex education. Similarly, Team Bremain fought to spread its message as its opponents misled the public with factually inaccurate information—not unlike the Republican Party’s never-ending attack on facts with the dubious motive of “protecting women.”

Unsurprisingly, younger voters overwhelmingly flocked to Team Bremain's message of inclusion and unity, just as they have embraced technology to take control of their bodies and health. So while I may have scoffed a bit at first, I actually appreciate Clue’s effort to make the personal political and vice versa, and I hope my generation will continue to embrace it. We know that just as one missed birth control pill can have consequences, our votes do, too.

And that is why even though Britain may not remain, my period tracking app will. Because—like the U.K. and the EU—I believe we’re better together.

June 29, 2016, 1:39 PM

Go Update Norton Antivirus Right Now. Symantec Really Screwed Up.

Instead of building up to it gracefully, I'm just going to tell you right now: Make sure any Symantec security products you use are fully updated. On Tuesday, a Google researcher published details of serious vulnerabilities in Symantec's offerings, and you need to make sure you have the right patches before you read anything else. You can check this advisory to confirm that you have the most recent updates.

OK. Tavis Ormandy, a researcher in Google's Project Zero cybersecurity analysis group, revealed the "details of multiple critical vulnerabilities" in a blog post Tuesday. He explained that the bugs are in Symantec's "core engine," which underlies all of its products—including brands like Norton Security, Norton 360, and Symantec Endpoint Protection. Symantec said in its security advisory that patches should have downloaded automatically to every private user through the company's LiveUpdate system. For enterprise customers, some updates have to be installed manually.

So far there haven't been any reports of the vulnerability being exploited maliciously, but that doesn't lessen the severity of these security flaws. Ormandy writes that the Symantec Security Team was responsive when Project Zero brought up these vulnerabilities. He adds, though, that overall, "Symantec dropped the ball here. ... These vulnerabilities are as bad as it gets. They don’t require any user interaction, they affect the default configuration, and the software runs at the highest privilege levels possible."

Symantec has millions of individual customers, not to mention its enterprise offerings for everything from small businesses to massive institutions. In this case the problem was in Symantec's vetting system for new code. The "unpackers" in its products screen anything being downloaded, or any new program that wants to run, for malware that's been "packed" inconspicuously. The trouble is that Symantec products were doing this unpacking in the operating system kernel itself (the fundamental layer of the operating system, which coordinates everything else). As a result, attackers could hijack the unpacking process to take over the kernel, and by extension the entire computer. Ormandy summed it up: "Unpackers in the Kernel: Maybe not the best idea?"
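
To see the design lesson in miniature, consider this sketch of the safer alternative: do the dangerous unpacking in a locked-down, low-privilege worker process, so a booby-trapped file can at worst crash that worker rather than compromise the kernel. Everything here (the worker script name, the exit-code convention, the specific limits) is invented for illustration; it is not Symantec's code.

    # Illustrative only: run the risky unpacking step in a constrained,
    # unprivileged child process instead of at high privilege.
    # "unpacker_worker.py" is a hypothetical worker script.
    import resource
    import subprocess
    import sys

    def scan_file_sandboxed(path: str) -> bool:
        """Return True if the file is judged clean, False if flagged."""
        def constrain():
            # Cap CPU seconds and address space before the worker starts,
            # so a malicious archive can't run away with resources.
            resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
            resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))
            # A real scanner would also drop to an unprivileged UID here,
            # e.g., os.setuid(unprivileged_uid).
        proc = subprocess.run(
            [sys.executable, "unpacker_worker.py", path],
            preexec_fn=constrain,   # POSIX only
            capture_output=True,
            timeout=30,
        )
        return proc.returncode == 0  # hypothetical convention: nonzero = malware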

It's been clear for a long time that antivirus products don't offer a complete cybersecurity solution on their own. Symantec even admitted that in 2014. But for now, the programs remain an important line of defense. It's just troubling to be reminded that they can create as many problems as they solve.

June 29, 2016, 11:51 AM

Future Tense Newsletter: Preparing for the A.I. Tidal Wave

Greetings, Future Tensers,

“The trajectory of A.I. and its influence on society is only beginning,” Microsoft CEO Satya Nadella writes in Future Tense this week. Riffing on a prediction Bill Gates made about the internet more than 20 years ago, Nadella anticipates a “tidal wave” of developments in artificial intelligence research. With that prospect in mind, he proposes an array of industry priorities as we move ahead, insisting that we must protect privacy, guard against bias, and aspire to “algorithmic accountability.” In short, he holds, we need to think both ethically and empathically about emerging technologies—and ensure that those technologies can do the same.

As we’ve seen throughout this month’s Futurography course, driverless cars will almost certainly be one of the most immediate consequences of the A.I. technology that Nadella describes. Though the arrival of such vehicles will likely transform the automotive industry, Stanford-based researcher Stephen Zoepf argues that the coming revolution may underwhelm those of us literally carried along by it. Sarah Aziza writes that such incremental experience of change will be all the more evident, and perhaps all the more frustrating, in countries such as Saudi Arabia, since technological progress is no guarantee of social change.

Despite that, driverless cars will likely remain sources of anxiety for many, partly, Adam Waytz writes, because we tend to overestimate how much new developments will affect us. Nevertheless, Waytz believes we’ll quickly get used to self-driving cars, given “the capacity for our psychological immune system to make sense of negative events.” In the short term, Steve Casner suggests, we’ll probably need to worry more about semiautonomous vehicles than fully self-driving ones, since it’ll be hard for humans to take over in crises. But John Frank Weaver predicts that sooner or later our cars should be safe enough that they’ll be putting traffic police out of work, though there may still be reasons to pull a robot over.

Here are some of the other stories we read while copying and pasting redacted documents:

  • Security: Josephine Wolff explores the persistence of physical signatures in a digital world.
  • Medicine: Registering to be an organ donor is outmoded and needlessly complicated, but one nonprofit thinks we can update that process with the help of social media.
  • Hacking: Does it seem like there have been a lot of data breaches and leaks lately? Lily Hay Newman explains what’s been going on.
  • Trolling: We do ourselves a disservice when we treat Donald Trump like some basement-dwelling Internet monster, Whitney Phillips argues. He’s far more monstrous.

Jacob Brogan

for Future Tense

June 29, 2016, 10:55 AM

Hillary Clinton’s Technology Policy Is Surprisingly Solid

In an era when most politicians don’t even pretend to understand much about technology and innovation, it’s at least a little refreshing to see a campaign actually treat these critical issues with respect. The one in question is Hillary Clinton’s, and its just-released “Initiative on Technology & Innovation” has a lot to recommend it.

I’ll start with a disclosure: Despite some serious reservations about Clinton, I’m so freaked out by even the possibility that Donald Trump could become the next U.S. president that I plan to a) donate to the Clinton campaign and b) volunteer to help in other ways. (I’m also well aware of the all-too-normal disconnect between campaign vows and governing reality.) That said, the Clinton tech/innovation proposals offer a considerable amount of common sense. They suggest a practical worldview, though one that operates—perhaps too much—within boundaries created and controlled by the wishes of big business and government.

On the absolutely key question of internet access, the plan makes a laudable push for competition. It’s unfortunate that America has allowed telecom companies to take such overwhelming control over internet access. We’d have been vastly better off if we’d required the cable and landline phone operators to let other internet service providers use the existing lines and spectrum to offer competitive services, as has been done in other parts of the world. We didn’t, so we were left with insisting on network neutrality, the principle that we consumers at the edges of networks, not a tiny number of telecom giants, should make decisions about what data gets to our devices with what priority. Clinton strongly supports net neutrality.

One ray of hope in the competitive landscape is that in certain jurisdictions, governments, public-private partnerships, or (rarely so far) for-profit companies have installed networks in places like Chattanooga, Tennessee, and Lafayette, Louisiana.* They’re competing with major internet service providers like Comcast, Time Warner Cable, and Cox, which in many places have been slow to provide the kind of service we all need in the 21st century. Unfortunately, many state legislatures, doing the bidding of the phone and cable companies, have restricted or banned such competition at the local and regional level, but the FCC has been working to remove those barriers. The Clinton plan endorses the competitive alternatives (however inefficient this may be from an infrastructure standpoint).

The plan pushes for more investment in computer science and science, technology, engineering, and math training, and for greater diversity in the process. Speaking of education, one of America’s nuttier policies is to educate highly skilled people from other countries and then, once they’ve gotten advanced degrees, toss them out. Clinton’s plan would give them green cards.

Given the vital role of entrepreneurship in moving the economy forward, it’s gratifying to see that the Clinton campaign has also proposed deferring repayment of student loans for borrowers who start companies. This isn’t a magic bullet, but it’s at least a creative idea.

There’s some potential good news on the patent front, too. The plan pushes for reform to our dysfunctional system, in which the U.S. Patent & Trademark Office routinely issues poor-quality patents, giving patent trolls fertile ground in which to peddle their sleazy trade. Although the plan doesn’t address the quality issue—a significant lapse—it does aim to give the USPTO more resources so patent examiners can actually do their jobs; Congress has diverted patent fees to other programs in recent years, making the situation worse. The Clinton plan would also help address the troll problem by banning forum shopping by patent holders, the practice of bringing cases in jurisdictions that are notoriously kind to plaintiffs.

A major section of the plan calls for using technology better to make government more efficient. The Obama administration has made real progress on this in the past several years, and Clinton would expand on those efforts. More, and more useful, open data would be a key part of this.

There are some disappointments here. On copyright, the Clinton proposals are, unfortunately, vague. The system’s woes include grossly unbalanced (in favor of copyright holders) rules and holders’ routine abuse of “takedown” provisions that often remove entirely legitimate content from the web. Clinton’s call for modernization is useful, but it will look like weak tea to copyright reformers.

But perhaps the biggest flaw in the Clinton initiative is her continued belief that we can compromise on encryption and other key privacy matters in the name of overall security. This is apparent in the plan’s rejection of what it calls a “false choice” between security and privacy, when in fact the choice is between more security and less security: If we insist on building government backdoors into our products and services, we guarantee that we all will have less security in our daily lives.

How much of the Clinton initiative would get through a Republican Congress? Probably not much, given today’s poisoned politics. But many of the document’s key points would make excellent policy. If they generate serious debate, and lead to at least some change, maybe one of these days Clinton’s most prominent technology narrative won’t be about her home email server.

*Correction, June 29, 2016: This post originally misspelled Chattanooga.

June 29, 2016, 10:00 AM

What Facebook Thinks Your News Feed Is Really About

Facebook is changing the algorithm that decides what you see in your News Feed—yes, again. And for the first time, it’s publishing a philosophical statement of the values it wants that algorithm to prioritize.

The change is relatively straightforward, although it will probably stir some controversy anyway. Facebook says it is tweaking the settings of its news feed software to give a little more weight to posts shared by actual people—e.g., your friends, family, and others you interact with a lot on Facebook. Inevitably, giving more weight to one type of post means giving less to others. So you might see slightly fewer posts from groups, media outlets, brand pages, and other sources that are not actual humans whom you know in real life.
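
For a rough sense of what "a little more weight" means mechanically, here is a toy feed ranker in Python. The weights, the feature, and the scoring rule are all invented for illustration; Facebook has not published its actual model.

    # A toy sketch of friend-vs.-page reweighting. Purely illustrative;
    # not Facebook's ranking model, features, or weights.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        author_is_friend: bool       # a person you know vs. page/brand/publisher
        predicted_engagement: float  # e.g., modeled probability you'll engage

    FRIEND_BOOST = 1.2   # hypothetical extra weight for friends and family
    PAGE_WEIGHT = 0.9    # hypothetical discount for pages and publishers

    def score(post: Post) -> float:
        weight = FRIEND_BOOST if post.author_is_friend else PAGE_WEIGHT
        return weight * post.predicted_engagement

    candidates = [
        Post("a publisher you follow", False, 0.50),
        Post("your college roommate", True, 0.45),
    ]
    feed = sorted(candidates, key=score, reverse=True)
    # The friend's post (1.2 * 0.45 = 0.54) now outranks the page's (0.9 * 0.50 = 0.45).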

Less straightforward, and probably more noteworthy in the long run, is the statement of values. Facebook published the statement Wednesday morning, along with the ranking update, in a blog post headlined, “Building a better news feed for you.” From the post:

When we launched News Feed in 2006, it was hard to imagine the challenge we now face: far too much information for any one person to consume. In the decade since, more than a billion people have joined Facebook, and today they share a flood of stories every day. That's why stories in News Feed are ranked—so that people can see what they care about first, and don't miss important stuff from their friends. If the ranking is off, people don't engage, and leave dissatisfied. So one of our most important jobs is getting this ranking right.
As part of that process, we often make improvements to News Feed, and when we do, we rely on a set of core values. These values—which we've been using for years—guide our thinking, and help us keep the central experience of News Feed intact as it evolves. In our continued efforts to be transparent about how we think about News Feed, we want to share those values with you.

Interestingly, the company listed the first three values in a particular order, making it clear which takes precedence. The full statement is here, and it’s worth reading for anyone interested in understanding how Facebook thinks about itself and its goals for the News Feed. But I’ve listed below the seven values the company elucidated, starting with the first three, which together form the company’s definition of “meaningful content.” In parentheses are my brief thoughts on what each one means.

  1. Friends and family come first. (Despite suggestions to the contrary, Facebook still sees itself as a social network first and a media platform second.)
  2. Your feed should inform. (Facebook takes itself seriously as a destination for news, not just fluff.)
  3. Your feed should entertain. (Facebook admits it is also a destination for fluff.)
  • A platform for all ideas. (Even conservative ones!)
  • Authentic communication. (Clickbait, spam, and fake news are bad.)
  • You control your experience. (You can customize your feed … but only up to a point.)
  • Constant iteration. (Facebook knows its algorithm is far from perfect.)

Now, you could be excused for thinking that Facebook’s News Feed has just one value, and that’s to keep you coming back to Facebook, so the company can keep minting money. There’s some truth in that. It’s a for-profit company, and from a business standpoint, the News Feed’s chief purpose is to capture people’s attention, collect data on their interests, and show them advertisements.

But that’s a little too simplistic. After all, every company’s goal is to make money, but their ways of going about it vary widely. In the case of Facebook’s News Feed, there’s no single knob the company’s software engineers could turn to ramp up the profits, even if that were its only goal. Moreover, ramping up profits too aggressively today would almost certainly mean losing users tomorrow. Facebook, more than most other companies, has intentionally developed an ownership structure that allows it to play the long game, sacrificing short-term revenues in pursuit of loyalty. That gives it the leeway to optimize the News Feed algorithm for things other than just clicks, shares, and ad views. The challenge, as I explained in a Slate cover story earlier this year, is figuring out what to optimize for instead, and how to weight various metrics in pursuit of those ends.

Perhaps because it’s concerned that people view the News Feed algorithm as mercurial, mysterious, and perhaps even menacing, Facebook has taken to providing public updates on even relatively minute changes to its rankings. Its decision to compose and publish this value statement probably stems from similar impulses.

But it also reflects, in a real sense, some of the priorities that Facebook’s executives and product managers have developed for the News Feed over the years. They’ve arrived at these partly through their own intuition and biases, and partly by observing what matters to Facebook users. Increasingly, Facebook is also directly asking people, via batteries of surveys, what they like and don’t like about the service, and what they’d like to see more or less of in their feeds.

Adam Mosseri, the company’s vice president of product management, told me in an interview Tuesday that the primary motivation for placing friends and family first—both in the value statement and in the latest rankings update—was that users have repeatedly told Facebook that’s what they want. In many cases, their feeds have been overrun by posts from pages and publishers they follow, some of which post as often as 200 times a day. They may click on and like those posts, but ultimately they don’t want posts from their friends crowded out by all that professionally produced content.

Facebook acknowledges that the latest change to its service could lead to a decline in organic reach and referral traffic—that is, the number of people who see a post, and the number who click on the link—for some pages. That said, Mosseri told me the magnitude of the change is not great, and the average user probably won’t notice a huge difference.

It’s also worth noting that the change isn’t based on the type of content that’s being shared (e.g., personal update, photo, live video, or link to an article). It’s just based on who’s sharing it. So, you may see fewer Facebook posts from a publisher or page you follow, such as the New York Times or I Fucking Love Science. But if one of your friends happens to share their article or video, you’re actually a little more likely to see that in your feed than you were before. For publishers, the upshot is likely to be slightly more emphasis on content that lends itself to being actively shared by Facebook users, rather than simply consumed. In other words, BuzzFeed wins again.

But back to the values statement for a moment. The seven values Facebook chose to focus on might seem rather anodyne, even obvious. But they’re revealing nonetheless, both for their order and for what they don’t say.

For instance, media companies and brands are likely to complain loudly about the latest rankings update—perhaps justifiably, since they’ve come to rely heavily on Facebook as a source of traffic. You’ll notice, however, that not one of the values the company published today is, “keep brands and publishers happy.” All things equal, sure, Facebook would like to see publishers survive economically, if only so they can keep publishing stuff that Facebook users want to see. But it’s not about to prioritize that over, say, making sure users feel their feeds are “authentic.”

It’s also noteworthy that the company is so adamant about putting friends and family first. Sure, Facebook has always been a social network, and its ability to connect people online remains its greatest edge over rivals in the tech and media industries. Yet as I and others have noted, Facebook has morphed over time into a media platform, to the point that people are reportedly posting less about themselves, and more about what they’re watching or reading.

Mosseri told me he’s not so worried about what people are sharing with one another: Friends can bond over a news article on a topic of mutual interest, just as they can bond over vacation photos. But he said the company does view that bonding as central to its identity. It doesn’t want to become an impersonal RSS feed.

Few will be surprised by the second and third values on the list—to inform and to entertain—but it’s telling that Facebook puts “inform” first. Ever since the social network became overrun with viral games, listicles, and quizzes several years ago, its leaders have been wary of the perception that it’s fundamentally a waste of time. CEO Mark Zuckerberg, top deputy Chris Cox, and Mosseri all seem to genuinely believe in Facebook’s power as a source of information, whether that takes the form of news, commentary, or how-to videos about cooking. This may be another case where the company is prioritizing what users say they want over what they actually tend to click, like, or watch.

Finally, it shouldn’t escape notice that while Facebook does believe in giving users some control over their experience, that’s not the News Feed’s top priority. As much as Facebook has been surveying and listening to users, it isn’t about to hand them the keys to their own News Feed rankings.

June 29, 2016, 9:31 AM

Here’s How Hackers Make Millions Selling Your Stolen Passwords

This article originally appeared in the Conversation.

Data breaches are a regular part of the cyberthreat landscape. They generate a great deal of media attention, both because the quantity of information stolen is often large, and because so much of it is data people would prefer remained private. Dozens of high-profile breaches over the last few years have targeted national retailers, health care providers, and even databases of the federal government, exposing Social Security numbers, fingerprints, and background-check results. Though breaches affecting consumer data have become commonplace, there are other resources that, when targeted, lead to major security concerns. Recently, a hacker claimed to be selling over 32 million Twitter usernames and passwords on an underground marketplace.

But what happens after a breach? What does an attacker do with the information collected? And who wants it, anyway? My research, and various studies from other computer and social scientists, demonstrates that stolen data is usually sold by hackers to others in underground markets online. Sellers typically use their technical prowess to collect desirable information, or work on behalf of hackers as front men to offer information. Buyers want to use stolen information to its maximum financial advantage, including buying goods with stolen credit card numbers or engaging in money transfers to directly acquire cash. In the case of social media account data, buyers could hold people’s internet accounts for ransom, use the data to craft more targeted attacks on victims, or use the accounts as fake followers that pad legitimate accounts' reputations.

Because of the clandestine nature of the online black market, the total number of completed sales of stolen information is hard to quantify. Most sellers advertise their data and services in web forums that operate much like any other online retailer, such as Amazon, where buyers and sellers rate each other and the quality of the product—personal information—being sold. Recently, my colleagues and I estimated the income of data buyers and sellers using online feedback posted after sales were completed. We examined feedback on transactions involving credit and debit card information, some of which also included the three-digit Card Verification Value on the back of a physical card.

We found that data sellers in 320 transactions may have earned between US$1 million and $2 million. Similarly, buyers in 141 of these transactions earned an estimated $1.7 million to $3.4 million through the use of the information they purchased. These massive profits are likely a key reason these data breaches continue. There is a clear demand for personal information that can be used to facilitate cybercrime, and a robust supply of sources.

Getting to the market

Clandestine data markets are, it turns out, very similar to legal online markets like eBay and Amazon, and shopping sites run by legitimate retail companies. They differ in the ways the markets are advertised or hidden from the general public, the technical proficiency of the operators, and the ways that payments are sent and received.

Most of these markets operate on the so-called “open” web, on sites accessible, like most websites, with conventional web browser software like Chrome or Firefox. They sell credit and debit card account numbers, as well as other forms of data, including medical information.

A small but emerging number of markets operate on another portion of the internet called the “dark web.” These sites are only accessible by using specialized encryption software and browser protocols that hide the location of users who participate in these sites, such as the free Tor service. It is unclear how many of these dark markets exist, though it is possible Tor-based services will become more common as other underground markets use this platform.

Connecting buyers and sellers

Data sellers post information about what type of data they have, how much of it, pricing, the best way for a prospective buyer to contact them and their preferred method of payment. Sellers accept online payments through various electronic mechanisms, including Web Money, Yandex and Bitcoin. Some sellers even accept real-world payments via Western Union and MoneyGram, but they often charge additional fees to cover the costs of using intermediaries to transfer and receive hard currency internationally.

Most negotiations for data take place via either online chat or an email account designated by the seller. Once buyer and seller agree on a deal, the buyer pays the seller up front and must then await delivery of the product. It takes anywhere from a few hours to a few days for a seller to release the data sold.

June 28, 2016, 6:35 PM

Veteran Pilot Loses Simulated Dogfight to Impressive Artificial Intelligence

We've all heard that researchers are currently working to refine self-driving cars and other autonomous vehicles. The revolution is coming. It turns out, though, that they're also setting their sights on using artificial intelligence to navigate situations you may not have expected—like aerial combat in fighter jets.

Fighter pilots undergo extensive specialized training to be able to outwit opponents in battle, and that professional experience seems like it would be hard, even impossible, to replicate. But a new artificial intelligence system, ALPHA, has been besting expert pilots in combat simulations, even when the A.I. is given a handicap.

Given years of discussion about military drones, it seems like a fighter plane piloted by A.I. wouldn't be so surprising. But unmanned aerial combat vehicles are usually remote-controlled by a person, at least in part, and are used for things like attacks and reconnaissance, not one-on-one fighting. This has been changing, though. Last summer, P.W. Singer wrote in Popular Science that, "More than 80 nations already use unmanned aerial systems, or drones, and the next generation is now emerging. They will be autonomous, jet-powered, and capable of air-to-air combat."

ALPHA was developed by aerospace engineer Nick Ernest, a recent doctoral graduate of the University of Cincinnati whose company, Psibernetix, works with the Air Force Research Laboratory. ALPHA has been victorious in numerous simulated battles against top fighter pilots, including a series in October against retired United States Air Force Colonel Gene Lee, who described the experience this way:

It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed. ... Sure, you might have gotten shot down once in a while by an AI program when you, as a pilot, were trying something new, but, until now, an AI opponent simply could not keep up with anything like the real pressure and pace of combat-like scenarios.

ALPHA's prowess is impressive, but equally amazing is the tiny computer that runs it. For such a complicated set of decision-making algorithms, ALPHA requires very little processing power, running on a $35 Raspberry Pi minicomputer. ALPHA uses what are called "fuzzy logic algorithms" to form a “Genetic Fuzzy Tree” system that breaks big problems down into smaller chunks so the system can evaluate which variables are relevant to a particular decision and which of those are most important. This allows the system to work more efficiently and rapidly.
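
To give a flavor of what those fuzzy rules look like, here is a deliberately tiny Python sketch. The membership functions, the thresholds, and the disengage rule are all made up for illustration; ALPHA's actual Genetic Fuzzy Tree is vastly larger and is evolved rather than hand-written.

    # Toy fuzzy logic: inputs map to *degrees* of membership in overlapping
    # categories (0.0 to 1.0), and rules fire to a degree rather than
    # all-or-nothing. Entirely illustrative; nothing here is ALPHA's code.
    def close(distance_km):
        # Degree to which "the enemy is close" -- full at 0 km, zero at 10 km.
        return max(0.0, min(1.0, (10 - distance_km) / 10))

    def low_fuel(fraction):
        # Degree to which "fuel is low" -- full at empty, zero above 30%.
        return max(0.0, min(1.0, (0.3 - fraction) / 0.3))

    def should_disengage(distance_km, fuel_fraction):
        # Fuzzy AND is commonly taken as the minimum of the memberships.
        return min(close(distance_km), low_fuel(fuel_fraction))

    print(should_disengage(4.0, 0.1))  # 0.6: a moderately strong case to disengage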

ALPHA still flies in a simulated world, but as the technology behind combat drones and autonomous vehicles continues to evolve, it seems more and more likely that the two will converge in something like a real-world version of ALPHA. It's a powerful technology, but it makes you wonder whether we as humans really want to be getting "better" at war. Hopefully these advances will mean fewer human casualties.

June 28, 2016, 5:24 PM

Redacting Digital Documents Is Easy. Why Do People Keep Doing It Wrong?

What should have been a public relations coup turned into a minor fiasco this week when House Democrats publicly released a cache of digital documents related to the Benghazi committee’s inconclusive investigations. Though those documents were supposed to make the GOP look bad, one instead revealed compromising information about Hillary Clinton adviser Sidney Blumenthal. The Los Angeles Times found that seemingly redacted portions of a transcript featuring Blumenthal were actually available if you copied those sections from the PDF and pasted them into another document.

As Slate’s Ben Mathis-Lilley rightly points out, it’s “Embarrassing for House Democrats because they screwed up a process that can be successfully completed with a single black marker.” But it’s also a mistake that’s more common than you might think, one that has everything to do with our fundamental confusion about an increasingly digital world. In fact, it happens because would-be censors act as if they’re using black markers, despite the very different needs of electronic documents.

Here are just a few notable incidents: The Transportation Security Administration made the same error as the House Democrats when it released a screening manual in 2009, sending out a PDF in which, according to Wired, employees “merely overlaid black rectangles on the sensitive text … instead of cutting the text itself.” Among other details, those obscured sections included information about CIA protocols for handling foreign dignitaries. In 2011, a U.S. District Court’s opinion accidentally included redacted information about Apple’s business dealings, accessible by the same copy-paste trick. Though those revelations weren’t especially compromising, that same year, the British Ministry of Defense inadvertently leaked classified details about nuclear submarines that it thought it had censored, a considerably more consequential breach. Other examples abound, especially in legal filings.

This copy-paste workaround comes down to the way that PDFs package and present data. Timothy B. Lee explains that PDFs generally work through vector-based graphics, effectively stacking multiple image layers atop one another to create the total picture you see on a given page of a document. (This is why you’ll sometimes watch as the various elements of an image gradually pop into view after you load up an especially complex file.) When you’re working in this format, drawing a black square over the text with the shape tool—much as you would hide sections of a physical document with a marker—may visually obscure information, but it doesn’t actually strip it from the document. The words are still there, even if they’re temporarily hidden when you look at the file in Acrobat or some other viewer.
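
You can see the failure mode for yourself in a few lines of Python, using the open-source pypdf library. (The file name below is a stand-in for any PDF "redacted" by drawing boxes over the text.)

    # Demonstration of the copy-paste problem: text "hidden" under a drawn
    # rectangle is still stored in the PDF's text layer and can be extracted.
    from pypdf import PdfReader

    reader = PdfReader("redacted_transcript.pdf")  # hypothetical file name
    for page in reader.pages:
        # extract_text() pulls the underlying text regardless of any shapes
        # (black boxes included) drawn on top of it.
        print(page.extract_text())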

This problem is hardly unknown, least of all to Adobe, which created the format in the first place. Rick Borstein, who maintains a law blog for the company, writes, “Simply covering up text and graphics [with] black rectangles is not redaction. It is, however, a recipe for disaster.” Wholly aware that avoiding such catastrophes is necessary, the company includes a robust redaction tool within Acrobat. Though the results are visually the same—you’ll end up with big black boxes over the things you’ve hidden—the tool removes the underlying information from the document.

There’s plenty of information about that tool available for those willing to dig around just a little. In a 2010 article—perfectly timed to prevent some of the most notorious redaction screw-ups, if only anyone had been paying attention—Borstein detailed some of its features, including redactions across multiple pages. As Lisa Needham writes on the blog Lawyerist, the Acrobat redaction tool can also remove metadata from a document—stripping it of information about, say, the computer on which it was written. And though the tool can clearly be hard to find, Borstein has even put together a post showing users how to track it down without digging through menus. All of this is to say that there’s little excuse for ineffective redaction.

There are, of course, plenty of other ways to lazily hide information without really eliminating it: A document of digital redaction guidelines from the District of Columbia Circuit Court lays out a handful of other worst practices that would-be secret keepers should avoid. For example, it genially explains, “Changing the text’s font to white will make it look as though the words disappear, but they don’t!” And there are, of course, other ways to leave unwanted information in a document: Microsoft Word’s track changes feature can, as an article in the Florida Bar Journal suggests, inadvertently convey incriminating details if you don’t delete past revisions and comments before sending a file along. It’s easy, however, to forget that such information is there if you tell Word to hide markup details, as many writers do while giving their work a final pass.

These are the sort of errors that we make when we refuse to recognize that digital documents are far more complex than their physical brethren. The House Democrats’ humiliating oversight, and other incidents like it, follow from a shared misapprehension, the belief that if it looks like paper it must also behave like paper. Think of it as a kind of aspirational skeuomorphism, a fantasy that paper’s qualities persist across the different media that imitate it.

It may be the very ease of copying from one document and pasting into another that helps us maintain this illusion. The feature is all but essential to modern computing platforms, so much so that we’re often baffled when it’s not available. As Lily Hay Newman has shown in Future Tense, however, things are rarely as simple as they seem, not least of all because “Copy and paste … doesn’t just magically interoperate between applications.” Comfortable with the relative ease of dropping, say, a tweet into an email, we forget how much is going on behind the scenes to make that transfer possible. Instead, we’d do well to remember just how remarkably complex that common feature is—and guard our secrets accordingly.

June 28, 2016, 11:34 AM

Facebook Thinks It Has Found the Secret to Making Bots Less Dumb

If there’s one thing we’ve learned about bots over the years, it’s that they aren’t too bright. From Eliza to Tay, the best-known chat bots generally rely on a distinctive personality to cover for their inability to understand what you’re saying. Meanwhile, business-oriented bots such as Microsoft’s Clippy, Slackbot, and Poncho tend to be inflexible, because they’re hard-coded with preset responses to specific queries. And just when you think you’ve found a bot that’s really impressive—say, Facebook’s M, or x.ai’s Amy Ingram—it turns out that there are humans behind the curtain stepping in to solve the problems the computer can’t.

This year began with a fresh wave of bot hype, which quickly petered out when users found that the new generation of artificial interrogators was only marginally more useful than the last one. Yet there is still reason to believe that the bots of tomorrow will be smarter than today’s—and, more importantly, that they’ll be able to learn and improve over time.

New research from FAIR, Facebook’s artificial intelligence research arm, might help to point the way. Last year the team introduced a new type of machine-learning model for language understanding, called “memory networks.” The idea was to combine machine learning algorithms—specifically, neural networks—with a sort of working memory, letting bots store and retrieve information in a way that’s relevant to a given conversation. Facebook demonstrated the technology by feeding its software a series of sentences that convey key plot points from Lord of the Rings, then asking it questions such as, “Where is Bilbo now?” (The system’s reply: Grey-havens.)

This month, the team pre-published a new paper on arXiv that generalizes the memory-networks approach so that it can better interpret unstructured data sources and published documents, such as Wikipedia pages, rather than just specifically designed “knowledge bases” that store information one fact at a time. That’s important because knowledge bases tightly constrain the information that’s available to a bot, as well as the type of questions you can ask. (Try asking Poncho about something other than the weather.) If Facebook’s algorithms can start to interpret natural language data sources such as Wikipedia in a way that makes sense in a given conversational setting, it opens the potential for bots that can answer all kinds of questions on a vast range of topics. FAIR calls the new approach “key-value memory networks.”
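
At its core, the key-value approach is a soft lookup: embed the question, score it against each memory key, and return an attention-weighted blend of the corresponding values. Here is a stripped-down numpy sketch of that read step, with random vectors standing in for the learned embeddings the paper describes.

    # A minimal key-value memory "read." The vectors are made-up stand-ins;
    # in the real model, keys and values are learned embeddings of, say,
    # Wikipedia sentences. Illustrative only, not FAIR's implementation.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def kv_memory_read(query, keys, values):
        """Attend over memory keys; return the weighted sum of their values."""
        scores = keys @ query      # similarity of the question to each key
        weights = softmax(scores)  # turn similarities into attention weights
        return weights @ values    # blend the values accordingly

    rng = np.random.default_rng(0)
    query = rng.normal(size=8)          # embedding of the user's question
    keys = rng.normal(size=(5, 8))      # embeddings of 5 stored memory slots
    values = rng.normal(size=(5, 8))    # what each slot contributes when addressed
    answer_vector = kv_memory_read(query, keys, values)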

Poncho is good at one thing. (Screenshot: Facebook Messenger for iOS)

So far, Facebook’s system can’t answer questions as accurately when reading from a document as it can when working from a structured knowledge base. But Facebook says its method significantly closes the accuracy gap between the two. And the memory-networks approach allows a bot to store not only the relevant source data, but the questions you’ve already asked it and the responses it has given. That way, when you ask a follow-up, it knows not to repeat the same information, or to ask you for information you’ve already given.

Facebook is already using memory networks in M, the do-it-all virtual assistant that lives inside the Messenger app (provided you’re among the handpicked group of beta testers with access to it). They come in handy when, for example, you ask M to make a restaurant reservation.

Rather than simply launching into a predefined list of questions—“What time?” “What kind of food?” “How many people?”—it can extract and store the relevant information over the course of a more natural series of questions and answers. So if you say, “I’m looking for a Mexican restaurant for five people tomorrow night,” it doesn’t have to ask you, “what kind of food?” or “how many people?” And if you suddenly get distracted and ask it, “Who is the president of the United States?” it can quickly reply, “Barack Obama,” then remind you that you still need to tell it what time you’d like to have dinner tomorrow night.
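
Here is a hedged sketch of that slot-filling behavior in Python. The regular expressions and the question loop are entirely illustrative (real systems use learned language models, and nothing below is Facebook M's code), but they show how a small working memory keeps a bot from re-asking what it already knows.

    # Toy slot-filling dialogue: remember which reservation details have
    # already been heard, and only ask about the ones still missing.
    import re

    SLOT_PATTERNS = {
        "cuisine": r"\b(mexican|italian|thai|sushi)\b",
        "party size": r"\bfor (\d+|two|three|four|five|six) people\b",
        "time": r"\b(tonight|tomorrow night|\d{1,2}(:\d{2})? ?[ap]m)\b",
    }

    memory = {}  # the conversation's working memory of filled slots

    def update_memory(utterance: str) -> None:
        for slot, pattern in SLOT_PATTERNS.items():
            match = re.search(pattern, utterance.lower())
            if match and slot not in memory:
                memory[slot] = match.group(0)

    def next_question() -> str:
        for slot in SLOT_PATTERNS:
            if slot not in memory:
                return f"What {slot} would you like?"
        return "Great, booking it now!"

    update_memory("I'm looking for a Mexican restaurant for five people")
    print(next_question())  # "What time would you like?" -- the rest is already known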

Facebook isn’t the only company that’s working to combine machine-learning algorithms with contextual memory. Google’s artificial intelligence lab, DeepMind, has developed a system that it calls the Neural Turing Machine. In an impressive demonstration, the Neural Turing Machine taught itself to use a copy-paste algorithm by observing a series of inputs and outputs.

Facebook Chief Technical Officer Mike Schroepfer has called memory “the missing component of A.I.” And FAIR research scholar Antoine Bordes, who co-authored the papers on memory networks, told me he believes it could hold the key to finally building bots that interact naturally, in human language. “The way people use language is very difficult for machines, because the machine lacks a lot of the context,” Bordes said. “They don’t know that much about the world, and they don’t know that much about you.” But—at last—they’re learning.

June 28, 2016, 9:27 AM

3-D Printing Helped This Cancer Survivor Recover Some of What He Lost to Disease

It’s increasingly easy to be cynical about 3-D printing, as a recent Newsweek story on the industry’s disappointments shows all too well. But once you look past the supposedly revolutionary promise of the technology, there are small, meaningful stories about it that are worth telling. One such story comes from Shirley Anderson, a cancer survivor who received a prosthetic jaw thanks to advances in 3-D modeling.

Anderson lost his jaw and Adam’s apple to a series of surgeries and other cancer treatments, leaving him unable to speak or eat solid food. He eventually met Travis Bellicchi, a maxillofacial prosthodontist based at Indiana University. Though Bellicchi was able to make a complex traditional prosthesis for Anderson, the final product was uncomfortable, and Anderson could wear it for only a few hours at a time.

In an attempt to find a more comfortable solution, Bellicchi turned to students at the university’s School of Information and Computing, who were able to create a model of Anderson’s face far less painfully. According to a blog post from Formlabs—which makes the printer Bellicchi and his collaborators used—the resulting prosthesis “looks more realistic and is much lighter and more breathable so that Shirley feels comfortable wearing it for a longer period of time.”

There are, of course, caveats: The 3-D printed prostheses cannot replace what Anderson lost to his cancer treatments. He still mostly communicates by writing on a white board, for example, and it doesn’t sound like eating has gotten any easier. Nevertheless, it’s a significant reminder that 3-D printing can be a powerful resource, so long as we remember that the advancements it offers are mostly incremental.
