Future Tense
The Citizen's Guide to the Future

Jan. 28 2016 5:02 PM

A Computer Has Beaten a Human Champion at Go. What Next?

Sometimes it’s fun to look back at old predictions of technological progress and compare them to present-day reality.

Other times it’s a little terrifying.

On May 12, 2014—less than two years ago—Wired magazine ran an article headlined “The Mystery of Go, the Ancient Game That Computers Still Can’t Win.” It described the ongoing—and, to that point, fruitless—quest by artificial intelligence researchers to build a computer program that could defeat a top human player at Go, a board game that’s orders of magnitude more complex than chess.

The problem: The number of possible moves and configurations in Go is too great for any existing computer to fully analyze. Humans can’t fully analyze them either, of course—which is why the top players rely heavily on intuition, which has never been machines’ forte. The top program at the time, called Crazy Stone, had “no chance” of beating a human champion, its creators admitted. Asked to speculate on how long it might be before anyone accomplished the feat, they offered a guess of 10 years, a number that Wired warned “may prove too optimistic.”
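
A rough back-of-the-envelope calculation conveys the scale. Using commonly cited average branching factors and game lengths (the exact values vary by source, so treat this as an illustration rather than a precise count of legal positions), the two game trees compare roughly as follows:

    # Rough comparison of chess and Go game-tree sizes, using commonly
    # cited average branching factors and game lengths (ballpark figures).
    CHESS_BRANCHING, CHESS_PLIES = 35, 80  # ~35 legal moves, ~80 half-moves per game
    GO_BRANCHING, GO_MOVES = 250, 150      # ~250 legal moves, ~150 moves per game

    chess_tree = CHESS_BRANCHING ** CHESS_PLIES
    go_tree = GO_BRANCHING ** GO_MOVES

    print(f"Chess game tree: ~10^{len(str(chess_tree)) - 1}")  # ~10^123
    print(f"Go game tree:    ~10^{len(str(go_tree)) - 1}")     # ~10^359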

It did not prove too optimistic.

On Wednesday, Google announced that AlphaGo, a program built by its DeepMind artificial intelligence lab, had defeated the European Go champion, without a handicap, in a five-game match last October. The score: 5-0.

The next test will come in March, when AlphaGo challenges the world champion, Lee Sedol, in a five-game match in Seoul. Lee is a legendary master of the game, and by consensus the best player of modern times. It will be the Go equivalent of the famous chess match between IBM’s Deep Blue and Garry Kasparov in 1996.

Google broke the news in an article published this week in the journal Nature, titled “Mastering the game of Go with deep neural networks and tree search.” The DeepMind team combined a well-established algorithmic technique known as Monte Carlo tree search, which has helped computers defeat humans at many less-complex games, with cutting-edge deep neural networks. AlphaGo actually uses two separate neural networks: a “policy network” that limits the scope of its analysis to a handful of attractive options for each move, and a “value network” that peers about 20 moves into the future to see which of those options appears the most promising. The team explains how this works in a video from Nature.
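
To make that division of labor concrete, here is a deliberately tiny sketch of how a policy network and a value network can be combined with lookahead search. The “networks” below are stand-in heuristics on a toy game in which a position is just an integer; the real AlphaGo uses trained deep networks and full Monte Carlo tree search, so this illustrates the structure of the idea, not Google’s code.

    import math

    # Toy sketch: a "policy network" proposes a handful of promising moves,
    # and a "value network" scores the positions reached a few moves ahead.
    # Both are stand-in heuristics on a made-up game where a position is an
    # integer and a move adds to it.

    def policy_network(position):
        """Return a few candidate moves with rough prior probabilities."""
        moves = [-2, -1, 1, 2]
        return [(m, 1.0 / len(moves)) for m in moves]

    def value_network(position):
        """Estimate the win probability for the player to move (toy heuristic)."""
        return 1.0 / (1.0 + math.exp(-position))

    def apply_move(position, move):
        """Play a move and flip the board to the opponent's point of view."""
        return -(position + move)

    def evaluate(position, depth):
        """Score a position by following only the policy network's suggestions
        and judging the frontier positions with the value network."""
        if depth == 0:
            return value_network(position)
        # A child position is scored from the opponent's perspective,
        # so our score for it is 1 minus theirs.
        return max(1.0 - evaluate(apply_move(position, m), depth - 1)
                   for m, _prior in policy_network(position))

    def choose_move(position, depth=4):
        """Pick the candidate move with the best lookahead score."""
        return max(policy_network(position),
                   key=lambda mp: 1.0 - evaluate(apply_move(position, mp[0]),
                                                 depth - 1))[0]

    print(choose_move(0))

The point of the pairing is that the policy network keeps the branching factor small, while the value network spares the search from having to play every line out to the end of the game.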

The implications of Google’s achievement stretch beyond the realm of games. Neural networks can be used in all sorts of settings that demand the human-like capacity to evaluate various strategies under conditions of uncertainty. Virtual assistants and medical diagnostics come to mind, and Google mentioned climate modeling in a blog post. At this point, to identify all the possible applications of such a fundamentally potent technology would require a feat of imagination in its own right. It’s no wonder that Facebook’s top AI researchers are hard at work on the very same problem. (They’re a little behind.)

And yet it would be a mistake to conclude, even half-jokingly—as many were quick to do in the wake of Google’s announcement—that machines are now intellectually superior to humans, or anywhere close, really. As complex a game as Go is, it isn’t really “a microcosm of the real world,” as DeepMind founder Demis Hassabis claimed to the New York Times. It is a constrained environment, albeit a vast one, in which both contestants have access to perfect information, albeit too much information to fully process, and share the same perfectly defined goal. Real life is simply not a game in this sense, and no algorithm yet devised could begin to approach the mental flexibility required to navigate it in the way that humans do.

I’ll hold off on predicting that it will never happen, though. At this rate, just about anything is possible.

In the meantime, there remain other games and tasks for computers to conquer. Next on the agenda after Go might be No Limit Texas Hold ’Em poker. In 2015, a software program solved two-person limit hold ’em, but humans still reign at no-limit—for now.

Nor has Go been “solved,” exactly. AlphaGo is really good at the game, but it isn’t perfect, nor is it designed to be. And for what it’s worth, South Korea’s Lee is predicting victory in his March match against AlphaGo. “I heard Google DeepMind’s AI is surprisingly strong and getting stronger, but I am confident that I can win, at least this time,” he said in a statement.

He may be right. People sometimes forget that Kasparov won that 1996 match, 4-2. That’s probably because Deep Blue went on to beat him in a rematch the very next year.

Jan. 28 2016 3:48 PM

Do Silicon Valley and Ancient Greece Share a Secret Recipe for Innovation?

Creativity and ingenuity aren't spread evenly. Throughout history, certain locations have become hubs for artistic, business, and technological innovation, for reasons that aren't always readily apparent. Why Silicon Valley right now? Why Florence during the Renaissance?

In search of answers, acclaimed travel writer and former NPR correspondent Eric Weiner traveled the world to investigate the relationship between society's innovative ideas and their surroundings. The result is his new book, The Geography of Genius: A Search for the World's Most Creative Places, From Ancient Athens to Silicon Valley.

Join Future Tense at Civic Hall in New York on Thursday, Feb. 11, for a conversation between Eric Weiner and Dayo Olopade, media partners strategist for Facebook, on why certain places at certain times become the capitals of human progress. For more information and to RSVP, visit the New America website.

Participants:

Dayo Olopade
Media partners strategist, Facebook
Author, The Bright Continent: Breaking Rules and Making Change in Modern Africa

Jan. 28 2016 1:58 PM

Ben Carson’s Cybersecurity Plan Is Terrible. But At Least He Has One.

It’s old news by now that Republican presidential candidate Ben Carson—despite his medical degree—has a tenuous relationship with science. So I didn’t exactly have great expectations for his campaign’s cybersecurity plan, modestly titled “Prescription for Winning the 21st Century Cyberspace Race.” To be honest, I wasn’t expecting a dedicated cybersecurity plan at all, much less an op-ed dedicated to the topic by Carson in Re/code this week.

The op-ed makes several not-very-interesting, not-very-original points: that our society is very dependent on computers, that a hypothetical large-scale attack on the power grid would be devastating, that cybersecurity breaches can have very high costs. (And also that no one has any idea what those costs really are—Carson cites the cost of identity theft as being “anywhere from $25 billion to $50 billion annually.” There are also, of course, identity theft cost estimates out there in the $5 billion and $10 billion range. Take your pick.)

All of this would be perfectly standard and even expected if Carson were selling a cybersecurity product or consulting service. Instead, he’s selling a cybersecurity policy plan—and that alone is pretty unusual for someone running for office.

We take it for granted that presidential candidates will have economic plans and education plans and health care plans and foreign policy plans. (We may not want to hear about them in great detail, but it’s comforting to know they exist.) But the question of how a candidate would deal with online threats has rarely—if ever—been a decisive factor in whether people want to vote for them.

This is probably for the best, as it turns out, because not even Carson, with his eight-page plan, has any idea what he would actually do to deal with online security threats. But he offers a rousing nationalist (and largely nonsensical) analogy to the Soviet-era space race! (Andrea Peterson neatly picked it apart in the Washington Post.) His plan also includes no fewer than seven explicit references to “We the People” in a completely incoherent attempt to link cybersecurity back to the preamble of the U.S. Constitution. And he even has a name all picked out for a spiffy new government agency: the National Cyber Security Administration.

“This is not another federal bureaucracy,” Carson writes of the proposed NCSA (I think it’s safe to assume that it’s supposed to make you think of NASA), except that it would, of course, be another federal bureaucracy—on top of all the others currently vying to have their say on cybersecurity. (See, for instance, U.S. Cyber Command; the Department of Homeland Security National Cybersecurity and Communications Integration Center; the Department of Commerce; the Department of Energy; the Federal Communications Commission’s Communications Security, Reliability and Interoperability Council; and even the Food and Drug Administration, just to name a few.)

Carson claims that “the NCSA will consolidate inefficient government initiatives and offices, eliminating stovepipes and providing a central point for public-private cooperation.” That sounds great, except it’s almost exactly what the DHS NCCIC was intended to do when it was set up in 2009 to be a “central location where a diverse set of partners involved in cybersecurity and communications protection coordinate and synchronize their efforts.”

The weakness of the NCSA model might not matter so much were it not the only concrete thing Carson actually proposes to do about cybersecurity. Here are some of his other ideas: He wants us to “be prepared to defend not only our sensitive information, but also the networks, grids and servers that keep America running.” (Good call! Why hasn’t anyone thought of that before?) He also thinks we should “educate ourselves about the dangers which lurk online, and secyre [sic] our own computers against those who would take advantage of us,” and he’d like all of the government agencies to “work together, along with critical infrastructure providers, to protect not only their own systems, but the American people as a whole.”

Reading his plan, I have no clue how Carson intends to make any of that happen—and more than that, I fear he has no clue how many people, both within the government and outside it, have been working on exactly these issues for many years with relatively little success. I fear he has no sense of how hard it is to do the things he’s talking about. None of these are new ideas—except perhaps his proposal to establish a single phone number for people to call with any complaints or concerns about privacy and civil liberties; I can’t say I’ve heard that one before (or that a hotline sounds like a particularly inspired privacy solution).

Still, at least he has a centralized cybersecurity plan—or, more accurately, a centralized list of broad cybersecurity goals. Other candidates who talk about cybersecurity tend to do so in more modest and fragmented ways but are similarly vague on details. Hillary Clinton, for instance, mentions on her website, as part of her energy plan, that she wants to create a Presidential Threat Assessment and Response Team to help assess cybersecurity threats to the power grid. Her plan for regulating the financial sector also includes a nod to cybersecurity concerns, advocating for “regulators to consider cyber-preparedness as a significant part of their assessments of financial institutions” as well as better information-sharing and a greater emphasis on security in contracts with third-party vendors.

Donald Trump, to no one’s surprise, is the most vague of all when it comes to the specifics of his plan to address online threats from China. “China’s cyber lawlessness threatens our prosperity, privacy and national security,” his website states. “We will enforce stronger protections against Chinese hackers and counterfeit goods and our responses to Chinese theft will be swift, robust, and unequivocal.” (And that second sentence is bolded and underlined.)

So don’t bother voting based on the candidates’ cybersecurity positions because they’re all pretty much the same: Do a better job defending against online threats to critical infrastructure, intellectual property, and sensitive data—just as soon as we figure out how. Only one of them has promised a privacy hotline, however.

Jan. 27 2016 5:34 PM

Netizen Report: Change Is on the Horizon for Iran. But Let’s Not Forget Human Rights.

The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in Internet rights around the world. It originally appears each week on Global Voices Advocacy. Amira Al Hussaini, Mahsa Alimardani, Ellery Roberts Biddle, Weiping Li, and Sarah Myers West contributed to this report.

On Jan. 16, when nuclear sanctions were lifted and Iranian American prisoners such as Washington Post reporter Jason Rezaian were released, it seemed that Iranians might have new hope for progress.

But while the world watched Iran re-establish ties with the West, Iranian blogger-activist Hossein Ronaghi-Malaki was spending his last hours with his family before returning to prison to serve his 15-year sentence for dissident commentary about the 2009 presidential elections.

Ronaghi-Malaki was released June 14, 2015, to receive medical treatment but was recently summoned back. Shortly before returning to prison he tweeted about his determination to get justice.

A report from IranWire also revealed that another prisoner released in the swaps, Nosratollah Khosravi-Roodsari, appears to have been under surveillance by Iranian authorities. Sources close to Khosravi-Roodsari say that his SMS messages were surveilled and were used to arrest him.

Censorship in Pakistan, with or without YouTube
A three-year ban on YouTube recently came to an end in Pakistan after Google launched a local version of the site that gives the government greater capacity to request that certain videos be removed from the site. At the same time, the country’s Ministry of IT is pushing to pass a cybercrime law that would tighten restrictions on online speech considerably, leading to a debate over the impact of censorship. As leading Pakistani digital rights advocate Sana Saleem writes, “With or without YouTube, it is up to each of us to determine whether we would like to think critically and claim our right to unhindered access to information, or embrace a state-crafted version of it.”

Protesters in Poland speak out against surveillance
Polish citizens took to the streets Jan. 23 to protest a draft surveillance law that would grant authorities broad access to citizens’ Internet and telecommunications data without court approval or any other form of meaningful oversight. The bill has passed the lower house of parliament but has not yet been approved by the upper house. Alongside protests, Polish netizens are sharing information about how to protect themselves from surveillance using encryption, virtual private networks, and Tor. Thousands of citizens have also signed an online petition opposing the law.

Kuwait tightens laws on critical speech
A new cybercrime law in Kuwait sets strict limits on free expression online, according to human rights groups in the Gulf region. The law criminalizes online activities that would “prejudice public morality,” without clearly defining what would fall under this category. It also institutes stricter punishments for online journalists who criticize state leaders or the constitution and those who incite breaches of public order or violation of public laws. The law came into force Jan. 12, 2016.

Moroccan free expression advocates’ trial postponed again
Four journalists and three human rights advocates facing prosecution in Morocco on charges of “receiving foreign funding” and threatening the “internal security of the state” appeared in court on Jan. 27, only to have their trial postponed for the second time since the charges were filed. Blogger and activist Hisham Almiraat, a member of Global Voices who is among the accused, tweeted:

Russian activist fined for posting “propaganda of homosexuality” online
Russian LGBT activist Sergey Alekseenko was found guilty of distributing “propaganda of homosexuality among minors” on the Internet, via the Russian social network VKontakte. Curiously, the “propaganda” was actually a quote from a report by Roskomnadzor (Russia’s state media regulator) about Elena Klimova, the founder of the “Children-404” LGBT youth support group, who faced the same charges in 2014. The sentence Alekseenko quoted from that report read: “Being gay means being a brave and confident person who has dignity and self-esteem.”

“Why I went into exile”: A Bangladeshi blogger tells his story
In the face of threats, intimidation, and multiple assassinations of secular bloggers in Bangladesh, many bloggers have stopped writing, and some have gone into hiding. Blogger and online activist Mahmudul Haque Munshi is among them; he recently went into exile after receiving multiple death threats. In a recent post on his blog Swapnokothok (Dreamweaver), he described his experience. Global Voices’ Bangla editor Rezwan translated an abridged version of the post, in which Munshi writes:

I never thought that I would leave my country. … But then I saw that I was being followed everywhere. I became very afraid. I remembered Niloy [another blogger, slain in August 2015] said that they started following him two months before his death. …
Those who wanted me to die in vain, I would like to tell them, you will not win. I will live, I have to do a lot of things. I cannot die before achieving some of that.

The 411 on Facebook’s new privacy feature
Android users of the Facebook app can now connect to the social media service via Orbot, the Tor Project’s proxy app for Android. With the change, anyone on Facebook can use Tor’s anonymity features to hide their geographic location when they log in. The tool will improve users’ capacity to circumvent controls in places like China and Iran, where Facebook is blocked. While this does not mean that users can access Facebook anonymously (the social media platform continues to maintain its real-name policy), it does expand Facebook’s reach.

New Research
•    “Mobile Censorship in Iran”—Mahsa Alimardani and Datactive
•    “Entanglements: Activism + Technology”—Fibreculture Journal, special edition
•    “DNS-based traffic correlation attacks”—Philipp Winter

Jan. 27 2016 5:21 PM

If You Eat at Wendy’s, Your Credit Card Number May Be Compromised

Big retailers like Target and Home Depot have had memorable data breaches, but lots of fast food companies have dealt with hacks as well. On Wednesday, Wendy's said that it is investigating a possible credit card breach, joining the noble ranks of Dairy Queen and Jimmy John's.

Financial institutions began noticing a pattern of fraudulent activity on some credit cards shortly after they were used at Wendy’s locations. “We have received this month from our payment industry contacts reports of unusual activity involving payment cards at some of our restaurant locations,” Wendy’s spokesman Bob Bertini told Krebs on Security. “We’ve hired a cybersecurity firm and launched a comprehensive and active investigation.”

For now Wendy's is looking at transactions from last year, and the company says it's too early to comment on how many of its 6,500 restaurants are affected.

Adam Levin, the CEO of security company IDT911, said in an email, “Restaurant chains are prime targets for cybercriminals because they store a treasure trove of data on their Point Of Sale systems.” And though the situation at Wendy's may not seem like an all-out crisis in itself, he added that in general, “Consumers should be on high alert and check their accounts on a daily basis for any suspicious charges or debits.”

There's nothing like anxiety to make you really want a Frosty.

Jan. 27 2016 11:17 AM

Future Tense Newsletter: Medical Grade Cybersecurity

Greetings, Future Tensers,

When doctors head into surgery, we expect them to scrub down. But as J.M. Porup warns, hospital computers may not be quite so clean, leaving systems responsible for patient care vulnerable to malware. One factor, as Porup notes, is that hospitals fail to update their computers. Don’t be like those hospitals. Just switch to the latest iPhone operating system already, since the newest version fixes various security vulnerabilities. But while we have some responsibility for our own safety, Josephine Wolff stresses that we shouldn’t describe humans as the “weakest link” in cybersecurity protocols—not until the technologies they’re fumbling with are better, at any rate.

You might make a similar point about sites like Twitter, where the issue isn’t terrible people so much as an interface that encourages terrible actions. As Amanda Hess shows, the perpetuation of old offline standards has made it almost impossible to respond to some forms of harassment online. Remedying that situation might mean broad infrastructural change, which could start with proposals like the one David Auerbach offered this week to make Twitter a little less toxic. Still, it’s sometimes wrong to blame technology rather than the people who use it: For example, Justin Peters looked at reports of criminals using drones to sneak contraband into prisons and concluded that the real problem is with corrupt prison employees who facilitate such schemes.

Meanwhile, drones faced crises of their own during this weekend’s blizzard—which we insist on calling #DavidSnowie, the Weather Channel’s naming conventions be damned. As the snow fell, however, we were thinking about climate from a different angle as this month’s Futurography unit on geoengineering continued. Science-fiction writers have struggled to adequately imagine these technologies, partly because some of them work on such enormous time scales, as I learned from a conversation with novelist Kim Stanley Robinson. To move beyond such conceptual deadlocks, Christophe Jospe proposes that we try rethinking our understanding of geoengineering, evaluating it more directly in terms of its effects. Want to understand those effects a little better? Check out this video that explains some of the basics.

Here are some of the other stories that we read this week while standing still on the escalator:

  • Meatpacking: Rachel E. Gross reports that butcher bots may be the most ethical option in food production, if only because they alleviate some human suffering. (The animals may have to wait.)
  • Gaming: Donald Rumsfeld slapped his name on a super weird solitaire app that’s supposed to honor Winston Churchill. It’s actually sort of fun.
  • War wounds: Far as military drone pilots may be from combat zones, studies indicate that they’re still subject to post-traumatic stress disorder and other forms of hardship that afflict soldiers.
  • Privacy: Even if you actually read those long privacy policies from tech companies, you might not get all of the information you need to protect your data.

Events:

  • Powerful new technologies are helping us preserve antiquities threatened by ISIS, commercial development, and other forces. Join Future Tense in Washington, D.C., at 6 p.m. on Thursday, Jan. 28, for a discussion of how present technologies are being used to deliver the past to the future. Click here to RSVP and learn more.
  • Want to further your knowledge of geoengineering? On Monday, Feb. 1, at 12:15 p.m., Future Tense will host a lunch in Washington, D.C., where Oliver Morton, author of The Planet Remade, and Katherine Mangu-Ward, the managing editor of Reason and a Future Tense fellow, will discuss geoengineering’s potential as a climate change fix and the many challenges that would come with it. Click here to RSVP and learn more.

Digging out of the drifts,

Jacob Brogan

for Future Tense

Jan. 27 2016 10:27 AM

Three Slightly Crazy Plans Geoengineers Have to Save the World From Global Warming

For the last month, we’ve been exploring geoengineering as part of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. In the video above, we boil some of that knowledge down to its essence, discussing risks and rewards associated with three of the most viable proposals. Whether geoengineering ends up saving the world or turns out to be a giant dud, this video will help ensure that you’re informed about whatever’s ahead.

Jan. 27 2016 8:49 AM

Now Historians Can Digitally Preserve What Extremist Groups Would Like to Destroy

This post originally appeared on The Conversation. On Thursday, Jan. 28, Future Tense—a partnership of Slate, New America, and Arizona State University—will host a happy-hour conversation in Washington, D.C., on using technology to preserve antiquities. For more information and to RSVP, visit the New America website.

In March 2001, the Taliban blew up the Bamiyan Buddhas in Afghanistan, two of the tallest Buddha sculptures in the world. This horrific attack on an important and beautiful example of the patrimony of central Asia shocked the world. It also forever changed the landscape of cultural preservation, archaeology, and global heritage.

Even back then, we had some of the 3D-scanning technologies that could have allowed us to digitally document and preserve the Buddhas. We did not yet anticipate the scale of destruction that would leave hundreds of global heritage sites damaged or obliterated in the 15 years since that event.

The loss of this cultural heritage has spurred teams of researchers and nonprofit organizations to race to make 3D scans, architectural plans, and detailed photographic records of heritage sites around the world, knowing they could be destroyed at any time. Advances in 3D-scanning technologies, drone use, and even tourists' online posting of images are giving preservationists a new set of tools to prevent the permanent loss of cultural artifacts.

In the 1990s, several international heritage organizations were created to highlight the importance of cultural heritage to history, tourism and ethnic identity. One such group is UNESCO’s World Heritage Centre, founded in 1992. The archaeological and heritage communities cheered these efforts at preservation of important places, sites, buildings, and landscapes that were being threatened or destroyed by expanding cities, hydroelectric projects, coastal erosion, and other perils.

They also acknowledged that heritage, largely for the first time, had become a target of military campaigns. Once heritage sites became identified with particular cultures, beliefs or histories, those places became vulnerable to people, including the Taliban and the Islamic State group, seeking to destroy those identities.

Just last week the destruction of a sixth-century Christian monastery in Iraq caught the attention of the world. It is just one in a long list of sites ISIS has destroyed since 2014, a campaign that drew global notice with the February 2015 video of the destruction of the Mosul Museum, where some of the most important early Assyrian sculptures were housed.

Project Mosul, created one week after the video was released, is the brainchild of Chance Coughenour and Matthew Vincent, Ph.D. student researchers in Europe’s Initial Training Network for Digital Cultural Heritage. They scoured the Internet for photographs of the sculptures and artifacts, crowdsourced tourist photos, and collected images from U.S. military personnel who had visited the museum. That material became the basis for the digital reconstruction of the destroyed artifacts using basic photogrammetry. This technique uses photos of the same object, taken from multiple angles, to construct a 3D model of it.
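
Pipelines of this kind follow a few standard steps: find matching features across overlapping photos, recover the relative camera positions, and triangulate the matches into 3D points. The sketch below shows a minimal two-image version using the OpenCV library; the file names and camera parameters are placeholder assumptions, and this is not the specific toolchain Project Mosul used.

    import cv2
    import numpy as np

    # Minimal two-view photogrammetry sketch: match features between two
    # photos of the same object, estimate the relative camera pose, and
    # triangulate the matches into a sparse 3D point cloud.
    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file names
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    # Camera intrinsics, assumed known (e.g., from calibration or EXIF data).
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])

    # Detect and match local features in the two images.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = sorted(cv2.BFMatcher().match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover the relative camera motion from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate matched points (homogeneous coordinates -> 3D).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    points_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    points_3d = (points_h[:3] / points_h[3]).T
    print(points_3d.shape)  # a sparse (N, 3) point cloud

Real reconstructions build on this with many more photos, bundle adjustment to refine all the camera poses at once, and dense matching to fill in surfaces.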

The destruction of Buddhist sculptures in Bamiyan led to an early success in digital preservation: Fabio Remondino of the Bruno Kessler Foundation in Trento, Italy, used photogrammetry among other techniques to digitally reconstruct the Bamiyan Buddhas.

The effort is spreading. The Zamani Project from the University of Cape Town has spent the last 12 years documenting Africa’s most important cultural and heritage buildings, sites, and landscapes. Importantly, its data are freely available and accessible.

The Democratization of Science project at the newly formed Center for Virtualization and Applied Spatial Technologies located at the University of South Florida has a similar mission: documenting, preserving, and protecting the world’s cultural and natural heritage through the use of digital visualization and 3D virtualization. And like the Zamani Project, it will democratize science by delivering digital data and heritage resources to the global community.

Our project at the University of South Florida is using 3D imaging to scan entire museum collections, archaeological sites, and ancient landscapes around the world. Sites and collections are chosen based on their research potential and need for preservation. Projects and laboratories with similar missions are beginning in many universities and research centers, especially in the United Kingdom, Italy, and Spain.

New technologies are making this work easier and more comprehensive. Unmanned aerial vehicles are transforming our ability to document large structures and landscapes at extremely high resolution. New methods and software for stitching together photographs to create accurate 3D reconstructions have made the creation of virtual reconstructions affordable for both students and the public.

However, the development of high-resolution 3D laser scanners has made the largest impact. This equipment aims laser beams at surfaces, records the reflected light, and assembles a very sharp 3D image of the space. Combining all these, we now have the tools to digitally preserve what extremist groups would like to destroy.

The attempts to destroy some of the world’s heritage have had quite the opposite effect, giving rise to an entirely new area of research and scientific practice that has transformed archaeology, heritage, paleontology, museum studies, architecture, and a suite of other disciplines.

Equally relevant is the new emphasis on the democratization of knowledge through the digital availability of these data. Now any student, scholar, or interested individual has access to some of the most important historical and archaeological specimens, buildings, and cities in the world. These efforts bring our global cultural heritage to everyone, while helping to ensure the preservation of our heritage in an increasingly hostile world.

Jan. 26 2016 3:02 PM

This Twitter Bot Uncovers the Secret to Trump’s Appeal

Donald Trump’s appeal derives in no small part from his willingness to present a seemingly unfiltered version of himself. Nowhere is that clearer than on Twitter, where he’s even more straightforward in his brusque aggression than he is on the debate stage. Nevertheless, his online persona is very much a construct, one built around his willingness to pander to his supporters, whose messages he cheerfully quotes. In the process, he often amplifies voices that would otherwise go unheard by mainstream audiences. Unsurprisingly, this means that he ends up favorably citing any number of unsavory characters, as was the case last week when he cited a user with the handle “WhiteGenocideTM.”

Now Fusion has made it a little bit easier to keep track of whom Trump is tacitly offering his approval to. The site created a bot called “Trump Retweeps” that quotes the bio of anyone Trump himself has quoted on Twitter. Follow it and you’ll be treated to a steady stream of information about the people backing the GOP candidate. Significantly, this is information that those users have freely posted about themselves—and to which Trump and his campaign have easy access. As Fusion’s Daniel McLaughlin writes, the results help us “better understand the company he keeps.” And though they shouldn’t be a surprise by now, they’re still deeply disquieting.
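
The basic mechanics of a bot like this are straightforward to sketch. The snippet below is a rough illustration using the Tweepy library: it pulls a user’s recent tweets, extracts the accounts they mention or quote, and prints those accounts’ self-written bios. The credentials are placeholders, and the way Fusion’s bot actually selects and posts tweets is not something this sketch claims to reproduce.

    import tweepy

    # Rough sketch of a "retweeps"-style bot: surface the self-written bios
    # of the accounts a given user mentions or quotes in recent tweets.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")  # placeholder credentials
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth)

    def quoted_bios(screen_name, count=50):
        """Yield (handle, bio) pairs for accounts mentioned in recent tweets."""
        for status in api.user_timeline(screen_name=screen_name, count=count):
            for mention in status.entities.get("user_mentions", []):
                handle = mention["screen_name"]
                bio = api.get_user(screen_name=handle).description or "(no bio)"
                yield handle, bio

    for handle, bio in quoted_bios("realDonaldTrump"):
        print(f"@{handle}: {bio}")
        # A bot would post this instead, e.g. api.update_status(f"@{handle}: {bio}"[:140])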

Trump’s admirers often proclaim their admiration for him in their bios, emphasizing just how powerful his cult of personality has become. Indeed, for many, Trump seems to be a metonym for conservative politics as such (below, bios follow Trump's retweets):

Others provide even starker demonstrations of his appeal, bragging about gun ownership or exhibiting a barely concealed racism:

And some are just needy, begging for attention that Trump seems happy to provide:

It’s this last group that may be central to Trump’s appeal. By reaching out to fringe groups and outlying individuals—whether or not he genuinely embraces their views—he gives those who feel they’ve been neglected and ignored the sense that someone is paying attention. Ultimately, then, Fusion’s bot is unlikely to harm Trump, at least with his most rabid supporters. If anything, it may help to demonstrate that there are others like them out there.

For the rest of us, however, Trump Retweeps offers a reminder about what makes Trump so troubling. The man himself is a mercenary, ready to go whichever way the money takes him. His supporters, however, hew to their views so tightly that they make those positions central to their public identities. Fusion’s bot ably exhibits what some of those people believe—and how comfortable Trump is basking in their desire for his attention.

Jan. 26 2016 2:49 PM

Congress Has a Thing or Two to Learn From These State Privacy Laws 

When Congress feels the need to compromise Americans’ privacy in the name of security—as in the case of the Patriot Act in 2001 or the Cybersecurity Information Sharing Act last month—it moves remarkably fast. When it comes to protecting Americans’ privacy from the inexorable advancement of data collection and law enforcement technologies, on the other hand, it seems to act with no such urgency.

Now a collection of state legislators are tired of waiting. Last Wednesday, 16 states’ lawmakers, with the advice and coordination of the American Civil Liberties Union, introduced bills designed to shore up Americans’ privacy on a long list of issues that federal lawmakers have either ignored or allowed to become paralyzed in Congress’s endless gridlock. That collective legislative push, which the ACLU is calling Take CTRL, addresses everything from student and employee privacy to new police surveillance techniques.

The bills, together, would cover more than 100 million Americans, by the count of the ACLU’s advocacy and policy counsel Chad Marlow. But he also hopes they might spur—or shame—Congress into action at the federal level. “What we were hearing more and more from people is that their privacy is being violated and taken for granted, and the federal government isn’t doing anything about it,” says Marlow. “The impact of these bills [on their own] would be dramatic. But if the states’ acting finally lights a fire under Congress and gets them to move, that would also be extraordinarily valuable.”

On practically every issue they cover, the new state bills would represent new measures limiting the collection, sharing or storage of data types that Congress has yet to address. In some cases, they also highlight how certain states, namely California, are already ahead of the feds on protecting Americans’ personal data. Here are the central ways the 16 states are trying to advance those new privacy protections:

In 2010, a school in the Philadelphia suburbs made national headlines for using a school-issued laptop’s webcam to spy on Blake Robbins, a student, in his home. The school went so far as to try to discipline Robbins for unexplained “inappropriate behavior” it caught via that snooping. Robbins’ family sued and eventually received a $610,000 settlement.

A bill in Minnesota and the District of Columbia would extend the lesson of that ruling to all sorts of data collection, preventing schools from gathering information like Web browsing histories or emails sent from school-issued loaner devices. Another bill, in Minnesota, Alabama, and Nebraska, would limit how all the data a school collects about a student can be stored or shared. Schools collect a wide range of such information, from how many gold stars or demerits a student receives to whether they required special counseling or free lunches. Under the proposed law, none of that could be handed over to a third party without the student’s or a parent’s consent.

Since around the beginning of the decade, job applicants have been alarmed to find that some potential employers demand that they share access to their private social media accounts—whether by giving up a password or simply accepting a Facebook friend request. Dozens of states have already considered and in some cases passed legislation to protect job applicants and employees from that kind of intrusive demand. Bills introduced this week in Alaska, Hawaii, Michigan, Missouri, Minnesota, Nebraska, North Carolina, and West Virginia would add those states to the list. Most of those bills, and another in Washington, D.C., apply the same protection to students whose teachers demand access to their social media secrets, too.

The surveillance gadgets known as cell-site simulators or “stingrays” have a controversial reputation: The law enforcement devices, which impersonate cell towers to intercept phone communications, also slurp up the data of any unwitting bystander near the target and have been used to track suspects’ calls and location in many investigations without a warrant. California has already passed a law that would require cops to get a warrant for any stingray use, and the Department of Justice also announced last year that it won’t let federal agencies use the devices without a warrant. Now Illinois and Michigan have both introduced their own state-level bills that would restrict stingrays, requiring a warrant for their use, demanding that all data not related to the target be deleted the same day it’s collected, and prohibiting the devices from being used to install spying software on the target device. A third bill in Nebraska goes further: It would outlaw the use of stingrays altogether.

Police have argued that automatic license plate reader cameras merely collect public information: the digits on a license plate and the car’s location. But with enough cameras installed in enough locations, the systems make it possible to track the car-based movements of entire populations of people in detail. Wednesday’s new bills include legislation in Nebraska and Michigan that would limit the use of the readers, the latest of dozens of states that have introduced laws to either restrict how the cameras are used or put a time limit on how long the data from them can be stored. (New Hampshire has banned the technology outright.) “When you create a record of where every person is driving, that can include an AA meeting, a church, a synagogue, or a mosque. When you know where someone drives, you know a lot about them,” says the ACLU’s Marlow. “You can’t collect data on everyone just because anyone could be a criminal.”

The Electronic Communications Privacy Act is overdue for an overhaul. The 30-year-old law was created to extend protections against unconstitutional wiretaps to digital communications. But ECPA’s still stuck in the ’80s, long before lawmakers imagined today’s cloud-based services full of personal data. Under the law, for instance, any email stored in a third party server—think Gmail or Hotmail—for more than 180 days is considered “abandoned” and subject to collection by law enforcement without a warrant. Last year, California passed its own update to ECPA, requiring that law enforcement get a warrant before it seeks any digital communications or location data from a third-party service. Congress is considering ECPA reform, too, with a pair of bills introduced last year—the second year in a row that the bills have been introduced without reaching a vote. Now Minnesota, New Mexico, New York, Virginia, Massachusetts, New Hampshire, and North Carolina are all proposing their own variations on an ECPA reform bill to protect stored data. If Congress doesn’t fix ECPA, these states may do it themselves. “These are big states. If they move along with California, you’d have significant portions of the country covered by high standards for law enforcement access,” says Chris Calabrese, a policy analyst at the Center for Democracy and Technology. “That begins to change the discussion.”
