Elon Musk Outlines His Crazy, Very Real Plan to Colonize Mars
Elon Musk has a plan to colonize Mars. It is weird, and risky, and very real.
In a long-awaited speech at an astronautical conference in Mexico on Tuesday, Musk detailed his ambitions to eventually create a civilization of 1 million people on the Red Planet. The roadmap—spacemap?—starts with a gigantic spaceship strapped to a gigantic rocket. Said spaceship would head to Mars at regular intervals, with 100-200 people on board for each passage. The first landings could happen within the next decade, he said, if all goes well. Building a city and a civilization would take decades more.
The cost of a ticket would be quite high for the early voyages, but over time it could come down to less than $200,000, Musk predicted—comparable to the price of a house. Once all the kinks are worked out, he imagines a one-way trip would take about 30 days.
Oh, and it would be “fun and exciting,” he promises—not “boring or cramped.” There would be “zero-G games,” movies, and a restaurant. “It’ll be, like, really fun to go,” he said, chuckling. “You’re gonna have a great time.” (And if you don’t, he said, there’s a possibility you could make a return trip to Earth.)
The goal of Musk’s Mars civilization: “Making humans a multiplanetary species.” That’s crucial, he believes, because if we stick around on Earth long enough, eventually there will be “an extinction event.” He did not elaborate.
This is not a new ambition for Musk. He founded SpaceX in 2002 with the ultimate goal of trying to get people to Mars. At the time, he assumed the odds of success were quite low, but he thought it was worth a shot nonetheless. Nearly 15 years later, SpaceX is a major player in the space industry, shuttling supplies, and soon human crew members, to the International Space Station.
Now Musk believes it’s time to start working on that Mars plan. As is typical of his projects, this one comes with lots of sci-fi references. The first spaceship to Mars, he said, will probably be named Heart of Gold, after the prototype starship in The Hitchhiker’s Guide to the Galaxy. The rocket will have 42 of SpaceX’s new Raptor engines.
His speech on Tuesday doubled as a pitch to NASA and private funders to bankroll the project. “I’m incredibly grateful to NASA for supporting SpaceX,” Musk said at one point. “I’m NASA’s biggest fan.”
At the talk’s conclusion, Musk took questions from the audience, which seemed to comprise exactly the sort of people you’d expect to find at a talk about colonizing Mars. One questioner spoke of his recent trip to Burning Man, compared it to Mars, and asked how Musk plans to handle the question of toilets on Mars. There was some discussion of who should be on the first voyage, given the likelihood of death; a questioner suggested Michael Cera, as part of what was apparently a pitch for the humor website Funny or Die.
Vox and The Verge, among others, have more details on the logistics and the specifications of the spacecraft involved. Suffice it to say there are rather a lot of things that have to go right before Musk’s dream can become a reality.
Musk is well aware of this. “There’s a lot of risk,” he said. “It’s going to cost a lot. And there’s a good chance we don’t succeed. But we’re going to do our best and make as much progress as possible.”
Donald Trump Knows Terrifyingly Little About Computers
Midway through Monday night’s presidential debate, moderator Lester Holt introduced the topic of cybersecurity into the conversation, asking the candidates to discuss who was behind recent hacks—and how the U.S. should respond. The question could have been a tricky one for Hillary Clinton, who’s come under fire for her use of a private email server. Instead, it became a minefield for Donald Trump—who may have never used a computer. Indeed, it revealed he knows so little about cybersecurity that the only expert he can apparently name is his 10-year-old son.
In responding to Holt’s question, Clinton spoke in broad but coherent terms, discussing the danger of state-sponsored cyberattacks and scolding Trump for “invit[ing] Putin to hack into Americans.” She likewise offered an ambiguous, threatening remark about the status of American cyberweapons, observing, “And we’re going to have to make it clear that we don’t want to use the kinds of tools that we have.” This isn’t exactly technical stuff, but it still suggests that she understands the terrain—or, at least, the stakes of the conversation—which can’t be said for her opponent.
The most obvious sign of Trump’s ignorance on cybersecurity issues—as well as computing more generally—may be his comical insistence on using “cyber” as a noun, a rhetorical hiccup that Clinton avoided in her own remarks. But where speaking of “the cyber” may have just been a gaffe, Trump’s remarks on the topic are full of bizarre indications that he has no idea what he’s talking about. They’re worth quoting at length:
As far as the cyber, I agree to parts of what Secretary Clinton said. We should be better than anybody else, and perhaps we’re not. I don’t think anybody knows it was Russia that broke into the DNC. She’s saying Russia, Russia, Russia, but I don’t—maybe it was. I mean, it could be Russia, but it could also be China. It could also be lots of other people. It also could be somebody sitting on their bed that weighs 400 pounds, OK?
The core assertion in Trump’s remarks here isn’t entirely unreasonable. In addressing the difficulty of identifying who hacked the DNC, he may be implicitly referring to what experts in the field describe as the “attribution problem.” Instead of working through the complexities of these issues, however, Trump seems to be proposing that when things are this hard to resolve, we shouldn’t bother.
That Trump acknowledges such challenges is arguably admirable, whether or not it’s intentional, but here he puts that recognition into the service of a defense of Russia—a project that disquietingly aligns with his record on U.S.–Russia relations—seemingly using it to absolve Vladimir Putin’s government of any responsibility. As it happens, though, there’s plenty of evidence that Russia actually is responsible for the DNC hack, but Trump ignores that possibility. He is instead content to lazily body-shame the computer experts he’s clearly not listening to, shifting from a real challenge to stereotypical nonsense and offering a South Park–ready image of hackers as morbidly obese shut-ins. None of which, of course, has anything to do with the actual state of cyberwarfare.
Lest we worry, though, that Trump has never talked to—let alone met—an actual cybersecurity expert, Trump went on to identify a very knowledgeable adviser in his corner:
So we have to get very, very tough on cyber and cyberwarfare. It is—it is a huge problem. I have a son. He’s 10 years old. He has computers. He is so good with these computers, it’s unbelievable. The security aspect of cyber is very, very tough. And maybe it’s hardly doable.
As is so often the case, it’s hard to untangle what Trump is actually getting at here. Like a cat batting at house flies, he swings at whatever catches his eye instead of focusing on a single topic. On the one hand, he suggests that cybersecurity is tricky—and it is! But on the other, he wants to reassure us that Barron, his preteen son, is really good with computers.
What, exactly, is Trump getting at here? Is he trying to tell us that his son could solve these “hardly doable” problems, if only he had the chance? Is he proposing that these issues are so irresolvable that even a 10-year-old can’t work them out? Or, most likely, is he just saying the first thing that popped into his head?
Regardless, it’s clear enough that Trump has no idea what he’s talking about. Better that he take advice from his son than, say, John McAfee, but Trump’s failure to come up with a coherent statement on the topic is telling, suggesting that he isn’t really listening to anyone. And, more troublingly, he doesn’t seem as if he cares to learn more. Concluding his remarks on the topic, Trump asserted, “We have so many things that we have to do better, Lester, and certainly cyber is one of them.” You can do better too, Donald. Start by figuring out what you’re talking about.
Netizen Report: Internet Shutdowns Are Ever-Present in Egypt’s Sinai Peninsula
The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in Internet rights around the world. It originally appears each week on Global Voices Advocacy. Afef Abrougui, Ellery Roberts Biddle, Weiping Li, and Sarah Myers West contributed to this report.
Over the weekend of Sept. 17, citizens in Egypt’s North Sinai region weathered a shutdown of phone and internet services that went on for at least eight hours. Al-Masry Al-Youm reports that service has been restored in most areas of the region, but there’s little hope that networks will remain connected for good.
The Egyptian military has controlled the northern zone of the Sinai Peninsula, which abuts Israel and the Gaza Strip, since mid-2013, when it began in earnest its assault on violent insurgent groups in the region. By early 2014, cuts to telecommunications networks would regularly last throughout the day, in what appears to be an effort to deter insurgents from communicating with one another. This move has brought incalculable damage upon citizens, leaving them unable to communicate, stay in touch with loved ones, and send and receive money, among many other things. The cuts have also helped solidify a de facto media blackout in the region that has resulted from strict punishments for journalists seeking to cover military operations in the area.
In December 2015, Egyptian technologist and Global Voices author Ramy Raoof told Time magazine that security authorities were cutting network connections “indiscriminately,” noting that they have made no effort to preserve basic or emergency services, such as the ability to call for an ambulance. And when networks are down, insurgents can use other unblockable means of communication, like roaming foreign (chiefly Israel-based) mobile networks and satellites. Like many others, Raoof reasons: “It doesn’t prevent the bad guys from doing bad things.”
Kuwaiti royal faces jail time for insulting emir on Snapchat
A Kuwaiti court convicted Sheikh Abdullah Salem Al Sabah of insulting the royal family, despite the fact that he is the grandnephew of the emir. He has been sentenced to three years in prison and ordered to pay a fine of $16,500 for sending a Snapchat message in which he criticized the main cabinet, which is occupied entirely by members of the royal family, his own relatives among them.
Russian blogger convicted of publishing “extremist statements” about Syria
Russian prosecutors are calling for opposition blogger Anton Nossik to be sentenced to two years in a penal colony for publishing “extremist statements” online. The charges stem from a blog post titled “Wipe Syria from the Face of the Earth,” in which Nossik called for bombing territory controlled by the Syrian government. The post was published just days before the Russian government began a bombing campaign in support of the ruling Assad government. Nossik’s verdict is set to be announced Oct. 3.
Why didn’t the United Arab Emirates have an “Arab Spring”?
Despite a relative absence of anti-government protests, state-sponsored repression in the UAE is commonplace: Tactics like arrests, forced disappearances, torture, unfair trials, deportations, and revocation of citizenship are used to silence dissent in the country. Despite boasts by UAE leaders of the high living standards of citizens, “for the time being ... activists and government critics do not seem to enjoy the happiness, well-being and safety the Emirates offer,” writes Global Voices’ Afef Abrougui.
New research shines light on political censorship in Bahrain
Bahrain is using an internet filtering software called Netsweeper to censor political content, including Shiite websites, local and regional news sources, content critical of religion, and pages related to human rights and opposition politics, according to new research by the University of Toronto’s Citizen Lab. Citizen Lab researchers found that the software was being used on nine Bahrain-based internet service providers during the summer of 2016. The report concludes: “The sale of technology used to censor political speech and other forms of legitimate expression, to a state with a highly problematic human rights record, raises serious questions about the corporate social responsibility practices of Netsweeper.”
More than anyone else, the US is knocking on Twitter’s door
Twitter’s latest transparency report shows that the U.S. government made more requests for users’ personal data than any other government—and that overall, the number of government requests rose 2.1 percent since the last quarter, affecting 8 percent more user accounts. Twitter also revealed more detailed information about who is making the requests. The company said the FBI, Secret Service, and New York County District Attorney’s Office were the top requesters for account information in the United States.
Latin American indigenous language activists promote new emojis
Calls for more emoji diversity have expanded beyond skin color to include more culturally diverse representations, writes GV’s Eddie Avila. In addition to a recent petition to include a hijab emoji, indigenous language activists in Mexico and Chile have begun to create their own emoji sets reflecting traditional dress and linguistic expressions in languages including Huastec, spoken mostly in central Mexico, and Mapudungun, spoken by the Mapuche of Chile.
- “Fearful Silence: The Chill on India’s Public Sphere”—PEN International
- “Information Control 2.0: The Cyberspace Administration of China Tames the Internet”—Mercator Institute for China Studies
- “Tender Confirmed, Rights At Risk: Verifying Netsweeper in Bahrain”—Citizen Lab
More Than Half a Billion Yahoo Accounts Have Been Hacked, Yahoo Confirms
Yahoo confirmed on Thursday that account information for more than 500 million Yahoo users was stolen in a 2014 data breach of epic proportions.
The information may have included names, email addresses, telephone numbers, birth dates, encrypted passwords, and security questions and answers, the company said in a Tumblr post. Yahoo said it is notifying users who may have been affected and asking them to “promptly” change their passwords, among other steps to protect themselves.
That’s sound advice: Changing passwords as soon as you’re aware of a breach is always advisable. Yahoo users should immediately change not only their Yahoo passwords, if they haven’t already done so in the past year or two, but their passwords on any other site where they used the same credentials they were using on Yahoo in 2014. They should also be on guard for spam emails that could include malware, scams, or phishing attempts.
The urgency feels a bit awkward, however, coming from a company that apparently required almost two years to discover, confirm, and notify its users of the breach. Reports of the hack first surfaced on Aug. 1, when a hacker known as Peace began publicly selling alleged Yahoo user credentials online. (Peace told Vice’s Motherboard blog he or she had been trading them privately for some time before that.) Yahoo said at the time that it was “aware of the claim” and its security team was “working to determine the facts.”
That means users’ credentials were out in the open for nearly two months before Yahoo confirmed the breach and notified them. Verizon, which is in the process of acquiring the long-troubled internet giant for $4.8 billion, said in a statement Thursday that it was only notified of the issue by Yahoo “within the last two days.”
Yahoo said in its Tumblr post that it believes the information was stolen by “a state-sponsored actor” but it did not get more specific. In a June interview with Wired, Peace identified himself or herself as a former member of a team of Russian hackers who had breached and sold credentials from several major online services between 2012 and 2013.
How bad is the news for those whose information was stolen? It’s not great, but it also doesn’t necessarily mean someone’s out there running up charges on your credit card.
Peace told Wired in June that the information from the breaches—which presumably included the Yahoo hack, although that had not been disclosed yet—was being used primarily “for spamming,” i.e., sending spam to the people whose information was stolen. But since such info can often be passed around widely among criminal hackers, it’s always possible it could be used for more nefarious purposes. The good news is that Yahoo says the passwords were hashed, meaning they can’t be used unless someone manages to crack them. Yahoo adds that its ongoing investigation suggests the breach “did not include unprotected passwords, payment card data, or bank account information,” and that there’s no evidence the hackers still have access to Yahoo’s system.
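Yahoo’s point about hashing is worth unpacking. A properly hashed password isn’t stored at all; the site keeps only a one-way, salted digest, so thieves can’t read passwords out of a stolen database and must instead guess candidate passwords and re-hash each one. Here’s a minimal sketch using PBKDF2 from Python’s standard library—the function names and parameters are illustrative, not Yahoo’s actual scheme:

```python
# Illustration only: Yahoo hasn't described its hashing scheme, so this uses
# PBKDF2 from Python's standard library as a stand-in. The point is that a
# site stores only a salted, slow hash; a leaked database doesn't directly
# reveal passwords, and attackers must guess and re-hash, one candidate at a time.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, to make bulk guessing expensive


def hash_password(password, salt=None):
    """Return (salt, digest): what a site stores instead of the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify(candidate, salt, digest):
    # There is no "decrypt" step: the only check possible is re-deriving
    # the hash from a guess and comparing it to the stored digest.
    candidate_digest = hashlib.pbkdf2_hmac(
        "sha256", candidate.encode(), salt, ITERATIONS
    )
    return hmac.compare_digest(candidate_digest, digest)


salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("letmein", salt, digest))                       # False
```

The salt ensures identical passwords produce different digests across accounts, and the iteration count slows each guess, which is why weak or reused passwords are still the main risk after a breach like this one.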
The company’s full post is here.
Google’s New Messaging App Is “Smart.” But Should You Use It?
The first thing to remember about Allo, Google’s new messaging app, is that there are a lot of messaging apps out there. For that matter, there are a lot of Google messaging apps out there, including Hangouts and Duo. (As my former colleague Lily Hay Newman pointed out, there really should be just one—and it should be called Gchat.) The tech press may be abuzz over Allo’s launch, and there are good reasons to pay attention to it. Whether there are good reasons to use it, however, is less clear.
As a messaging app, Allo seems … fine? Pretty good, really, to the extent that the major messaging apps are distinguishable from one another. If your goal is to communicate with someone via text or voice recording or funny animal sticker, it will do the trick. Clean interface, simple controls—it’s straightforward and pleasant to use, which probably means the teens will hate it.
What’s special about Allo is not that it’s supposed to be a better messaging app than all the others, per se. Rather, Google is billing it as a smarter messaging app. “Smarter” refers to the artificial intelligence built into the Google Assistant, a sort of helper bot that lives in the app. You can converse with it directly, like Siri. But it can also slide into your conversations with others by suggesting replies to their messages, offering a restaurant recommendation, or connecting you to relevant info from the Web or other apps. It’s like a genial butler who is also sort of nosy.
This is not a novel idea: Bots are de rigueur in messaging apps these days. But Google has a big head start on rival companies when it comes to both A.I. and search. That comes in handy in figuring out what you mean when you type, for example, “who won the game last night,” or “when’s my next appointment.” Apple’s Siri has long been criticized as useless or unintelligent because it seems to struggle to even hear you correctly, let alone answer basic inquiries. It may be improving, but Google’s voice and conversational search technology has long felt superior. “Smart” may still be the wrong word for the Google Assistant, at least by the standards of human intelligence, but “less dumb” is a step in the right direction.
Part of the promise of Allo, then, is that of a messaging app that is not simply a line of communication between two or more people, but a portal through which they can readily draw in information from the outside world. It’s the kind of thing that sounds impressive in a Silicon Valley boardroom or in the pages of a tech blog. It’s an open question whether these sorts of features really matter to those who just want a convenient way to chat with friends or family.
More entertaining is the “smart reply” feature that suggests potential responses to people’s messages. You can have some fun or laughs at its expense by seeing how it responds to a booty call, or a Welsh corgi. But the real utility lies in the convenience of being able to quickly tap on a canned response, rather than typing something from scratch, in the many situations when nothing more elaborate is needed. Google said earlier this year that 10 percent of replies in its email app Inbox begin with a “smart reply,” which users can either send as-is or tweak as needed. As Google gets better at this, the number should go up. And it seems even more appropriate to a chat app than the somewhat more formal realm of email.
A clever-sounding twist is that the Google Assistant will actually pay attention to the language you use in chats with a given person, so that it can tailor its smart replies to the context. If you’re in the habit of saying “yo” to your friend but not to your grandfather, it will adapt accordingly.
This is all quite neat and futuristic, if not exactly essential, from the user’s perspective. But, as we’ve come to expect from Google’s services, there’s a flipside for those concerned with privacy.
Google raised the hopes and expectations of privacy advocates when it announced in March that Allo would offer end-to-end encryption, and that it would delete your messages after a certain amount of time rather than storing them indefinitely. In a surprising reversal, Google disclosed when it launched the app on Wednesday that it actually will store your chats indefinitely by default, until or unless you actively delete them. If you want your chats to be truly private, you’ll have to switch to “Incognito” mode. This will disable the A.I. assistant features that were supposed to be Allo’s selling point. Google told the Verge it made the change because the smart replies wouldn’t work as well without access to people’s chat histories.
There’s nothing inherently evil about a messaging app that stores your chats or makes end-to-end encryption an option, rather than the default. And users of Google services like Gmail should be familiar with the tradeoffs involved in letting the company store and scan your personal communications. But Google’s backtracking was a big strategic blunder, because the big story of the day of Allo’s launch was not its “intelligence” but its privacy risks. Edward Snowden spent half the day on Twitter telling people not to download it.
Eventually the privacy backlash will fade, but it underscores what might be a bigger problem for Google and other big tech companies. As I explained at length in a story about the rise of A.I. assistants, Silicon Valley’s dream of making them your primary portal to the online world is partly about making things easier on users. But it’s also about insinuating tech companies’ own services more deeply into our lives, so that we come to trust and depend on them—and give them ever more access to our data. When we let a chat bot in on all our daily interactions, we’re letting its creator in on them too.
Allo is a test case for our willingness to participate in that bargain. Based on the press it has received so far, it’s probably a test Google wishes it could retake.
Previously in Slate:
Isn’t It Time We Designed an Election for the 21st Century?
The civic ritual of voting in America is an act of nostalgia. Casting a ballot, unlike most things in our society, doesn’t ever seem to change. It’s the same as it was when you accompanied your parents to vote, or when they accompanied their parents. This deference to tradition would be worth celebrating if our elections weren’t riddled with hanging chads, imperfect counts, long lines, and confusion over who’s registered to vote, and if our voter experience didn’t compare so poorly to other, less important 21st-century customer experiences. Countries like Canada, Brazil, and Germany use electronic voting that offers accurate and instantaneous results. Why not the United States? Why not design an election for the 21st century?
On Wednesday, Oct. 5, at 6 p.m., Future Tense—a partnership of Arizona State University, New America, and Slate—will host a happy hour and brainstorm conversation in Washington, D.C., on how to create a better, more efficient, and more just election system.* Speakers will include Slate’s Jamelle Bouie and Dahlia Lithwick. For more information and to RSVP, visit the New America website.
Program director, IDEO
Chief political correspondent, Slate
Senior computer scientist, SRI International
Senior editor, Slate
Deputy director, Democracy Program, Brennan Center for Justice
Director of studies, New America
Director, Political Reform Program, New America
Correction, Sept. 23, 2016: This post originally misstated the date of the event. It is Oct. 5, not Nov. 5.
Future Tense Newsletter: Small Particles and Big Plays
Greetings, Future Tensers,
Spend enough time looking into nanotechnology, as we have for this month’s Futurography course, and you’ll realize that it touches on a vast array of fields and activities. Two pieces that we published this week help capture that range: First, James Pitt discussed attempts to use machine learning to predict how nanoparticles will function in medical applications. Then, on the opposite end of the spectrum, Emily Tamkin interviewed Kate Nichols, an artist in residence at a nanoscale laboratory. For Nichols, nanomaterials are more than a tool for making art; they also change the way that she sees art, reshaping her understanding of properties like color.
That’s the small stuff, but there are big things going on at the juncture of technology and sports this week: Will Oremus reviewed a virtual reality film about the 2016 NBA Finals, finding that it both challenged some “unwritten rules” about VR cinema and was actually worth watching, despite a few shortcomings. Football also made the jump to another medium, with the NFL broadcasting a recent game over Twitter. Laura Wagner watched and found that things went surprisingly smoothly. Meanwhile, new sports are still finding their way to old platforms, with drone racing debuting on ESPN.
Here are some of the other articles we read while wondering about this cow’s true identity:
- Cybersecurity: Everyone agrees that you should block your computer’s webcam, but what’s the best way to cover it? I reviewed some options.
- Communication: The latest iOS update tries to reduce emoji to mere replacements for words, and Zoe Mendelson worries that it’s ruining their expressiveness in the process.
- Emergencies: On Monday, the New York mayor’s office sent millions of citizens a vague, text-only alert about the search for a bombing suspect. Did it make them safer?
- Internet access: There’s an ambitious plan in the works to install free Wi-Fi all across the European Union.
- The 1986 Computer Fraud and Abuse Act remains one of the most controversial federal tech-regulating laws on the books. On Thursday, Sept. 29, Future Tense and New America’s Open Technology Institute will host a lunchtime conversation in Washington, D.C., on the legacy and future of the law—and what lessons it offers for those crafting tech-related legislation. For more information and to RSVP, visit the New America website.
- Is it time we designed an election for the 21st century? Join Future Tense in Washington, D.C., on Oct. 5 at 6 p.m. for a happy hour and brainstorm on how to create a better, more efficient, and more just election system. For more information and to RSVP, visit the New America website.
Thumbs up emoji,
for Future Tense
The Emoji Era Is Over. Thanks, Apple.
Do you remember the first time you swiped through the original set of emoji? It was really weird, right? Because while they presumably served as one-tap word replacements, they were extremely unlikely everyday vocab candidates. Floppy disk. Fishcake. Space invader. Old-school mailboxes. Barely recognizable houseplant cactus. It was deliciously random. This was the joke we were all laughing at.
This joke was the genetic mutation that helped the emoji organism flourish and conquer the world. If they’d simply been a set of common word-replacing images, the annoyingness of having to scroll through the entire set hunting for the right one would have precluded the one-tap efficiency and rendered emoji superfluous altogether. But since they failed from day one, since they were always arbitrary, they transcended pragmatic-use demand, becoming more of a cultural fetish than a tool.
Their names suggest that emoji were supposed to have fixed meanings. But most of them look nothing like their names: 😯, for example, is “Hushed Face Emoji” and 😤 is “Face With Look of Triumph.” Supposedly they make more intuitive sense in their home cultural context: Japan. Regardless, few people knew they had names anyway, and since they don’t look like those names to the average American, emoji effectively did not have fixed meanings.
This was also a key to their success. Emoji were fun precisely because of the ambiguousness that rendered their meanings subjective. It allowed for interpretation in using them as signifiers. It made emoji a super creative semiotic game called “Hey, Guess What I Mean by This!” It allowed for that sometimes-baffling, sometimes-glorious moment on the receiving end wherein you either have no goddamn idea what someone meant or you totally do.
But that has changed. The iOS 10 emoji are way too lifelike, literal, objectively interpretable, and, well, way less weird. They made emoji prescriptive, in effect, quashing the possibility for play. This ambiguously angsty little man 🙇? Now pathetically apologetic. The silly dancing twins in leotards 👯 now have these weird slender bodies that aren’t nearly as cute or celebratory. The lady getting a head massage’s 💆 slack-jawed o-shaped mouth is now closed, cool, relaxed. The human characters got oblong, detailed faces. And their standard, creepy, maniacal grin 👴? All but gone. Replaced with what looks more like Xanax-induced contentedness.
During the golden era of emoji, we used their flexibility to transmit hard-to-articulate emotional nuggets of significance. Take, for example, that weird indistinguishable grin-grimace emoji: 😁. According to emojitracker.com, it’s the 15th most popular emoji on Twitter. Its slightly-guilty-slightly-pleased-slightly-embarrassed-but-still-excited expression made it a favorite, I suspect, because we often experience this dynamic maelstrom of feelings in real life. Like when you ask someone for a big, irritating favor that will also deepen the intimacy of your bond. Like asking a new friend to borrow their car for a weekend. Or asking your boss to let you leave early to go to a concert. A lot of asks occur, for better or worse, over text, and the grin-grimace was a trusty analog for this complex shame-joy we experience. Alas, no longer. Now Apple has changed it into an unquestionably happy grin. This is a huge loss. (Couple a huge shit-eating grin with a big ask and you just look like an entitled brat.)
Emoji function best as pragmatic gestures, not word replacements. When you meet people for the first time, you can hug them, shake their hand, politely nod, or kiss them on the cheek. None has a fixed meaning, but each sends a different set of signals, depending on their contexts. This is useful, because meeting someone, like most of life, is loaded with a lot of unknown factors and we need agility to both communicate and hide our thoughts and feelings as we please. A handshake, at its core, simply acknowledges meeting and establishes polite rapport with a possible degree of professional distance. We have freedom to adjust grip strength and number of ups-and-downs in hopes of sending more nuanced messages like perhaps “I’m not afraid of you.” Of course these signals get misinterpreted all the time, but their pliancy is a feature, not a bug. It doesn’t mean we should try to create a comprehensive system for them.
That’s exactly what iOS 10 does to emoji—tries to fix something that isn’t broken. These new emoji leave no room for vagueness, interpretation, weirdness. And worse still, iOS 10’s emoji-suggest function now lets you tap a word to replace it with an emoji of Apple’s suggestion. No! That’s not it. Then emoji is just formulaic code. That’s the opposite of what emoji should be. That’s the iOS 0-fun version.
The iOS 10 forms cut off the non-existent legs with which emoji once did their representative gymnastics. Apple's designers fundamentally misunderstood what made emoji great. This is the death of emoji, or at least the beginning of the end. RIP little friends. 💀
The Problem With That Cellphone Alert About the Chelsea Bombing Suspect
A few minutes before 8 a.m. Monday, millions of New Yorkers’ phones screeched almost simultaneously. They all received the same notification:
That’s it. No links, no pictures, no further context—no one to call except 911. I got the alert while feeding my toddler breakfast, and in my pre-coffee haze, I glanced at my phone and mistook it for an Amber Alert. Elsewhere in the city, subway cars full of people must have looked up from their phones and regarded one another warily. Might one of their fellow passengers be Ahmad Khan Rahami? Young men with brown skin might well have wondered: Might one of my fellow passengers mistake me for Ahmad Khan Rahami?
The alert appears to have been the first of its kind, the New York Times reports. That is, it’s the first time the Wireless Emergency Alerts system has been used as a sort of virtual “WANTED” poster as opposed to its more familiar uses in weather emergencies or child abductions. The Times was told that the alert went out throughout New York City, and that the decision to use the system for that purpose came from the office of Mayor Bill de Blasio.
By noon, the suspect in question had been arrested. There’s no evidence, at this point, that the mobile push notification helped authorities find him.
De Blasio’s press secretary, Eric Phillips, said on Twitter that the ability to use mobile push notifications in a manhunt is an “important added capacity” for law enforcement:
First time something like has been done. Important added capacity. pic.twitter.com/9yOLS03JPx— Eric Phillips (@EricFPhillips) September 19, 2016
But others criticized authorities’ decision to use the system in this way. In New York magazine, Brian Feldman called it “an extremely bad push alert to blast across the greater New York area”:
It provides no useful contextual information, warns of no imminent danger. It essentially deputizes the five boroughs and encourages people to treat anyone who looks like he might be named “Ahmad Khan Rahami” with suspicion. In a country where people are routinely harassed and assaulted for just appearing to be Muslim, this is remarkably ill-advised.
Feldman is right that the notification was seriously flawed. And yet I think Phillips is also right that the ability for authorities to reach people on their cellphones could be important, if used judiciously.
It’s a tenet of good crime reporting that you don’t describe a suspect unless you have enough information that people could realistically distinguish that individual from others of similar age, race, build, etc. So to enlist the public in a hunt for, say, a “28-year-old male with dark skin, medium build, and brown facial hair” would be dangerous folly. You’re asking people to go after a stereotype, not an individual.
On the other hand, if you have a clear photo of the suspect’s face, you publish it, while describing the suspect in as much detail as possible. Countless crimes have been solved because a member of the public happened to spot a suspect whose face they had seen in the news. There’s still a real risk of false positives, which has to be taken seriously. But depending on the severity of the crime, it could be outweighed by the public safety interest of catching the perpetrator.
In this case, the notification included neither a description nor a face, but a name. That’s better than a vague description, because it identifies an individual rather than a stereotype. No doubt there are people in New York City and the surrounding area who know Rahami personally but were not aware that he was wanted. If the notification reached those people, it could spur them to provide information that would help authorities track him down.
But if a name is better than a vague description, it’s still precious little to go on for the millions of New Yorkers who don’t happen to know Rahami. Without a face to go with it, it simply encourages people to view any young man who looks like he might have such a name as a potentially deadly terrorist. That’s deeply unfair, and it could lead to innocent people getting hurt.
Granted, the alert did not omit the face out of ignorance or malevolence. As Vice’s Mark Harris recently reported, the Wireless Emergency Alerts system is a blunt instrument. Due to various technical constraints, its geographic targeting is poor, and it is limited to text-only messages of 90 characters or fewer. That means the mayor’s office couldn’t have included the suspect’s face even if it wanted to—which it surely did, since authorities intentionally spread the image on social media before sending the notification. This also explains why the notification failed to explain what Rahami was wanted for, or anything else about him.
That leaves two open questions. First, was the mayor’s office right to send this alert, given the aforementioned constraints and the risk of casting suspicion on innocent people? And second: If the system were to allow authorities to convey greater detail, including links or images, would that be a good thing?
I don’t think there are easy answers to either question. But on the first, I lean toward “no,” while acknowledging that it’s far easier for me to criticize such a decision than it was for them to make it.
True, authorities were under tremendous pressure to do whatever they could to find the suspect before anyone else got hurt. But a system this crude, intrusive, and potentially harmful should not be employed in innovative ways on an ad hoc basis. There should be clear, well-thought-out policies in place to ensure that it’s used as carefully, as sparingly, and as effectively as possible. Those policies should be debated in public and codified before the system is used in a new way. And out of that process should come an answer to the second question.
If authorities have the right man, we can all be grateful for their investigative work and thankful that he’s no longer in a position to endanger innocent people. Next time, let’s hope the authorities take a little more care not to inadvertently endanger innocent people themselves.
Search Engines Promise to Ban Ads for Sex Determination Services in India
Three major search engines have pledged to block information in India about sex determination services, which the country has outlawed to prevent sex-selective abortions.
Google, Yahoo, and Bing pledged to ban advertisements for these services, the Indian health ministry told the Indian Supreme Court on Monday. The court was hearing a case against Google India, Yahoo India, and Microsoft Corp. that was filed by a prominent Indian activist. The petitioner, Sabu Mathew George, a leading advocate fighting against female feticide, argued that search engines were violating Indian law by displaying advertising for sex determination services.