Future Tense
The Citizen's Guide to the Future

April 24 2015 6:46 PM

Customers Aren’t Happy With Their Data Security. Execs Are the Only Ones Surprised.

There's always something going on with corporate data breaches and new digital security initiatives. And those things look very different to the executives who run companies than they do to the customers who use their products and services. A new study from consulting firm Deloitte illustrates in a few stark (and kind of hilarious) stats just how far apart the two views are.

The survey polled 2,001 American consumers and 70 consumer product executives, and the consumers were generally less positive or enthusiastic than the executives about corporate security/privacy efforts. For example, 47 percent of execs thought that customers felt it was worth it to share their personal information with companies in exchange for perks like coupons and customized promotions. But 75 percent of consumers disagreed. For product reviews, 47 percent of executives thought consumers found sharing their data worthwhile. Only 18 percent of consumers agreed. Ruh roh.


Consumers don't always seem to know the best way to protect their digital identities and assets, but they know that they don't know. Only 28 percent of consumers surveyed said they thought they knew which companies would protect their personal data. But 83 percent did say that they're aware of retailer data breaches. Only about half said they would be forgiving if a company had a data breach.

The researchers wrote, "Our survey suggests that the field is wide open for consumer product companies to build a reputation for strong data privacy and security practices," which is really just a sneaky way of saying that not nearly enough companies are implementing best practices right now.

Probably the most revealing stat is that 77 percent of execs said they think their companies' data privacy policies are clear and easy to understand. Ha! About 73 percent of consumers said they want more straightforward data privacy policies.

April 24 2015 1:41 PM

Microsoft Word Spells the Names of Game of Thrones Characters Better Than You Can

Jon Snow may know nothing, but Microsoft Word knows more than you might expect.

When Slate’s Jonathan Fischer sat down to write his review of the latest Game of Thrones season, he encountered an unexpected difficulty: It’s very hard to spell the name Daenerys. Worried that he wouldn’t notice if he got it wrong, he resorted to a manual solution. “I did a lot of copying and pasting Daenerys from my browser,” he told me.


Maybe he shouldn’t have gone to the trouble. Surprisingly, Microsoft Office’s spell-checker recognizes the names of virtually every Game of Thrones character, major and minor alike. Historian Greg Jenner, who first called attention to this oddity on Twitter, shrugged it off with a simple, “who knew?”

Things may be stranger than Jenner realized. This is an honor that the suite of programs seemingly affords to no other popular franchise. While neither Shae nor her spurned lover Tyrion Lannister elicit complaint from Word, Lord of the Rings generates all kinds of trouble: Red lines of judgment run beneath the names of five of the Fellowship of the Ring’s nine members—Samwise Gamgee, Legolas, Gimli, Boromir, Meriadoc Brandybuck.*

Much the same is true for important characters from Star Wars, The Hunger Games, and even the Harry Potter franchise. As with Tolkien’s Lord of the Rings, Office’s spell check recognizes the names of some prominent characters from each while ignoring others. Like old Ben Kenobi, it has heard the name Obi Wan, but not that of his old foe, Count Dooku. Perhaps Microsoft—like most people endowed with good taste—simply doesn’t acknowledge that the Star Wars prequels exist. It’s more difficult, however, to explain why it seems friendly with Severus Snape, but not with Harry Potter’s actual friend, Rubeus Hagrid.

There are exceptions to the Game of Thrones rule: Characters who appear only in the novels still tend to get red-lined. Likewise, some place names (Meereen, Dorne, Yunkai, etc.) fail to appear in Microsoft’s dictionary. And for some reason the mysterious assassin Jaqen H’ghar goes unacknowledged, though that may be because “a man is not Jaqen H’ghar,” as he puts it. Nevertheless, the results are striking: You should be able to write your next episode recap with confidence. But prepare for trouble if, for some reason, you’re planning an essay on Lloyd Alexander’s Chronicles of Prydain.
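The mechanics here are mundane even if the coverage is surprising: a spell-checker simply flags any token missing from its dictionary, so which franchise names get red-lined comes down to which proper nouns the vendor has added in a lexical update. A toy sketch in Python (the word lists are invented for illustration, not Microsoft's actual dictionary):

```python
# A spell-checker flags any token absent from its dictionary,
# so name coverage depends entirely on the vendor's word lists.
BASE_DICTIONARY = {"the", "queen", "rides", "a", "dragon", "with"}

# Hypothetical 2014-style lexical update adding Game of Thrones names.
LEXICAL_UPDATE = {"daenerys", "tyrion", "shae", "jon"}

def misspelled(text, dictionary):
    """Return the tokens that would get a red squiggle."""
    return [t for t in text.lower().split() if t not in dictionary]

sentence = "Daenerys rides a dragon with Tyrion"
print(misspelled(sentence, BASE_DICTIONARY))                   # both names flagged
print(misspelled(sentence, BASE_DICTIONARY | LEXICAL_UPDATE))  # nothing flagged
```

Swap in a different update set and the red lines move accordingly, which is all the Tolkien-versus-Westeros disparity amounts to.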


There’s one person who probably won’t benefit from this discovery: Game of Thrones creator George R.R. Martin. In a 2014 interview with Conan O’Brien, Martin explained that he still uses the late-1980s program WordStar 4.0 when he’s writing. “I hate spell-check,” he told O’Brien at the time. Perhaps Word’s surprising familiarity with Martin’s work is a quiet rebuke to his disdain.

Update, April 24, 1:52 p.m.: A Microsoft representative just sent the following statement:

Glad you noticed that we are not just about common words. We regularly update the spellers to keep them fresh, including additions from the latest, most frequent names from movies, books, and TV shows. To do this, we research what people are talking about, what’s trending in the business world, current affairs, and other popular domains. We can’t add everything that comes up, so we reference different sources and determine which words to include. One of the 2014 lexical updates included the addition of characters from the Game of Thrones. Names relating to the TV show surfaced through several data sources which qualified them to be added. Up until 2014 we updated the English speller quarterly with 12,000 words added last year. Since January 2015 we’ve been updating the English speller on a monthly basis, and are on track to add an additional 32,000 words in 2015.

*Correction, April 24: This article originally misstated that Tyrion Lannister’s lover was named Osha Shae. Her name is simply Shae. Osha is a different character. A graphic in the post also misspelled the name of Chronicles of Prydain character Eilonwy. The graphic has been updated.

April 24 2015 11:28 AM

Do Robots Urinate? And Other Questions Raised by This Google Maps Prank.

There are just so many questions about this Android robot urinating on an Apple logo in Google Maps. Is it accurate to depict a robot peeing? If you want to support Google over Apple, isn't this a weird way to do it, since you're just getting runoff urine all over a Google product? We could go on all day.

First spotted by Cult of Android, the image shows up near the Pakistani city Rawalpindi. It doesn't show up in Satellite View, so that rules out the possibility of a masterful crop circle.


In a statement, Google explains:

The vast majority of users who edit our maps provide great contributions, such as mapping places that have never been mapped before, or adding new business openings or address changes. We’re sorry for this inappropriate user-created content; we’re working to remove it quickly. We also learn from these issues, and we’re constantly improving how we detect, prevent and handle bad edits.

The Guardian points out that a few miles east of the Android/Apple situation someone has added a "Google review policy is crap :(" message. So, yeah, it seems like someone figured out how to game Google's Map Maker feature and then added these images to make a statement. The whole thing is reminiscent of a similar, but less visually impactful, prank from last week in which users got a snowboard shop called "Edward's Snow Den" verified on Google Maps and then changed the address to make it look like Edward Snowden was inside the White House.

Google says it learns from these situations, but the takeaway seems to be that the company needs a better approval system for Map Maker and verified locations. Otherwise things like this will probably, you know, keep happening.



April 24 2015 11:22 AM

Why Don’t Computers Care More About Our Happiness?

This post originally appeared on Wired.

Your computer doesn't care if you’re smiling. Why would it?


Your computer isn’t a person. It’s a computer! A tool. A machine. Computers are logical. They’re rational. They don’t get tired, or sad, or frustrated. They don’t get distracted. They’re oblivious to human happiness. They weren’t programmed to do emotion.

Your computer cares about important, useful things. It cares about your environment, dimming its screen when you turn off the lights. It cares about connecting you to friends and family, lighting up with a buzz whenever they want to get in touch. Increasingly, it cares about context: your next appointment, your next flight, the traffic in your city.

A smile? That’s just obviously not a computer’s concern.

All of which is to say: Smile Suggest, a Chrome extension that uses an open source computer vision library to automatically bookmark and share websites that make you smile—like, actually smile, with your mouth—is obviously a lark. Martin McAllister, the British copywriter who created it, called it “daft” no fewer than three times in our brief conversation, just so there was no confusion. There isn’t. It’s a joke! We don’t control our computers by smiling at them.

Still, McAllister did hazard what I thought was an astute observation about his “daft little B-side.” He began cautiously: “If I can read into it this deeply, Smile Suggest is a slight, flippant way of making a deeper point. When we like or share something, it’s not totally genuine. It’s something we’re putting our name on. A smile is something you do without thinking. You don’t have those layers of cynical thought. It’s just what you like and what’s funny to you.”

To any modern computer user, the idea of having every site that elicits a smile beamed automatically to Facebook is mortifying. But McAllister’s little extension got me wondering: Could a smile be a useful signal for a computer? Might we be able to do something interesting with such a genuine, unfiltered bit of input? Probably. I would like to review every YouTube video that made me laugh in 2012. I’d be delighted if my computer pointed me to a Gchat conversation, long forgotten, that made me crack up in college.

Granted, in a world of presumed total surveillance, it’s upsetting to imagine our computers having access to something as intimate as our unmediated emotions. That’s our last stand against the bureaucrats and the brands, the unquantifiable inner sanctum of self.

But supposing some alternate arrangement in which we could actually trust our devices and the people making them, emotion could be a profoundly powerful principle to design around. The Apple Watch will buzz you with a reminder to stand up if you’ve been sitting too long, perhaps the first time a mainstream consumer electronic device has tried to spur a healthy behavior change by default, right out of the box. Is it inconceivable that someday our gadgets would care about our emotional health in the same way? Smile Suggest is unthreateningly, unambiguously a lark—but what if it didn’t have to be?

A smile is the most basic unit of human happiness, the joy response embedded in our genes, more universal than any language or culture or custom. Shouldn’t that obviously be a computer’s concern?

Your computer already cares about a bunch of dumb things. It cares about your environment, dimming its screen when you turn off the lights, even though its blue light is screwing up your circadian rhythms. It cares about connecting you to the companies whose apps you’ve installed, lighting up with a buzz whenever they want to engage you. Increasingly, it cares about context: your next appointment, your next flight, the traffic in your city—even though, as a high school sophomore in Baltimore, you don’t use a calendar app, aren’t flying anywhere, and are still a decade away from your first rush-hour commute.

Your computer isn’t a person, but as psychological studies have shown, you often can’t help but treat it like one. Nope, your computer is just a dumb tool, a lowly machine. Computers were designed to be logical, rational, like we once thought humans to be, before we knew better. Computers don’t know to help when you’re tired, or sad, or frustrated. They don’t steer you away from distractions. They’re oblivious to your happiness. We don’t design them to do emotion.

Your computer doesn’t care if you’re smiling. It can’t hurt to occasionally ask: Why doesn’t it?


April 23 2015 12:35 PM

Swiss Postal Service Will Start Using Delivery Drones in Pilot Program This Summer

Swiss Post, Switzerland's postal service, wants to make drone mail delivery happen. For real. The service is partnering with delivery drone company Matternet to run a pilot this summer in which Matternet ONE drones will carry pieces of mail weighing up to 2.2 pounds for up to 12 miles.

Matternet ONE drones autonomously follow routes plotted by the company's cloud software. Light packages and other mail will be eligible for delivery through the project. Matternet said in a statement, “The primary aim of this pilot project is a Proof of Concept to clarify the legal framework, consider local conditions and explore the technical and business capabilities of the drones.”


Matternet has tested its drones in Haiti, but the Swiss Post collaboration is an opportunity to work out the logistics and complications of doing drone delivery in a different range of conditions. TechCrunch reports that Matternet has flown more drone hours than any other company. “We are extremely excited to bring Matternet ONE to Switzerland,” said Andreas Raptopoulos, Matternet's CEO.

Between Amazon putting pressure on the FAA to allow drone delivery testing, and the United States Postal Service considering a delivery transportation bid from a drone developer, it seems like the burrito delivery fantasies of a few years ago may actually be realized in the next few years.

April 23 2015 10:36 AM

Nauseous? Take This Virtual Nose And Come Back In The Morning

This post originally appeared in Wired.

In the 1950s, the Navy introduced a simulator that taught pilots to fly helicopters from the comfort of virtual cockpits. They could take off, navigate bumpy air, and land without ever leaving the ground. It was a breakthrough that allowed an increasing number of pilots to train without the risk of crashing. But the sim wasn’t all that comfortable, and a significant number of pilots felt sick as hell while using it.


It wasn’t motion sickness per se, though the symptoms were comparable: dizziness, nausea, sweating, disorientation. Researchers of the day dubbed this physiological phenomenon “simulator sickness,” an early ancestor of the flu-like symptoms some feel after strapping on virtual reality headsets today.

Eliminating simulator sickness is a major interest of the burgeoning VR industry, but so far there hasn’t been a clear answer. Home remedies include drinking alcohol, while companies like Oculus are exploring better positional tracking and improved display resolution. But researchers at Purdue University believe they’ve found a way to reduce the negative physical effects of virtual reality by using something that’s right in front of your face.

“We’ve discovered putting a virtual nose in the scene seems to have a stabilizing effect,” says David Whittinghill, an assistant professor in Purdue University’s Department of Computer Graphics Technology. That’s right, Whittinghill says placing a schnoz in the lower center of a headset’s screen has been shown to reduce the effects of simulator sickness by 13.5 percent.

Simulator sickness is still being studied, but researchers often point to sensory conflict as a primary cause. This theory states that a dissonance between what your eyes see on screen and the kind of motion your body feels can lead to disorientation and feelings of nausea. Say you’re riding a virtual roller coaster. As you creep up the coaster’s first big hill, your eyes will register an upward incline, but your vestibular system—the tubes of liquid in your ears that help gauge your position in the world—remains unchanged. “Our bodies don’t like that,” says Whittinghill.

Whittinghill and his team of students (Bradley Ziegler, James Moore, and Tristan Case) say anecdotal evidence shows a fixed reference point in a frame, like car dashboards and cockpits, tends to reduce feelings of simulator sickness. This got them thinking about the nose as a natural reference point and how it’s conspicuously absent from gogglelike virtual reality headsets.

In the small study, the team had 41 participants use various VR applications (one simulation of walking around a Tuscan villa, another of riding a roller coaster). Half played the games with the virtual nose, the other half played without. Whittinghill found participants with the nose were able to play the Tuscan villa game for 94.2 seconds longer than those playing without, while time played on the roller coaster game increased by 2.2 seconds.

“That’s not enough,” says Whittinghill. But it is a promising start, particularly because the participants playing with the virtual nose didn’t even notice it was there. “It’s a big honking nose,” he says. “It never occurred to us that they wouldn’t perceive it, but they were almost universally baffled about what we were even talking about.” Whittinghill says this is likely a result of “change blindness,” a phenomenon in which our perceptual system learns to ignore objects that we see over and over again. Whittinghill’s theory is that the nose’s proximity to our eyes leads our brains to filter out its presence. “It’s likely to be hitting those same sensory neurons,” he explains. “I’m just guessing the neurons are saying no, this isn’t a real object, I’m going to subtract this from my perception.”

Regardless of whether that explanation is right, it bodes well for game designers who might be leery of sticking a nose in the center of an assiduously crafted world. At the same time, Whittinghill says the findings raise more questions than they answer. Would the results be better if the nose matched the ethnicity of the user? What happens if you change the position or size? Does it have to be a nose at all? If nothing else, the study points to the interesting design challenges involved in developing the new medium.

Eventually, Whittinghill wants to compile enough data to make accurate predictions about how sick a game might make any given player. “I can see people going to a website to answer a few questions about themselves to get some idea of their susceptibility,” he says, adding that it would require information about weight, age, and vision. Think of it as a virtual reality addition to sit alongside the pre-existing content ratings system for video games or a more personalized version of the Samsung Gear’s “comfort rating.” Someday next to “Kids,” “Mature,” and “Adult Only,” we might see something like: Nausea Rating: 7/10. Or: Your Nose Must Be This Big to Ride.


April 22 2015 4:10 PM

United Should Thank, Not Ban, Researcher Who Pointed Out a Major Security Flaw

I’m about to board a United Airlines 747 in Frankfurt, on my way to San Francisco. Last night, the airline sent me an email saying that the flight would be equipped with Wi-Fi. Until last week I’d have been glad for that, as I have a lot of work to do and could use the roughly 11-hour flight to get some of it done. Now I’m wishing United would turn the wireless connection off altogether.

Here’s why: Last week, Chris Roberts, a highly respected security researcher, alerted the world to what sounded like incredibly lax digital security on the carrier’s Wi-Fi–equipped planes. He taunt-tweeted this from a United flight: “Find myself on a 737/800, lets see Box-IFE-ICE-SATCOM, ? Shall we start playing with EICAS messages? “PASS OXYGEN ON” Anyone ? :)” Translation: He basically suggested that he could play with the engine indicators and crew alerts, and might be about to deploy the oxygen masks. (I’ve seen nothing to suggest that he had access to critical flight-control systems.)


The feds, who appear to have Twitter and other public networks under massive real-time surveillance, met his plane in Syracuse and grilled him for hours. FBI agents confiscated his computer gear before letting him go. Then United compounded the response by refusing to let Roberts board a flight to San Francisco for—no kidding—a security conference. In fact, according to news accounts, the airline has banned him.

It’s entirely fair to say that passengers shouldn’t be probing these systems during flight. I’d also call Roberts’ sarcastic tweet somewhat ill-conceived, but that’s in part a reflection on our culture, not just his judgment. America is still in the grip of 9/11 paranoia, and officials in government and companies that worry about terrorism usually seem to make their decisions on the basis of one motivation: “Don't let me be blamed the next time there’s an attack.”

Zero tolerance (or the pretense of it; see the Transportation Security Administration’s “security theater”) hasn’t just led to zero sense of humor. It’s also generated zero common sense. If the FBI overreacted—and I don’t think it did in a major way, except by confiscating Roberts’ gear—the airline’s banning of a researcher who was doing it a favor was way, way over the top. (Another carrier, declining to join United’s freakout, took him to his destination.)

United’s explanation for banning Roberts strikes me as just weird. It told the Washington Post: “Given Mr. Roberts’ claims regarding manipulating aircraft systems, we’ve decided it’s in the best interest of our customers and crew members that he not be allowed to fly United. However, we are confident our flight control systems could not be accessed through techniques he described.”

The illogic of this statement is obvious. If the second sentence is true, then nothing Roberts was doing could harm anyone. So why ban him?

If United and the aviation industry as a whole want to earn customers’ confidence in this situation, they should put Roberts and a bunch of other white-hat hackers on retainer. These very smart folks should be invited to probe at the systems, to help prevent the scenarios described in a new federal Government Accountability Office report, which noted the very real potential for aviation system penetration by bad people.

The airlines probably do some of this already. Smart companies realize they're safer when they look for vulnerabilities instead of hoping that their almost certainly insecure networks can stand up to experts. The industry, increasingly an oligopoly, is making profits now that an improving economy has led to more demand for seats. I hope the carriers will put more of that cash into digital security; aviation is one of many American industries that clearly needs to care more about this. And I hope we’ll inject enough common sense back into our society to stop vilifying security researchers who go public with their concerns, often after being ignored when they try to alert the victims privately.

Meanwhile, United’s uninspiring approach to customer information—such as its insistence that its mileage-account holders still have four-digit passcodes—definitely doesn’t give me the warm fuzzies. When it comes down to deciding whether to have confidence in United's “trust us” statement or Roberts’ reputation for knowing what he’s talking about, I know which way I lean: not toward the airline, despite being a longtime (and generally satisfied) customer. The airline should strike a deal with Roberts, who’s now being represented by the Electronic Frontier Foundation, to resume flying and help make its data security the best in the business.

I’ll be heading to the gate in a few minutes. I won’t twiddle with any systems, I promise. (Not that I have the technical ability to do so in any case.) But I find myself wishing the airline would just turn the Wi-Fi off for the time being. When they talk about being cautious, this is one example I’d endorse.

April 22 2015 2:23 PM

Netizen Report: The Spring of Cybercrime Laws

The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in Internet rights around the world. It originally appears each week on Global Voices Advocacy. Ellery Roberts Biddle, Weiping Li, Hae-in Lim, and Sarah Myers West contributed to this report.


A wave of new cybercrime legislation is sweeping across the globe this April. New laws and draft bills in Egypt, Pakistan, and Tanzania aim to curb a wide variety of online crimes—but they introduce just as many, if not more, risks for the fundamental rights of Internet users.


In Pakistan, the Prevention of Electronic Crimes bill, soon to be tabled at the National Assembly, would dramatically broaden the definition of cybercrime. It would criminalize political criticism and expression online and allow authorities to block any information system (websites included) if deemed necessary to “the interest of glory of the religion, security or defence of Pakistan, friendly relations with foreign states, public order and decency or morality.” The law also sets new data retention requirements for online service providers and provides new capabilities for government agencies to obtain and share user data with other governments, including the United States.

In a biting analysis of the bill’s flaws, Pakistan’s Express Tribune argued that it “threatens almost every internet actor rather than protecting them from cybercrimes.” Nongovernmental organizations—including Human Rights Watch, Article 19, Digital Rights Foundation Pakistan, and Bolo Bhi—issued a statement expressing deep concern about the proposed legislation, and they are circulating an online petition asking legislators to seek public input before voting.

Egypt’s government approved a cybercrime draft law that would codify many of the surveillance and Internet-related “security” practices that have become routine within the current government. According to news site Alaraby, the law would make blasphemy and electronic crimes committed for the purpose of disturbing public order, endangering the safety and security of society, or damaging national unity and social peace punishable with lifetime prison sentences. The bill is currently awaiting executive approval.

Meanwhile, in Southern Africa, Tanzania’s Parliament passed a similarly broad cybercrime law on April 1 that outlaws the publication of “misleading, deceptive or false information,” grants police broad search and seizure powers, and criminalizes the sending of information “without prior solicitation” by electronic means. Activists say the bill was rushed through Parliament and gives too much power to authorities without any meaningful oversight.

Blind faith: China outlaws politically controversial avatars
Screen names and profile pictures are the latest targets of Chinese Internet restrictions. New restrictions criminalize the use of avatars or online identities that “violate existing laws, pose a national security threat or destroy ethnic unity.” A few days before the regulations went into effect, more than 60,000 user accounts were purged from social media platforms, followed by over 7,000 more in the days following implementation of the rules. Strange as it may sound, the rule speaks to the strong tradition among Chinese netizens of using their online identities and profile images to express political opinions. Last fall, for example, many netizens changed their profile images to an umbrella icon, in an expression of solidarity with pro-democracy demonstrations in Hong Kong. For now, it looks like those umbrellas will have to go into hiding.

Indian companies snub Facebook on net neutrality grounds
Indian technology and Internet companies are pulling out of Facebook’s Internet.org initiative. They argue that it threatens the principle of net neutrality, as it only offers users access to certain websites, rather than the Internet at large. Internet.org self-identifies as a project that seeks to narrow the global digital divide by giving “the unconnected majority of the world the power to connect.” When the organization says “connect,” it doesn’t exactly mean “connect to the Internet.” In exchange for absorbing the costs associated with supporting Internet traffic, Facebook partners with telecommunications companies to provide smartphone users with access to a small set of job, health care, news, and education sites, along with Facebook. The company faced similar criticisms from civil society groups throughout Latin America following founder Mark Zuckerberg’s appearance at the Summit of the Americas.

Mass surveillance is the elephant in the Hague
Last week in the Hague, government, private sector, and civil society representatives gathered for the Global Conference on Cyberspace. Despite many conversations about privacy rights, both on stage and in the halls, final outcome documents from the event carried only scant references to the human rights implications of mass surveillance. This came as no surprise to digital rights advocates at the event, several of whom erected a life-size blow-up elephant nearby, a literal representation of the “elephant in the room.”

We see London, we see France. But can we see Google’s algorithm?
Google continues to grapple with formal accusations by the European Union that the search engine has abused its dominance in Web searches, specifically by boosting its own products in its Google Shopping service. The upper house of France’s parliament has taken Europe’s challenge to Google a step further by supporting a draft economy bill that would require Google to reveal the inner workings of its ranking algorithms, a shift that could bring more transparency to the process.

Open-code site GitHub received somewhere between 0 and 249 national security requests last year
In its first transparency report, GitHub indicated that it received 10 subpoenas for user information affecting 40 accounts. It also received between 0 and 249 national security letter requests impacting 0–249 accounts. These ridiculously vague figures reflect GitHub’s compliance with the U.S. government-imposed gag order on national security letters and indicate that it received at least one request for user information based on a national security letter or order from the U.S. Foreign Intelligence Surveillance Court.
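The banding itself is simple arithmetic. Assuming bands of width 250 starting at zero, as the 0–249 figure suggests, the mapping from a true count to the disclosable range looks like this (a sketch of the reporting convention, not GitHub's actual code):

```python
def nsl_band(count, width=250):
    """Map an exact national-security-request count to the
    0-249-style reporting band permitted under the gag rules."""
    low = (count // width) * width  # round down to the band floor
    return (low, low + width - 1)

print(nsl_band(0))    # (0, 249)  -- zero requests is indistinguishable...
print(nsl_band(1))    # (0, 249)  -- ...from one request
print(nsl_band(250))  # (250, 499)
```

Which is exactly why the transparency report can only say "between 0 and 249."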


April 22 2015 10:59 AM

Report: Google’s Cheap Wireless Service Is Launching Any Day Now

Google’s plan to offer a new type of wireless service has got to sound appealing to anyone shelling out hundreds or thousands of dollars a year on mobile phone costs. Details on the project have been scarce so far, but the company confirmed in March that it was moving forward. Now sources tell the Wall Street Journal that the launch is imminent.

The report indicates that the service could debut Wednesday. It seems like it will allow customers to pay for only the mobile data they use. Currently, the big four wireless companies rely on their old voice-and-text networks to make a lot of their money. They charge for unlimited talk and text (or whatever allotment you want) in addition to charging for data by the gigabyte. But if Google can change the equation so that almost everything runs over data networks (the company’s Google Fiber program is well-known for bringing ultra-high-speed Internet to cities across the United States), it could fundamentally change pricing for wireless service.


The Journal reports that Google will use Sprint's and T-Mobile's networks to provide traditional cellular service rather than building out its own. Sundar Pichai, who oversees Android at Google, said in March that the service will operate on a small scale and isn’t intended to take on the whole wireless industry. But this is Google, so that’s kind of hard to believe.

Update, April 22, 2015, 3 p.m.: It’s official. Google’s Project Fi will cost $20 a month for voice, texting, and even international coverage in more than 120 countries. Then for mobile data, each gigabyte customers add to their plans will cost $10. But if people don't use all of the data they buy, they will be credited for the difference on each bill. Google is going to rely on Wi-Fi whenever possible to reduce network costs, but the service can use Sprint and T-Mobile's networks when needed. Project Fi will only be available for Google's Nexus 6 handsets at first.
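Those prices make a monthly bill easy to estimate. A minimal sketch of the arithmetic as announced ($20 base, $10 per gigabyte purchased, unused data credited back at the same rate), ignoring taxes and whatever Google does about overages:

```python
def fi_monthly_bill(gb_purchased, gb_used):
    """Estimate a Project Fi bill from the announced April 2015 pricing:
    $20 base for talk/text/international, $10 per GB purchased, with
    unused data refunded at $10/GB. A sketch, not Google's billing code."""
    BASE, PER_GB = 20.0, 10.0
    credit = max(0.0, gb_purchased - gb_used) * PER_GB  # refund for unused data
    return BASE + gb_purchased * PER_GB - credit

# Buy 3 GB, use only 1.5 GB: pay $20 + $30, then get $15 credited back.
print(fi_monthly_bill(3, 1.5))  # 35.0
```

The effective result is that you pay $20 plus $10 per gigabyte actually used, which is the pricing twist the post describes.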


April 21 2015 6:04 PM

A 3-D–Printed Bottle Clinches a Murder Trial

3-D printing has infinite potential, but it can also be gimmicky. In the case of this British murder trial, though, the technology played a pretty important role.

Lee Dent, 42, was convicted of murdering 17-year-old Alex Peguero Sosa in Kingsbridge, Devon, in southern England in July 2014. Dent stabbed Peguero Sosa in the neck with a broken bottle. And to demonstrate the crime to the court, the Devon and Cornwall Police enlisted a local college, City College Plymouth, to make a 3-D replica of the bottle.


It took 28 hours for the college’s CubeX 3D printer to recreate the murder weapon. “When we were approached by a senior detective who was involved with the murder trial, ... [we] were able to design and produce the weapon using the latest software,” a school spokesperson said in a statement.

It was the first time the college’s 3-D printing equipment was used in this way, and the first time the Devon and Cornwall Police incorporated the technology into an investigation. “This was the first time we had used this technological approach, and the use of it in court helped to fully explain the facts,” said Detective Inspector Ian Ringrose.

Dent will serve a minimum term of 22 years in prison.