Future Tense Newsletter: Drone Parenting Is the New Helicopter Parenting
In the new television series Minority Report—as in the film of the same name—a police force employs “precogs” to prevent crimes before they happen. This week in Future Tense, Patric M. Verrone points out that the show’s own predictions about tomorrow are unlikely to be quite so successful. Sure, some science fiction stories are scarily accurate. But, as I argued in another article, if you really want to know what the future holds, you’d be better off looking at what science-fiction fans are up to.
Here are some of the other stories that had us booting up our 3-D printers this week:
- Aliens: In conversation with Neil deGrasse Tyson, Edward Snowden observed that encryption technologies might be interfering with our ability to contact alien civilizations. In context, it’s a very sensible point. Really!
- Net Neutrality: While men have received much of the credit for efforts to preserve a free and open Internet, Marvin Ammori proposes that overlooked women contributed far more.
- Volkswagen: The VW emissions scandal is just the beginning. Copyright laws and other restrictions make it difficult to detect and prevent corporate criminality.
- Right to be Forgotten: In France, privacy regulators are trying to dictate what does and does not appear in Google search results. The company is pushing back, a struggle that, Mike Godwin insists, couldn’t be more important.
- Parenting: Parents are using drones to pull out children’s loose teeth. This is not a joke: We have video.
Broadcasting from the year 3000,
for Future Tense
Will Apple’s Control-Freakery Turn Personal Computers Into Big iPhones?
When Apple introduced its latest tablet computer earlier this month in San Francisco, CEO Tim Cook called the iPad Pro—a large-screen tablet with a detachable keyboard—the “clearest expression of our vision of the future of personal computing.” The general reaction to this, once people stopped tweaking Apple for reinventing the Microsoft Surface, was applause.
Not from me. Cook's assertion reminded me of my declaration in a blog post five years ago, that the MacBook Air I'd just purchased was probably my last Mac. Apple, I said, was becoming more and more control-freakish in how it allowed customers to use the hardware they'd purchased. It seemed clear that the company intended to move its personal computers “into a more iPad/iPhone-like ecosystem, where Apple gives you permission to use the computers you buy in only the ways Apple considers appropriate.”
Is Apple planning to make all its personal computers iOS devices at some point? That's unclear, but this would not be a positive move for user freedom. More than any other major computing platform, iOS limits customer choices to those Apple deems appropriate—in large part by forcing software developers to get permission before selling, or even giving away, the apps that run on the platform. On the Mac, by contrast, users can obtain and install the software they choose, without Daddy Apple's permission.
Apple's control of the iOS ecosystem does offer convenience. Want to get an app for your device? Just log onto the App Store and you can find all kinds of things—and, Apple claims, it carefully vets everything to prevent malware from taking over your phone or tablet (or, soon, tabletlike personal computer).
But Apple's legion of fans might want to take note of several other bits of recent Apple news relating to the iOS ecosystem.
The first is a major hack that led to malware being distributed through the App Store. Software developers unwittingly downloaded and used development tools that had been modified, so when they uploaded their apps to Apple, the apps were infected. Neither they nor Apple caught the hack until some number—it's unclear how many—of users had installed the malware-laden apps, including versions of several hugely popular ones such as WeChat, on their devices.
The Android ecosystem has its own serious insecurities, but there are several app portals besides Google's. Apple, by contrast, has made itself into what security experts call a “single point of failure”—a system no one can avoid using, so whatever goes wrong there can ripple through the entire ecosystem. (Example: A massive outage on Amazon's web-services platform this week created an “outage spiral” for some of its customers, and their customers.)
Apple's tight grip on iOS has another worrisome element: an app approval process that goes way, way beyond security and into the content of the app itself. Apple is asserting that it has the right, and the duty, to prevent its customers from seeing things that Apple, in its sole judgment, considers offensive or otherwise objectionable. Under its vague guidelines, Apple decides if something is too icky to be sold (or even given away), and app developers have no recourse.
This has led more than once to content controls that are offensive to free speech itself. In the most recent case, journalist Dan Archer found himself stymied by the Cupertino content cops when he tried to ship an app that combined virtual reality with politics. As you'll see if you read his post, Apple was at best opaque in its reasons for not permitting the app to go into the store. It wasn't the first time Apple has done something like this, incidentally.
“Either Apple and other platform developers need to be far more transparent in their adjudication process, or they need to give rejected apps more concrete feedback,” Archer wrote. I'd go further. They should not be the world's moral police. If they think someone might be offended by something in their story, assuming it's not illegal in the first place (and very, very little speech is illegal), they should set aside an area for people who want to check out material that others might find deeply offensive. If we end up allowing free expression to depend on Apple's (and Facebook's and Google's, for that matter) terms of service, we're in trouble.
Tech hardware companies are steering us toward a world where the people who buy things don't really own them. Apple has moved harder, and faster, toward this future than the rest. If, as Tim Cook seemed to be suggesting, the future of the Mac is the iOS ecosystem, then people who use Macs should be factoring that into their thinking. Today, you can load whatever software you want on your Mac. Tomorrow, Apple may decide otherwise. This is progress?
Syria Conflict Forces First Withdrawal From Doomsday Arctic Seed Vault
This week, for the first time, scientists in the Middle East have accessed a seed bank designed for the apocalypse.
The Syrian war, which a study earlier this year suggested was sparked in part by a massive drought made worse by climate change, has fueled a horrific refugee crisis. Now, experts fear an important collection of seeds may have also been lost. Even if it remains unharmed, the scientists in charge of the collection say there’s no way to safely access it.
The seed bank was located at the headquarters of the International Center for Agricultural Research in Dry Areas in Aleppo, Syria. The center’s scientists relocated to Beirut after rebels began occupying the site in 2012. But they left behind an important collection of drought-resistant seeds in cold storage. Duplicates of about 87 percent of those seeds were sent to a special Arctic seed vault in Svalbard, Norway, before 2012, once it became obvious that the war posed a threat to the center. But the other 13 percent could become unrecoverable.
The seeds the scientists want back from the Arctic storage facility are specifically attuned to produce high yields of wheat, barley, and other staple crops in especially dry areas and are critical for the region’s future habitability as global warming intensifies. By jeopardizing the Syrian seed bank, fighters may have made the region more vulnerable to climate change in the long run. This is exactly what the Pentagon means when it says climate change is a “threat multiplier.”
"I don't think they're getting back in to Syria any time soon, so they're going to re-establish that gene bank center now," Cary Fowler, founder of the Svalbard Global Seed Vault, told Australia's ABC News.
The Syrian scientists need to restock their new site with the seeds previously sent to the Arctic for safekeeping—and that’s exactly how the process is supposed to work. When disaster strikes, the Arctic seed vault forms a sort of planetary insurance policy. The Norwegian newspaper VG first reported the withdrawal under the headline “historic day for the seed vault in Svalbard.”
The Arctic vault itself was designed to be disaster-proof but has recently struggled with a lack of funding that’s called its ambitious mandate—securing the world’s food supply, forever—into question. Scientists at smaller seed banks around the world, like the one in Syria, must regularly plant stored seeds and harvest new ones to maintain the collection. If they fail, for whatever reason, those varieties could be lost forever.
According to Reuters, the Arctic seed vault has “more than 860,000 samples, from almost all nations. Even if the power were to fail, the vault would stay frozen and sealed for at least 200 years.”
Drone Crash Injures 11-Month-Old Baby During Princess Bride Screening
On Sept. 12, an errant DJI Inspire 1 drone dropped from the skies and crashed to the ground during a public screening of The Princess Bride in Pasadena, California. The ensuing debris hit an 11-month-old girl in the head, and though the child escaped serious injury, the outcome could have been far worse. The incident was but the latest in a series of well-publicized drone crashes, and the Federal Aviation Administration, with good reason, is starting to become “concerned with the growing number of reports about unsafe [drone] operations,” an FAA representative told Ars Technica. While my sense is that the vast majority of drones are used safely and responsibly, whenever one isn’t, the incident seems to make the news—and that’s a big problem. Drones will never transform the American economy in the way that industry boosters predict as long as they’re primarily known for falling from the sky and hitting infants on the head.
The FAA has announced that it will investigate the Pasadena incident. But I can already predict what its investigation will probably find. When a drone veers off course and fails midflight, operator error is often to blame. The FAA does not require hobbyists to obtain a pilot’s license or take a safety class before launching a drone. This regulatory lacuna means that many novice flyers are content to learn by doing, which is alarming if what you’re doing is operating a tiny helicopter that could fall and hit you on the head. Take this Gizmodo post from last year, which features a video of a man realizing that his drone’s battery is about to die midflight and frantically trying to save it before it falls into a lake. Most drone systems go out of their way to notify the user when their batteries are running low—but all the notifications in the world are useless if the user isn’t paying attention. Sometimes the user error isn’t as obvious as ignoring a flashing “Low Battery” signal. Drones are delicate instruments, and neglecting to keep them clean and well-maintained can increase the risk of component failure.
Anecdotal evidence indicates that many drone hobbyists are blissfully unaware of basic flight safety protocols or of their drones’ operational limitations. For example: The commercial drones used by hobbyists aren’t like military drones, whose operators can be halfway around the world from the drone itself. The website for the DJI Inspire 1 drone says that the device’s remote controller has a range of 2 kilometers, assuming unobstructed outdoor operations. What this means is that it’s very important to keep your drone within your field of vision at all times; when a drone goes out of sight, it might also go out of range. A city cop told the Pasadena Star-News that the drone operator in the Princess Bride incident lost visual contact with his DJI Inspire 1 drone and that the drone subsequently lost its signal.
But the blame doesn’t always redound to operator error. The lawyer for the high school science teacher accused of sending a drone careening into a U.S. Open tennis match earlier this month has claimed that his client’s drone “went haywire,” and he might well be telling the truth. Drone-enthusiast forums are filled with stories of random flyaways, with the devices crashing or disappearing through no apparent fault of the operator, and users speculate on problems ranging from software/firmware failures to GPS glitches. “My bebop flew away itself already 2 times for two weeks,” one sad poster wrote of his Parrot Bebop drone in March 2015. “Unfortunately i realize i will lose bebop soon it's just a matter of time.”
If the FAA really wants to minimize unsafe drone operations, it needs to devise a framework for accountability that involves dronemakers and operators alike. Just as a drone hobbyist should be penalized for reckless flying, manufacturers must be discouraged from marketing devices that tend to go haywire. As the agency considers this issue, it shouldn’t limit its solutions to punitive measures, either—but whatever it does, it needs to act pretty soon.
This article is part of a Future Tense series on the future of drones and is part of a larger project, supported by a grant from Omidyar Network and Humanity United, that includes a drone primer from New America.
Selfies Are Killing More People Than Shark Attacks
Last week a 66-year-old Japanese man traveling in India fell down a staircase while attempting to take a selfie and died. It's increasingly obvious that people choose bad moments to take selfies, so now the conversation is turning to comparisons. And apparently selfies have become more dangerous than a lot of already scary-sounding things.
Take shark attacks. As of Sept. 4, the International Shark Attack File at the Florida Museum of Natural History had recorded six fatal shark attacks worldwide in 2015, according to National Geographic. Meanwhile, a Reuters article published Sept. 3 described dozens of 2015 selfie deaths and injuries in Russia alone. The country has had to launch public safety campaigns to warn people about the potential dangers of snapping a pic. One poster says, "A cool selfie could cost you your life."
In August a man trying to take a selfie was gored to death during a running of the bulls in Villaseca de la Sagra, Spain. And there have been numerous bison attacks following attempted selfies in Yellowstone National Park.
Some groups have been trying to get on top of the wave. In June Disney banned selfie sticks in its amusement parks. And foreseeing the selfie crisis in a very specific way, New York State passed a bill in June 2014 to prohibit people from having their photo taken (or taking it themselves) while "hugging, patting or otherwise touching tigers." There has already been one snake selfie-related injury in 2015.
“We’ve actually seen people using selfie sticks to try and get as close to the bears as possible, sometimes within 10 feet of wild bears. … The current situation is not conducive for the safety of our visitors or the well-being of the wildlife,” Denver Water's manager of recreation, Brandon Ransom, said earlier this month about bears at the Waterton Canyon hiking area.
Given the classic advice "never turn your back on the ocean," you'd think it would go without saying that the same caution applies to plenty of other dangerous things, too. But apparently not!
France’s Privacy Regulators Want to Dictate What You (Yes, You!) Can Find Online
Whether you're an American sitting at a laptop in Hawaii or a Japanese citizen using your smartphone in Kyoto, French privacy regulators believe they have the authority to block search results you otherwise might receive on Google.com or Google.co.jp.
It's unlikely that even Louis XIV thought French regulatory authority should stretch so far. On Monday France’s data-privacy agency ordered Google to delist certain links (that is, remove them from search results) everywhere it operates and in every service it offers. The French regulator CNIL, for Commission nationale de l'informatique et des libertés, rejected Google's appeal of an earlier commission order that the search giant remove all links to the names of anyone who requests to have them removed under French law.
The decision has potentially disastrous consequences for the Internet we have grown to love—a platform that, because it's administered by standard technologies and protocols, makes it possible for anyone on the globe with Internet access to peek into the publicly available information everywhere else.
CNIL says it is merely applying the language of a May 2014 European Court of Justice decision that vindicated a Spanish lawyer's so-called right to be forgotten. But in practical terms, the agency is extending that precedent far beyond the language of the ECJ decision.
Google's response to the earlier decision was to consider each demand on a case-by-case basis and delist links if the demands are likely lawful, but only on Google's Europe-facing services—not Google.com. Google's running transparency report on "European privacy requests for search removals" reveals that, as of Monday, there were nearly 67,000 French requests for link removal aimed at more than 219,000 Web pages. (France leads the world in demanding “right to be forgotten” takedowns.) Nearly half those URLs have been removed.
The CNIL says that isn't enough:
Google received several tens of thousands of requests from French citizens. It delisted some results on the European extensions of the search engine (.fr; .es; .co.uk; etc.). However, it has not proceeded with delisting on other geographical extensions or on google.com, which any Internet user may alternatively visit.
Anyone who takes the time to figure out some basics of how the Internet works can find out how to circumvent territorially oriented rules about content, based on things like country-level domains or Internet-protocol addresses. As a result, the CNIL has concluded, the only acceptable outcome is for the French and EU rules to apply everywhere in the world—even, in theory, on Google sites serving users in languages like Japanese or Tagalog. Or, of course, English.
But the questions of what Google and other search engines should censor run deeper than a nation's (or a politico-economic union’s) privacy laws. For instance, as David Jordan of the BBC has pointed out, there's the question of preserving history:
Since the advent of Google our news reports are now just a click away for anyone with a computer, as the Spanish man who brought the ECJ case found. Our online news is far more accessible today than the newspaper archives of libraries. But in principle there is no difference between them: both are historical records. Fundamentally it is in the public interest to retain them intact.
The BBC has made a point of listing the URLs it removes in response to right-to-be-forgotten demands. "We are doing this primarily as a contribution to public policy," BBC Managing Editor Neil McIntosh wrote.
Then there's the larger guarantee of freedom of inquiry. The "right to be forgotten" (which is not, in fact, a "right to privacy," but instead a right to limit access to already-public information like news reports) is a new idea whose outer bounds are not yet established. It's heartening that a recent article in the European Data Protection Law Review shows Dutch courts may be more willing to balance freedom-of-expression interests against right-to-be-forgotten demands. Summarizing one decision, the authors write:
The Court says two fundamental rights are at stake. Firstly, the [plaintiff's] right to privacy as protected by the European Convention on Human Rights. … Secondly, Google's right to “freedom of information” (as the Court calls the right to receive and impart information) … [as] protected by the Convention and the Dutch Constitution. … The Court adds that the interests of Internet users, webmasters, and authors of online information should be taken into account as well.
The Dutch court decision relies on the European Convention and the Dutch Constitution, but the global human-rights framework that underlies and informs both, and that protects the rights of people in other nations as well, is the Universal Declaration of Human Rights. That document declares that everyone has the right not merely to "freedom of opinion and expression," but also the freedom "to seek, receive and impart information and ideas through any media and regardless of frontiers." Other human rights documents add provisions for protecting "reputation or rights of others" (typically against false factual statements rather than true ones), and it's generally understood in free societies that these reputational rights don't normally limit the protections for freedom of inquiry.
But does that freedom extend to search engines and their users? It should. Accurate records of the facts aren't nearly so useful to any of us if they are made artificially harder to find. The freedom to seek and to impart information doesn’t mean much online if Web search tools are constrained by broad, vague, inconsistent (national governments being as different from one another as they are), and unpredictable demands for erasure of facts.
Long before the Internet, of course, we had a vision of what it means when it's easy to alter history, or to hide it. In Nineteen Eighty-Four, George Orwell's protagonist, Winston Smith, had the job of rewriting old newspaper articles to reflect his totalitarian government's current ideological views. Reading the novel as a 10-year-old, I often wondered what the point of this job was, since (as I thought then) hardly anyone looks up old newspaper articles. Of course, today that's something all of us do online by reflex and with the help of Internet search tools.
The point, of course, is that the Internet has expanded our expectations of what freedom of inquiry means. So why should we let an overreaching government—whether it's French or American or anyone else's—take that away?
The Women Who Won Net Neutrality
Earlier this month, Politico Magazine listed me among the top 50 “thinkers, doers and visionaries transforming American politics” for my work in coalitions advancing net neutrality—the principle that cable and phone companies should not block websites or create online slow lanes and paid fast lanes. Over the course of a year—from January 2014 to March 2015—millions of Americans, hundreds of businesses, and dozens of policymakers weighed in at the Federal Communications Commission in favor of net neutrality. Despite the overwhelming political might of the cable and phone companies that opposed the principle, and despite a prevailing conventional wisdom all last year that it would be “impossible” to beat them, the FCC sided with the public and adopted extremely strong net neutrality rules that should be a global model for Internet freedom. On Monday, dozens of academics, nonprofits, and companies filed legal briefs in court defending that important order.
Because the victory at the FCC is so important for economic policy and was so shocking a political victory, many news organizations have profiled those responsible. Over the past months, in addition to me, many men have received credit—including Federal Communications Commission Chairman Tom Wheeler, President Barack Obama, HBO host John Oliver, and Tumblr CEO David Karp. While these men (and others, especially in the nonprofit community) played critical roles, none deserves more credit than the frequently overlooked women who helped lead the fight. Even if we guys managed to hog the credit afterward, a disproportionate number of women in the public interest, tech, and government communities had the guts and brains to lead the public to victory. They canceled annual vacations, worked around the clock, didn’t see friends and family as often as anyone would want—and ran a brilliant campaign. They should be recognized.
Here are some of the women who worked to preserve the free and open Internet. (Many of them are current or former colleagues: I previously worked on staff at Free Press, was a fellow at New America, am on the boards of Fight for the Future and Engine Advocacy, and have done legal work for Google, Tumblr, and others.)
Barbara van Schewick, a Stanford law school professor who also has a Ph.D. in computer science and expertise in the economics of innovation, had a bigger impact than anyone realizes. She wrote the book on net neutrality and some of the most important articles on the topic. While teaching a full load at Stanford, she flew to Washington almost monthly and had more than 150 meetings at Congress, the FCC, and the White House. No one individual met more often with the White House or FCC on the issue, according to public records. The FCC’s decision (and footnotes) reflect her work. She had a bigger impact than entire institutions. She is not a normal human, but thankfully she’s on the public’s side.
Another top academic, Susan Crawford at Harvard, has led a public debate for more competition and investment among Internet providers, more government- and community-owned Internet networks, and has called for an open Internet to preserve the First Amendment interests of all Internet users. (The good news is that Susan joined scholar Tim Wu and me on the Politico list.)
Alongside Chairman Tom Wheeler, FCC Commissioners Jessica Rosenworcel and Mignon Clyburn cast two of the three votes for network neutrality. At the start of the public process, last May, each clearly signaled her willingness to support strong net neutrality rules. Clyburn publicly championed strong mobile rules, and Rosenworcel argued for a full and open public process. Further, Wheeler’s team included the brilliant Gigi Sohn, who made sure he met with business and civic leaders outside of D.C., and Stephanie Weiner, a top FCC lawyer who drilled in on every aspect of the legal analysis to make sure the order has its best shot of being upheld in court.
The civil rights community
The fights for media justice and racial justice have been intertwined since the 1960s civil rights movement. During the last year’s net neutrality fight, leaders like Jessica Gonzalez of the National Hispanic Media Coalition, Malkia Cyril at the Center for Media Justice, and Brandi Collins at Color of Change explained to policymakers and the public why disadvantaged groups needed an open Internet—to tell their stories, to organize, and to build political movements such as Black Lives Matter. Gonzalez, for example, testified before Congress and was part of a New York Times op-doc video conveying these points.
Congress and the White House
Despite political opposition from the powerful lobbyists at the cable and phone companies, some women across government stuck their necks out for the public. Rep. Nancy Pelosi lent her powerful voice in favor of strong net neutrality rules, as did Sens. Elizabeth Warren and Barbara Boxer. At the White House, the nation’s chief technology officer, Megan Smith, ensured that President Obama met directly with engineers who invented the Internet and World Wide Web. Hillary Clinton, a longtime network neutrality supporter, spoke out in favor of the FCC’s plan in the critical days before its adoption.
The tech community
The New York startup community defended net neutrality with visits to Washington, op-eds, legal filings, and joining activists in a day of mass action driving 300,000 calls to Congress. Althea Erickson led Etsy’s engagement, speaking for Etsy sellers, 88 percent of whom are women. The creative Liba Rubenstein led Tumblr’s policy team; the aggressive lawyer Michal Rosenn and savvy communications head Julie Wood led Kickstarter’s legal and communications engagement; Jessica Casano-Antonellis and Andrea Allen supported Vimeo’s efforts to spread the word with videos and user education. They worked with Engine Advocacy’s Julie Samuels, Liz Simon of General Assembly, New York Tech Meetup’s Jessica Lawrence, Shana Glenzer of the D.C. Tech Meetup, the Internet Freedom Business Alliance’s Lauren Culbertson, the Computer and Communications Industry Association’s Cathy Sloan, and Comptel’s Angie Kronenberg. Over at the larger companies, Sheryl Sandberg at Facebook filed to support network neutrality, and Johanna Shelton, Susan Molinari, and Rachel Whetstone led Google’s thinking and advocacy on the topic.
The Washington public interest community
Sarah Morris, a senior lawyer at New America’s Open Technology Institute, was a key thinker and organizer—filing legal comments, testifying before the FCC, coordinating various coalitions, and interfacing with Congress. (Disclosure: New America is a partner with Slate and Arizona State University in Future Tense.) At Free Press, a media reform organization, Sandy Fulton and others persuaded congressional staff and the public. Valarie Kaur of Groundswell and Cheryl Leanza of the United Church of Christ organized communities of faith. Nonprofit foundation leaders encouraged a coordinated strategy, including Helen Brunner, Yolanda Hippensteele, and Amber French of the Media Democracy Fund, who leaned partly on Jennifer Calloway of Spitfire Strategies.*
The FCC received almost 4 million comments (and 1 million signatures) and Congress 2 million emails and hundreds of thousands of calls. Someone had to organize all of that on behalf of average Americans. Those people include Tiffiniy Cheng, the creatively brilliant co-founder of digital rights group Fight for the Future; Evan Greer, that organization’s relentless campaign director and press genius; Becky Bond, the fearless political director of Credo Action, the activism arm of the phone company Credo Mobile; Rachel Colyer of Daily Kos, the liberal blogging platform; Candace Clement at Free Press; Stephanie Taylor at Progressive Change Campaign Committee; and Margaret Flowers of Popular Resistance, who drew sustained attention to the cause through actions like “Occupy FCC.”
These women saved the Internet and transformed politics. They deserve the respect and gratitude of all of us who care about free speech in the 21st century.
*Correction, Sept. 22, 2015: This post originally misspelled Jennifer Calloway's last name.
When Anti-Virus Software Is Really Spyware
The terms of service seem straightforward at first. AVG says that it wants its products to act as reliable cybersecurity tools, and the company collects some user data so it can offer things like customer support and promotions. But it also aggregates data "to make money from our free offerings so that we can continue to offer them for free." The document explains, "We use data that does not identify you, called non-personal data, for lots of purposes, including to improve our products and services and to help keep our free offerings free."
AVG seems to want things like search queries, anonymized location data, and browsing history, but don't worry! "You can be assured that we protect the information we collect." We've all heard that before. An AVG representative told Wired:
Those users who do not want us to use non-personal data in this way will be able to turn it off, without any decrease in the functionality our apps will provide. ... While AVG has not utilised data models to date, we may, in the future, provided that it is anonymous, non-personal data, and we are confident that our users have sufficient information and control to make an informed choice.
The situation is concerning, though, because customers looking for a cybersecurity solution may not in fact receive "sufficient information" to understand that a product marketed to help them protect their privacy might also be surveilling them. Essentially, the very product that protects people from adware, spyware, and malware might itself be exactly that. And your anti-virus probably isn't going to alert you about itself. This is one reason that cybersecurity professionals are often skeptical of anti-virus products.
AVG claims to have more than 200 million active users, so this is potentially a significant point. The situation is reminiscent of what happened when Lenovo pre-installed Superfish adware on millions of PCs. People trust that laptops come out of the box clean and only acquire malicious software later—they're not thinking about what the device maker itself might be implanting.
Of the Superfish incident, David Auerbach wrote on Slate in February that Lenovo "betrayed its customers and sold out their security." At least AVG is being more upfront about what it might do with user data, but that doesn't mean the business model isn't creepy.
Would You Trust a Flimsy Rope Bridge Built by Drones?
Are unmanned aerial vehicles the future of flimsy rope bridges spun out over deep and terrifying chasms? I am happy to report that the answer is “probably.” Quartz reports that researchers from the Institute for Dynamic Systems and Control at the Eidgenössische Technische Hochschule Zurich (ETH Zurich) in Switzerland have released a video that shows three autonomous quadcopters equipped with spools of rope working together to construct a rudimentary 24-foot bridge between two scaffolds. The final product is sturdy enough to support a man’s weight but minimal enough to scare the crap out of you if you try to use it. Basically, this video is exactly what all the Indiana Jones movies would have looked like if Indy were into robots instead of whips.
It’s probably worth noting that the bridge was built in one of the most controlled environments imaginable: the Flying Machine Arena, a specially built drone testing ground housed at ETH Zurich that eliminates the sort of unpredictable interruptions a drone might face in the real world. (The ETH Zurich researchers have also built a portable version of the arena.) The arena boasts an intricate motion-capture system that gathers data on a given drone’s position and trajectory, processes the data in real time, and then tells that drone where to go next. But a system that works well in a lab might not work quite as well out in unmoderated space beset with wind and rain and birds and stray yo-yos and other aerial hazards. It’s easy for drones to build a bridge in a space that has been built specifically for drones to build bridges in. It remains to be seen how well they would work under suboptimal conditions.
To me, the most interesting thing about this video isn’t the rope bridge itself, but its implications for how drones might eventually be used for building purposes. I don’t think that drones will ever replace human builders—for one thing, there’s a limit to what a typical drone can carry, and for now that limit is basically “a spool of rope”—but they might well replace some of the ground-bound robots that are already used in construction today. (You don’t need to erect a scaffold or rent a crane to get a drone up to the 10th floor of a building under construction, for example.) Anyway, however this line of research plays out, I do think it’s nice to remember that drones can be used to do more than just spy on you, take your job, or hit you on the head. Sometimes, they can be used to do cool things with ropes. The future is now!
This article is part of a Future Tense series on the future of drones and is part of a larger project, supported by a grant from Omidyar Network and Humanity United, that includes a drone primer from New America.
Apple Really Is Building an Electric Car
Corroborating what some have called the worst-kept secret in Silicon Valley, the Wall Street Journal reported Monday that Apple is not only building an electric car but aims to begin selling it as soon as 2019.
It has been rumored for months that Apple was getting into the car business, but details have been sketchy. They’re getting less so.
The company has internally designated the car as a “committed project” and will soon have 1,800 employees working on it, the WSJ reports, citing unnamed sources. The WSJ previously reported in February that Apple was working on a “minivan-like vehicle” as part of a secret project code-named Titan. The company has been on a spree of high-profile automotive hirings, including former Ford executive Steve Zadesky, who is reportedly helming the project.
While Apple has hired people with expertise in self-driving car technology, its first car will probably not be fully autonomous, the WSJ notes.
An electric car would put Apple in direct competition with Tesla, whose CEO Elon Musk acknowledged more than a year ago that he had held discussions with Apple.
By 2019, however, the field might be rather crowded. Tesla plans to release its Model X luxury SUV on Sept. 29, and its first mass-market sedan, the Model 3, is slated for 2017. Nissan, Chevrolet, BMW, and Ford all have notable electric cars on the market, and competitors such as Audi, Mercedes, and Porsche, among others, are working feverishly to challenge them.
That said, an Apple car would probably not have much trouble attracting attention even in a saturated marketplace. Tesla’s success with the Model S, which is America’s best-selling plug-in electric car, provides a blueprint for success: Forget about building 10 different models with 10 different trim lines. Just build one or two iconic vehicles that stand out from the pack, and sell them at a premium. It’s a strategy that has served Apple itself quite well over the years, albeit in a different industry.
Whether Apple’s expertise in building sleek little computers will translate to automotive success remains to be seen. We can probably count on Cupertino for sharp design, clever software touches, and an intuitive driver experience. But can we count on it for safety and reliability? Let’s just say its cars are going to have to last a little longer than its phones.
Regardless, it will be fascinating to watch the world’s richest company plunge into a sector so massive, complex, storied, and capital-intensive. The Apple Car (iCar?) could be Tim Cook’s crowning achievement—or his Waterloo.
Previously in Slate: Apple's Next Big Project Could Be an Electric Car