Cheetah Robot Is Now Wireless and Gallivanting on MIT’s Campus
Have you ever seen a cheetah roaming around a university campus? Because Cheetah 2 is on the move at MIT. It may be a robot, not a wild animal, but it recently stepped out of the lab for the first time ever and started running and jumping around.
The Cheetah 2, which MIT researchers work on through funding from DARPA, can run at 10 mph and can jump more than a foot in the air. By letting it go free rather than just testing it in the lab, researchers are evaluating its performance in real-world environments and working on improving its design. Breakthroughs in the cheetah’s development could be applicable to other autonomous robots or things like prosthetics.
It may not be an animal rights issue to make sure the cheetah has enough outdoor time, but perhaps it’s a robot rights concern?
Microsoft Is Buying Minecraft for $2.5 Billion, but Its Founder Is Leaving
Microsoft and Mojang, the studio that makes Minecraft, have been in talks about an acquisition deal for a while. And now Microsoft is officially shelling out for the game maker to the tune of $2.5 billion. That’s a lot of blocks.
Phil Spencer, the head of Xbox, said in a statement, “‘Minecraft’ is one of the most popular franchises of all time. We are going to maintain ‘Minecraft’ and its community in all the ways people love today, with a commitment to nurture and grow it long into the future.”
Mojang also released a statement attempting to reassure Minecraft users that Microsoft isn’t going to immediately ruin everything they love about the game. But the wording, including phrases like “everything is going to be OK,” suggests the decision was difficult, and that the “Mojangstas” may be uncertain about it internally.
Clearly no one is more conflicted than Minecraft inventor Markus Persson, aka Notch. Mojang’s statement explains:
As you might already know, Notch is the creator of Minecraft and the majority shareholder at Mojang. He’s decided that he doesn’t want the responsibility of owning a company of such global significance. Over the past few years he’s made attempts to work on smaller projects, but the pressure of owning Minecraft became too much for him to handle. The only option was to sell Mojang. He’ll continue to do cool stuff though. Don’t worry about that.
On his personal website, Notch explains in detail how the success of Minecraft has been difficult for him on a personal level. He writes that it’s been “interesting” to gain fame and recognition for something he created, but that it has also created pressure and responsibility in his life that he doesn’t want. He writes:
Minecraft certainly became a huge hit, and people are telling me it’s changed games. I never meant for it to do either. ... As soon as this deal is finalized, I will leave Mojang and go back to doing Ludum Dares and small web experiments. If I ever accidentally make something that seems to gain traction, I’ll probably abandon it immediately.
It’s sort of reminiscent of Dong Nguyen buckling under the pressure of Flappy Bird’s fame, though Notch says in his post that he has wanted to leave Mojang for a long time. Luckily, Minecraft is much bigger than Notch at this point. Hopefully that also means Microsoft won’t be able to demolish it.
Twenty-One Months’ Worth of Rain in Two Days for Baja California
An incredible amount of rain is hitting Mexico’s Baja California peninsula thanks to yet another rare hurricane passing through—the second one in the desert in barely more than a week.
The latest in a string of exceptionally powerful storms—Hurricane Odile—made landfall late Sunday night near Cabo San Lucas on Mexico’s Baja Peninsula with 125 mph sustained winds.
The National Hurricane Center warns that Odile could bring isolated rain totals of 18 inches to an area that typically gets only about 10 inches per year. (That’s like New York City getting seven and a half feet of rain.) It could remake the local geography and turn the desert into a temporary Niagara. Since comprehensive weather records began in 1950, only one other hurricane in that part of the world has been this strong at landfall, 1967’s Hurricane Olivia.
On its approach to Mexico this weekend, Odile looked ferocious from space.
El Niño-fueled warm water is turning the Pacific Ocean into a hurricane factory this year. A combined index of hurricane strength and longevity off the coast of Mexico is at 151 percent of average in 2014. Odile comes just days after a surge of moisture from Hurricane Norbert brought Phoenix its rainiest day in history. Late last month, Hurricane Marie, one of the strongest Pacific hurricanes ever measured, made for an epic day of surfing in Southern California.
Odile, too, will affect the Arizona desert this week, bringing another round of tropical moisture at off-the-charts levels.
On Sunday night, National Weather Service meteorologists from Las Vegas, Tucson, Phoenix, and Flagstaff organized an impromptu Twitter chat that spurred hundreds of comments from people nervous about the potential for back-to-back major flood events.
As I mentioned last week, deluges like these do comparatively little to blunt the effects of ongoing drought in the Southwest, since most of the rain finds its way into gullies and riverbeds. As the climate warms, more rainfall is expected from heavy downpours, further taxing local attempts at flood management.
And just like Norbert, most of the rains from Odile will miss the U.S. state of California, where drought lingers at levels not seen in hundreds of years.
We Need to Pass Legislation on Artificial Intelligence Early and Often
Not that long ago, Google announced something unheard of in the auto industry—at least in the part of the auto industry that makes moving cars: a car without a steering wheel or gas and brake pedals. To Google, this was the next step in self-driving cars. Why bother with a steering wheel if the driver isn’t driving? Some observers questioned whether this feature in the proposed test vehicle violated the autonomous vehicle statute in California (where the vehicle would be tested), which required that the driver take control of the self-driving vehicle if the autonomous system malfunctioned. Google claimed that it installed an on/off button, which satisfied the California law.
California recently weighed in: Google, you’re wrong. The state has released regulations requiring that a test driver be able to take “active physical control” of the car, meaning with a steering wheel and brakes.
To this I say—good for you, California.
This is only the most recent example of wonderfully swift governmental response to autonomous technology and artificial intelligence. At every level of government—local, state, federal, and international—we are seeing rules, regulations, laws, and ordinances that address this developing technology actively discussed, debated, and passed. Four states have passed legislation governing autonomous cars, and they’re not even on the market yet. The FAA is drafting regulations to address drones, even though their use is relatively limited; there is even case law on the books addressing drone regulation. States and towns are weighing in on drones, too. At least one urban planner is actively developing zoning ordinance provisions that would establish fly and no-fly zones for drones in cities. Internationally, the United Nations has begun to weigh in on how military drones fit into established international legal norms.
If you look at the details, this activity is not targeted so much at pure government regulation like registration, permitting, etc. (although that is certainly present). Legislators and regulators are targeting people and their welfare, in much the same way that historic legislation responding to the Industrial Revolution targeted people and labor. The state legislation governing autonomous cars is very concerned with how people will use those vehicles, their safety, and their best interests. The FAA’s drone regulations will be largely (although not exclusively) concerned with the safety of people: in the sky, on the ground, in their homes. In contrast, states and towns are concerned with another element of people's welfare: privacy. They are pushing for tighter restrictions on drone usage by police forces, frequently requiring warrants, as Margot E. Kaminski recently advocated in Future Tense. Some states, like Indiana and Oregon, have gone so far as to try to prohibit drone usage altogether. Efforts from the United Nations are focused exclusively on saving lives and preventing killings by military drones. Many recognize that it is easier for governments to order killings and attacks when their own human soldiers aren't on the ground, separating the decision makers from the true cost of their decisions.
These laws, these regulations, these debates—this is exactly what we should be doing. Economists and historians traditionally claim that the technological advances of the Industrial Revolution led to the creation of a large middle class in the United States. That’s only partly true. The technology certainly made that middle class possible, but the legal innovations that we created following the Industrial Revolution made possible the widespread prosperity of the mid-20th century American middle class: minimum wage laws, child labor laws, laws protecting unions, regulations governing workplace safety and environmental protection, etc. All of these laws tried to help average Americans benefit from the new system that the Industrial Revolution introduced.
But those laws took 100 years. We don’t get that much time anymore. The Internet and automation in factories (i.e., robots) were the last two major technological advances that altered our economy and labor market. Automation decimated blue-collar workers beginning in the 1960s, and the Internet hit small businesses hard starting in the 1990s. One of the big reasons those technological advances were so hard on working- and middle-class workers is that we never adequately addressed them with legal changes, as we did following the Industrial Revolution. But we also had much less time. There were 30 years between robots in factories and Amazon on the Internet, and there have been about 20 years between the introduction of the Internet and the introduction of AI and autonomous devices.
In addition to its self-driving cars, Google is developing a secret fleet of self-flying delivery drones, according to a recent Atlantic article by Alexis Madrigal. The FAA’s regulations aren’t due until 2015—and those will probably be late—but I hope the drafters are looking ahead at the drones of the future and not just what’s available today.
This technology is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now. There are legitimate concerns about how AI and autonomous technology will affect the workforce and our quality of life. There were similar concerns in the face of the Industrial Revolution, factory automation, and the Internet. If we want to make sure that the benefits of these technological advances are widely shared among all people, we need to legislate early and often. Otherwise, the problems experienced by working Americans and the middle class over the last several decades will only get worse.
How Do You Say “Crack Down on Google” in German?
There is, in Germany, a law under which companies must provide a way for customers to communicate with them. And so, on Friday, a German court ruled that Google.de can no longer respond to customer service emails with an automated response saying that Google gets so many emails that it will not answer, or even bother to read, yours.*
The decision is but a small part of a larger German pushback against Google. Earlier in the week, Germany, along with France, put pressure on the EU antitrust authorities not to settle with Google over its search practices.
The deal, which EU Competition Commissioner Joaquín Almunia supported after its announcement back in February, would have allowed Google to avoid $6 million worth of fines by displaying the services of its rivals comparably to how it displays its own in search results. Instead, a variety of EU politicians—including the German economic minister—have urged that the EU seek “fresh concessions” from Google in the fourth round of such talks. German media associations and publishing houses were particularly vocal about their discontent with the proposed settlement. Axel Springer, for example, publishes Germany’s largest-circulation newspaper, Bild, and owns, among other websites, a shopping comparison site that has been adversely affected by Google.
For obvious reasons, Germans are sensitive to issues of privacy and overreach, creating a serious trust issue for Google. In an open letter, Matthias Doepfner, chief executive of Axel Springer, wrote, "Google knows more about every digitally active citizen than George Orwell dared to imagine in his wildest dreams in 1984."
But neither matters of trust nor antitrust are likely to be resolved soon in Germany for Google.
*Correction, Sept. 12, 2014: This post originally misstated that Google.de will no longer be allowed to send automated responses claiming Google gets so many emails “it will answer, or even bother to read, yours.” The automated response is “it will not answer, or even bother to read, yours.”
Silicon Valley Has Officially Run Out of Ideas
Every year the tech blog TechCrunch holds a competition for tech startups at its New York and San Francisco “TechCrunch Disrupt” conferences and crowns a winner at the end.
Over the years, the competition has proven to be something less than a fountainhead of world-changing ideas. Past champions include a conference-calling app, a car-sharing service that is not Uber or Lyft, and a half-hearted “Second Life” ripoff that hasn’t been heard from since 2012.
At this year’s San Francisco conference, however, the blog has outdone itself, anointing as winner a startup so frivolous and asinine that it makes its lackluster predecessors look like Hewlett-Packard and Fairchild Semiconductor by comparison. It’s a Boston-based venture called Alfred Club, and as far as I can tell the idea is basically “Uber for servants.”
Alfred, explains TechCrunch writer Sarah Perez, who must have drawn the short straw in the office pool, is “the first service layer on the shared economy that manages your routine across multiple on-demand and local services (like Handybook, Instacart, and the local dry cleaner).”
I was unable to find a precise English translation for that sentence, but I have to agree with ValleyWag’s Nitasha Tiku: It sounds an awful lot like a butler service. More from TechCrunch’s Perez:
When you first sign up for the service, you’ll be assigned an “Alfred.” The app shows you this person’s picture and some general information, as well as the verification for the person’s background checks. You’ll then choose a specific day for this person to deliver your goods each week, and you’ll compile a grocery list to get them started. …
Afterward, your “Alfred” will head over weekly to drop off your clean laundry, put it in the closet, drop off your household supplies, and replace supplies as needed—like putting new paper towels on the towel holder, for example. He or she will also put your groceries away and make sure the house is spotless.
Jargon aside, what exactly differentiates “an Alfred” from a plain old personal servant is not entirely clear. Perhaps it’s the fact that “Alfreds” apparently come unburdened by individual names of their own. Which is nice, because real names can be hard to remember and disconcertingly humanizing, like when you go to a fancy restaurant and they serve your fish with the head still on.
Perhaps the most innovative thing about Alfreds is that they cost just $99 a month. At that price, they’re sure to disrupt the antiquated business models of conventional, non-“shared-economy” servants, whose bloated cost structures include such extravagances as job security, a stable income, shelter, and sometimes even basic health care.
It seems the co-founders came up with the idea for a class they were taking at Harvard Business School. No doubt Clay Christensen would be proud.
Alfred’s big win is doubly impressive when you consider that TechCrunch founder Michael Arrington, who just happens to be an investor in Alfred, didn’t even participate in the final judging process this year, according to an ethical disclosure in Perez’s article.
In his absence, Alfred’s plucky Harvard grads had the difficult task of winning over a highly diverse panel of six other fabulously wealthy tech luminaries, at least a few of whom may have had no direct financial stake in it. That it prevailed in the face of such odds is a testament to ~~unbridled capitalism~~ the sharing economy, ~~systemic inequality~~ the Silicon Valley meritocracy, and the enduring power of ~~the whims of a handful of self-regarding billionaires~~ great ideas.
What Happens When You Let Machines Edit All of Your Videos?
Most videos on the Internet should be shorter, but most people don’t know how to edit video. This problem haunts otherwise worthy Web videos of all kinds (even duckling rescues aren’t immune), and the onslaught of amateur video from Facebook and sophisticated new apps is poised to make it worse.
Plenty of developers have engineered solutions in “automatic” video editors that ingest footage and package it more or less on their own. Google offers auto-editing features like music and automated scene selection for videos uploaded to Google Plus, and services like Magisto rely on an algorithm that uses footage and user choices to create slick, ready-made edits. Videolicious has made it into newsrooms at the Washington Post, enabling reporters to assemble quick videos from a variety of photos and rough footage.
This summer, Instagram users have embraced Hyperlapse, and that may signal a marquee moment for this form: The app builds ready-to-share time lapses from video shot on an iPhone, stabilizes the footage for unsteady hands, and produces short movies with a “cinematic feel,” as Instagram likes to put it. Early uses are not always revelatory—dancing clouds from airplane windows are the new dewy sunsets—but the technology is showing how users are responding to the automated video editors of the future.
But these services still rely on people to make at least some of the decisions in the editing process, especially in adding sounds and effects that give the automated videos their proto-professional feel. What if they cut out the human editor altogether? What if videos were edited in real time, even as someone was shooting them? That notion animates Bin Zhao, a graduate student in machine learning at Carnegie Mellon University, who began demonstrating a system called LiveLight this summer with CMU professor Eric Xing. The technology relies on an algorithm that “watches” a video only once, building what Zhao called a “visual dictionary” to analyze the footage. The process relies on visuals, not sound, using movement and novel cues to build an instant summary of the footage. Here’s a rough demonstration:
LiveLight technology is in the early stages compared with some of its competitors, making it feel like another tool to “cut the boring parts” out of long videos for now. Zhao told me, for instance, that he’s already testing its obvious ability to detect any suspicious movements in an otherwise static security feed.
But the long game is more interesting. LiveLight could take hours of footage from Google Glass, GoPros, smartwatches, and other wearable devices, then process it in almost real time, giving a raw summary of a just-completed event. “If we deploy this app onto Google Glass, by the time the video is finished recording—maybe within 30 seconds or one minute—the video will be waiting to view,” Zhao said. He imagines a fascinating if eerie future where LiveLight can aid in a kind of selective self-surveillance: “The system will pull out the most interesting things that happened that day—a video summary, a video diary, a video journal of what has happened.” It’s easy to see the potential for digesting a day of travel or meetings, not to mention applications for police departments.
Zhao gave me the keys to try LiveLight out, and the tool is now decidedly scrappier than these larger ambitions will require. An hour of video can take well more than an hour to process at this point because of limited backend power. Nevertheless, I tried some rudimentary raw footage, and it worked as advertised, editing it based on new scenes and sudden movements. I’d singled out similar moments in the video I uploaded.
At a colleague's suggestion, and with Zhao’s reluctant permission, I also trolled the system a little bit, feeding it more complicated (and nonamateur) video to see how it would grapple with complex footage. I was pleasantly surprised. I tried it on ClickHole’s parody viral video “This Stick Of Butter Is Left Out At Room Temperature; You Won't Believe What Happens Next,” featuring three full hours of butter slowly melting in a dish. LiveLight was unfazed, spitting out a short video summary and this charming, undeniably accurate GIF:
I also gave LiveLight several other videos, from short footage to feature-length movies, and the canniest result I got was for Stanley Kubrick’s 2001: A Space Odyssey. The movie seemed an obvious choice to pit against an all-knowing algorithm, and LiveLight came up with a creepy, not-half-bad summary of the movie’s visual arc:
Perhaps grudgingly, LiveLight does allow humans to tinker with its edits, offering a ranked list of important moments that can be used to add cut sequences back into the final product. The team also plans to develop the product for near-term consumer and security applications (which explains why Carnegie Mellon’s research is supported by groups as varied as Google and the Air Force Office of Scientific Research). But Zhao said the prospect of automatic, indexed visual libraries of our life events is the technology’s most alluring possibility. At its most basic, he said, the tool is “a way to be able to organize people’s videos. Right now they are like the dark matter of the Internet.”
That may be true, but LiveLight suggests automated video editors may not just create and organize “social movies” for us—they may mean trusting an algorithm with how we process the memories themselves. Even tools as simple as Hyperlapse are creating a narrative. A personal, all-day video feed may give us pause for obvious reasons, but perhaps the spookier implication is that a machine would be helping us decide what to remember—and what to forget.
Sotomayor Concerned About Drones and Privacy, Says You Should Be Too
Justice Sonia Sotomayor told law students and faculty at Oklahoma City University on Thursday that Americans should be feeling very concerned about the potential for drones to compromise personal privacy.
According to the Wall Street Journal, she said she thinks that as drones become more ubiquitous, they will encroach on physical spaces that have traditionally been respected as private. And she emphasized that citizens should channel their concern into more active involvement in privacy debates nationwide.
There are drones flying over the air randomly that are recording everything that's happening on what we consider our private property. That type of technology has to stimulate us to think about what is it that we cherish in privacy and how far we want to protect it and from whom.
Sotomayor pointed out that while many current drone privacy discussions center on government surveillance, invasion of privacy by any group—including corporations or other private citizens—can be just as problematic, especially since available technology gives drone operators not only eyes, in the form of video feeds, but also ears, through advanced audio surveillance focused on an area of interest.
She said, “We are in that brave new world, and we are capable of being in that Orwellian world, too.”
Law professor Margot E. Kaminski recently wrote in Future Tense that states should create drone privacy legislation now, rather than waiting for courts to weigh in. It seems that Sotomayor may agree there.
Cyberspace Looks Pretty Menacing in This Japanese Music Video From 1989
If you’ve always had a hunch that cyberspace is a lot like Freddy Krueger’s basement, then you’d be right! At least according to this 1989 music video by Japanese pop star Anri. The song is “Groove A Go Go” (rediscovered by io9) but the theme is all the scary things lurking in the digital world.
The woman in the video does some awesomely clicky typing on an old-skool keyboard (that doesn't have labels on the keys) and then puts on some sleek sunglasses that transport her into the cyber world. Then as she’s casually waking up on the floor of cyberspace, she notices that she’s being pursued by an evil avatar. We’ve all been there, right?
The digital world is industrial, steamy, and generally filled with pipes. It’s kind of set up as a circuit diagram that the video protagonist navigates as she searches for a place to hang out. Eventually she finds her friends in a club and dances with a guy who turns out to be the evil avatar. But this time she hits “return” and defeats him! Stand up for yourself, ladies.
If nothing else, “Groove A Go Go” definitively shows the influence of smooth jazz on the Internet.
The Earth Is About to Be Bombarded by Two Solar Storms
Two coronal mass ejections—eruptions from the sun that send a cloud of charged particles into space—are hurtling toward Earth right now. As they enter the Earth’s magnetic field, they could create magnetic storms that interfere with radio transmissions, GPS, and even power grids. But scientists from the National Oceanic and Atmospheric Administration say the risk is low. This time.
When the CMEs encounter the Earth’s magnetic field, they stretch and distort it as they pass. The field releases energy as it recovers, which is what causes auroras and radio communications problems. For more, watch Slate’s video explainer:
On Monday night, a minor solar flare gave off the first magnetic cloud. Then on Wednesday afternoon, a major flare from the same sunspot region gave off a bigger CME. Thomas Berger, the director of the Space Weather Prediction Center at NOAA, said that both are aimed at the Earth. The first burst is forecast to hit Earth sometime Thursday night, and the second should arrive around midday on Friday.
“On a geomagnetic storm scale of 1 to 5, we’re currently expecting the CME impacts to cause G2, moderate, to G3, strong, geomagnetic storming,” Berger said. But there’s good news: “[The] effects are expected to be manageable and not cause any major disruption to power transmission. The grid operators have been notified … and FEMA has been notified of these events as well just in case.”
NOAA uses observation of the sun and mathematical modeling to predict when magnetic storms will hit. And about an hour or so before the storms hit, they pass NASA’s ACE satellite, which is upstream from Earth and therefore encounters CMEs first. At that point, NOAA can obtain specific measurements about attributes like the direction of the cloud’s magnetic field, which plays a role in how intense the magnetic storms are when the CMEs interact with the Earth’s magnetic field.
Sarah Gibson, a senior scientist at the National Center for Atmospheric Research, says it’s significant that the CMEs are coming one after the other. “Our simulations are starting to show us that if you have one of these coronal mass ejections and then another one not long afterwards, [the first] is like a snowplow making a path and it makes it possible for the second one to become especially fast.”
Berger said that this confluence of events could possibly cause storms as intense as G4 (out of 5), especially at the Earth’s poles, but he emphasized that NOAA is not forecasting a doomsday scenario. “We don’t expect any unmanageable impacts to national infrastructure from these solar events at this time, but we are watching these events closely,” he said. “More pleasantly, we do expect these storm levels to cause significant auroral displays across much of the northern U.S. on Friday night. With clear skies currently forecast for much of these regions this could be a good opportunity for auroral sightings.”