The second test was more intriguing, and more worrisome for Google. The group wanted to see what would happen if eBay stopped buying ads next to results for normal keyword searches that didn’t include brand names, like “banana slicer,” “shoes,” or “electric guitar.” In the name of science, the company then randomly shut down its Google ads in some geographic regions and left them running in others. The results here were a bit more nuanced. On average, the ads didn’t seem to have much impact at all on frequent eBay users—they still made it to the site. But they did seem to lure a few new customers.
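The geo-holdout design eBay used can be sketched in a few lines of Python. Everything below is a made-up stand-in (the region names, the sales figures, the size of the effect), not eBay's data; it only illustrates the shape of the analysis: randomize regions into ads-on and ads-off groups, then compare outcomes.

```python
import random
import statistics

random.seed(0)

# Hypothetical geo-holdout: each region is randomly assigned to keep
# its search ads running ("ads") or have them paused ("holdout").
regions = [f"region_{i}" for i in range(40)]
assignment = {r: random.choice(["ads", "holdout"]) for r in regions}

# Stand-in sales figures per region; a real analysis would use
# revenue observed during the test window.
sales = {
    r: random.gauss(100.0, 10.0) + (3.0 if assignment[r] == "ads" else 0.0)
    for r in regions
}

ads_sales = [sales[r] for r in regions if assignment[r] == "ads"]
holdout_sales = [sales[r] for r in regions if assignment[r] == "holdout"]

# Incremental effect of the ads = difference in region-level means.
lift = statistics.mean(ads_sales) - statistics.mean(holdout_sales)
print(f"estimated lift per region: {lift:.2f}")
```

Because assignment is random, the difference in means is an unbiased estimate of what the ads added, which is exactly what correlational dashboards cannot give you.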
“For people who’ve never used eBay or never heard of eBay, those ads were extremely profitable,” Tadelis said. “The problem was that for every one of those guys, there were dozens of guys who were going to eBay anyway.” Add it all up, and eBay’s search ads turned out to be a money-loser.
Tadelis and his team believe the results would probably be much the same for most well-known brands. But smaller companies could be a different story: customers might never find them without the search ads, and they probably wouldn’t rank as high in organic search results.
Overall, the eBay paper isn’t great news for Google. But it also confirms some of the promise of online advertising. Even if many analytics companies don’t use the gold standard—a randomized controlled experiment to figure out whether their clients are getting their money’s worth—it is at least theoretically possible to show that the ads work.
It isn’t easy, of course. In 2013, Randall Lewis of Google and Justin Rao of Microsoft released the paper “On the Near Impossibility of Measuring the Returns to Advertising.” In it, they analyzed the results of 25 different field experiments involving digital ad campaigns, most of which reached more than 1 million unique viewers. The gist: Consumer behavior is so erratic that even in a giant, careful trial, it’s devilishly difficult to arrive at a useful conclusion about whether advertisements work.
For example, when the researchers calculated the return on investment for each ad campaign, the median standard error was a massive 51 percent. In other words, even if the analysis suggested an ad buy delivered a 50 percent return, it was possible that the company actually lost money. You couldn’t say for sure. “As an advertiser, the data are stacked against you,” the researchers concluded. That bodes poorly for your typical marketing schmo trying to glean meaning from a Google analytics page—all he can do is try to stack enough data to overcome his statistical problems.
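The arithmetic behind that caveat is easy to check directly. The figures below simply restate the example above (a 50 percent estimated return with a 51-point standard error); they are illustrative, not drawn from the paper's raw data:

```python
# Illustrative numbers: a 50% estimated ROI with a standard error of
# 51 percentage points, as in the example above.
roi_estimate = 0.50
standard_error = 0.51

# Approximate 95% confidence interval: estimate +/- 1.96 * SE.
low = roi_estimate - 1.96 * standard_error
high = roi_estimate + 1.96 * standard_error
print(f"95% CI for ROI: [{low:.2f}, {high:.2f}]")
# The interval runs from about -0.50 to +1.50, so despite the rosy
# point estimate, a net loss cannot be ruled out.
```

When the confidence interval straddles zero this widely, the "50 percent return" headline number tells you almost nothing on its own.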
Still, in an email, Lewis told me that he believes “online ads absolutely work.” Or, at least, they can work. For instance, Lewis and his Google colleague David Reiley have written papers showing that display ads on Yahoo led to more customers making purchases in stores. The problem, Lewis argues, is that most analytics firms aren’t scientific enough about measuring profitability. Because they don’t run real experiments, he thinks most end up conflating correlation and causation.
In our interview, Tadelis put his advice to advertisers a little more bluntly. “If you were comfortable for the past 100 years, if you were comfortable pissing in the wind and hoping it goes in the right direction, don’t kid yourself now by looking at this data.”
I imagine Mel Karmazin would approve.