Book Blitz

What’s With All the “National Best Sellers”?

How so many books get to the top of the charts.


Note: Slate initially posted a version of this column before it had been fully edited. A correction at the end of the article details errors that appeared in the original.

Walk into a bookstore, and it can seem as though every book is billed as a “national best seller.” It’s not hard to explain why: There are numerous best-seller lists on which to base the claim, and the lists don’t always anoint the same books. On Thursday, for example, six of the most prominent top-10 fiction lists included 22 different titles. How do these best-seller lists work? And why don’t they all list the same books?

There hasn’t always been such an abundance of lists. According to Michael Korda’s Making the List, the first best-seller list in America began in 1895 as a monthly column in a now defunct literary magazine called The Bookman. The oldest continuously published list was introduced in 1912 by Publishers Weekly, and the New York Times Book Review began publishing its list as a regular weekly feature in 1942. Now there are more than 40 best-seller lists that report how well books are doing either nationally or in various segments of the market—in particular regions, at certain chain stores, at independent bookstores nationwide. However, no list-maker tracks every book sold in the country; even the national lists draw on a sample of actual sales data from booksellers and use it to extrapolate total national sales.
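The mechanics of that extrapolation are simple in principle. Purely as an illustration (no list-maker discloses its actual formula, and the titles, figures, and market-share assumption below are all invented), a list compiler might scale each title’s sampled sales by the share of the national market the sample is believed to cover, then rank by the estimate:

```python
# Hypothetical sketch of how a best-seller list might be compiled: scale
# each title's sampled sales up by the estimated share of the market the
# sample covers, then rank by estimated units. All numbers are invented;
# no list-maker publishes its real methodology.

SAMPLE_MARKET_SHARE = 0.25  # assume surveyed stores cover ~25% of national sales

sampled_sales = {           # units reported by surveyed stores this week
    "Title A": 4_700,
    "Title B": 3_900,
    "Title C": 5_200,
}

estimated_national = {
    title: round(units / SAMPLE_MARKET_SHARE)
    for title, units in sampled_sales.items()
}

# Rank by estimated units sold (not dollars), highest first
for rank, (title, units) in enumerate(
    sorted(estimated_national.items(), key=lambda kv: kv[1], reverse=True),
    start=1,
):
    print(f"{rank}. {title}: ~{units:,} copies")
```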

The industry bellwether is the New York Times list, due to the prominence of the newspaper and the scope of its sales survey. Each week, the Times receives sales reports from almost 4,000 bookstores, along with reports from wholesalers that sell to 50,000 other retailers, including gift shops, department stores, newsstands, and supermarkets. John Wright, the assistant to the best-sellers editor at the Times, said the paper does not reveal the precise methodology by which its list is compiled. But we do know that the rankings are based on unit, not dollar, figures and account for sales during a Sunday-to-Saturday week.

To report sales to the Times, booksellers use a form provided by Times editors. The form lists titles the editors think are likely to sell well, though there is space below for writing in additional titles. This practice of preselecting titles has been controversial: Some critics (particularly independent-bookstore owners and small publishers) believe the Times form makes it harder for quiet, word-of-mouth hits to make the Times list. Whatever the cause, the Times list certainly makes “mistakes”: A recent study of best sellers by Alan Sorensen, an assistant professor at the Stanford Graduate School of Business, found 109 hardcover fiction books that did not make the Times list in 2001 and 2002 but sold better than some that did.

Other best-seller lists draw on smaller survey samples. The San Francisco Chronicle and Los Angeles Times (among other newspapers) publish lists that rank sales in their regions. Barnes & Noble and Amazon calculate best-seller lists based exclusively on their own sales—the Barnes & Noble list includes transactions both on the company’s Web site and at its more than 870 stores. The American Booksellers Association’s “Book Sense” list surveys only independent bookstores, compiling data from about 460 of the estimated 2,000 independent bookstores in the United States. (And Library Journal, a trade magazine, even launched a list recently that is not based on sales at all; instead, it ranks the books “most borrowed” from libraries.) Best-seller lists generally separate hardcover and paperback books and also parse by category: fiction, nonfiction, advice, children’s, etc. There are exceptions: Barnes & Noble, for example, lumps together hardcover and paperback sales while retaining the distinction between fiction and nonfiction, and USA Today runs a single, unified list of the top 150.

Since the many lists represent different pieces of the total book-sales pie—and even those representing the same slice use different samples—there can be some startling divergences among the rankings. (Some lists are also faster than others to record sales, which can exaggerate these differences.) For example, on Wednesday, Skinny Dip by Carl Hiaasen ranked No. 10 among the ABA’s independents but came in at only No. 121 on USA Today’s list. The No. 7 book on Amazon, War Trash by Ha Jin, was not in Barnes & Noble’s top 25, while the No. 3 book on Barnes & Noble, Anita Shreve’s Light on Snow, was only No. 22 on Amazon—and neither book had yet made it onto the Times or Publishers Weekly lists.

Best-seller lists indicate how a book is selling relative to other books in a given geographical area or niche of the market, but they don’t reveal how many copies a book has sold or how much money consumers have spent on a given title. Movie buffs can log on to the Internet Movie Database and find out how much money a film took in at the box office the week before. But readers who love The Da Vinci Code by Dan Brown, which just had a long run at No. 1 on almost every fiction best-seller list, have no way to tell from the rankings whether it is selling 1,000 copies a week or 1 million, or how much money it has made.

In part, definitive figures on book sales don’t appear in best-seller lists because timely, authoritative data can be hard to come by, even for publishers. The larger companies, such as Holtzbrinck (FSG, Holt, Picador, St. Martin’s) or Bertelsmann (Random House, Knopf, Doubleday), have increasingly sophisticated in-house systems that update sales data for their own titles on a weekly or daily basis, based on figures that sales reps get from book retailers across the country. These systems have blind spots, however: Airports and supermarkets, for example, are slower to report point-of-sale data, so it can sometimes take two to three months for publishers to obtain sales numbers from those venues. And publishing-house bean counters must also contend with the book world’s peculiar return policy, which allows retailers to send any books they cannot sell back to the publisher—for a full refund. Only when a book is bought and retained by the customer does it count as a sale for the publisher. As a result, a publisher’s sales figures are always provisional pending a costly adjustment for returns—and returns can be huge, sometimes amounting to between 40 percent and 50 percent of books shipped.
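A back-of-the-envelope sketch, with invented numbers, shows why shipped figures are such a poor proxy for sales:

```python
# Hypothetical illustration of returns accounting: a "sale" counts only
# once a copy is bought and kept by a customer, so shipped units are an
# upper bound, not a sales figure. The shipment size is invented; the
# return rate reflects the 40-50 percent range cited above.

copies_shipped = 100_000
return_rate = 0.45          # returns can run 40-50% of books shipped

copies_returned = int(copies_shipped * return_rate)
net_sales = copies_shipped - copies_returned

print(f"Shipped:  {copies_shipped:,}")
print(f"Returned: {copies_returned:,}")
print(f"Net sold: {net_sales:,}")   # 55,000 -- the only figure that counts as sales
```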

So how many books do you actually need to sell to make it onto, say, the Times list? There is no defined threshold, but according to the Stanford study, one book made the hardcover fiction list while selling only 2,108 copies a week; more typically, the median weekly sales figure for books in the study was 18,717. And most books can’t keep up even these modest sales rates for long: Sales generally peak during a book’s second week on the list and then steadily decline. Over a period of six months, the median best seller in the Stanford study averaged weekly sales of just over 3,600 copies.
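To see how those numbers can fit together, consider a toy decay curve. The ramp-up factor and weekly decay rate below are invented parameters, chosen so the six-month average lands near the study’s figure; only the 18,717 median comes from the study, and it is used here, loosely, as the peak week:

```python
# Toy model of a best seller's trajectory: a ramp-up week, a peak in week
# two, then a steady percentage decline. The 0.6 ramp factor and 22% weekly
# decay are invented; only the 18,717 median weekly figure is from the
# Stanford study.

peak_week_sales = 18_717
weekly_decay = 0.78          # assume sales fall ~22% per week after the peak
weeks = 26                   # roughly six months

sales = [peak_week_sales * 0.6]                                          # week 1
sales += [peak_week_sales * weekly_decay**w for w in range(weeks - 1)]   # weeks 2-26

print(f"Peak week:       {max(sales):,.0f} copies")
print(f"26-week average: {sum(sales) / weeks:,.0f} copies")  # close to the study's ~3,600
```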

Incidentally, the Stanford study would not have been possible even five years ago—the professor who conducted it would have had trouble obtaining accurate data. But in 2001, a Dutch company called VNU introduced Nielsen BookScan, which reports industry-wide sales figures and is available only by subscription. Like the national best-seller lists, BookScan relies on sampling—in its case, about 4,500 retailers—but it reveals hard data on unit sales for individual titles. The Washington Post bases its rankings on BookScan data, but Nielsen requires that the Post keep unit-sales figures out of the paper.

Of course, the absence of definitive sales data on best-seller lists generally suits publishers just fine since the uncomfortable truth is that sales of most books, even those doing relatively well, are pretty low. Publishers like to promote books as “national best sellers” in part because the term creates a sense of momentum and critical consensus that the phrase “over 25,000 copies sold”—which would actually be a pretty good figure for literary fiction sales in hardcover—does not.

Clearly, publishers can’t call any old book a “national best seller,” but there are no industry-wide rules about how well books must perform to earn the “national” part of the claim. A ranking on any of the lists with a country-wide survey (such as Amazon or USA Today) clearly suffices, but publishers have varying policies on how many regional lists a book must appear on (and how widely dispersed those lists must be) before the title can be promoted as a “national” hit. What makes these distinctions important is that there’s more than just a gold “best seller” sticker at stake: Many bookstores discount whatever is on the best-seller list—often the Times list—and display the books prominently, which can further accelerate sales. At this point, a book’s reputation can snowball: Anxious buyers get the sense that they should read a book because other people are reading it. The Stanford study offers the first quantification of the sales benefit of making the Times hardcover fiction list: While it has no discernible impact for famous authors like Danielle Steel or John Grisham, it boosts sales by an average of 57 percent for first-time authors who make the list.

Correction, Oct. 14, 2004: A version of this piece originally published on June 17, 2004, contained the following errors:

The column inaccurately suggested that publishers deliberately “suppress” information about book sales, withholding “hard data” from the editors who draw up best-seller lists. The column should have noted that the editors of best-seller lists, concerned that publishers might inflate their sales figures, have long preferred to gather data from independent sources that do not have a stake in the success of a given title.

The column inaccurately reported that “most” authors “cannot get access to hard data” about sales of their own books and “have to trust they’re not being cheated out of royalties.” In fact, though authors may have trouble getting timely access to such data, publishers have a contractual obligation to send authors a statement detailing book sales and royalties within six to nine months of a book’s release.

The column stated, imprecisely, that visitors to the Web site IMDb.com could “find out exactly how much a film took in at the box office.” The statement was not intended as an assessment of the accuracy of those figures.

The original column included a paragraph on authors who try “to game the system”—to manipulate the best-seller list by buying their own books in bulk—and incompletely recounted the case of David Vise, a writer who was suspected of trying to do so in 2002. The original column stated that Vise purchased 20,000 copies of his book The Bureau and the Mole and “was only caught when he tried to return 17,500 of them.” Both figures were among those reported at the time, but documents Vise provided to the Washington Post showed that he purchased 18,468 books and returned 9,678 of them, and the number of books he bought and sold was never definitively established. The column also noted that Vise’s actions were “widely seen” as an effort to win a plum spot on the best-seller list without detailing who saw them that way, and its acknowledgment that Vise denied the charges was inappropriately dismissive.