The correct number for air-traffic-control errors in fiscal year 2009, Brown said, was 947. Which would mean (using the FAA's own figures) that air controllers' error rate didn't increase by 51 percent, but rather by, uh … 99 percent.
But wait! Comparing 2009's 947 with 2010's 1,889 really wasn't legitimate, Brown said, because Halsey was wrong when he said the error totals didn't include self-reported errors. Or rather, he wasn't entirely right. It was true that there were two separate databases, one for self-reported errors and the other for the official tally of errors. But often the same error would get reported to both databases. How often? Brown couldn't say. But there was a huge increase in category C (i.e., least dangerous) errors between fiscal year 2009 and fiscal year 2010. Self-reported errors are likeliest to be C errors, because it's difficult to make a really scary Jerry Lewis-Nutty Professor-type error and not have anyone else notice. There were 618 C errors in fiscal 2009 and 1,444 C errors in fiscal 2010. If all 826 of the additional C errors in 2010 were attributable to self-reporting, and therefore excluded from the 1,889 total, there would still be an apples-to-apples increase in errors. But it would be a far more modest 12 percent.
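For readers who want to check that arithmetic, here is a quick sketch in Python using only the figures quoted above (the variable names are mine, not the FAA's):

```python
# FAA fiscal-year figures as quoted above.
errors_2009 = 947    # total errors, fiscal 2009
errors_2010 = 1889   # total errors, fiscal 2010
c_2009 = 618         # category C (least dangerous) errors, fiscal 2009
c_2010 = 1444        # category C errors, fiscal 2010

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Raw year-over-year increase: roughly 99 percent, not 51.
print(round(pct_change(errors_2009, errors_2010)))

# The extra C errors in 2010 that might all be self-reported.
extra_c = c_2010 - c_2009  # 826

# If every one of those 826 were excluded from the 2010 total,
# the apples-to-apples increase shrinks to about 12 percent.
print(round(pct_change(errors_2009, errors_2010 - extra_c)))
```

Running it prints 99 and then 12, matching the figures in the text.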
The hitch is that we don't really know how many of the 826 C errors were self-reported. It's actually pretty doubtful that all 826 were self-reported, because the new self-reporting system was phased in gradually through fiscal year 2010.
Brown also gave me the missing numbers for category A and B (i.e., the scarier) air-controller errors for fiscal year 2009. They added up to 329. That means (I later calculated) that A and B errors dropped by about 26 percent between 2009 and 2010. But that good news is tempered by an increase in errors in the absolute scariest category, A. These rose from 37 to 44, i.e., by 19 percent.
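The category A figure is the only one stated individually above (the text gives the A-and-B totals only as a combined 329 for 2009), so only that last percentage can be rechecked here, again in a few lines of Python:

```python
a_2009 = 37  # category A (scariest) errors, fiscal 2009
a_2010 = 44  # category A errors, fiscal 2010

# Year-over-year increase in category A errors.
increase = (a_2010 - a_2009) / a_2009 * 100
print(round(increase))  # about 19 percent, as stated
```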
I asked Brown why the FAA didn't answer the Post story by removing the duplicates from the regular database so we wouldn't have to guess what the apples-to-apples trend was for all categories of air-traffic-control errors from year to year. "I understand what you're saying," she replied, "but that's not how we keep the data." But, I protested, you have both databases. You surely have the means to identify every error that gets reported to the FAA—where it happened, what it was. They number fewer than 2,000 per year!
Brown answered by telling me there was another complicating factor—something called the "terminal area reporting system." That's an automated error-reporting system that tattles on air traffic controllers who commit errors. It's been used for years on planes at high altitudes, but during the past two years it's been phased in for planes operating at low altitudes. The phase-in messes up the kind of comparison I want to make, and so does the fact that at low altitudes the automated error-reporting is switched on at some times and not switched on at other times.
Another factor, Brown said, is how many planes are flying one year versus the previous year. Since the recession, the trend has been downward. Which means, I answered, that there are more errors by air traffic controllers even as they have less air traffic to keep track of? Isn't that kind of … bad?
Please remember, Brown said, that the FAA's goal is to constantly improve its system for catching errors.
I'm all for that. But I don't see why the FAA won't make a parallel effort to keep more precise track of whether the performance of its air traffic controllers is getting better or worse. That would tell consumers something about whether flying was getting safer or more dangerous. Could it be that the FAA would just as soon we didn't know?