Future Tense

You Can’t Handle the (Algorithmic) Truth

People are blaming algorithms for the cruelty of bureaucracy.

Illustration by Rob Donnelly

Critics of “algorithms” are everywhere. Algorithms tell you how to vote. Algorithms can revoke your driver’s license and terminate your disability benefits. Algorithms predict crimes. Algorithms ensured you didn’t hear about #FreddieGray on Twitter. Algorithms are everywhere, and, to hear critics tell it, they are trouble. What’s the problem? Critics allege that algorithms are opaque, automatic, emotionless, and impersonal, and that they separate decision-makers from the consequences of their actions. Algorithms cannot appreciate the context of structural discrimination, are trained on flawed datasets, and are ruining lives everywhere. There needs to be algorithmic accountability, critics say. Otherwise, who is to blame when a computational process suddenly deprives someone of his or her rights and livelihood?

But at heart, criticism of algorithmic decision-making makes an age-old argument about impersonal, automatic corporate and government bureaucracy. The machinelike bureaucracy has simply become the machine. Instead of a quest for accountability, much of the rhetoric and discourse about algorithms amounts to a surrender—an unwillingness to fight the ideas and bureaucratic logic driving the algorithms that critics find so creepy and problematic. Algorithmic transparency and accountability can be achieved only if critics understand that transparency (no modifier needed) is the issue. If the problem is that a bureaucratic system is impersonal, unaccountable, and creepy, and relies on flawed or biased decision criteria, then why fetishize and render mysterious the mere mechanical instrument of the system’s will?

To critics, data-driven algorithms today are complex, opaque, and biased, and they leave their victims with little recourse or accountability. Dave Bry recently wrote in the Guardian:

An impersonal computer program gets first say as to who gets to earn money to buy food and who doesn’t, based on an application of a binary code too subtle and complex for us to understand. Over a thousand factors, analyzed for every vocal sample. Over a thousand ones or zeros clicked in the corresponding click boxes.

This paragraph, however, conflates two separate things: code and decision calculus. Computer code is not “too subtle and complex for us to understand.” As long as source code is available, well-documented, and well organized, it is by no means mystical, mysterious, or beyond human understanding. And many commonly used algorithms in business and government are standard procedures likely borrowed from computer science papers or textbooks. Additionally, the “impersonal computer program” hasn’t actually made the decision—it merely automates business logic crafted by humans.
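To make the point concrete, here is a minimal, purely hypothetical sketch (every field name and threshold below is invented for illustration) of the kind of eligibility rule such a program might encode. Written out, it is nothing more than a human-authored rulebook transcribed into code:

```python
# A hypothetical benefits-eligibility rule written as plain Python.
# The fields and thresholds are invented; the point is that the
# "algorithm" is human-authored business logic, readable line by line.

def eligible_for_benefit(applicant: dict) -> bool:
    """Return True if the applicant meets the (made-up) agency criteria."""
    income_ok = applicant["annual_income"] < 25_000       # policy threshold chosen by humans
    hours_ok = applicant["weekly_work_hours"] < 30        # another human-chosen cutoff
    documented = applicant["has_medical_documentation"]   # procedural requirement
    return income_ok and hours_ok and documented

# The program doesn't "decide" anything a caseworker's rulebook didn't already specify.
print(eligible_for_benefit({
    "annual_income": 18_000,
    "weekly_work_hours": 12,
    "has_medical_documentation": True,
}))  # True
```

Whether those thresholds are fair is a policy question, not a computational one.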

Take a modern automobile, the product of an enormously complicated set of interacting components (many of them increasingly computational). No engineer who worked on the car will understand every aspect of its functioning or the system as a whole; that knowledge is vested in the bureaucracy of the car company that built it. The driver, who is unlikely to know his or her way under the hood, is essentially trusting the experts who built and maintain the system and the bureaucracy that knows how it works. To boot, that car may have components that allow a repo man to shut it down remotely if you don’t pay your auto loan, and future cars may also contain instruments for collecting and analyzing data on how well you drive so that the insurance company can charge you an accurate rate.

If you find this creepy or scary, it doesn’t have much to do with technology. After all, didn’t you choose to entrust your life to a complex technological system (the car) that only an impersonal, opaque corporate bureaucracy (the car company) understands in full? Didn’t you agree to a loan, with the understanding that a repo man (an employee of another bureaucracy) could confiscate your property at will should you refuse to pay up? And in entering an insurance contract, did you not also allow an insurance company (yet another bureaucracy) to charge you a rate based on its assessment of your observed behavior? Computers may have made the repo man’s and the insurance agency’s policies more efficient, but they are not equivalent to those policies themselves. The code merely represents a formalization of business logic that may previously have been done by hand with pen, paper, and a calculator according to stereotyped, even robotlike business procedures and rules.

Algorithms are impersonal, biased, emotionless, and opaque because bureaucracy and power are impersonal, emotionless, and opaque and often characterized by bias, groupthink, and automatic obedience to procedure. In analyzing algorithms, critics merely rediscover one of the oldest and most fundamental issues in social science: the pathology of bureaucracy and structural authority and power. Algorithms are not products of a “black box”; rather, they are the computational realization and machine representation of the “iron cage” of bureaucracy. As sociologist Max Weber noted a century ago, bureaucratic rationality consists of hierarchical authority, impersonal decision-making, codified rules of conduct, promotion based on achievement, specialized division of labor, and efficiency. Any kind of rational, cost/benefit thinking, however, presupposes a goal or objective. That goal may not always be in the interests of the individuals a bureaucracy governs. Moreover, institutions may default to standard operating procedures even when doing so has counterproductive, harmful, and even absurd implications.

Today’s automation and data-driven programs are merely the latest stage of a long movement toward the automation, optimization, and control of social life—and this story begins not with a revolution in computing but with a revolution in the human understanding of social relations and governance. Sometime around the mid-19th century, scholars believe, the basic technology of social relations and governance shifted dramatically. Fueled by economic and philosophical thinking and sociological changes, some argue, the old notion of collective society was upended: it was replaced by the image of an autonomous, self-interested individual who made rational choices to attain the objectively best outcome for him- or herself, and social life was recast in terms of utility, preference, and collective welfare. Similarly, political governance became dominated by attempts to achieve social and political control through quantification, measurement, and rational bureaucratic processes. Such “scientific” measures would allow authorities to treat society as a machine that they could program and manipulate to achieve desired objectives. This is not a criticism so much as a simple historical and sociological observation. Such a shift also explains, after all, the origin, nature, and folkways of modern bureaucracy, and how governmental and corporate metaphorical machines were slowly infiltrated by real machines.

Modern bureaucracy, as a form of power, was originally justified in terms of scientific and enlightened governance of society and the optimization and control of corporate business processes. Another feature of bureaucratic and technocratic thinking was the assumption of paternalism. Whether it was early 20th-century thinking about the madness of crowds or trendy modern policy ideas about the importance of “nudges,” influenced by behavioral psychology, reformers believed that efficient procedures and mechanisms could be designed to help otherwise hapless individuals make better decisions.

Progressive government reformers believed that government should be rational, impersonal, scientific, and even automatic in how it makes and enforces policy. Instead of placing their hopes in outing corrupt or morally compromised politicians and civil servants, reformers hoped that they could create sound government processes that would run more or less autonomously. By remodeling the structure of the system, governance would not depend on the ability of good men or women to resist temptation. Similarly, in the business world, the practice of measurement, governance, and control could produce both profits and social benefits. Scientific methods could optimize the structure of how work was performed, leading to higher productivity, performance, and profits at less cost. And consumers, awash in contradictory information, could be helpfully guided by advertising and the shaping of information toward the satisfaction and sound choices that would otherwise elude them.

It does not take an engineer to figure out why the ability to automate procedures that must be executed repeatedly might appeal to technocrats. In addition to increasing the power of centralized decision-makers, the formal character of algorithms allows those processes to be optimized. The logical conclusion of pushing for rational, impersonal, and automatic decision-making free of the taint of human corruption and bias is that bureaucratic procedures inevitably become computerized. Unfortunately for technocrats, the automation of decision and control ran into two core problems: limited computational resources and the difficulty of quantifying and computerizing tacit human knowledge and domain understanding. In the adaptations to these challenges, we see the origin of the programs that algorithm critics loathe so much.

As Nobel Prize–winning economist Herbert Simon famously argued, individuals, institutions, and computers all have limited information-processing, storage, and search powers. Their capacity for “procedural rationality” is “bounded” by computing limits, whether the procedures run on top of human “meatware” or computer hardware. We use rules of thumb and shortcuts to make faster decisions, even if they don’t always deliver optimal results. That may be a good thing: Chess players, drawing on deep domain expertise, consider only the most relevant moves. But these “heuristic” programs also may be the product of bias and dogma. If military men who defaulted to bureaucratic “standard operating procedures” had had their way, the Cuban Missile Crisis would likely have ended in nuclear holocaust. Many algorithms that deal with complicated or computationally intensive problems use heuristics and shortcuts; the question is whether they are well-chosen. As with all shortcuts, often they are not.
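As a toy illustration of that tradeoff (the items, values, and weights below are made up), consider a greedy shortcut for the classic knapsack problem: take the best value-per-weight item first. The heuristic is fast and plausible, and an exhaustive check shows it can still miss the better answer:

```python
# A minimal sketch of a heuristic shortcut: greedy knapsack selection.
# The greedy rule is quick but, as with many shortcuts, not guaranteed
# to be optimal.

from itertools import combinations

items = [("A", 60, 10), ("B", 100, 20), ("C", 120, 30)]  # (name, value, weight)
capacity = 50

def greedy(items, capacity):
    total_value, total_weight, chosen = 0, 0, []
    for name, value, weight in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
        if total_weight + weight <= capacity:
            chosen.append(name)
            total_weight += weight
            total_value += value
    return total_value, chosen

def exhaustive(items, capacity):
    best = (0, [])
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            weight = sum(w for _, _, w in combo)
            value = sum(v for _, v, _ in combo)
            if weight <= capacity and value > best[0]:
                best = (value, [name for name, _, _ in combo])
    return best

print(greedy(items, capacity))      # (160, ['A', 'B']) -- the fast shortcut
print(exhaustive(items, capacity))  # (220, ['B', 'C']) -- the optimal answer
```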

The same tradeoff between optimal and suboptimal decision-making arises with the general question of how to automate an employee’s knowledge and skill at a specialized task. There is no objectively best way to represent what is often inherently murky and intangible—so by default, representational decisions are made with an eye to cost and effectiveness. Deciding which algorithms to use, how to build the model, and how to represent the problem are all things that humans do. And just because a system may learn from data or identify patterns does not mean that it cannot be led astray or fooled by the nature of its design assumptions or source material.
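A deliberately contrived sketch of that last point, using entirely fabricated data: the “learning” below simply reproduces historical approval rates by ZIP code, so whatever skew sits in the history comes back out dressed up as a data-driven prediction.

```python
# Toy illustration (fabricated data) of a system absorbing bias from its
# source material. The "model" just mimics historical approval rates by
# ZIP code; if past decisions were skewed, the learned rule is skewed too.

from collections import defaultdict

# Hypothetical historical decisions: (zip_code, approved)
history = [
    ("11111", True), ("11111", True), ("11111", True), ("11111", False),
    ("22222", False), ("22222", False), ("22222", False), ("22222", True),
]

def learn_approval_rates(history):
    counts = defaultdict(lambda: [0, 0])  # zip -> [approvals, total]
    for zip_code, approved in history:
        counts[zip_code][0] += int(approved)
        counts[zip_code][1] += 1
    return {z: a / t for z, (a, t) in counts.items()}

rates = learn_approval_rates(history)

def predict(zip_code):
    # "Data-driven" prediction that simply reproduces the historical skew.
    return rates.get(zip_code, 0.5) >= 0.5

print(predict("11111"))  # True  -- favored in the made-up historical data
print(predict("22222"))  # False -- disfavored, and the model carries that forward
```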

It’s problematic to examine algorithms as anything except the formalization and realization of procedures and stratagems that predate Python or R analytics code. “Algorithmic accountability” crusaders are talking about entrenched sociopolitical problems without really talking about them; computers become scapegoats for undesired features of capitalism, bureaucracy, and politics.

Algorithms merely remind us that we don’t have control over our own destiny; our lives, fortunes, and choices are structured by large, impersonal entities. In blaming our lack of understanding, control, or accountability on computers, we forfeit hope of renegotiating our relationship with those institutions. If it offends us that computers implement some larger social value, preference, or structure we take for granted, perhaps we should do something about the value, preference, or structure that motivates the algorithm. After all, algorithms can be reprogrammed. It is much harder—but not impossible—to recode social systems and institutions than computers. Perhaps the humans who refuse to act on what they believe in while stoking fear about computers are the ones really responsible for the decline of our agency, choice, and control—not the machines. They just can’t handle the (algorithmic) truth.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.