Future Tense

Poor People Deserve Digital Privacy, Too

Applying for benefits often requires people to enter massive amounts of personal information into insecure systems.


For centuries, political authorities have punished the poor for being poor. In colonial America, for example, “overseers of the poor” required the destitute to wear badges.

Today, “overseers of the poor” are as much code as they are people and institutions: database queries that check eligibility. Welfare programs collect massive amounts of data, stored in potentially insecure databases for unknown lengths of time, with no clear rules about which caseworkers may access what, or why. Poor people in the welfare system don’t have privacy, and they rarely figure into broader debates over individual liberty and the right to be left alone.

This isn’t just a hypothetical. Rogue actors have targeted databases for public assistance programs, leaving poor people exposed and exploited.

One of the more egregious examples comes out of Utah, where in 2010 a Department of Workforce Services employee accessed a client database and released to the media, law enforcement, and the governor’s office the names of benefits recipients who were allegedly in the United States without authorization. In response, the state instituted a “zero tolerance” policy for unauthorized database access, but after 24 workers were fired, the penalty was softened to a four-day suspension. In a separate incident two years later, hackers stole 250,000 Social Security numbers from a Utah state government server, along with “less-sensitive information” on about 500,000 more people.
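None of this requires exotic technology to catch. As a rough illustration (a sketch, not a description of any state’s actual system, and every name in it is hypothetical), a case-management service could refuse to return records to workers who aren’t assigned to them and log every attempt:

```python
# A rough sketch, not any state's actual system: every name here
# (CaseRecord, fetch_case, the audit logger) is hypothetical.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("benefits.audit")

@dataclass
class CaseRecord:
    case_id: str
    assigned_worker: str  # only this caseworker should be able to read the file
    data: dict = field(default_factory=dict)

def fetch_case(record: CaseRecord, worker_id: str, reason: str) -> dict:
    """Return case data only to the assigned worker, logging every attempt."""
    allowed = worker_id == record.assigned_worker
    # The audit trail records who asked, for which case, why, and the outcome.
    audit_log.info("worker=%s case=%s reason=%r allowed=%s",
                   worker_id, record.case_id, reason, allowed)
    if not allowed:
        raise PermissionError(f"{worker_id} is not assigned to case {record.case_id}")
    return record.data

case = CaseRecord("UT-1043", assigned_worker="cw_ramirez")
fetch_case(case, "cw_ramirez", reason="monthly recertification")  # succeeds, logged
# fetch_case(case, "cw_unassigned", reason="curiosity")  # raises PermissionError, logged
```

Under a scheme like this, a routine lookup by an assigned caseworker succeeds quietly, while a bulk sweep by an unassigned employee leaves a trail of denied, logged requests for auditors to review.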

Lower-income individuals increasingly have to enroll in public benefits online, and their (justified) fears about personal cybersecurity and identity theft compound the anxiety that intrusive data collection already provokes. Poor people face immense stigma when applying for public assistance and are required to share a tremendous amount of personal and financial information. Combine that with a digitally insecure welfare system, and you get people in poverty who are even more marginalized, and even more distrustful of government and institutions.

One straightforward solution to this problem would be to collect less data. To target programs effectively, state agencies need information on applicants’ financial circumstances, but maybe not quite as much as we’re collecting now. Asset tests, for example, have historically required applicants to turn over reams of paperwork documenting their finances, everything from bank statements to funeral agreements and life insurance policies, despite the fact that most applicants have next to nothing. The Temporary Assistance for Needy Families program is a case in point: In 2010, only 10 percent of TANF families had any savings whatsoever, with an average balance of $215. For many, TANF is a program of last resort. Requiring these families to turn over extensive paperwork to further document just how poor they are is a barrier to access, a waste of everyone’s time, and a needless risk to applicants.

Furthermore, narrower approaches can prevent “wealthy” families from accessing programs intended to help the poor. Both the House and Senate versions of the Farm Bill, for example, would prevent SNAP (food stamp) recipients with substantial lottery winnings from continuing to receive assistance. This verification would likely rely on data matching—but would not require nearly as much data in the first place. It takes a scalpel to the problem rather than a sledgehammer.
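To make the scalpel concrete, here is a minimal sketch of how such a match could work, assuming (hypothetically) that the two agencies agree on a shared secret and exchange only hashed identifiers instead of full records. Production systems would use sturdier privacy-preserving record linkage than a simple salted hash, but the principle is the same:

```python
# A hedged sketch of minimal data matching; the shared-salt arrangement and
# all field names are illustrative assumptions, not any agency's real design.
import hashlib

SHARED_SALT = b"negotiated-between-agencies"  # hypothetical inter-agency secret

def match_key(ssn: str) -> str:
    """Hash the identifier so neither agency hands over raw SSNs in bulk."""
    return hashlib.sha256(SHARED_SALT + ssn.encode()).hexdigest()

def flag_lottery_winners(snap_ssns: list[str], winner_hashes: set[str]) -> list[str]:
    """Return only the SNAP identifiers that also appear on the winners list."""
    return [ssn for ssn in snap_ssns if match_key(ssn) in winner_hashes]

# The lottery agency shares hashes of its big winners; the SNAP agency never
# shares its rolls, and it learns nothing about winners who aren't enrolled.
winner_hashes = {match_key("123-45-6789")}
print(flag_lottery_winners(["123-45-6789", "987-65-4321"], winner_hashes))
# ['123-45-6789']
```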

Automated decision-making in public assistance needs to be fairer, too. As a recent GAO report discussed, everyone in the public assistance ecosystem, from program participants to caseworkers to evaluators, would benefit from sharing data across programs like TANF, SNAP, and Medicaid. Automation saves time in determining a person’s eligibility for a particular program, enrolling her, and recertifying her. Research on the Benefit Bank, one program that streamlines public assistance, demonstrates that cutting the time spent on bureaucratic paperwork frees participants to focus on finding work and earning a wage. And that’s the goal, right?
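A short sketch shows what that sharing buys: if a household’s income is verified once, say through a payroll data match, a single record can answer the threshold question for several programs at once. The program names here are real, but the income limits are placeholders, not actual eligibility rules:

```python
# A minimal sketch of cross-program reuse. The income limits below are
# placeholders, not real rules; actual limits scale with household size
# and many other factors.
from dataclasses import dataclass

@dataclass
class VerifiedRecord:
    applicant_id: str
    monthly_income: float  # verified once, e.g., through a payroll data match

# Hypothetical income ceilings keyed by program.
PROGRAM_INCOME_LIMITS = {"TANF": 1500.0, "SNAP": 2500.0, "Medicaid": 3000.0}

def screen_all_programs(record: VerifiedRecord) -> dict[str, bool]:
    """One verified record answers the income question for every program."""
    return {program: record.monthly_income <= limit
            for program, limit in PROGRAM_INCOME_LIMITS.items()}

print(screen_all_programs(VerifiedRecord("A-001", monthly_income=1200.0)))
# {'TANF': True, 'SNAP': True, 'Medicaid': True}
```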

But automated systems reflect the values of the people and institutions behind them, and they need to be designed with fairness in mind. When they’re not, poor people suffer. In Indiana, an attempt to bring the state’s welfare system up to modern digital standards led to hundreds of thousands of Hoosiers being ruled ineligible, many of them incorrectly. The case-monitoring system was draconian, with no prompts for the granular data that would better inform whether someone should be denied benefits. The upgrade resulted in one woman, a terminally ill patient in hospital care, being pushed off Medicaid because she missed a single welfare appointment.

Obviously, our recommendations require security measures that can withstand attacks and abuse by rogue actors. Making that happen will require effective assessment, coordination, and communication among agencies, IT staff and contractors, caseworkers, and, ideally, participants themselves. Having a data-sharing plan vetted by security experts is one simple step. Until that happens, poor people will continue to have second-class privacy rights, and the welfare system designed to help them will remain inefficient, and largely ineffective, at alleviating inequality.

For more, come to the New America Foundation event In Poverty, Under Surveillance: Examining the Trade-Off Between Privacy and Public Assistance on Dec. 12 at 12:15 p.m. Eastern in Washington, D.C. You can also watch online.