One of New York City’s most urgent design challenges is invisible

Algorithms already affect buildings, policing, and education. Can new technology make the city fairer and more efficient?

Algorithms are invisible, but they already play a large role in shaping New York City’s built environment, schooling, public resources, and criminal justice system. Earlier this year, the City Council and Mayor Bill de Blasio formed the Automated Decision Systems Task Force, the first of its kind in the country, to analyze how NYC deploys automated systems to ensure fairness, equity, and accountability are upheld.

This week, 20 experts in the field of civil rights and artificial intelligence co-signed a letter to the task force to help influence its official report, which is scheduled to be published in December 2019.

The letter’s recommendations include creating a publicly accessible list of all the automated decision systems in use; consulting with experts before adopting an automated decision system; creating a permanent government body to oversee the procurement and regulation of automated decision systems; and upholding civil liberties in all matters related to automation. This could lay the groundwork for future legislation around automation in the city.

“You have a lot of systems that are being used to make decisions affecting every New Yorker’s rights and liberties in different ways,” says Rashida Richardson, director of policy at AI Now, an artificial intelligence research group that signed the letter. (AI Now cofounder Meredith Whittaker is a member of the task force.) “The opacity behind how these systems are being used—not only to the public but sometimes within agencies—is extremely concerning.”

Automated decision-making systems, which use computer models and algorithms (sometimes described as artificial intelligence) to make choices, have quietly been influencing New York City’s operations and governance for decades. While these systems are touted for their ability to standardize decisions and perhaps counteract human bias and error, they’re far from neutral. The people who design them bring their own biases, conscious or unconscious, and the data sets that inform the systems are often biased themselves. There is also virtually no transparency or accountability, to the public or to other government agencies, about how these systems work and exactly how they arrive at decisions. Depending on how the algorithms are designed, they can help further equity and fairness or perpetuate existing biases.

In the 1970s, the South Bronx infamously burned. Seven census tracts lost 97 percent of buildings to fire or abandonment, and another 44 tracts lost more than half. But the catastrophic scale of destruction wasn’t solely due to arson; automated decision making played a significant role. Faced with a civic budget deficit, then-mayor John Lindsay asked FDNY to reduce its costs. The department worked with the RAND Corporation, a think tank that used computer models to analyze response times to fires and suggest which stations could be closed without negatively affecting service. Using suggestions from the RAND analysis, FDNY closed 50 stations. Unfortunately, the algorithm that informed station closures was flawed for a variety of reasons, leaving broad swathes of the city susceptible to fire and disproportionately affecting predominantly black and Latino low-income neighborhoods.

Automated decision systems have improved since then, but they’re by and large “black boxes.” We do know that the city uses algorithms to allocate fire stations, police stations, public housing, and food stamps. The Department of Education uses algorithms to match students with schools. FDNY uses a risk assessment system to determine which buildings it should inspect for fire safety. The Department of Health uses automated systems to track and monitor sexually transmitted infections. The Administration for Children’s Services uses an automated system to evaluate employee and provider performance, but is thinking about expanding its use to include predictive analytics to influence its decisions about placing children into foster care.

New York’s most prevalent use of automated decision systems is for policing. NYPD uses predictive policing algorithms to target people—frequently young men of color—who might engage in unlawful conduct. Developed in a partnership with Microsoft, the Domain Awareness System uses a network of automated cameras, CCTV monitors, traffic sensors, and audio sensors (all located in the public realm) to constantly surveil the city and inform NYPD about events like gunfire, which might otherwise go unreported. This technology is licensed and NYC gets a cut of the fees.

But this is just the tip of the iceberg: New York City likely has more automated decision systems in place that we don’t know about. Because of a lack of communication between agencies and within departments, there’s little transparency within city government, let alone to the public.

“We don’t know what we don’t know,” Richardson says.

According to Richardson, who worked as legislative counsel at the New York Civil Liberties Union before joining AI Now in April, the federal government has been promoting automation as a way to make governance more efficient and offering grants for cities to adopt the technology. Products purchased with federal funds bypass the municipal procurement process, leaving another blind spot. And because community groups are often left out of the equation, the city might be missing opportunities to use automation in a meaningful way.

“By not having publicly available information about what systems are used and how they affect residents, it’s impossible for communities to advocate for themselves,” Richardson says. “In addition, by having New York City residents more informed about what’s going on, they can help optimize what use cases make sense and what doesn’t make sense.”

As tech companies develop more of these systems, and cities face the ongoing challenge of making operations more efficient, more are likely to be adopted. And because New York is often a bellwether for civic technology and testing ground for products that will be marketed and sold to other cities, any policy implemented by the city will likely have ripple effects nationally.

The letter—along with future guidance from community groups, civil rights activists, technologists, and more—can influence how the Automated Decision Systems Task Force forms its report, and, ultimately, help achieve the level of transparency the city needs for this type of civic tech.

“These systems can help build efficiency where efficiency is needed within government, and since some agencies are functioning a few decades behind, that may help improve government services,” Richardson says. “It may also force more scrutiny of government procedures to figure out what is optimal and what structural, systemic problems are going unaddressed while the city tries to address them through technology. I think it will force some conversations around public policy that have been avoided for a long time. The concern is, when you don’t have transparency and accountability, it’s difficult to even have these conversations.”

Read the full letter here.