Algorithmic Transparency

“Algorithmic governance is made possible by vast increases in computing power and networking, which enable the collection, storage, and analysis of large amounts of data. Cities seek to harness that data to rationalise and automate the operation of public services and infrastructure, such as health services, public safety, criminal justice, education, transportation, and energy. The limitations of local government make private contractors central to this process, giving rise to accountability problems characteristic of policy outsourcing.” (Brauneis & Goodman, 2018)

Algorithmic governance will become increasingly important in how decisions are made in cities. This has led to a debate about the transparency of these algorithms and the potential biases built into them. As discussed in The Right Way to Regulate Algorithms: “The purpose of data-driven algorithms like this one is to make policing more objective and less subject to individual bias. But many worry that the biases are simply baked into the algorithms themselves.”

New York will be the first city to scrutinise the potential biases in such algorithms and to develop policies on how to regulate access to their underlying assumptions. According to The New Yorker: “Once signed into law by Mayor Bill de Blasio (December 2017, ed.), the legislation will establish a task force to examine the city’s “automated decision systems”—the computerised algorithms that guide the allocation of everything from police officers and firehouses to public housing and food stamps—with an eye toward making them fairer and more open to scrutiny.”
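
To make the concern about “baked-in” bias concrete, the sketch below is a purely hypothetical illustration, not any city’s actual system: all district names and numbers are invented. It shows how a seemingly neutral, data-driven allocation rule can reproduce historical bias, because districts that were patrolled more heavily in the past generate more recorded incidents, and a rule that allocates patrols in proportion to recorded incidents keeps sending patrols back to those same districts.

    # Hypothetical sketch of a bias feedback loop in a data-driven allocation rule.
    # All data here is invented for illustration only.
    from collections import Counter

    # Invented historical incident reports per district. Suppose district "A" was
    # patrolled twice as heavily in the past, so more incidents were *recorded*
    # there even if underlying rates were similar across districts.
    historical_reports = ["A"] * 120 + ["B"] * 60 + ["C"] * 60

    def allocate_patrols(reports, total_patrols=10):
        """Allocate patrols in proportion to recorded incidents (a naive 'objective' rule)."""
        counts = Counter(reports)
        total = sum(counts.values())
        return {district: round(total_patrols * n / total) for district, n in counts.items()}

    allocation = allocate_patrols(historical_reports)
    print(allocation)  # {'A': 5, 'B': 2, 'C': 2} -> district A keeps absorbing most patrols

    # More patrols in A produce more recorded incidents in A, which the next update
    # treats as evidence that A needs still more patrols: the historical bias
    # reproduces itself even though the rule looks neutral on its face.

The point of the sketch is that the rule itself contains no explicit reference to any district or group; the bias enters entirely through the historical data it is trained on, which is why transparency about data sources and underlying assumptions matters.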

Sources:
Robert Brauneis & Ellen P. Goodman, Algorithmic Transparency for the Smart City (2018)
Chris Bousquet & Stephen Goldsmith, The Right Way to Regulate Algorithms
The New Yorker, New York City’s Bold, Flawed Attempt to Make Algorithms Accountable
Picture: Kolitha de Silva CC BY 2.0