Public Impact Algorithms

Public Impact Algorithms are algorithms used by governments or large corporate bodies to make decisions that have the potential to cause serious negative impacts on individuals and communities.

The Impact of Covid-19

The Covid-19 crisis has further accelerated the deployment of algorithmic solutions, such as remote proctoring technologies and working-from-home (WFH) surveillance apps.

Governments around the world are currently operating with expanded “emergency powers” to fight the coronavirus pandemic.

In such perilous times, there is a certain amount of public goodwill towards these measures.

But the fear is that such temporary measures could quietly become permanent – what we call the “mission creep effect”.

In many cases, when algorithms and AI go wrong, the impact is trivial.

Sometimes, however, serious negative impacts may arise because the decision or data concerns fundamental rights, such as a person’s liberty, safety, or welfare entitlements.

Public impact algorithms have been shown to embed mass surveillance, biased processes or racist outcomes into public policy, public service delivery and commercial products.

Read more about this on our blog or in an interview we gave to The Privacy Collective.