Understanding the impact of algorithms

Algorithms determine or influence so much of our online and offline lives. Yet evaluating their impact remains a murky area, without a shared understanding of the different approaches, or what is most appropriate for a given situation.

Research by DataKind UK and the Ada Lovelace Institute

DataKind UK teamed up with the Ada Lovelace Institute to explore these different approaches to algorithmic audits. What follows is a brief summary of some of the conclusions – in particular the ones most relevant to charities and data professionals. We look at the four distinct but related approaches to assessing the impact of an algorithm. 

We can divide these methodologies into two broad groups: algorithm audits and algorithmic impact assessment.

Algorithm audits are exercises in which a party external to the one deploying the algorithm carries out an extensive evaluation of the system, looking for flaws. How such an audit is implemented varies with the system and the context in which it operates.

Bias audits

One way of doing this is via a bias audit. Consider the Gender Shades experiment from the Algorithmic Justice League.

In this experiment, Buolamwini and Gebru audited three commercial facial recognition systems on their ability to classify faces across demographic groups, broken down by gender and skin type. They did this simply by using each system as intended while keeping track of hundreds of results. They then compared performance across groups and found that the models' classification accuracy was worst for darker-skinned women, by a significant margin. Once the results were published and attracted media attention, the providers of these systems worked to improve them, and within a year misclassification rates for that group had dropped substantially.
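To make the mechanics concrete, here is a minimal sketch in Python of what a bias audit like this boils down to: run a labelled test set through the system and disaggregate accuracy by demographic group. The `classify_face` function and the tiny test set are hypothetical placeholders for whatever black-box system and data you are auditing.

```python
from collections import defaultdict

def audit_accuracy_by_group(records, classify):
    """records: iterable of (input, true_label, group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item, true_label, group in records:
        total[group] += 1
        if classify(item) == true_label:
            correct[group] += 1
    # Per-group accuracy: fraction of correct classifications in each group.
    return {group: correct[group] / total[group] for group in total}

# A toy stand-in classifier, purely to make the sketch runnable:
# it always answers "female", so it will look accurate on female
# faces and useless on male ones.
def classify_face(image):
    return "female"

# Hypothetical labelled test set: (input, true label, demographic group).
test_set = [
    ("img1.jpg", "female", "darker-skinned female"),
    ("img2.jpg", "male", "darker-skinned male"),
    ("img3.jpg", "female", "lighter-skinned female"),
    ("img4.jpg", "male", "lighter-skinned male"),
]

for group, accuracy in audit_accuracy_by_group(test_set, classify_face).items():
    print(f"{group}: {accuracy:.0%} accuracy")
```

The real work in an audit like Gender Shades lies in assembling a representative, well-labelled test set; the comparison itself is about as simple as the code above suggests.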

Bias audits can be carried out by any member of the public with access to an algorithmic system, regardless of their expertise or their proximity to the system's owners. This makes them the primary mechanism that Social Change Organisations can leverage to keep algorithms in check.

Bias audits have limitations, however. They can be a powerful tool for exposing an existing bias, but bias can be nuanced and difficult to detect with simple experiments, so it may go unnoticed. Much like at-home COVID tests, bias audits are good at confirming a problem when they find one, but a negative result does not rule one out.

Regulatory inspections

To overcome some of these limitations, algorithms can be inspected using more thorough procedures, such as regulatory inspections. These usually consider the full lifecycle and context of the system, and evaluate its adherence to existing laws and rules. Bias audits might be employed as part of a regulatory inspection, but they are usually just one tool among many.

Regulatory inspections are typically carried out by an authoritative organisation, such as a governmental regulatory body, in cooperation with the owner of the system. Because of this, they are of limited interest to the sort of person reading this article. If you happen to work for a regulatory agency, my sincerest apologies, dear reader. You are valid.

Algorithmic impact assessment

The second broad group of methodologies is known as algorithmic impact assessment, and it consists of studying the impact that an algorithmic system could have, or is having, on society. We can distinguish two methodologies within this group depending on the timing of the assessment: risk assessment, which is usually done pre-deployment, and impact evaluation, which is done post-deployment.

These methodologies draw on similar practices that are common in other fields. Pre-emptive risk assessments are commonplace wherever there are concerns about the potential harms of a particular action, such as in environmental or data protection. Environmental and data protection impact assessments are thus well established, whereas applying the same idea to algorithmic systems is still in its infancy. As for algorithmic impact evaluation, it resembles policy or economic impact assessments, which aim to measure the effects of a particular intervention after it has happened.

Conclusion: What’s in it for you? 

Algorithmic systems are powerful technologies that can alter a person’s life for better or worse. Interrogating them and ensuring they act as fairly as possible is in everyone’s interest.

If you are part of a charity: draw inspiration from the Gender Shades case study mentioned above. Bias audits can be impactful, and you don't have to be a subject-matter expert to do one.

If you are a data professional: keep a bias-aware mindset when building algorithmic systems. Stay up to date with tools and frameworks to evaluate and mitigate bias. Carry out risk assessments and impact evaluations for the systems you deploy.
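As a deliberately simplified illustration of such a pre-deployment check, the sketch below compares a model's selection rates across groups on a held-out test set and flags large gaps using the common "four-fifths" rule of thumb. The example data and the 0.8 cutoff are illustrative assumptions, not a standard your system must meet.

```python
def selection_rates(decisions, groups):
    """Fraction of positive decisions per group."""
    totals, positives = {}, {}
    for decision, group in zip(decisions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions (1 = approved) for two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # "four-fifths" rule of thumb
    print("Warning: selection rates differ substantially across groups.")
```

In practice you would reach for an established fairness toolkit rather than rolling your own, but even a check this simple is better than deploying blind.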
