July 26, 2022
Ever since Joy Buolamwini’s landmark research into discriminatory facial recognition tools, algorithmic bias has ceased to be a theoretical discussion and become a pressing political issue. This year alone, thousands of people have felt the devastating impact of algorithms that replicate the biases and inequalities of the real world, from students awaiting their exam results to people awaiting sentencing.
While it’s fair to say that algorithmic bias - and tech ethics more broadly - is by now firmly on the radar of AI companies large and small, there’s a critical difference between the lip service paid by majority white male development teams and effective action. As algorithms become ever more embedded in the everyday economy, disparate outcomes are likely to proliferate, ultimately forcing the hand of regulators and undermining the radical potential of these technologies.
Challenging algorithmic bias is therefore not only vital but urgent, and doing so requires firms and governments to platform critical voices from those worst affected by it.
The role of data
Leading the charge are some of the world’s foremost Black and minority ethnic technologists, and at this month’s Black Tech Fest, four senior innovators from across tech and business came together to share experiences and formulate strategies for tackling this key issue.
Djamila Amimer, CEO of Mind Senses Global, opened the discussion by defining algorithmic bias. For her, it is best understood as a series of systematic and repeatable errors in computer systems that can cause unfair or unequal real-world outcomes - with data often cited as the driving factor.
“Data has a very important role in tech bias,” argued Shakir Mohamed, Research Scientist at DeepMind. “It is in what we are measuring, and what we think of as worth measuring. It’s also in the choices we make and the variables we use when we’re building the model. You have all these factors combining with each other, and what you get in effect is an artificial division - a system of difference which empowers some and disempowers others. Bias is multifaceted.”
This became crystal clear this summer when, in the absence of physical exams, the UK government allocated A-level results using a deeply flawed algorithm that largely replicated inequalities already present in the British education system. Naomi Kellman of Rare Recruitment explained that grades were awarded on the basis of teachers’ rankings of their students as well as predicted grades.
“This raised concerns,” she said, “because high-achieving students from low-income backgrounds tend to be underpredicted, and ethnic minority students are more likely to be from low-income backgrounds.” These students received much lower grades than they had been achieving in mock exams, while pupils at wealthy private schools were given much higher marks.
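To make the mechanism concrete, the sketch below shows how allocating grades by rank order against a fixed per-school grade distribution can cap high-achieving outliers. The per-school historical distribution is an assumption drawn from widely reported accounts of the 2020 standardisation model rather than from the panel itself, and every name and number in the snippet is illustrative.

```python
# Simplified, hypothetical illustration of rank-based grade standardisation.
# Assumption (not detailed in the panel, but widely reported about the 2020
# model): a school's 2020 grades were largely mapped onto that school's
# historical grade distribution, using the teacher-supplied rank order.

def allocate_grades(ranked_students, historical_distribution):
    """Assign grades to students (best-ranked first) by walking down the
    school's historical distribution, e.g. {"A": 1, "B": 3, "C": 6}."""
    grades = {}
    rank_iter = iter(ranked_students)
    for grade, count in historical_distribution.items():
        for _ in range(count):
            student = next(rank_iter, None)
            if student is None:
                return grades
            grades[student] = grade
    # Any students left over fall into the lowest historical grade band.
    lowest = list(historical_distribution)[-1]
    for student in rank_iter:
        grades[student] = lowest
    return grades

# A high-achieving outlier at a school with weak historical results is capped:
# only one "A" has historically been awarded, so the second-ranked student
# receives a "B" regardless of their mock results or predicted grade.
school_history = {"A": 1, "B": 3, "C": 6}
ranking = ["student_1", "student_2", "student_3", "student_4"]
print(allocate_grades(ranking, school_history))
# {'student_1': 'A', 'student_2': 'B', 'student_3': 'B', 'student_4': 'B'}
```

However accurate the rank order, a student who outperforms their school’s history has no way of receiving a grade that school has not historically produced - which is precisely how existing inequality gets baked back in.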
The results fiasco became the most high-profile example yet of algorithmic bias failing thousands of people, forcing the British government into a rapid U-turn. But for several years, algorithms have been used by the police in minority communities across Britain with quietly devastating results.
Challenging racist policing
Katrina Ffrench is the CEO of Stopwatch, an organisation fighting for fair and accountable policing and challenging racist policies such as stop-and-search. In the course of their research, Stopwatch uncovered the ‘gangs matrix’, a police intelligence database created by the Metropolitan Police. The matrix was ostensibly designed to identify who was at risk of criminality and serious violence, and consisted of a basic Excel spreadsheet in which officers entered scores based on their own judgement of the risk individuals faced.
Stopwatch, along with Amnesty International, found that the database was highly discriminatory: 80 percent of the people on the gangs matrix were aged between 12 and 24, 58 percent were Black, and 35 percent had never committed a serious violent offence. Crucially, the police did not measure the real-world outcomes of using the database, so there is almost no data on whether people were successfully diverted from crime.
What is clear is the extraordinary harm it caused. Information from the database was shared with other statutory agencies, leading to students losing places at college, people having their driving licences revoked, and teenagers being placed into care, away from their families.
“Most people in civil society had no understanding of the database, so it was incredibly difficult to challenge. As well as the data argument, everyone has human rights, and that’s what I’m concerned about,” Katrina explained. “What we were challenging was that police intelligence is not neutral. If there’s biases in society already and the status quo is already unequal and you produce tools in the same fashion, you’re likely to repeat that bias.”
Tackling algorithmic bias: a systemic approach
Identifying the sources of algorithmic bias is challenge enough, but actually tackling it is something else. For Shakir, addressing the problem needs to take place at ‘every level’, from technical and organisational to regulatory and societal.
“The first thing we need to do, and what Black Tech Fest is about and this panel is about, is build very broad coalitions of people,” Shakir said. “In the past five years, we’ve seen coalitions of amazing women in our field - Black women who saw the problem, wrote the papers, and exposed the issue. Five years later, companies have decided they’re not going to be involved in this kind of facial recognition, and cities and states have even banned its public use.”
One way of tackling algorithmic bias involves building truly effective alternatives. At Rare Recruitment, Naomi Kellman helped to oversee the introduction of the contextual recruitment system. “Traditionally, top employers used to look for certain grade profiles and types of work experience. While that appears colour blind, it’s not, because we know some people have more access to good education and opportunities than others. So we built a system that looks at a person’s achievements in context.”
The contextual recruitment algorithm doesn’t just look at a pupil’s grades; rather, it compares someone’s achievements to their circumstances and highlights to employers when someone has actually outperformed their peers. By collecting data on socioeconomic status - for example, whether a pupil received free school meals, went through the care system, or came to the country as a refugee - the algorithm can provide a more rounded picture of a candidate’s performance. For employers, the algorithm has been a huge benefit.
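Rare’s actual system is proprietary, but a minimal sketch of the general idea - comparing a candidate’s grades with their school’s average and surfacing contextual indicators alongside them - might look something like the following; the field names, tariff points and threshold are all hypothetical.

```python
# Hypothetical sketch of contextual screening: names, fields and thresholds
# are illustrative and are not Rare Recruitment's actual system.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    grade_points: float        # e.g. A*=56, A=48, B=40 on a UCAS-style tariff
    school_avg_points: float   # average points achieved at their school
    free_school_meals: bool
    time_in_care: bool
    refugee: bool

def contextual_flags(c: Candidate, outperformance_margin: float = 8.0) -> list[str]:
    """Return the contextual signals an employer would see alongside raw grades."""
    flags = []
    if c.grade_points - c.school_avg_points >= outperformance_margin:
        flags.append("outperformed school average")
    if c.free_school_meals:
        flags.append("eligible for free school meals")
    if c.time_in_care:
        flags.append("spent time in care")
    if c.refugee:
        flags.append("came to the country as a refugee")
    return flags

# A candidate with BBB (120 points) from a school averaging CCC (96 points)
# is surfaced to the employer, where a raw "3 As" filter would screen them out.
candidate = Candidate("A. N. Other", grade_points=120, school_avg_points=96,
                      free_school_meals=True, time_in_care=False, refugee=False)
print(contextual_flags(candidate))
# ['outperformed school average', 'eligible for free school meals']
```

The design choice is that the absolute grade filter is replaced by a relative measure: the employer still sees the same grades, but alongside the context in which they were achieved.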
“The organisations that use this algorithm now interview a much broader range of people,” Naomi said. “Instead of having a very basic algorithm that looks for 3 As all-round, they’re able to look at more data points and understand somebody’s true potential. It means more data points, and that means more people get hired from a wider range of backgrounds.”
As the applications of predictive algorithms proliferate across society, the need for transparency and accountability among tech companies is only going to grow. But much more than that, tackling algorithmic bias means tackling the underlying stratifications in our society that generate unequal outcomes.
“Tech is not neutral,” Katrina said. “The people who design it are human. Every single one of us has biases, and it’s up to us whether we unpick these and address them. Far too often, things are pushed out without any real scrutiny, and then we have to deal with things after the horse has bolted from the stable. The public needs to be aware of what’s going on.”
Black Tech Fest brings together 90 speakers across 20 hours of content celebrating Black culture and tech breakthroughs.