
As a woman of colour passionate about the intersection of social activism and tech, I have been learning about the huge, terrifying role that AI plays in propagating discrimination in our jobs and livelihoods, especially if you’re not a white man. Here are five key concerns that you, as a young woman, need to watch out for:

 

1. The misconception that AI is inherently unbiased

Make no mistake: all algorithms are written by people who have their own biases. Critics have warned that AI-driven hiring tools are just as biased as the humans who train them. And since most programmers are currently white and male, they often unknowingly encode their own biases into their algorithms. Once these algorithms are fed into the black box of machine learning, the same prejudices are multiplied at mass scale. The catch-22 here is that to know which unfair biases to look for in algorithms, you need a diversity of perspectives to help teams think outside the box and anticipate how AI will impact people’s lives.

 

2. Predicting the past?

Too often, we see a gap between businesses that use AI for efficiency and societies that expect businesses to use AI to tackle systemic discrimination. These are two very different goals. By aiming merely at efficiency, algorithms risk making decisions based on a misleading “ideal employee” profile. The problem is this: imagine an algorithm that scans a company’s employee base and calculates that white, abled men between the ages of 23 and 29 make up the majority of those who have been recruited, promoted, and retained. It therefore decides that this group constitutes the “ideal employee” profile. What you are left with is an algorithm that systematically deprioritizes diversity and carries the same real-world bias into AI, which is what happened at Amazon.
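
To make the mechanism concrete, here is a minimal, purely illustrative Python sketch. All of the data and the scoring logic are invented for this example; no real vendor’s product works exactly this way. It shows how a scorer that ranks candidates by similarity to the historical majority profile mechanically downgrades anyone who doesn’t resemble past hires:

```python
from collections import Counter

# Invented historical data: demographics of past "successful" hires.
past_hires = [
    {"gender": "male", "age_band": "23-29", "disability": False},
    {"gender": "male", "age_band": "23-29", "disability": False},
    {"gender": "male", "age_band": "30-39", "disability": False},
    {"gender": "female", "age_band": "23-29", "disability": False},
]

# Step 1: derive the "ideal employee" = the majority value of each attribute.
ideal = {
    attr: Counter(h[attr] for h in past_hires).most_common(1)[0][0]
    for attr in past_hires[0]
}
# ideal == {"gender": "male", "age_band": "23-29", "disability": False}

# Step 2: score candidates purely by similarity to that profile.
def similarity_score(candidate):
    return sum(candidate[attr] == ideal[attr] for attr in ideal) / len(ideal)

print(similarity_score({"gender": "male", "age_band": "23-29", "disability": False}))   # 1.0
print(similarity_score({"gender": "female", "age_band": "40-49", "disability": True}))  # 0.0
```

Notice that the scorer never looks at a single qualification. “Resembling past hires” has quietly become the definition of merit.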

AI needs existing data sets to make predictions. But algorithms are unable to account for historical discrimination and sexism. This is where humans would need to come in.


 

3. Bias is everywhere

Bias exists at every stage of the recruitment process, starting with job ads. Research shows that Google displays ads for higher-paying jobs to men more often than to women, and that masculine-worded job ads significantly reduce how appealing women find a role. We assume that tech platforms like Google and LinkedIn serve us unbiased algorithms, but the data shows otherwise.

Recruitment algorithms are also trained to read specific CV and resume formats, which could mean that your CV is not evaluated properly. In the case of Amazon’s AI recruitment software, resumes that included the word “women’s,” as in “women’s chess club captain,” were penalized. Even if you make it through the screening process, you could be invited to a video interview that assesses you based on your keywords, facial expressions, and tone. If the underlying training data has not been vetted across categories such as gender, age, or religion, these assessments can replicate the same biases.
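
We don’t know the internals of Amazon’s model, but a toy sketch with invented resumes shows how this kind of penalty can emerge from skewed training data alone. In this hypothetical scorer, a word like “women’s” picks up a negative weight simply because it rarely appeared on the resumes of past hires:

```python
import math
from collections import Counter

# Invented training data mirroring a historically male-dominated hiring record.
hired    = ["java developer chess club captain", "java developer rowing club"]
rejected = ["java developer women's chess club captain"]

def word_counts(docs):
    return Counter(word for doc in docs for word in doc.split())

hired_c, rejected_c = word_counts(hired), word_counts(rejected)
vocab = set(hired_c) | set(rejected_c)

# Naive smoothed log-odds weight per word:
# positive = associated with past hires, negative = associated with rejections.
def weight(word):
    p_hired    = (hired_c[word] + 1) / (sum(hired_c.values()) + len(vocab))
    p_rejected = (rejected_c[word] + 1) / (sum(rejected_c.values()) + len(vocab))
    return math.log(p_hired / p_rejected)

print(round(weight("java"), 2))     # ~0.2: appears everywhere, roughly neutral
print(round(weight("women's"), 2))  # ~-0.9: only ever seen on a rejected resume
```

Nothing in this data says women are worse candidates; the word is penalized only because the historical record the model learned from was skewed.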

 

4. Intersectional AI

Intersectionality considers different systems of oppression, and specifically how they overlap and compound. For example, a black woman might face discrimination from a company that is due not solely to her race nor solely to her gender, but to the combination of the two. Most AI recruitment solutions in Europe are still unable to account for people’s intersectional profiles. The algorithm might be able to differentiate between male and female, or between abled and disabled, but it can’t do both simultaneously. Isn’t it funny when real-life binary views get projected into AI?
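
Here is a minimal sketch, with invented numbers, of why this matters when auditing a screening tool. Checked one attribute at a time, women and disabled candidates each appear to be shortlisted half the time; only the intersectional view reveals that disabled women are never shortlisted at all:

```python
from collections import defaultdict

# Invented screening outcomes: 1 = shortlisted, 0 = rejected.
outcomes = [
    ("male",   "abled",    1), ("male",   "abled",    1),
    ("male",   "disabled", 1), ("male",   "disabled", 1),
    ("female", "abled",    1), ("female", "abled",    1),
    ("female", "disabled", 0), ("female", "disabled", 0),
]

def shortlist_rate(group_key):
    totals, shortlisted = defaultdict(int), defaultdict(int)
    for gender, ability, passed in outcomes:
        key = group_key(gender, ability)
        totals[key] += 1
        shortlisted[key] += passed
    return {key: shortlisted[key] / totals[key] for key in totals}

print(shortlist_rate(lambda g, a: g))       # {'male': 1.0, 'female': 0.5}
print(shortlist_rate(lambda g, a: a))       # {'abled': 1.0, 'disabled': 0.5}
print(shortlist_rate(lambda g, a: (g, a)))  # ('female', 'disabled'): 0.0
```

An audit that only groups by one attribute at a time would report a 50% rate for women and for disabled candidates, and completely miss the group that is screened out entirely.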

To ensure everyone has access to the same jobs, we need AI that can account for our intersectional identities.

 

5. The burden of proof

Many companies, including IKEA, Amazon, and Hilton, are already using AI for recruitment. As we’ve seen, many of these algorithms are biased, so job seekers risk not being hired because of algorithmic bias. But how can they ever hope to prove it?

Increasingly, we share a disproportionate amount of information about ourselves with the technology we use. You may have to provide personal data (age, marital status, previous salary) to your potential employer (and their recruitment technology). In return, though, do you get information about how AI will be used to vet you, what identity the algorithm gives you or why the algorithm did not shortlist you?

In the existing legal context, the burden of proof lies with the people who have the least information (you and me) rather than with the platforms we use, making it very difficult to identify and challenge discriminatory AI.

 

I’ve outlined a few issues showing how algorithms can entrench bias in recruitment. But how do we solve them? Here are some thoughts:

  1. Remove bias from our societies: To ensure that AI is unbiased, we first have to tackle the existing discriminatory structures of power and wealth within our societies. Algorithms by themselves can’t fix gender inequality, so even if you did recruit more women into tech, would they stick around in a sexist work culture?
  2. Educate technologists on ethical AI: Educating those responsible for building and deploying AI about the ethical implications of their algorithms is vital to fairer, more transparent AI. Additionally, people working in tech should abide by an established code of ethics, just as our doctors and lawyers do.
  3. Keep the human in HR: Companies that sell AI tools expect that, in some sectors, human recruiters will soon be entirely replaced by their software. Recruiters and AI developers alike must make sure that AI solutions are properly tested for bias, that identified problems are adequately dealt with, that users have channels to raise concerns, and that, ultimately, humans make the final call.

I'm a human rights activist passionate about creating empowering and safer spaces for oppressed groups, including women, people of colour, people with disabilities, Muslims, LGBTIQ+ people, and youth! I'm currently trying to infiltrate the white bro culture of tech by bringing a human-centered, inclusive, and ethical perspective to tech solutions and challenges that have a profound impact on our lives, opportunities, and freedom.

Gail Rego
gail.diadrie.rego@gmail.com
