People often – falsely – assume that AI and machine learning technologies are uninfluenced by prejudice and other negative human traits. They assume that these technologies therefore make objective and neutral decisions, based on cold facts and data. This is not the case. Any software or algorithm is designed by humans. And their unconscious biases will be reflected in the data that they collect and curate, as well as in the logic of the algorithms they create.

This data is often biased to begin with, as discussed in Invisible Women by Caroline Criado-Perez. From medicine to urban planning, men are treated as the default and women as a deviation from the norm. Since AI tools run on this data, it's unsurprising that they fall short of accurately representing our diverse society, unless these biases are taken into account.

This is also where inclusive, feminist and intersectional representation and oversight are needed, right where these tools are made.

We need human solutions for human problems

This can have real consequences. On the practical side, people with dark skin could be excluded by AI-powered tools, or women could have problems with voice recognition, when the training data over-represents white men. These tools can also reinforce systemic discrimination – for example when CV matching tools undervalue women, or when algorithms give certain people worse credit ratings based on gender or race data.

These are just some examples where algorithms can detect and exploit statistical correlations that reflect our historically and structurally biased world. Fallacies like this can and must be corrected by humans – or more accurately, by teams of humans that represent diverse cultural, socio-economic and religious backgrounds. Many tech companies have understood this, and are already working hard to tackle these issues.
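To make this concrete, here is a minimal sketch (in Python, using entirely hypothetical data) of how a naive screening model can absorb historical bias. It scores a CV by how often similar past CVs led to a hire, so a feature that merely correlates with gender – here, membership of a women's chess club – drags down the score of an otherwise identical candidate.

```python
# Hypothetical historical records: (CV features, was the person hired?)
# The past hires skew male, which is the bias the model will inherit.
history = [
    ({"python", "statistics"}, True),
    ({"python", "statistics"}, True),
    ({"python", "statistics", "womens_chess_club"}, False),
    ({"python"}, False),
]

def hire_rate(feature):
    # Fraction of past CVs with this feature that resulted in a hire.
    rows = [hired for feats, hired in history if feature in feats]
    return sum(rows) / len(rows) if rows else 0.0

def score(cv):
    # Naive model: average the historical hire rate of each feature.
    return sum(hire_rate(f) for f in cv) / len(cv)

cv_a = {"python", "statistics"}                       # identical skills...
cv_b = {"python", "statistics", "womens_chess_club"}  # ...plus a gendered proxy

# The proxy feature alone lowers the score, despite identical qualifications.
print(score(cv_a) > score(cv_b))  # prints True
```

The model never sees gender directly; it simply rewards whatever correlated with hiring in the past, which is exactly how real CV-screening tools have ended up penalising women.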

[Image: three women sitting with computers. Photo by Christina @ wocintechchat.com on Unsplash]

So, with all this in mind, here are six examples of how women – both those in tech and non-techy people – can contribute to creating a healthier environment for AI development:

1. Being in the room

We need diverse teams in tech and AI to prevent tools from being biased. Failing to do so negatively affects people who are marginalised or under-represented, including women. You can help correct that simply by aiming for a seat at the tables where these things are made, and where people like you might be missing.

2. Understanding the topic

Simply getting up to speed on how AI affects your life opens doors to opportunities, both to navigate your environment and to influence it.
One small example – did you know that job search tools can suggest less attractive jobs to women than to equally qualified men? Even if you don't have the skills to fix this (yet), you can at least adapt your search behaviour – helping yourself go for the things you deserve.

3. Building better, safer and more effective products

Insufficiently diverse companies make poorer products that are not inclusive, because they fail to account for hundreds of so-called “edge cases”, i.e. deviations from an assumed norm. “Technically Wrong” by Sarah Wachter-Boettcher delves into this. Simply not being male should not make you an edge case, so let's challenge that.
Rachel Thomas has made it her mission to help non-tech people learn the skills they need to create practical solutions to real-world problems – and to do so fast. This can not only help address the harms that AI technology can cause, but also empower people to make the most of its positive potential.

4. Empowering others

The proportion of women with a degree in ICT is five times lower in Belgium than it is in Saudi Arabia. If you work in a tech-related job, project or hobby – congratulations, you are a role model! Get out there – teach, share and support other women on their journey.

5. Shaping the public debate – in tech and beyond

Attempting to improve the fairness of AI is futile if, outside of technology, our society cannot even agree on what it means to be human or fair. Any bias in machine learning is nothing but a mirror held up to our society. Only a broad cultural shift can start to tackle that.

6. Know why you’re doing it

Are you interested in:

You don’t need to get to the bottom of this now, but being focused in your approach is key to your success.

Let me know what you think of this, and whether you have any experience with this mysterious sector called tech, by reaching out to me on Twitter. If you are now curious to get involved, check out the School of AI Brussels, which organises regular meetups and workshops. They are an opportunity to discuss some basic concepts and use cases of Artificial Intelligence, including its ethical, legal and political implications. We also organise hands-on coding workshops with coding challenges, designed for beginners, intermediates and more advanced learners and practitioners. The goal is to explore different aspects of artificial intelligence, to demystify what is inside the “AI black box”, and to dissect how algorithms work, as well as what (and who) influences their decision-making.

Christina Wunder
wunder.christina@icloud.com
Christina Wunder is a communication specialist with a passion for politics and technology. Having been a press representative for the European Commission for the past years, she is now with the Google team in Brussels (all views expressed in this article are her own). Christina also runs The School of AI Brussels - a learning community for people who want to learn about artificial intelligence, machine learning and data science.
