Gender bias and (un)fair algorithms: “Fairness depends on our values”

Corinna Hertweck has been working on a research project at the ZHAW School of Engineering since 2020 and is a doctoral student at the University of Zurich. In a joint project of the ZHAW, the University of Zurich and the University of St. Gallen, she works at the interface between technology and society. On the occasion of International Women’s Day 2022, she talks in an interview about stereotypes, gender bias and fairer algorithms.

Which female stereotype annoys you?

I’m particularly bothered by the stereotype that women don’t understand anything about technology. Stereotypes like that keep girls and women from even considering a path into technology. If we want to see more women in STEM fields, we need to actively encourage them instead.

How “fair” are algorithms in terms of gender equality? Does artificial intelligence (AI) still need tutoring here?

It definitely does. Several cases that have become public show that AI often systematically treats women and non-binary people worse than men, for example in hiring or in granting loans.

AI is often trained to be as “accurate” as possible. But what does that actually mean?

“Accurate” means that the AI mimics what it finds in the data as closely as possible. Have few women been hired so far? Then the AI will also hire few women in the future. The AI does not ask: Why have we as a company hired so few women so far? Do the hiring managers hold unconscious biases against women? Or did we receive few applications from women because the job advertisements were shown mainly to men? Often, the inequities in the data are even amplified. We have to actively counteract this.
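To illustrate the mechanism she describes, here is a minimal sketch on purely synthetic, hypothetical data (not from the project): a model trained only for predictive accuracy learns the historical penalty against women and reproduces it for equally qualified candidates.

```python
# Hypothetical illustration: an "accurate" model reproduces historical hiring bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic history: qualification is distributed identically across genders
# (0 = man, 1 = woman), but past human decisions penalised women.
gender = rng.integers(0, 2, n)
qualification = rng.normal(0, 1, n)
hired = (qualification - 1.0 * gender + rng.normal(0, 0.5, n)) > 0

# Train purely for accuracy on the historical decisions.
X = np.column_stack([gender, qualification])
model = LogisticRegression().fit(X, hired)

# Probe: a man and a woman with identical qualification.
probe = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(probe)[:, 1])
# The woman's predicted hiring probability is much lower,
# even though her qualification is the same.
```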

What are you doing at the ZHAW to make algorithms fairer?

On the scientific side, we are developing methods to assess the fairness of algorithms and to reduce unfairness. But science shows that there is no objective definition of fairness. What counts as fair depends on the context and, crucially, on our values. The discussion about the fairness of algorithms is therefore a social discussion about the fairness of the system in which they are used. The question of whether a social welfare allocation algorithm is fair, for example, is not a mathematical question but a question about the values that should be embedded in our social welfare system. In our project, we try to stimulate this social discourse, for example through our collaboration with AlgorithmWatch Switzerland and the Museum für Gestaltung on the exhibition “Planet Digital”.
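As one concrete example of such an assessment method, here is a sketch of demographic parity, a standard fairness metric from the literature (not necessarily the one used in the project): it compares selection rates between two groups, with 0 meaning parity.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in selection rates between group 0 and group 1 (0 = parity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return rate_0 - rate_1

# Hypothetical hiring predictions (1 = hire) for eight applicants.
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

Demographic parity is only one of several competing definitions (others include equalized odds and calibration), and these definitions generally cannot all be satisfied at once, which is precisely why choosing among them is a question of values rather than mathematics.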

Read the full interview here.