AI Inequalities: Women

TUC Cymru is concerned about the risks facing all workers from Artificial Intelligence (AI). We wanted to know more about the risks it poses for specific groups of workers.
Therefore, we commissioned Prof Lina Dencik from the Data Justice Lab to produce a report on AI Inequalities at Work. Here she writes about AI’s impact on women workers.

Despite concerted efforts to develop AI that can address gender disparities and support women’s experiences, research into the use of AI technologies at work shows that AI tends to reproduce existing patterns of inequality between men and women. AI has been found to perpetuate gender stereotypes and reinforce gender norms that disproportionately disadvantage women, from female voice assistants to image-generation tools that depict men in high-earning jobs. 

Researchers suggest that these trends are unsurprising considering the continued divisions established early on in education, which see women making up only 36% of those enrolled in science, technology, engineering, and mathematics (STEM) subjects globally and 29% in Information and Communication Technologies (ICT), compared to 70% in health and welfare. In Europe, the proportions are even smaller, with women making up only 34% of STEM and 17% of ICT students. These numbers matter because AI is largely shaped by the environments in which it is designed, including the values and experiences of developers and engineers, who risk neglecting the needs of diverse users and can further entrench or perpetuate stereotypes and exclusions.

Across recruitment, hiring and the management of workers, women tend to experience exclusion and discrimination from AI tools that are often designed by and for men. For example, researchers have shown that existing gender inequalities teach algorithms that women are paid less than men, that men are more likely to get business loans, that men are more likely to occupy higher-status positions, and that men are more likely to be promoted. This, in turn, leads to biased predictions that inform decisions about who will be employed and on what terms, including the amount of pay a candidate is likely to accept.

Once in work, algorithms have been shown to favour ‘male traits’ in measures of success, such as assertiveness and confidence, which are encouraged more in men within society. At the same time, some of these technologies have also been used to highlight existing biases within organisations, evidencing gender inequalities with data in order to encourage employers to take action in specific areas of discrimination. But this often requires more than a technological solution.

Many new technologies explicitly designed for women have also entered the market, particularly within health and wellness, focusing on issues such as menopause, pregnancy and periods, and these can be used in workplaces to be more inclusive of women’s experiences. However, they are premised on the possible exposure of very intimate data that may be misused by employers. This is a particular issue for women in the workplace, who are already more likely to experience the increased surveillance and privacy infringements associated with data-driven technologies as harmful. They are also significantly more at risk of digital technologies being used to harass or intimidate them, such as the growing distribution of image-based sexual abuse using AI technologies within the workplace, which research shows has a disproportionate impact on women’s job prospects, security and well-being.

So-called ‘pink collar’ jobs like clerical work, human resources, retail, call centres, and banking that are traditionally female-dominated have also been highlighted as being at particular risk of automation with the rapid uptake of generative AI tools in workplaces. At the same time, women have been found to be less willing to embrace generative AI at work. The impact this will have on gender inequalities remains largely speculative so far, as these technologies are still being phased in across the labour market, but it presents a particular challenge for unions. That is because, whilst women make up the majority of union membership, they tend to be in employment that is less likely to have union representation and may therefore be more difficult to reach. Calls have also been made to enhance diversity within the education of STEM subjects, including computing and engineering, and to address gender disparities in the technology industry so that the design of technologies accounts for more diverse experiences.

Policy and regulatory efforts have also sought to highlight issues of gender stereotypes and bias as part of a broader ‘Responsible AI’ agenda. This includes a focus on ensuring gender equality when training and developing algorithmic systems as a means of ensuring inclusive, responsible, and fair AI systems. But for this to be meaningful, researchers suggest that such efforts need to be accompanied by an engagement with wider issues that can address gender inequalities more broadly, like workplace culture, harassment, hiring practices, unfair compensation and tokenism.

TUC Cymru is campaigning for all workers to be protected against the risks of AI. If you’re a woman worker concerned about these issues, raise them at your trade union branch. TUC Cymru has successfully negotiated guidance on the use of AI in the public sector. Use it and adapt it for your workplace.

The TUC is campaigning for additional legal protections against the threats of AI and has produced a range of materials to assist reps and officers.

The full report AI Inequalities at Work is published on the Data Justice Lab’s website.