AI Inequalities: Disabilities

TUC Cymru is concerned about the risks facing all workers from Artificial Intelligence (AI). We wanted to know more about the risks it poses for specific groups of workers.
Therefore, we commissioned Prof Lina Dencik from the Data Justice Lab to produce a report on AI Inequalities at Work. Here she writes about AI’s impact on disabled workers.

Disabled workers stand to be major losers in the race towards an AI-driven economy. Although efforts are being made to develop AI technologies that can facilitate inclusive environments and provide support for disabled workers across different sectors, research has shown the extent to which AI overwhelmingly disadvantages disabled workers. In hiring, for example, disabled candidates are often systematically excluded from AI-driven processes that rely on streamlined or limited forms of experience that do not account for diverse backgrounds and abilities. 

Audits of CVs that include disability-related information, such as blindness, deafness, cerebral palsy and autism, have shown that these CVs are ranked lower by AI systems than CVs that omit such information, even when they are enhanced with achievements such as leadership awards and scholarships.

Research also shows that disabled workers are often disadvantaged by ableist attitudes built into the design of assessment and profiling tools, which lead to discriminatory outcomes and position disabled workers as outliers. Such inequalities are particularly stark in speech and facial recognition tools that cannot account for characteristics more common among disabled people, but they are also evident in the growing use of AI-driven personality tests, which tend to disproportionately screen out disabled people, especially autistic people and those with mental health conditions. Disabled candidates are also disadvantaged by the lack of appropriate data for determining who is a good ‘fit’ or likely to be ‘successful’ in a job.

Historically, disabled people have been twice as likely to be unemployed and are therefore less likely to be represented in data on past successful employees. At the same time, they may experience discrimination from being overrepresented in certain datasets by virtue of being under more scrutiny: language-processing AI in the US, for example, has been shown to link disability to addiction, homelessness and violence, because data about disabled people are gathered disproportionately in those settings.

In general, research on AI-driven hiring and disability reveals a broader structural ableism in who is considered qualified for a job and why. Ambiguous criteria like ‘fitness’ and ‘employability’ often reflect this ableism and reduce disability to a single variable, despite the contested nature of how disability is defined and the many ways it is experienced. Once in work, research has shown that such ableist attitudes continue with the adoption of tools and platforms that have been found to be inaccessible to some disabled workers and to exclude them from particular jobs or tasks.

Research has especially focused on gig work and crowd work, such as Amazon’s Mechanical Turk (MTurk), where features measuring speed or the use of certain verification tools can lock disabled workers out of the platform. Algorithmic management techniques, like tracking devices for delivery work, have also been shown to lack accommodation for different abilities in the way they reward and discipline workers, and can conceal disabilities in ways that obscure the responsibility of employers. Concerns have also been raised about the growing reliance on generative AI tools like ChatGPT within workplaces, which risk further excluding and discriminating against disabled workers unless there is deliberate investment in language infrastructure, including for sign language and braille, to overcome ableist bias within such tools.

Calls have therefore been made to improve the way AI is audited for bias so that it better accounts for the experiences of disabled people, and to provide more information about, and more adaptable, tasks on platforms so that disabled workers can adjust their work processes. There have also been campaigns to foreground so-called ‘disability justice’ in the design of new technologies, providing transparency on how ideas around ‘fitness’ or ‘success’ are actually arrived at within AI-driven assessments.

This would also include greater responsibility on the part of employers to ensure tools do not harm or disadvantage disabled workers. In fact, equality laws in both the UK and the EU impose a duty on employers that should require engagement with uses of algorithmic decision-making, both at the design level and through individualised adjustments on a case-by-case basis. Such employer responsibility could extend to ensuring there is an option to opt out of AI-driven selection, or to demanding human intervention in data-driven assessments within the workplace.

TUC Cymru is campaigning for all workers to be protected against the risks of AI. If you’re a disabled worker and concerned about these issues, raise them at your trade union branch. TUC Cymru has successfully negotiated guidance on the use of AI in the public sector. Use it and adapt it for your workplace.

The TUC is campaigning for additional legal protections against the threats of AI and has produced a range of materials to assist reps and officers.    
The AI Inequalities at Work report is published on the Data Justice Lab’s website.