There is now overwhelming support for stronger regulation to protect workers from the punitive use of artificial intelligence (AI) and surveillance technology, and to prevent widespread discrimination, work intensification and unfair treatment.
Workplace surveillance has changed dramatically over the last few decades. With the growth of digital technologies, the bag checks and punch-in cards of the past have given way to browser history searches, phone call and keystroke monitoring and, in some sectors, the analysis of facial expressions and tone of voice.
During the pandemic, as many people started working from home, the use of workplace surveillance technology grew.
A BritainThinks poll of 2,209 workers in England and Wales, commissioned by the TUC, found a notable increase in workers reporting surveillance and monitoring in the past year (60 per cent in 2021, up from 53 per cent in 2020).
More workers report monitoring of staff devices (up from 20 per cent to 24 per cent) and monitoring of phone calls (up from 11 per cent to 14 per cent) than in 2020.
Three in 10 agree monitoring and surveillance at work has further increased since the Covid-19 pandemic began.
Data and technology are increasingly being used by employers to monitor workers, allocate work and even hire and fire.
Yet tech companies, employers and the government have failed to put in place the rules and conditions needed to protect workers.
Just take the Post Office scandal. Over 700 staff members were criminalised, and many were imprisoned or financially ruined, after a computer program repeatedly reported shortfalls in their accounts that did not exist. This injustice must be a turning point in the uncritical use of worker-monitoring tech and AI.
The creeping role of AI and tech-driven workplace surveillance is spreading beyond the gig economy and into the rest of the labour market, especially the private sector.
Financial services (74 per cent), wholesale and retail (73 per cent) and utilities (73 per cent) are the sectors with the greatest proportion of workers reporting surveillance. Here, automated decision making via AI includes selecting candidates for interview, day-to-day line management, performance ratings, shift allocation and deciding who is disciplined or made redundant.
There is a huge lack of transparency over the use of AI at work, and many staff have no way of knowing how technology is being used to make decisions that directly affect them.
Concerningly, the government recently consulted on diluting the General Data Protection Regulation (GDPR) as part of its post-Brexit divergence agenda, even though the GDPR provides some of the key protections workers have against surveillance tech.
The EU is currently putting in place laws dealing specifically with the use of AI, whereas the UK does not have anything similar – yet another example of the UK falling behind its EU counterparts on workers’ rights.
In our manifesto, Dignity at work and the AI revolution, the TUC calls for stronger legal protections, including a legal requirement for employers to consult workers before introducing new forms of monitoring.
British workers agree, with 82 per cent now supporting a legal requirement to consult before introducing monitoring.
Some 72 per cent agree that without careful regulation, using technology to make decisions about workers could increase unfair treatment.
Workers are saying no to increased surveillance. Unions must protect them from unfair or inaccurate monitoring by technology that would otherwise grow unchecked.