Artificial intelligence for creative workers

A TUC manifesto
Introduction

This manifesto outlines our values and proposals for addressing the impact of artificial intelligence (AI) on creative work and workers. While we focus on the creative and education sectors, our manifesto advocates for the interests of all workers who generate intellectual property or use their likeness while at work, including writers, performers, educators and others. 

The creative and education sectors are vital to the UK’s economy, social cohesion and cultural identity. Creative workers are key contributors to AI development but face economic vulnerability if their rights are not protected. The rapid development of AI technology requires a response which centres the rights and interests of creative workers. 

Values

These are the values we consider fundamental to ensure this technology benefits all:

  • Transparency: Technology companies should provide clear information about how the technology operates and the data it is trained on. This is crucial for informed consent and preventing misinformation.
  • Consent and agency: People should be able to decide how they engage with this technology and should be able to withdraw consent.
  • Human creativity and connection: Human creativity has inherent value and should be safeguarded. Human input is essential for the quality, authenticity and emotional resonance of creative work.
  • Rights protection and preservation: The development and deployment of technology should respect, preserve and support workers’ rights and intellectual property rights.
  • Benefit-sharing, compensation and remuneration: The gains from this new technology should be shared fairly with workers so they are compensated and can continue to contribute.
  • Technology ‘for-and-by’ creative workers: AI should be designed with creative workers to meet their needs.
  • Training and skills development: Digital literacy and traditional skills training are essential for safe and effective technology use.
  • Consultation and collective representation: Creative workers and unions should be involved in technology design and deployment decisions.
  • Collaboration: Increased collaboration between technology stakeholders and creative workers is needed.
  • Equality, inclusion and cultural diversity: Technology should uphold equality, inclusion, and cultural diversity, avoiding content homogenisation.

Proposals 

These proposals take forward the values outlined above:

  • Labelling of machine-generated outputs: Clear labelling should be in place to differentiate machine-generated outputs from human-created content.
  • Opt-in for data mining and AI training: The use of human-generated materials should only be permitted if creative workers and rights holders have given their permission and consent.
  • Right to remove content from training datasets: The government should recognise a right to remove content from training datasets and ensure there are clear enforcement routes.
  • Fair contracts: Safeguards should be put in place against unfair terms and practices in contracts, which often arise because creative workers can be vulnerable in contractual negotiations.
  • Preserved and increased intellectual property rights: Government should confirm and uphold the principle that data mining for AI training without consent is an infringement of intellectual property rights and increase protection for creative workers in response to the new technology.
  • New likeness rights: New rights should be implemented to protect workers’ likenesses from being used without their consent, such as in ‘deepfakes’.
  • Remuneration schemes and licensing agreements: There should be licensing and compensation mechanisms for data mining and AI training so that they can be carried out legally with informed consent and fair remuneration of workers.
  • Credits and rights communication: Workers should be clearly and consistently attributed for their work when it is used by technology companies.
  • Disclosures: Technology companies should clearly disclose how their technology operates and what data it has been trained on.
  • Accessible legal redress: There should be user-friendly and timely fora for rights enforcement, backed up with strong sanctions such as fines.
  • Harmonised protection of creative workers: Governments should collaborate to prevent regulatory disparities which could encourage ‘jurisdiction shopping’.
  • Independent AI regulatory body: The government should establish a regulatory body, with social partner representation, to oversee and regulate the deployment of AI.
  • Support for sector-specific and rights-compliant AI: There should be co-operation between workers’ unions, technology leaders and government to support AI technology tailored to the usage needs of creative workers.
  • Specialised training and guidance: The government, further and higher education organisations, and unions should provide training opportunities for creative workers on data, technology and rights relevant to new technologies like AI. This training should be sector-specific.

Context

Our manifesto outlines our values and proposals to address the impact of artificial intelligence (AI) on creative work and workers.

We acknowledge that workers across many industries engage in creative work when they carry out tasks capable of generating intellectual property or using their likeness. We refer to them as creative workers.

We recognise that creative workers are often but not always found in the creative sector. For example, teachers, academics, and other service workers who create or deliver content such as text, images, sound or video recordings in daily tasks also engage in creative work alongside writers, artists, and journalists. While we refer to the creative or education sectors in our manifesto to provide practical examples of issues and solutions, we advocate for the rights and interests of all creative workers across all industries.

We also acknowledge that the development of AI technology itself raises legal and ethical concerns on a global scale, such as the wellbeing and safety of the workers involved in creating the technology, data sovereignty and environmental degradation.

We acknowledge that artificial intelligence is a rapidly evolving technology whose impact on our work, education and personal lives will change. We may revisit the values and proposals outlined in our manifesto in light of future developments.

We note:

  • Workers in the creative and education sectors are the backbone of the UK’s ‘soft power’ industries, generating political influence and thought leadership worldwide. The creative sector is also a growth-driving industry in the UK economy and world-leading in creative and digital services exports.1 Last but not least, the creative and education sectors play a vital role in building the country’s social cohesion and cultural identity.
  • These creative workers are also key contributors to AI innovation. They are the custodians of public trust and the main producers of human-generated content, both of which are core resources needed for technology development. Yet the rights and interests of creative workers have not been adequately considered by technology developers or the UK government in their approach to AI.
  • We therefore call on industry and government leaders to centre creative workers’ rights and interests in AI innovation, policy and practice going forward.
  • Although highly skilled, creative workers can be economically vulnerable. They often operate in less financially secure or well-resourced organisations such as small and medium-sized enterprises (SMEs), under precarious contracts (self-employed or fixed-term), and in increasingly concentrated markets where commercial practices are structured to transfer value from workers to corporations.2 For example, creative workers are routinely required to transfer all rights to their intellectual property, likeness or privacy in perpetuity as a non-negotiable condition of their engagement, losing their rights to further consent and fair remuneration in the process. This accumulation of rights concentrates economic power in the hands of a few market players, such as media publishers, record labels, online platforms or digital service providers, and risks excluding creative workers from accessing the opportunities of AI.
  • Adding to this challenge, it can be practically difficult for creative workers to organise as many are engaged on a project-by-project basis, often with no single or identifiable workplace, in sectors without statutory rights to union representation.
  • There is consensus among governments, the public and private sectors that ‘ethical’ or ‘responsible AI’ is key to accessing the benefits of technology while mitigating the risks.3 However, existing ‘ethical AI’ declarations only provide high-level principles. As a result, it is unclear what ‘ethical AI’ looks like in practice, leaving everyone vulnerable to ‘ethical AI washing’, which occurs when technology developers and deployers make unsubstantiated claims of ethical practice to attract users. In the absence of enforceable regulations or industry agreements, declarations of ‘ethical AI’ will remain ineffective in protecting creative workers’ rights and interests.
  • 1 Department for Business and Trade (2024) Invest 2035: the UK’s modern industrial strategy.
  • 2 Rebecca Giblin and Cory Doctorow (2022) Chokepoint Capitalism: how big tech and big content captured creative labor markets and how we’ll win them back, Beacon Press.
  • 3 Mathilde Pavis (2024) Recentering Culture in AI policy and practice, UNESCO working document.
