Consultation response to AI White Paper

TUC response to “A Pro-innovation Approach to AI Regulation”
Summary

Trade unions play a key role in ensuring that the rights and interests of working people are represented and recognised. For example, we advocate strong collective bargaining rights, appropriate regulation and enforcement to protect employment rights, and equality of treatment for all, regardless of factors such as race, religion, age, gender, disability, sexuality and access to financial resources.


In light of the purpose and goals of the TUC, we have responded to this consultation from the perspective of workers and the workplace. We emphasise the importance of context-specific governance of AI and highlight that the white paper, A Pro-innovation Approach to AI Regulation, makes no specific provision for the workplace. We believe this to be a serious omission in the government’s approach.

Although we support the principles outlined in the AI white paper, we believe the government’s proposals are inadequate and do not meet the urgent and pressing need for regulation of the use of AI at work and beyond. What is needed is a legislative upgrade of workers’ rights to address the implications of the use of AI in the workplace.

The principles articulated in the AI white paper are important, but incomplete. We have set out a series of values in our AI manifesto, Dignity at Work and the AI Revolution, and we advocate the use of a set of principles based on these values.

In any event, we predict that the white paper principles will be ineffective in the absence of any statutory footing and when left to be enforced by significantly under-resourced regulators.

Voluntary frameworks are likely to be abandoned by technology companies once they conflict with the development of commercially attractive products.

The context-specific approach recommended by the government is sensible. However, in our view urgent context-specific legislative change is needed. In particular, we call for regulation of AI in the employment context and set out our proposals in our AI Manifesto, supported by our legal report Technology Managing People: the Legal Implications [2].

There is no time to waste. Whilst AI offers great opportunity and reward in the workplace, the technology brings significant risks for workers. AI-powered tools have been rolled out into the workplace at a phenomenal pace, in effect being tested on workers without adequate protection. The government must act now.

The proposals in our manifesto could be implemented quickly to protect workers. But legislative action is not enough in isolation. Support must also be given by government for unions and civil society to ensure that all voices are represented in the fourth industrial revolution.  

There is great potential for collective bargaining to function as a key system of co-governance of technology at work, tailoring sector-specific protections and ensuring that workers and unions can input at each stage of the AI value chain.

For the good of all, unions should be supported in this role with a right to workplace and digital access, new technology forming part of collective bargaining provisions, and a statutory right of consultation for trade unions when employers introduce high-risk AI at work. In addition, we believe public funding should be provided to support unions in training workers and union reps in the use of, and negotiating around, AI at work [1].

The UK is at risk of becoming an international outlier in its approach to the use of artificial intelligence. In the United States, the White House has published a Blueprint for an AI Bill of Rights. In Canada the government has imposed a regime of mandatory algorithmic impact assessments, and the EU Artificial Intelligence Act is in the final phase of the EU legislative process, with the European Parliament having confirmed its support for consultation of workers and unions before the introduction of AI. Yet the UK has no equivalent plans for AI-specific legislation.

The revised cross-sectoral AI principles

1. Do you agree that requiring organisations to make it clear when they are using AI would adequately ensure transparency?

This goes part way towards solving some of the issues relating to transparency, but not far enough.

It is important that workers are made aware of when technology is being used to make decisions about them at work.

However, workers must be made aware not just of the fact of the technology being used, but how the technology is operating.

Transparency also requires information and consultation with workers, and the TUC considers this is best achieved through meaningful engagement with trade union representatives for the workers concerned.

2. What other transparency measures would be appropriate, if any?

The way in which these decisions have been made by technology must also be easy to explain and understand. And there should be enough information available to workers and job applicants about the technology to ensure they can trust it will operate fairly.

Otherwise, it will be impossible for workers to challenge unfair and discriminatory decisions made by technology, or to know when inaccurate or misleading data has been used.

Workers who are members of a trade union will often rely on their representatives to raise concerns about perceived unfairness and to negotiate on their behalf. There are existing legal protections for these rights which should not be forgotten and could be highlighted in any future guidance: for example, information rights under s.181 of the Trade Union and Labour Relations (Consolidation) Act 1992, and under Regs. 4A and 7 of the Safety Representatives and Safety Committees Regulations 1977.

To ensure that a worker has ready access to information about how AI and automated decision-making (ADM) are being used in the workplace in a way which is high-risk, employers should be obliged to provide this information within the statement of particulars required by Section 1 of the Employment Rights Act 1996.

In addition, employers should be obliged to maintain a register which contains this information, updated regularly. This register should be readily accessible to existing employees, workers, and job applicants, including employees and workers that are posted to sites controlled by organisations other than the employer.

There also needs to be greater clarity on when and how rights under Article 22 of the UK General Data Protection Regulation (UK GDPR) are engaged, as employers will routinely argue that automated processing is necessary for the performance of an employment contract.

Data Protection Impact Assessments should provide an important early assessment of relevant risks, particularly on discrimination, but we are concerned by plans to water down these existing provisions in the Data Protection and Digital Information Bill.

3. Do you agree that current routes to contestability or redress for AI-related harms are adequate?

The existing employment law framework provides some important and useful protections. As set out in our legal report, Technology Managing People: the Legal Implications [1], there are helpful rights (for both redress and consultation) under the UK General Data Protection Regulation, the Equality Act 2010, the law of contract and unfair dismissal, health and safety law and human rights provisions.

We refer you to our legal report for a detailed exploration of all these routes to redress and an analysis of the gaps in the legal protections.

4. How could routes to contestability or redress for AI-related harms be improved, if at all?

TUC AI Manifesto: Dignity at Work and the AI Revolution

These are our proposals to improve the protections offered by the existing employment law framework:

  • Target ‘high-risk’ decisions

We suggest targeting high-risk AI/automated decision making (ADM) for more regulation.

Our view is that ‘high-risk’ should be defined as broadly as possible without inhibiting harmless uses of AI and ADM and should be focused on the worker impact.

We would like sector-specific guidance on the meaning of high-risk AI/ADM, with full input from unions and civil society.

Worker voice

  • Collective bargaining

We encourage all employers and unions to work together to put in place collective agreements that reflect the values and proposals we outline in our manifesto. Government should provide support for sectoral collective bargaining, including on new technologies at work, and unions should be given workplace and digital access.

  • A new duty to consult

There should be a statutory duty to consult trade unions in relation to the deployment of high-risk AI and ADM systems in the workplace, whether deployed directly or through a third party.

  • Workers, employees and the self-employed

Job applicants, workers, employees and the self-employed should all benefit from protections against AI-related harms.

Equality

  • Legislation to protect against discrimination

The UK’s data protection regime should be amended to state that discriminatory data processing is always unlawful.

The burden of proof in relation to discrimination claims that challenge high-risk AI or ADM systems in the workplace should be expressly reversed.

Provision should be made to ensure that all actors in the ‘value chain’ leading to the implementation of AI and ADM in the workplace are liable for discrimination, subject to a reasonable-steps defence.

  • Legislation to protect against unfairness

The Employment Rights Act 1996 should be amended to create a right, which can be enforced in the employment tribunal, for workers not to be subject to detrimental treatment, including dismissal, due to the processing of inaccurate data.

We are also concerned about whether workers receive adequate protection where data is processed unlawfully. In our view, there are significant problems with the enforcement of rights. The ICO decided just 84 matters last year, compared with the Employment Tribunals in England and Wales, which produced 11,155 judgments.

  • Equality impact audits and data protection impact assessments

Equality Impact Audits in the workplace should be made mandatory as part of the Data Protection Impact Assessment (DPIA) process and made readily accessible to workers, employees and their representatives.

Employers should also be obliged to publish DPIAs, to foster trust in the data controller’s data processing operations and demonstrate accountability and transparency.

  • Additional guidance

There should be joint statutory guidance on the steps that should be taken to avoid discrimination in consequence of AI and ADM at work. Statutory guidance should be developed with input from Acas, CBI, CDEI, EHRC, ICO, and the TUC.

We welcome that the Information Commissioner's Office is currently updating its employment practices code, and we have responded to the consultation on the monitoring section of it, highlighting where the draft guidance needs to be further developed. We also welcome the preliminary steps taken by the EHRC to produce guidance on AI for the public sector and to position AI as a strategic priority. However, more statutory guidance is needed.

Human connection

  • New right to human review

There should be a comprehensive and universal right to human review of decisions made in the workplace which are high-risk.

  • New right to human contact

There should be an express statutory right to personal analogue engagement – an in-person engagement – when important, high-risk decisions are made about people at work.

Work/home boundaries

  • Guidance

Although current law (Article 8 European Convention on Human Rights) protects workers against intrusive AI/ADM that infringes privacy, there is inadequate legally binding guidance to employers explaining how these rights work in practice.

There should be statutory guidance for employers on the interplay between AI and ADM in relation to Article 8 and key data protection concepts in the UK GDPR.

  • Right to disconnect

There should be a statutory right for employees and workers to disconnect from work, to create communication-free time in their lives.

Transparency and explainability

  • Legislation to ensure access to information

To ensure that a worker has ready access to information about how AI and ADM are being used in the workplace in a way which is high-risk, employers should be obliged to provide this information within the statement of particulars required by Section 1 of the Employment Rights Act 1996.

  • Action from employers

Employers should be obliged to maintain a register which contains this information, updated regularly. This register should be readily accessible to existing employees, workers, and job applicants, including employees and workers that are posted to sites controlled by organisations other than the employer.

  • Legislation to ensure technology is understandable

UK data protection legislation should be amended to include a universal right to explainability in relation to high-risk AI or ADM systems in the workplace, with a right to ask for a personalised explanation, along with a readily accessible means of understanding when these systems will be used.

  • Trade deals

No international trade agreement should protect intellectual property rights from transparency in such a way as to undermine the protection of employees and workers’ rights.

Data awareness and control

  • Guidance

Data protection law provides workers with some key protections, including a right to challenge data processing, as well as a right not to be subject to ADM, in specific circumstances.

However, we need better guidance on how these protections operate in practice, to give more clarity to everyone at work.  We need statutory guidance on Articles 6, 21 and 22 of the UK General Data Protection Regulation.

This should include guidance on:

  1. The circumstances in which an employer can lawfully process data on the basis that it is ‘necessary’ to the employment contract under Article 6(1)(b) of the UK GDPR.
  2. The circumstances in which an employer can lawfully process data on the basis that it is ‘necessary’ to protect their legitimate interests or those of a third party.
  3. The interplay between Article 6(1)(b) and (f) bearing in mind that the lawful basis for data processing dictates the extent to which Articles 21 and 22 can be invoked and these provisions include important safeguards in relation to the use of AI powered technologies and ADM.
  4. The circumstances in which Articles 21 and 22 can be disapplied.

  • Data reciprocity, innovation and opportunity

The use of AI at work presents many opportunities for workers. Our proposals for data reciprocity will redress the imbalance of power over data at work and will also enable workers to benefit from AI-powered tools themselves, for example by undertaking analysis of data that can evidence and support trade union campaigning for better terms and conditions at work.

We advocate the development of AI-powered tools in workers’ interests – making work safer, more rewarding and more productive. This can only be achieved with union and worker input into the AI value chain.

Our proposals are intended to help trade unions and workers realise these opportunities.

  • New right to data reciprocity

Employees and workers should have a positive right to ‘data reciprocity’, to collect and combine workplace data.

  • New role for trade unions

Trade unions could have a formal data gathering role in relation to member data so that they can effectively monitor the use of high-risk AI and ADM in the workplace.

In addition, trade unions should be recognised as data subject representatives under the UK GDPR.

Trade unions should be assisted in developing AI-powered tools, and investigating ways to collect and make use of worker data, including engaging data scientists.

  • Collaboration and ethical guidelines

To encourage ethical behaviour at all stages of the AI production and implementation chain and establish a common understanding of the acceptable uses and deployment of new forms of technology, we suggest collaboration between the TUC and other stakeholders to establish employment-focused ethical principles in relation to high-risk AI and ADM systems.

The Data Protection and Digital Information Bill

Given the importance of the relevant existing protections for workers under the UK General Data Protection Regulation, these rights must be enhanced to meet the requirements of the digital age. We are therefore strongly opposed to the Data Protection and Digital Information Bill, which would dilute the existing protections. We refer to our briefing in Annex 1 to this response, in which we set out the details of our objections to the Bill.

5. Do you agree that, when implemented effectively, the revised cross-sectoral principles will cover the risks posed by AI technologies?

In the current context it is not possible for the principles to be implemented effectively: the existing regulators need more human and financial resources to do this.

In addition, in our view, the principles are incomplete and should be subject to extensive consultation with civil society, regulators, unions and other stakeholders.

6. What, if anything, is missing from the revised principles?

In our view, the principles are incomplete. The following principles are missing:

  • Worker/stakeholder voice. In order to ensure that everyone benefits from the use of AI at work and beyond, a wide range of stakeholders must have input into each stage of the AI value chain. We advocate the strengthening of trade unions and collective bargaining to support unions in acting as a counter-balance to the power of big data and big tech.
  • Equality and discrimination. Although there is a principle on fairness, we are of the view that it is important to specifically address the issue of equality and discrimination.
  • Human connection. We value the importance of human connection and in person engagement and see this as an important principle in the face of growing technological control and infringement of human agency.
  • Privacy. Work/home boundaries have become increasingly blurred by technology – this should not be the case and everyone is entitled to respect for these life boundaries.
  • Data equality. We believe that data equality and justice are key principles for the modern workplace. Workers should be educated about the value of their data, understand how their data is being used, and have rights over their data equal to those of their employer.
  • Collaboration. In our view, multi-disciplinary collaboration is key to solving the problems associated with AI, as well as realising the opportunities.

A statutory duty to have due regard

7. Do you agree that introducing a statutory duty on regulators to have due regard to the principles would clarify and strengthen regulators’ mandates to implement our principles, while retaining a flexible approach to implementation?

A statutory duty to have regard to the principles would be a step in the right direction. However, there are many hurdles to overcome before regulators can become effective enforcers of a set of principles.

The first is a practical hurdle relating to the existing responsibilities of horizontal regulators (such as the EHRC and the ICO) whose work applies across multiple sectors of society, and those of vertical regulators who are focused on a single topic or area.

All these regulators (both horizontal and vertical) have defined powers. These will need to be amended to ensure that each regulator has the authority to apply the principles. However, this alone will not resolve the issues as the responsibilities of the horizontal regulators will overlap with the responsibilities of the more narrowly focused regulators – there will need to be guidance on which regulator should lead in which areas.

In addition, regulators will need to be empowered to ensure that the principles are applied at each stage of the AI value chain for the principles to be effective. This is by no means a straightforward task – with significant awareness raising and training challenges.

Further, regulators must be properly resourced with financial support and human capability including expertise on AI, as well as the use of AI in the workplace. Without this resource, regulators will be unable to meet the huge and important task that will be allocated to them if the proposals in this White Paper are implemented.

In the employment context, there are two applicable horizontal regulators – the EHRC and the ICO. Acas also has a clear potential role to play in providing assistance and guidance relating to the use of AI in the workplace, as does the Centre for Data Ethics and Innovation. All of these organisations will need to collaborate and communicate in a similar manner to the cross-regulator collaboration of the Digital Regulation Cooperation Forum (DRCF).

We also highlight the difficulties relating to responsibility for decision making in disputes between employers and workers where multiple regulators are involved. We refer to the implementation of the Public Interest Disclosure Act 1998 and the difficulty of establishing which regulator has responsibility for a complaint where different issues are governed by different regulators.

8. Is there an alternative statutory intervention that would be more effective?

We believe that urgent legislative intervention is required to regulate and govern the use of technology in the workplace and in other aspects of society. We refer to the proposals for change in our manifesto, Dignity at Work and the AI Revolution.

New central functions to support the framework

9. Do you agree that the functions outlined in section 3.3.1 would benefit our AI regulation framework if delivered centrally?

We agree that it is sensible to have a centrally convened function that is responsible for monitoring the effectiveness of any new regime relating to AI regulation. However, we have identified significant omissions from the proposals, as outlined below.

10. What, if anything, is missing from the central functions?

We have noted the following omissions from the central functions:

Monitoring and assessment

The standard for whether or not any new AI governance regime is successful should be set through consultation with a wide range of stakeholders, including trade unions and civil society. It is also crucial that employers meet existing legal requirements in relation to assessments, such as safety assessments.

Education and awareness

No mention is made of the importance of educating workers about the use of AI, or of supporting trade unions in their work on this.

Innovators

Civil society and not-for-profit groups are not mentioned as potential innovators, and as a result there is nothing about how they might be supported in this role. Workers, trade unions, civil society and organisations providing community and charitable services are all potential and valuable innovators. We strongly advocate support for these groups so that everyone can benefit from the development and use of AI.

Risk assessments

The role of workers and unions in this process is overlooked in the central functions. It is crucial that trade unions and workers are empowered to contribute to the process of risk assessment. To contribute effectively, they need to be equipped with the relevant skills and offered access to technical expertise which should be paid for by employers.

We believe that there should also be a social partnership fund to assist trade unions with the process of training reps and workers so that they can contribute at all the stages of the AI value chain and feed into risk assessment processes.

Horizon scanning

Workers and unions are not mentioned. We believe that workers and unions should be involved in any horizon scanning exercises to map future AI (and other technology) risks and opportunities.

11. Do you know of any existing organisations who should deliver one or more of our proposed central functions?

There is a possibility that, if appropriately resourced, the Centre for Data Ethics and Innovation could fulfil this role. However, we emphasise the importance of civil society representation in any organisation delivering functions such as those outlined. There should be a trade union representative on the advisory board of the CDEI, alongside other civil society representatives.

12. Are there additional activities that would help businesses confidently innovate and use AI technologies?

12.1. If so, should these activities be delivered by government, regulators or a different organisation?

13. Are there additional activities that would help individuals and consumers confidently use AI technologies?

13.1. If so, should these activities be delivered by government, regulators or a different organisation?

Unions and collective bargaining

The following is our response to the above set of questions.

Unions have a unique offering to contribute to the framework for AI governance. Collective bargaining provides a system of democratic involvement in the development, procurement and application of technology. Please see our publication People Powered Technology [2] for examples of this.

Collective bargaining is a flexible mechanism through which workers, unions and employers can collaborate to create the apparatus and solutions needed to facilitate worker participation in technology and innovation.

This expression of collective worker voice is a vital counterbalance to ensure that a range of needs are met by technology at work, not only commercial ones.

We believe that the role of trade unions in this respect is a public good and as such should be supported by public funding and government support for collective bargaining.

We refer to our report, A Stronger Voice for Workers [3], which sets out measures to strengthen collective bargaining. In particular, we propose that the scope of collective bargaining should be broadened to include new technology and that unions should be given workplace and digital rights of access.

We also believe that trade unions should be supported in rolling out AI-related training to workers and trade union reps through a publicly funded social partnership scheme.

14. How can we avoid overlapping, duplicative or contradictory guidance on AI issued by different regulators?

There must be active and effective cross-regulator collaboration, as well as consultation with a wide range of stakeholders, including civil society.

Monitoring and evaluation of the framework

15. Do you agree with our overall approach to monitoring and evaluation?

No. Businesses must undergo regular independent audits to identify new risks or concerns and then report to a regulator. Where there is a recognised trade union, the audit should be conducted collaboratively by recognised union reps with the employer and the independent auditor. In addition, an impact assessment should form part of the audits. Guaranteed access to all necessary information from employers (and developers) about the use of the technology is critical in order to generate reports that will be useful for regulators.

We note the proposal to “in 6 to 12 months publish proposals for the design of a central M&E framework including identified metrics, data sources, and any identified thresholds or triggers for further intervention or iteration of the framework”. We suggest that the identified metrics, data sources and thresholds be set now as a starting point, with a review date in six months.

Decisions about which risks the UK will tackle or allow should not sit solely with the government; they should be made by a broad and balanced committee to prevent potential bias.

16. What is the best way to measure the impact of our framework?

Setting clear and measurable targets at the start is the best way to evaluate whether the framework is working well. This requires targeted analysis of the current landscape. Only then will the data be comparable and transparent when the impact of the framework is evaluated.

Relevant to this process will be the current workload of regulators, the risks identified, and the positive AI innovation projects affected by the new framework, including which elements were crucial in enabling such innovation.

Clear reporting mechanisms must be established, including compulsory impact assessments, especially at the early stages, so that the framework, as well as the reporting mechanisms themselves, can be measured and evaluated.

17. Do you agree that our approach strikes the right balance between supporting AI innovation; addressing known, prioritised risks; and future-proofing the AI regulation framework?

To a degree. See the issues raised above regarding feedback loops and reporting mechanisms. The framework does not place enough focus on the role of civil society stakeholders, including trade unions, or on workers. In addition, the framework fails to identify how contributions from these groups will be facilitated.

18. Do you agree that regulators are best placed to apply the principles and government is best placed to provide oversight and deliver central functions?

It makes logical sense to structure the framework that way. More importantly, it is crucial to set a date for evaluating the structure of the proposed framework, along with the proportionality of stakeholder representation at each level, with the ability to change and adapt.

We refer to our answers above in relation to the limitations of regulator led enforcement of principles.

Regulator capabilities

See our answer to Q7 above.

19. As a regulator, what support would you need in order to apply the principles in a proportionate and pro-innovation way?

20. Do you agree that a pooled team of AI experts would be the most effective way to address capability gaps and help regulators apply the principles?

There is no doubt that regulators need a huge injection of capability if they are to enforce the proposed principles. However, it is not enough simply to recruit AI experts. The key to successful enforcement is to create a multi-disciplinary group of experts who work together. In addition, individual sectors will need people with sector-specific expertise. In the employment context, regulator personnel will require in-depth knowledge not just of the technology, but of the employment relationship, industrial relations and the employment law framework. This will be a significant training exercise for regulators to undertake.

Tools for trustworthy AI

21. Which non-regulatory tools for trustworthy AI would most help organisations to embed the AI regulation principles into existing business processes?

We refer to our responses on the importance of collective bargaining above.

Final thoughts

22. Do you have any other thoughts on our overall approach? Please include any missed opportunities, flaws, and gaps in our framework.

In addition to the omissions we have outlined in our response to this consultation, we would like to highlight the importance of equality in technology. Analysis of the equalities impact of AI at all stages of the value chain, including at testing stage, is crucial. We advocate mandatory equality impact assessments in the data protection impact assessment process and emphasise the importance of the public sector equality duty. The TUC’s AI manifesto contains proposals relating to ensuring non-discriminatory algorithmic decision making.

Legal responsibility for AI

L1. What challenges might arise when regulators apply the principles across different AI applications and systems? How could we address these challenges through our proposed AI regulatory framework?

See Q7 above on regulators’ powers and the difficulties of ensuring effective enforcement.

L2.i. Do you agree that the implementation of our principles through existing legal frameworks will fairly and effectively allocate legal responsibility for AI across the life cycle?

See Q7 above on regulators and AI value chain.

Please also note that we believe the use of principles to regulate up the AI value chain to be a wholly inadequate approach to the allocation of legal responsibility. This approach avoids the issue of legal accountability. In our manifesto we make a series of proposals to ensure, for example, that liability for discriminatory algorithms applies throughout the AI value chain.

L.2.ii. How could it be improved, if at all?

L3. If you are a business that develops, uses, or sells AI, how do you currently manage AI risk including through the wider supply chain? How could government support effective AI-related risk management?

Foundation models and the regulatory framework

F1. What specific challenges will foundation models such as large language models (LLMs) or open-source models pose for regulators trying to determine legal responsibility for AI outcomes?

Foundation models result in much longer and more complex AI value chains, which will make attributing liability a much harder process. In addition, it is much harder to establish transparency and explainability, and this in turn makes it far harder to identify where there are potential legal claims.

F2. Do you agree that measuring compute provides a potential tool that could be considered as part of the governance of foundation models?

F3. Are there other approaches to governing foundation models that would be more effective?

We refer to our responses above in relation to the importance of collective bargaining and statutory consultation of trade unions and workers.

We also emphasise the importance of collective data rights to redress the imbalance of power over data at work and refer to our proposals for data reciprocity, unions as data subject representatives and the collectivisation of worker data, as outlined in our Manifesto [4].

S1. Which of the sandbox models described in section 3.3.4 would be most likely to support innovation?

S2. What could government do to maximise the benefit of sandboxes to AI innovators?

S3. What could government do to facilitate participation in an AI regulatory sandbox?

S4. Which industry sectors or classes of product would most benefit from an AI sandbox?

We advocate the importance of stakeholder participation in the development of new technology. In the workplace context, workers must be actively involved in the development of technology at work, as it is they who best understand the requirements of their roles. For example, in a classroom context, education professionals best understand the needs and requirements of students and the curriculum. We believe that stakeholder participation is a foundation block for fair and useful innovation.

 