The EU's new AI Act could land recruiters and employers in legal hot water

While the EU’s human rights and worker rights protections no longer extend to the UK, this legislation – the first of its kind – will undoubtedly be picked up by other countries, and it directly affects any organisation with offices in mainland Europe…

Employment law post-Brexit is a murky swamp of disinformation and confusion, particularly as the current Conservative government repeatedly threatens to withdraw from the European Convention on Human Rights – a convention the UK was instrumental in creating. The ECHR offers significant protection to employees, including freedom from enslavement, the right not to be spied on by employers, and much more.

It’s a tricky time for employment law and just when it seemed things couldn’t get any worse, we now have several lawsuits, firings and employment tribunals around the use of AI in the workplace.

Thankfully, the EU was superbly on the ball when it came to nipping any potential misuse in the bud. But what does the new Act entail, who does it affect and how should employers and HR interpret it?

HR Grapevine sat down with the legal eagles at law firm Eversheds Sutherland to seek their expert opinion and advice. We spoke with Lorna Doggett, Legal Director; Carolyn Sullivan, Associate; and Hannah King, Senior Associate.

Their advice and analysis of the new legislation, and its possible global impact, is below.

…

The European Commission’s new EU AI Act, expected to be adopted by the end of 2023, introduces a host of new requirements for organisations using AI tools or machine learning to make decisions about employees or candidates.

The Act will also have extra-territorial scope, extending to providers and users outside the EU where the output is used in the EU. 

The Act is expected to set the framework for the regulation of AI both inside and outside the EU: a benchmark law which other jurisdictions might look towards when developing their own rules, much as the GDPR has become a standard on which some other countries’ laws are heavily based.

The Act casts a wide net in its definitions of AI and machine learning. Providers placing AI systems on the EU market fall within its scope regardless of whether the provider is established in the EU, as do systems whose output is used in the EU. Non-EU providers of AI systems must therefore take note.

This Act has potential for global impact.

It is anticipated that the EU AI Act will have a two-year grace period, although this is not long considering that any AI-related initiative – whether in development or already in use – will need these requirements embedded.

The Act lays down fixed maximum penalties for certain infringements, the highest being €30m or 6% of a company’s total worldwide annual turnover, whichever is higher (3% in the case of an SME or start-up), for non-compliance with the prohibited AI practices laid down in Article 5.
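The cap works as a "whichever is higher" test: the percentage figure bites for large companies, while €30m sets the ceiling for smaller ones. A minimal, purely illustrative sketch of that arithmetic (the function name is ours; figures are those quoted above from the draft Act):

```python
def max_fine_eur(worldwide_turnover_eur: float, turnover_pct: float = 0.06) -> float:
    """Illustrative ceiling for the most serious infringements (the prohibited
    practices in Article 5): EUR 30m or a share of total worldwide annual
    turnover, whichever is higher. The article notes a reduced 3% rate for
    SMEs and start-ups; pass turnover_pct=0.03 to model that case.
    """
    return max(30_000_000.0, turnover_pct * worldwide_turnover_eur)

# A large employer with EUR 2bn turnover: 6% of turnover exceeds EUR 30m
print(max_fine_eur(2_000_000_000))   # → 120000000.0

# A smaller firm with EUR 100m turnover: the EUR 30m floor applies
print(max_fine_eur(100_000_000))     # → 30000000.0
```

The point of the two-limb cap is that the fine scales with company size rather than stopping at a flat figure, which is what makes it genuinely dissuasive for large multinationals.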

The proportionate caps for SMEs indicate a willingness by the Commission to support innovation, while the huge potential fines for certain infringements show how dissuasive enforcement action is intended to be.

The Act addresses policies and governance around the deployment of AI tools, but also the ethical and moral questions raised by AI technologies in certain areas. For example, should AI decide whether someone gets hired or promoted, or what performance rating an employee should receive?

The EU AI Act is taking aim at three risk categories.

First, prohibited AI practices, such as social scoring or subliminal techniques to distort behaviour, are banned.

Second, high-risk AI systems, such as AI-based decisions on access to education, or CV-scanning tools used to filter candidates or manage an individual’s employment or engagement, are subject to specific legal requirements.

Third, AI tools that pose a low or minimal risk, such as chatbots, are largely left to existing laws to regulate, apart from certain transparency obligations.

High-risk AI systems

AI systems used by organisations for recruitment or selection, advertising vacancies, screening or filtering applications or evaluating candidates in interviews or tests, are classed as “high-risk” under the AI Act.

Equally, AI used to make decisions about promotion or termination, task allocation, and/or monitoring and evaluating performance and behaviour is also “high-risk”.

These high-risk AI systems are subject to the highest compliance obligations under the Act.

This is why the EU AI Act is so relevant for employer organisations, which increasingly rely on AI technology for recruiting candidates and making decisions about their employees. Employers must also consider that existing employees may challenge performance and behaviour evaluations by raising a grievance.

Such challenges are most likely where underperformance is indicated, particularly where it has been determined by an AI system. Employers need to verify any automated AI decision before beginning a formal performance management process, especially as persistent performance issues may result in termination of employment.

Article 22 of the UK GDPR is also relevant here: individuals have the right not to be subject to a decision based solely on automated processing which produces legal or similarly significant effects. Human verification is key. Employers will also need to ensure that employees are informed of any AI decision-making, ideally in the staff privacy notice.
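In engineering terms, the "human verification" requirement amounts to a gate between the AI system's output and any formal action. A minimal sketch of such a gate (all class and function names here are hypothetical, not from any real HR system or the Act itself):

```python
from typing import Optional

class AutomatedDecision:
    """Hypothetical wrapper for an AI system's output about an individual."""

    def __init__(self, subject: str, outcome: str) -> None:
        self.subject = subject      # employee or candidate concerned
        self.outcome = outcome      # e.g. "underperformance flagged"
        self.reviewer: Optional[str] = None  # set once a human signs off

    def verify(self, reviewer: str) -> None:
        """Record that a named human has reviewed the automated output."""
        self.reviewer = reviewer

def begin_formal_process(decision: AutomatedDecision) -> str:
    """Refuse to act on a solely automated decision (cf. Article 22 UK GDPR)."""
    if decision.reviewer is None:
        raise PermissionError("human verification required before acting "
                              "on an automated decision")
    return (f"Formal process may begin for {decision.subject}: "
            f"{decision.outcome} (verified by {decision.reviewer})")
```

The design point is that the sign-off is recorded, not merely assumed: an auditable trail of who verified which automated decision is exactly what an employer would want to show a tribunal or regulator.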

Requirement to eliminate all bias

One key aspect of the Act is its approach to ethics, which demands that bias of any kind should be prevented from creeping into AI as a result of the data on which it is trained.

Employers will be well aware of their existing obligations, set out in the Equality Act, not to discriminate against individuals based on certain “protected characteristics” (such as age, gender or race) in any part of the recruitment process or employment relationship.

This concept is nothing new.

There can be severe consequences for employers who discriminate (directly or indirectly) against candidates or employees, including facing employment tribunal claims, which if successful, could lead to potentially uncapped damages.

The Act builds on this discrimination protection in existing employment law.

AI should not undermine efforts made over decades towards equality and fair treatment of all individuals. The Act requires providers of AI systems to look back at their tools and ensure the decision outputs are free from any bias – not just those relating to a protected characteristic. This has far-reaching consequences for providers of AI tools.

In a paper, Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute, points out that AI tools have long been using numerous “algorithmic groups” – based on small patterns and correlations in similar attributes or behaviours – to make important decisions, such as who gets promoted, hired or fired.

The EU AI Act requires providers of AI technology to reduce the risk of “biased outputs” for any and all characteristics, not just “protected” ones. It is easy to see how an organisation could fall foul of this requirement – take, for example, an employer that has developed sophisticated machine learning to analyse how quickly employees pack boxes.

If the system “learns” that the slowest workers tend to be of a certain gender, age or race, it may directly discriminate against those groups when dealing with performance issues, and subsequently discriminate against them in future hiring or promotion decisions.

A second example is AI used in online candidate tests. The system could learn that the most spelling mistakes are made by individuals from a certain postcode, or by those without a degree, and discriminate against those groups in the future. This is equally outlawed by the Act. Organisations must keep tabs on how and what their machines are learning, or risk non-compliance with the entire essence of the Act.
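"Keeping tabs" on what a system is learning starts with simple outcome monitoring: comparing pass or selection rates across groups to spot a proxy effect like the postcode example above. A purely illustrative sketch (the group labels are hypothetical, and the 0.8 threshold is an assumption, echoing the "four-fifths" rule used in US selection-rate analysis rather than anything in the Act):

```python
from collections import defaultdict

def pass_rates_by_group(results):
    """results: iterable of (group_label, passed) pairs, e.g. candidates
    grouped by postcode area or degree status. Returns pass rate per group."""
    tallies = defaultdict(lambda: [0, 0])   # group -> [passes, total]
    for group, passed in results:
        tallies[group][0] += int(passed)
        tallies[group][1] += 1
    return {group: passes / total for group, (passes, total) in tallies.items()}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose pass rate falls below `threshold` of the best
    group's rate - a simple relative-rate screen for possible bias."""
    best = max(rates.values())
    return sorted(group for group, rate in rates.items() if rate < threshold * best)
```

A screen like this is only a first alert, not proof of discrimination, but it is the kind of ongoing output check the Act expects providers of high-risk systems to be running.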

In summary

The Act is expected to set the framework for the regulation of AI both inside and outside the EU.

While the UK has not yet proposed its own AI legislation, it has published a series of documents on AI, including the Government’s AI White Paper, “A pro-innovation approach to AI regulation”, which is open for consultation until 21 June 2023. There is also the ICO’s response to that paper and its own updated AI guidance. We expect the EU AI Act will influence the route the UK takes to legislating on AI.

Many organisations already use AI to recruit and to monitor employee performance. If caught by the Act’s extra-territorial reach, these organisations will need to ensure their systems comply with its requirements – in particular, removing any possible bias from their output – or risk the substantial fines EU regulators will look to impose.
