Artificial intelligence plays an increasingly prominent role in the employment lifecycle, from recruiting and onboarding to on-the-job training, personal development and even dismissal. To date, AI has been developed and deployed with very little oversight. However, given its reach and impact, there is a shift toward greater restriction and regulation globally.

In December 2020, the Labour Court of Bologna ruled that an algorithm used by the platform company Deliveroo Italia S.R.L. to determine the “reliability” of a rider violated local labour laws. The system, which allocated the best work shifts and more deliveries to the most reliable and participative riders, did not distinguish between legally protected reasons for riders withholding labour, such as sickness, and unprotected reasons for their unavailability.
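The flaw the court identified can be sketched in a few lines. The code below is a purely hypothetical illustration, not Deliveroo's actual system: the function names, inputs and scoring formula are all assumptions made for the example. It contrasts a score that penalises every missed shift with one that excludes legally protected absences from the penalty.

```python
# Hypothetical sketch of the flaw described above. All names and formulas
# are illustrative assumptions, not the actual Deliveroo algorithm.

def naive_reliability(shifts_booked: int, shifts_missed: int) -> float:
    """Penalises every missed shift, regardless of the legal reason."""
    return 1 - shifts_missed / shifts_booked

def lawful_reliability(shifts_booked: int, shifts_missed: int,
                       protected_absences: int) -> float:
    """Excludes legally protected absences (e.g. certified sickness)
    from the penalty before scoring."""
    unprotected_missed = shifts_missed - protected_absences
    return 1 - unprotected_missed / shifts_booked

# A rider who missed 4 of 20 booked shifts, all through certified sickness:
print(naive_reliability(20, 4))         # penalised despite protected reasons
print(lawful_reliability(20, 4, 4))     # no penalty for protected absences
```

Under the naive score the sick rider drops to 0.8 and loses access to the best shifts; under the second, the same rider keeps a score of 1.0. The court's objection, in effect, was that the deployed system behaved like the first function.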

Other technology pioneers have admitted discrimination is wired into their AI-enabled systems. Uber’s facial identification software denied drivers and couriers of non-White ethnicities access to its app. Amazon’s hiring algorithm was ultimately scrapped after its machine learning system taught itself that male candidates were preferable.

The EU approach. The European Parliament has put forward recommendations for a common set of rules for the design and development of AI systems through harmonised technical standards. The European Parliament recognises “that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices.”

The proposed regulation classifies as “high-risk” AI systems used in employment and worker management “since those systems may appreciably impact future career prospects and livelihoods of these persons.” High-risk AI systems for the recruitment and selection of individuals, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of employees in work-related contractual relationships, and access to self-employment will be subject to specific safeguards.

Proposed obligations to be placed on providers include proper risk assessment and mitigation, and ensuring the traceability of results. Employers will be required to ensure human oversight of AI systems and to focus on the quality of the data used to train them. System operators will be required to follow instructions on the use of AI systems, monitor the functioning of the system and keep records. The proposed sanctions for breach are significant: fines of up to €20 million (US $23.5 million) or 4% of annual worldwide turnover.

The EU legislative procedure can take up to three years, but once adopted, the regulation will be directly applicable across the 27 member states of the EU.

The UK in step. Although the UK will not be directly subject to any EU regulation, UK legislation will need to remain in step with the EU on employment rights under the terms of the Brexit deal. The UK Information Commissioner’s Office issued guidance, “Six things to consider when using algorithms for employment decisions,” on the use of algorithms and automated decision-making and the risks and opportunities they pose in an employment context. The ICO makes the point that just as bias and discrimination are a problem in human decision-making, so they are a problem in AI decision-making. Employers must assess whether AI is a necessary and proportionate solution to a problem before starting to process data using AI.

Global issue. It is not just in Europe that policymakers are grappling with the need to rein in the use of AI. The US Federal Trade Commission published guidance on the use of AI in April this year, “Aiming for truth, fairness, and equity in your company’s use of AI.” Although there are no new federal laws proposed, the FTC issued a stark warning that “if you don’t hold yourself accountable, the FTC may do it for you.”

China is the world leader in AI publications and patents, and policymakers in Shenzhen, China’s Silicon Valley, are seeking to establish an overarching approval framework for AI products and services. In January, the Association of Southeast Asian Nations, whose members include Malaysia, Singapore and Vietnam, published a 2025 digital master plan highlighting the need to deliver best practice guidance on AI governance and ethics.

Governments know there is a fine line between encouraging the myriad possibilities of AI technology, known and unknown, and ensuring that it is used fairly and responsibly.