In the race to implement AI across the whole organization, talent acquisition (TA) has been operating mainly on a ‘do’ mandate, not a ‘design’ one. Top-down pressure from company leaders to drive greater efficiency, coupled with a tricky hiring market, has forced many talent leaders to implement before they’re ready.
In many cases, this environment has been great for standing up AI experimentation quickly and discerning which tools add value to talent workflows. But across global markets, the law is finally starting to catch up with evolving AI use and to regulate its implementation.
The EU AI Act, for example, represents one of the most comprehensive regulatory efforts to date, setting standards for transparency, accountability, and fairness across AI-enabled processes and tools, including resume screening, video interviews, and candidate scoring.
In the US, state- and city-level laws, such as New York City’s Local Law 144 and new regulations under California’s Fair Employment and Housing Act, legislate on AI transparency and discrimination, requiring employers to audit tools for bias and to disclose where and when AI is used in candidate evaluation.
Together, these new laws mark a shift in employer accountability. The onus for ethical and compliant AI usage is no longer just on AI vendors — but also on any organization that uses these tools to hire their people.
And for talent teams, falling foul of these complex and evolving laws will carry substantial penalties, particularly in multinational workforces, where specific requirements may differ across country borders.
“Only around 7% of UK organizations have a policy on AI usage, whereas research from Microsoft shows that 71% of people are already using [unauthorized] AI at work,” said Siobhan Brunwin, founder of Blossom HR Consulting. “Currently there is very little in our law about AI — even the Employment Rights Bill, dubbed the ‘biggest shift in employment legislation for decades,’ doesn’t include a single mention of it.”
But while penalties for the misuse of AI in hiring can reach up to 7% of an organization’s global annual revenue, the risks of non-compliance aren’t just financial or reputational. The new laws also signal a broader shift in how organizations are expected to manage trust and employee rights in their hiring processes.
“Within the next year, I think we’ll see employment tribunals stemming directly from unethical use of AI in organisations,” Brunwin noted. “Trade unions are already drafting bills to protect employees’ rights in terms of their data, surveillance, and job security.”
At this point, compliance isn’t optional. But governance — building the structures that make AI use fair, transparent, and proactive — will be what separates talent teams that merely meet legal requirements from those that lead.