Human resources and employment law risks are building up as artificial intelligence (AI) technologies become embedded into business practices and workflows. Discrimination, unfair dismissal and data privacy breaches are just a few of the legal hazards. For organisations, heightened awareness at executive level, the introduction of practical policies and wide-scoped training not only mitigate these risks, but also provide a controlled and consistent framework for positive, profitable innovation.
If you are navigating the challenges of AI in the employment sphere, please contact Anne Pritam or a member of our employment team who can support you to develop effective AI policies, training and strategies tailored to your business needs.
Smart integration of AI can bring cost savings, productivity gains, and radically improve business processes, but it can also present businesses with serious risks. The use of unauthorised tools can heighten the chance of recruitment bias, inaccurate performance measures, disciplinary errors, confidentiality breaches, disclosures of personal data, and other HR violations.
In their enthusiasm for the efficiencies that AI promises, managers may fail to recognise exposures to algorithmic bias, the dangers of automated decision-making systems, and data privacy violations. Often systems are being brought into businesses without senior business leaders fully understanding what the new systems and platforms do, and what that means at a granular level for team members. In the employment law and human resources context, mistakes can be hugely expensive for businesses, both big and small.
Used inadvisedly, AI has the potential to make workplaces more unequal. With job applications for specific roles running into the thousands, businesses must find efficient ways of processing these and creating shortlists of candidates. Although machines are essentially dispassionate compared with humans, they can still perpetuate biases from the training data with which they are supplied.
Uncontrolled or unauthorised use of AI heightens the possibility of data breaches. Confidential candidate or employee records, performance reviews and other personal information may be uploaded as part of a task or question submitted to an AI model. Public-facing AI tools such as ChatGPT and Gemini may retain, and learn from, the information submitted to them. Once details are inputted into such systems, the user and the business lose control of the data. Even using employee or candidate data in ring-fenced, bespoke AI tools can potentially violate data protection laws if individuals are unaware of what is happening to their data.
Increasingly, employees are using AI to write, embellish or extend complaints and grievances. For example, where a worker believes they have been harassed and is also upset about missing a promotion, AI can mesh the two concerns together and produce a case that is superficially more convincing. Employers should be alive to the tell-tale signs of lengthy, formal and legalistic text created by AI. Significant time, resources and additional costs may be required to investigate such AI-driven grievances if they are not "unpacked" at the outset.
This is a minefield for employers. Data subject access requests (DSARs) by employees have risen sharply in recent years, and AI may provide yet another impetus. AI allows employees to craft a series of prompts, which then generate extensive requests for documents.
Businesses have these multiple hazards to navigate alongside "business as usual", yet AI risks are often not seen as an immediate danger. They frequently sit with no single team, whether legal, compliance, HR or IT, so the responsibility becomes "someone else's problem". Some businesses are simply waiting for new regulations to provide specific guidelines, but the technology is here, now, and your staff are using it. Waiting for something to happen could seriously increase your business's risk profile.
By developing robust and adaptable policies, businesses not only bolster effective risk management, but they also pave the way for competitive advantage. AI is transformational for businesses, but only when effective guardrails enable impactful and innovative use.
Establishing high quality and adaptable policies empowers teams to become more successful in recruitment, retention and other important employment priorities. It gives people the opportunity to innovate in their roles through AI deployment in a positive and focused way. Organisations should implement clear policies governing employee use of AI, including specifying acceptable tools, usage boundaries, and requirements for disclosure.
Agile strategies are vital. The swift evolution of AI technologies means rigid policies rapidly become obsolete. The marked difference between established AI tools ("old AI") and the new cohort of generative AI systems shows that policies need to be treated as living documents, updated frequently as the technology evolves. Given the pace of change, principles-based policies (focusing on values and objectives) may prove more sustainable than detailed directives.
The human-in-the-loop approach, integrating human judgement with AI, remains essential too, ensuring ethical guidelines are met and risks are mitigated, while helping AI models to learn and adapt over time. Senior managers need to understand the tools, how they are trained and what they are trained on, in order to understand where biases might lie.
A progressive strategy is essential. Firms that adopt AI constructively and transparently position themselves as innovative and responsible employers, appealing to younger and tech-savvy employees.
Successful business executives of the future will:
If you would like to know more or discuss any of these issues please contact Anne Pritam or your usual Stephenson Harwood contact.