HR briefing - artificial intelligence and HR

25 September 2023

In the latest episode of our podcast, Matthew Ramsey is joined by employment associate Amy Powell and lawtech manager Oliver Jeffcott as they discuss the technological development of artificial intelligence.

Matthew Ramsey, Oliver Jeffcott and Amy Powell discuss:

  • the development of artificial intelligence; and
  • what HR and in-house teams need to consider regarding the adoption of AI in the workplace and its implications for the employment lifecycle.

Transcription

Matthew Ramsey: So welcome everybody to the September edition of the Macfarlanes HR podcast. I’m Matthew Ramsey, I’m the senior knowledge lawyer in the employment team and this month we bring you not one guest but two. I’m joined by Amy Powell, who’s one of the lawyers in the employment team with a particular interest in all things technological, and, even more than that, by Oliver Jeffcott from our Lawtech team. We are unusual in the City in having a dedicated function wholly devoted to law and the technology that supports law; Oliver lives in that bit of the firm and has rolled out all sorts of snazzy technological solutions for clients. The reason that he and Amy are here today is because we’re going to talk about AI as it impacts HR and employment practitioners. AI is obviously a massively hot topic at the moment – you can barely open a newspaper without seeing some reference to it, either because the Government wants to regulate it or because people are terrified of its implications – but it has a particular impact on employment and on how employers allow, or don’t allow, their employees to use artificial intelligence. So, before we go much further, we probably ought to start by understanding what we mean in this discussion when we talk about AI. That is definitely a technical question, so it’s one for Oliver.

Oliver Jeffcott: Thanks Matthew. When we talk about AI, what we mean is a machine performing a task that requires intelligence, although intelligence isn’t really a fixed term. For instance, when I was young, if you had told me that a computer would be able to beat the world champion at chess, I would have thought that pretty impressive as a display of intelligence, whereas nowadays we wouldn’t be quite so impressed by that and it might be debatable whether we’d call it artificial intelligence at all. So it is a term that can move over time. When we’re talking about generative AI, that’s when the AI performs a task which involves generating something.

Matthew Ramsey: I must say, we, along with many other firms, have been testing the whole range of AI solutions, and I watched a demo of one of those platforms a little while ago. It was truly incredible to watch the machine producing words as if by magic from the ether, and for a non-tech person such as myself it is genuinely incredible to see what it can do. But Amy, presumably, alongside just being amazed, it does pose some quite interesting questions for employers?

Amy Powell: Yeah, absolutely. The whole employee lifecycle is affected by the use of artificial intelligence. If we start with the recruitment process: if CVs or cover letters are produced using AI, how would you deal with that as a business? Do you just have to accept that some candidates will utilise AI when applying for roles, or would you explicitly ask candidates not to do this? And even if you decide to do the latter, how would you police any applications that have been produced using AI? Once employees have started in their roles, it presents issues around promotions or rewards. If employees are promoted on the basis of work they’ve produced using AI, how is that fair for employees in other parts of the business who can’t use AI so easily in their roles? Employees might also receive high bonus awards because their numbers are the highest in the team, and again this might be because they’ve utilised AI efficiently in their roles. So it opens an entire box of issues to do with measuring performance and efficiency at work.

Matthew Ramsey: But presumably there, if you conceive of AI simply as a tool for employees to use, then you might be able to get comfortable that efficient use of that tool merits a higher reward or a bigger bonus.

Amy Powell: Yeah, absolutely, although I think we’re a long way off any businesses or employers feeling comfortable with that approach, and a lot of the issues we’re seeing at the moment stem from the fact that some people are using AI and others aren’t. We’ve even had a couple of issues at work (not, obviously, at our firm) where clients have suspected that something has been produced with AI – how do you police that? How do you check it? One thought I did have when we were talking about this podcast previously was that it might also make all of the traditionally difficult HR processes – formal grievances, say, or requesting long periods of leave – much easier to initiate. If you can just ask ChatGPT to write you a letter requesting an extended period of leave, it becomes a lot easier to submit those traditionally difficult letters of request which employees would normally put off producing.

Matthew Ramsey: Yeah, that’s probably a fair point. So ChatGPT – I appreciate listeners will no doubt be familiar with it – but just so I’m clear, that’s one of the generative AI platforms you mentioned, Oliver?

Oliver Jeffcott: Yes, ChatGPT really did kick off all the media interest, and interest generally, in generative AI. It was very impressive in terms of being able to feed in a free-text question, in normal language as you’d speak to someone, and then see it write back a response in real time.

Matthew Ramsey: And I seem to remember you telling me, when we were looking at this, that not only can it create new content from the prompt that you describe but it can also analyse. If I uploaded, let’s say, a complex grievance document, it would be able to strip out all the allegations that were being made and identify which witnesses might be needed to respond to them. That would be a job that traditionally might take a lawyer or an HR person several hours, and it might be able to do it very quickly.

Oliver Jeffcott: Yeah, absolutely. The way that it can take a vast number of documents, or a vast amount of data, and then process it is really impressive. For instance, if I fed in a bunch of my emails, I could potentially ask it to draft a response and it would do so in my own style.

Matthew Ramsey: That really is quite startling, isn’t it – that a machine can effectively mimic you, Oliver Jeffcott.

Oliver Jeffcott: Yeah, it can be a really impressive tool, and think also of the potential applications for reviewing large data sets and volumes of documents. For instance, it could review a whole batch of CVs and then provide a summary, or you could upload a spreadsheet and then use natural language to ask it questions about the data. It’s quite amazing how quickly that’s come along and what the potential use cases are.

Matthew Ramsey: And as you mentioned uploading data to the platform for analysis, that, I suppose, raises the question of whether you should be uploading confidential information to such a platform at all. Is there a risk of data leakage, or even a breach of data privacy rules? Maybe that’s a question for Amy.

Amy Powell: Yeah, absolutely. Just to pick up on the data privacy point first: if your employees are putting personal data about their colleagues or clients into an AI tool, do you, as an employer, know how that AI tool might process that personal data? Then, from an intellectual property perspective, if an employee creates a piece of work using artificial intelligence – for example, a slogan for an advertising agency – who owns the slogan? Is it the employee who utilised the artificial intelligence or the company that owns the AI tool? We also spoke about it in the context of liability for negligence. This can be a bit of a grey area, because if an employee uses an AI tool which produces an inaccurate or negligent piece of work, who exactly is responsible for that negligence? And finally it can present issues around discrimination: how can we be sure that the AI tools being used aren’t discriminatory against certain types of people? Particularly at the moment, we don’t think we can be sure of this. For example, if an employer were to use AI to filter CVs, as in the recruitment example we spoke about earlier, and when setting the criteria for selecting candidates it asked the AI tool to apply a minimum standard of a university education, then the candidate pool the AI tool produces will automatically favour those who attended fee-paying schools, because more individuals who went to fee-paying schools attend university. So it’s an incredibly complicated tool to try to understand all of the issues around and, as we keep saying, we’re not that much closer to understanding them. To put this into one more example, Matthew and I have spoken before about what happens if a quantitative analyst is using AI to review prospective investments. They might have confidential information on that AI platform; producing the output might involve transferring data overseas to wherever the AI is hosted; and if the artificial intelligence tool suggests an investment, whose liability is it? So you can think of a million scenarios in which all four of those issues – data privacy, IP, negligence and discrimination – come into play.

Matthew Ramsey: My immediate response would be to say that the AI platform provider will always disclaim any liability and suggest that – again, if you conceive of it simply as a tool – it’s the person using the tool, the human, who’s responsible for its defects and for how it’s used in practice. I think that, if that isn’t where the law ends up, then it probably ought to be – or perhaps I just haven’t thought deeply enough about it.

Amy Powell: Yeah, absolutely, I agree. Even when we draft employment contracts now, if you were trying to provide for that, you would perhaps try to include something specific – I can’t think off the top of my head exactly what you would include – or it could be addressed in a policy, but, yeah, it’s a difficult one.

Matthew Ramsey: I was going to say, you mentioned policy. Presumably that’s the key, because if we assume that the risk of misuse sits with the person using the tool, then employers will need to think about how they guide their employees to use these tools effectively without giving rise to too much risk. So presumably, again, that would need policies, guidelines, FAQs – those kinds of documents that you see in other areas of professional life.

Amy Powell: Yeah, absolutely. It’s already very common for most employers to have things such as an internet usage policy, and I wonder if we might start being asked to draft artificial intelligence usage policies and similar guidelines. I think it’s difficult, because it depends on each employer whether they want to be more or less permissive towards AI. For example, a large tech company might want to be more permissive towards the use of AI because they want to harness or encourage it, whereas a business that holds a lot of sensitive personal client data might want a hard block on using AI. So it will be something that each individual employer will have to think about, and each policy will have to be tailored to their preferences. Another, more interesting, solution we spoke about before was the idea of putting IT blocks on using AI at work – blocking certain websites, blocking the use altogether – but I’m sure only certain types of employer would want to take such a draconian approach.

Matthew Ramsey: Yeah, you get the sense, don’t you, from the direction of travel, that this is a technology that is here to stay and will only get better and become more a part of our lives, just as the internet has done. You know, I’m so old that I remember operating in an almost completely internet-free era when I first started as a lawyer, many moons ago. Presumably that’s just how it will become as time goes on, the tech advances and we all get a bit more used to it.

Oliver Jeffcott: It isn’t that long ago that email was the new technology. It was initially quite controversial in law firms whether or not it would be used in place of letters, so it will be interesting to see whether there’s a similarly fast adoption of generative AI in a legal context, or whether there’s a bit more push back.

Matthew Ramsey: Yeah. I mean, we’ve spoken so far about these technologies as being effectively fool-proof, incredible, mind-blowingly clever. Are they ever wrong?

Oliver Jeffcott: That’s a really good point, and from my perspective I think it’s one of the most important things to take away from today’s discussion: there is a real risk that generative AI will provide incorrect information, and there are potential dangers in people relying on it. There’s a concept called hallucination, for instance, which is when a piece of generative AI creates an answer to a question and, because it doesn’t know the answer or can’t find something relevant to put in, it makes it up. There was quite a famous example that people might have heard of, where a lawyer in the United States recently used ChatGPT, I think it was, to draft some submissions for a court, and the judge reviewing it noticed that one of the cases cited was completely made up – and obviously wasn’t very impressed with the lawyer as a result. It’s also worth pointing out that generative AI tools like ChatGPT are typically trained up to a certain point in time and then there’s a cut-off, so they won’t know anything that’s happened past that particular date. When I first started testing ChatGPT, I asked who the Prime Minister was and it answered Boris Johnson, and I was a bit confused because that happened to be several prime ministers ago at the time. That was simply because, as far as ChatGPT knew, that was the correct information – its training cut off at that particular date.

Matthew Ramsey: And presumably that would form the bedrock of any policy that a client wanted to put in place: that it’s on the employee, at an individual level, to use the tech wisely and to check it thoroughly. I can see we’re all nodding in agreement around the table, dear listeners, so that is our takeaway: check everything and use things wisely. Presumably then, you’ve got the internet use policy that Amy has already spoken about, there might be a social media policy that feeds into that, a data protection policy, and now an AI policy, and obviously it will be up to HR and employment legal to make sure that each of those is internally consistent across the piece. Fascinating. Are there any other points to flag?

Oliver Jeffcott: Yeah, it’s really important for people to be aware that there are some real dangers of generative AI giving incorrect information. From my perspective, in addition to the HR policies Amy mentioned earlier, companies also need to look at aligning those with their IT and information security policies, so they’ve got all their ducks in a row.

Amy Powell: Yeah, absolutely. Traditionally, people aren’t particularly focused on spending a lot of time producing their policies, but I do wonder if that will change with the advent of AI and employers wanting to implement policies like we’ve discussed. As we’ve spoken about on this podcast, it’s going to be very sensible for employers to engage with their employees on any risks associated with using AI and, as they do that, perhaps to think about updating all of the other policies, like Oliver mentioned, to make sure everything’s lined up and consistent.

Matthew Ramsey: Yeah, policies are always the last thing employers want to look at, whereas of course – I would say this, wouldn’t I – they should pay close attention to all their drafting, and that is why they should always come to lawyers. If you have any questions about AI as it affects HR or, frankly, any questions about AI generally, then please do reach out, because as I say we’ve got a particular skillset in the firm within what we call the Lawtech function, so do please send in any questions you have. All of our contact details are in the episode description. It just remains for me to thank Oliver and Amy for their thoughts this month, and hopefully we will have the pleasure of your company on another edition in the future. Thanks.