Every time I log on these days, I’m bombarded with articles about Artificial Intelligence (AI) – ranging from claims that it is the most wonderful labour-saving thing ever (I’d send an avatar to a meeting too if I had the option!) to the apocalyptic, with some predicting, Terminator-style, that AI will wipe out humanity.
While I don’t fall into either category, I’m certainly a sceptic about its uses, particularly some of the most common AI bots that are available (ChatGPT/Meta/Gemini/Grok/Glamdring/Wall-E/Co-Pilot etc). Here’s why:
- They get things wrong:
If you ask Gemini who Sarah O’Connor is, it will tell you she’s a truck driver who was interviewed for the Financial Times. In fact, she’s a highly regarded FT journalist. Meanwhile, ChatGPT informed me that drug dealer Howard Marks’ autobiography was called “Thy Damnation Slumbereth Not” and even explained to me how he’d arrived at that title. It’s not called that at all.
- It makes stuff up:
Gemini created a totally fictitious act of parliament (“The Civil Justice Act 2004”). And a barrister nearly got themselves into very hot water with a judge for citing fictitious AI generated case law in court.
- It can’t do certain things (but pretends it can)
This video from people professional Julie Drybrough, in which she asks ChatGPT to help create a presentation for her, reveals it to be like an over-confident intern – claiming it could do the job and repeatedly saying it was doing it, before finally admitting that it was unable to complete the task.
- It doesn’t know what it doesn’t know
Large Language Models (the basis of these AI bots) have to learn from something. However, despite what we might think, not all human knowledge is on the internet. There are still plenty of books and films that aren’t available online, and many museum and library archives are not yet digitised, or have limited access. So, ask a question that it doesn’t know the answer to and it may be honest and admit it doesn’t know – but equally it may revert to the point above and just make it up.
- It comes with a lot of ethical and environmental concerns
Meta AI has been subject to a lot of criticism for using illegal copies of copyright works to train its AI (they are probably just unlucky that they got caught, since I can’t imagine that the other leading AI providers paid for their sources). If you don’t think that’s a problem, try walking out of Waterstones with a book you haven’t paid for, and use the defence of “I wasn’t stealing, I just wanted to read it to learn what it contained”.
The environmental consequences in terms of water and electricity use are only just becoming known, but one stat that stands out is that a ChatGPT query uses ten times the amount of electricity of a standard Google search.
And what does all this mean for managing people in a business?
Firstly, relying on AI for information on employment law is risky to say the least. Like Wikipedia, it might be correct, but unless you know already you can’t be certain that it’s giving you the right information.
Secondly, AI can’t possibly know the information in a human head. For example, much of my understanding of HR issues comes from nearly 40 years of experience across a variety of industries and sectors. AI cannot possibly replicate what I learned from the Merseybus bus cleaning dispute in 1995, or the competitive tendering process for Leisure Centres in Runcorn in 1990. Or a TUPE transfer I managed between construction companies in 2004. Or how to deal with an employee charged with child abuse, or one who is terminally ill.
Finally, AI doesn’t understand culture. A solution that works in one company context may not be the most effective or appropriate in a different one, for lots of very valid reasons. AI might give you a range of solutions but can’t advise you on which one might be the best.
I’m sure that in 5 or 10 years, some of these issues with AI might be resolved. But until then, if you ask me to support your business with HR issues, you can be certain that you will be getting advice from an actual human, not a bot.
Just out of interest, I got Microsoft’s Co-Pilot AI (which is built into the latest versions of Word/PowerPoint/Excel etc) to rewrite this post for me. You can read what it came up with (minus the hyperlinks) here.