Why I won’t be using AI anytime soon

Every time I log on these days, I’m bombarded with articles about Artificial Intelligence (AI) – from claims that it is the most wonderful labour-saving thing ever (I’d send an avatar to a meeting too if I had the option!) to apocalyptic predictions, Terminator style, of AI wiping out humanity.

While I don’t fall into either category, I’m certainly a sceptic about its uses, particularly some of the most common AI bots that are available (ChatGPT/Meta/Gemini/Grok/Glamdring/Wall-E/Co-Pilot etc). Here’s why:

  • It gets things wrong

If you ask Gemini who Sarah O’Connor is, it will tell you she’s a truck driver who was interviewed for the Financial Times. In fact she’s a highly regarded FT journalist. ChatGPT, meanwhile, informed me that drug dealer Howard Marks’ autobiography was called “Thy Damnation Slumbereth Not” and even explained to me how he’d arrived at that title. It’s not called that at all (it’s Mr Nice).

  • It makes stuff up

Gemini created a totally fictitious Act of Parliament (“The Civil Justice Act 2004”). And a barrister nearly got into very hot water with a judge for citing fictitious, AI-generated case law in court.

  • It can’t do certain things (but pretends it can)

This video from people professional Julie Drybrough, in which she asks ChatGPT to help create a presentation, reveals it to be like an over-confident intern – claiming it can do the job and repeatedly saying it was doing it, before finally admitting that it was unable to complete the task.

  • It doesn’t know what it doesn’t know

Large Language Models (the technology underpinning AI) have to learn from something. However, despite what we might think, not all human knowledge is on the internet. There are still plenty of books, films and novels that aren’t available online, and many museum and library archives are not yet digitised, or offer only limited access. So, ask a question it doesn’t know the answer to and it may be honest and admit it doesn’t know – but equally it may revert to the point above and just make something up.

  • It comes with a lot of ethical and environmental concerns

Meta has been subject to a lot of criticism for using illegal copies of copyrighted works to train its AI (they are probably just unlucky that they got caught, since I can’t imagine that the other leading AI providers paid for their sources). If you don’t think that’s a problem, try walking out of Waterstone’s with a book you haven’t paid for, and use the defence of “I wasn’t stealing, I just wanted to read it to learn what it contained”.

The environmental consequences in terms of water and electricity use are only just becoming known, but one stat that stands out is that a ChatGPT query uses ten times as much electricity as a standard Google search.

And what does all this mean for managing people in a business?

Firstly, relying on AI for information on employment law is risky, to say the least. Like Wikipedia, it might be correct, but unless you already know the answer you can’t be certain that it’s giving you the right information.

Secondly, AI can’t possibly know the information in a human head. For example, much of my understanding of HR issues comes from nearly 40 years of experience across a variety of industries and sectors. AI cannot possibly replicate what I learned from the Merseybus bus cleaning dispute in 1995, the competitive tendering process for Leisure Centres in Runcorn in 1990, or a TUPE transfer I managed between construction companies in 2004. Or how to deal with an employee charged with child abuse, or one who is terminally ill.

Finally, AI doesn’t understand culture. A solution that works in one company context may not be the most effective or appropriate in a different one, for lots of very valid reasons. AI might give you a range of solutions but can’t advise you on which one might be the best.

I’m sure that in 5 or 10 years, some of these issues with AI might be resolved. But until then, if you ask me to support your business with HR issues, you can be certain that you will be getting advice from an actual human, not a bot.

Just out of interest, I got Microsoft’s Co-Pilot AI (which is built into the latest versions of Word/PowerPoint/Excel etc.) to rewrite this post for me. You can read what it came up with (minus the hyperlinks) below.

[Image: a bionic hand and a human hand, fingers pointing towards each other. Photo by cottonbro studio on Pexels.com]

Why I won’t be using AI (Co-Pilot’s version)


Title: The Impact of Artificial Intelligence: A Critical Examination

In today’s digital age, the prevalence of articles about Artificial Intelligence (AI) is overwhelming. From claims of AI being the ultimate labor-saving tool (who wouldn’t want to send an avatar to a meeting?) to apocalyptic predictions of AI wiping out humanity in a Terminator-style scenario, the discourse is varied and intense. While I don’t subscribe to either extreme, I am certainly skeptical about its applications, especially the most common AI bots available (ChatGPT, Meta, Gemini, Grok, Glamdring, Wall-E, Co-Pilot, etc.). Here’s why:

1. Inaccuracies and Errors

AI bots often get things wrong. For instance, if you ask Gemini who Sarah O’Connor is, it will tell you she’s a truck driver interviewed for the Financial Times. In reality, she is a highly regarded FT journalist. Similarly, ChatGPT once informed me that drug dealer Howard Marks’ autobiography was titled “Thy Damnation Slumbereth Not” and even explained the reasoning behind the title. This is incorrect.

2. Fabrication of Information

AI can create completely fictitious information. Gemini once generated a non-existent act of parliament (“The Civil Justice Act 2004”). Additionally, a barrister nearly faced serious consequences for citing fictitious AI-generated case law in court.

3. Overconfidence in Capabilities

AI sometimes pretends to be capable of tasks it cannot perform. In a video by people professional Julie Drybrough, ChatGPT was asked to help create a presentation. It claimed it could do the job and repeatedly stated it was working on it, only to finally admit it was unable to complete the task.

4. Limited Knowledge Base

Large Language Learning Models, the foundation of AI, must learn from existing data. However, not all human knowledge is available on the internet. Many books, films, and novels are not digitized, and numerous museum and library archives have limited access. When faced with a question it doesn’t know the answer to, AI may either honestly admit its ignorance or revert to fabricating information.

5. Ethical and Environmental Concerns

AI development raises significant ethical and environmental issues. Meta AI has faced criticism for using illegal copies of copyrighted works to train its models. The environmental impact is also concerning, with a ChatGPT query consuming ten times the electricity of a standard Google search.

Implications for Business Management

Relying on AI for information on employment law is risky. Like Wikipedia, AI might provide correct information, but without prior knowledge, you cannot be certain of its accuracy. AI cannot replicate the nuanced understanding gained from years of experience in various industries, dealing with trade unions, disputes, and complex issues. It also fails to grasp cultural contexts, making it difficult to determine the most effective solutions for different company environments.

While some of these issues with AI may be resolved in the next 5 to 10 years, until then, if you seek support for HR issues, you can be assured that you will receive advice from an actual human, not a bot.


[Image: a group of people engaging with a futuristic holographic interface, focusing on digital technology and artificial intelligence concepts]

Ch..Ch..Changes

Over the last few weeks, there seems to have been a welter of articles and events on the changing shape of work and – as a consequence – how we need to throw our models of change and organisational design out of the window. Whether it’s the robots coming to take our jobs, the gig economy, globalisation or Brexit, everything’s changing and we’re living in a scary new world where nothing is certain.

Except perhaps it’s not changing quite as much as we think. For example, recent data suggest that the growth in robot use has actually slowed across Europe in the last five years, and is lower still in the US. That might speed up again, but even now adoption stands at only 2.5 robots per 1,000 workers.

Similarly, the gig economy – as a recent CIPD report recognised – still forms only a small percentage of the workforce, most of whom remain in traditional employment relationships. Even if we extend that to all self-employed workers, despite growth in recent years they still form only around 15% of the working population.

I’ve been hearing about the impact of the VUCA (volatile, uncertain, complex and ambiguous) world for at least five years now. Looking around, most of the organisations I work with are still structured much as they were in 2012 – and I suspect they won’t look that different in 2022.

The reason? Humans adapt slowly to change. The technology to create driverless cars may exist, but until they are socially accepted they won’t take off – and that won’t happen until numerous ethical and political issues are resolved. How many people talk to Siri, Cortana or Alexa currently? A growing number, but still only a tiny minority. Many humans find conversing with an inanimate machine a difficult concept. It will come, no doubt, but over a longer timescale than its proponents suggest.

So while we should review our models and theories of change (particularly dumping the outdated Lewin model in the dustbin of history), we should remember that change will happen at the speed humans want it to – not simply because we have the ability to do something.