“Why won’t Rangers sign a Catholic?”

Is it possible to have a blanket ban on employing people from a particular country? That question came up last week when it was reported that Indian-owned franchises in cricket’s summer tournament, The Hundred, would not recruit Pakistani players due to the ongoing political tensions between the two countries.

Under the Equality Act, employers are not allowed to discriminate on the grounds of “colour, nationality, or ethnic or national origins”. And as UK employers, the different teams are bound by UK employment law. So how can the alleged ban be legal?

UK employment law protects anyone eligible to work in the UK – this includes not only UK citizens, but citizens of Ireland, anyone with ‘indefinite leave to remain’ and anyone working here with a valid work visa. Sportspeople in the UK are employees like anyone else and are normally here on International Sportsperson Visas.

When it comes to recruitment, an employer is not obliged to consider candidates who are not eligible to work in the UK and they are not required to advertise in any specific location. So in that sense, cricket teams in the UK are not required to consider players from any country. However, The Hundred operates on an ‘auction’ system where players apply centrally and the different teams bid for the players they want. It’s been reported that a number of Pakistani players have put their names into the auction, so they are in effect job applicants who need to be treated in the same way as applicants from Australia, South Africa, West Indies etc.

In practice, this is largely a theoretical argument. It is very difficult, if not impossible, for someone who is not eligible to work in the UK to make a claim via an Employment Tribunal. And even if they could, it would be open to a team to defend an individual claim on the basis that the player didn’t have the right skill set (they were a batter and the team wanted a fast bowler, for example). If the player in question was a British player of Pakistani heritage, or someone with dual British/Pakistani nationality, however, the issue might be different.

It’s a very unwise step to have a blanket ban on people of a specific nationality and doubly so for it to become widely known. Even if the teams in question can get away with it legally (which they probably can) there is the ethical question – especially in a sport which has faced serious accusations of racism in recent years. As the old saying goes “it takes 20 years to build a reputation and 5 minutes to destroy it”.

For younger readers, the title of this post refers to a blanket ban that Rangers FC in Glasgow had against employing Roman Catholics or signing Catholic players – a policy which was only ended, to some controversy, as recently as 1989.

A cricket match in progress featuring players in purple and black uniforms on a grassy field, with a stadium filled with spectators in the background.
Why I won’t be using AI anytime soon

Every time I log on these days, I’m bombarded with articles about Artificial Intelligence (AI) – ranging from claims that it is the most wonderful labour-saving thing ever (I’d send an avatar to a meeting too if I had the option!) to the apocalyptic, with some predicting, Terminator-style, that AI will wipe out humanity.

While I don’t fall into either category, I’m certainly a sceptic about its uses, particularly some of the most common AI bots that are available (ChatGPT/Meta/Gemini/Grok/Glamdring/Wall-E/Co-Pilot etc). Here’s why:

  • They get things wrong:

If you ask Gemini who Sarah O’Connor is, it will tell you she’s a truck driver who was interviewed for the Financial Times. In fact she’s a highly regarded FT journalist. Meanwhile, ChatGPT informed me that drug dealer Howard Marks’ autobiography was called “Thy Damnation Slumbereth Not” and even explained to me how he’d arrived at that title. It’s not called that at all.

  • It makes stuff up:

Gemini created a totally fictitious act of parliament (“The Civil Justice Act 2004”). And a barrister nearly got themselves into very hot water with a judge for citing fictitious AI generated case law in court.

  • It can’t do certain things (but pretends it can)

A video from people professional Julie Drybrough, in which she asked ChatGPT to help create a presentation for her, reveals it to be like an over-confident intern – claiming it could do the job and repeatedly saying it was doing it, before finally admitting that it was unable to complete the task.

  • It doesn’t know what it doesn’t know

Large Language Models (the basis of AI chatbots) have to learn from something. However, despite what we might think, not all human knowledge is on the internet. There are still plenty of books and films that aren’t available online, and many museum and library archives are not yet digitised, or have limited access. So, ask a question that it doesn’t know the answer to and it may be honest and admit it doesn’t know – but equally it may revert to the point above and just make it up.

  • It comes with a lot of ethical and environmental concerns

Meta AI has been subject to a lot of criticism for using illegal copies of copyrighted works to train its AI (it is probably just unlucky to have been caught, since I can’t imagine that the other leading AI providers paid for their sources). If you don’t think that’s a problem, try walking out of Waterstones with a book you haven’t paid for, and use the defence of “I wasn’t stealing, I just wanted to read it to learn what it contained”.

The environmental consequences in terms of water use and electricity are only just becoming known, but one stat that stands out is that a ChatGPT query uses ten times the amount of electricity of a standard Google search.

And what does all this mean for managing people in a business?

Firstly, relying on AI for information on employment law is risky to say the least. Like Wikipedia, it might be correct, but unless you know already you can’t be certain that it’s giving you the right information.

Secondly, AI can’t possibly know the information in a human head. For example, much of my understanding of HR issues comes from nearly 40 years of experience across a variety of industries and sectors. AI cannot possibly replicate what I learned from the Merseybus bus cleaning dispute in 1995 or the competitive tendering process for Leisure Centres in Runcorn in 1990. Or a TUPE transfer I managed between construction companies in 2004. Or how to deal with an employee charged with child abuse, or one who is terminally ill.

Finally, AI doesn’t understand culture. A solution that works in one company context may not be the most effective or appropriate in a different one, for lots of very valid reasons. AI might give you a range of solutions but can’t advise you on which one might be the best.

I’m sure that in 5 or 10 years, some of these issues with AI might be resolved. But until then, if you ask me to support your business with HR issues, you can be certain that you will be getting advice from an actual human, not a bot.

Just out of interest, I got Microsoft’s Co-Pilot AI (which is built into the latest versions of Word/PowerPoint/Excel etc) to rewrite this post for me. You can read what it came up with (minus the hyperlinks) here.

bionic hand and human hand finger pointing
Photo by cottonbro studio on Pexels.com

Why I won’t be using AI (Co-Pilot’s version)


Title: The Impact of Artificial Intelligence: A Critical Examination

In today’s digital age, the prevalence of articles about Artificial Intelligence (AI) is overwhelming. From claims of AI being the ultimate labor-saving tool (who wouldn’t want to send an avatar to a meeting?) to apocalyptic predictions of AI wiping out humanity in a Terminator-style scenario, the discourse is varied and intense. While I don’t subscribe to either extreme, I am certainly skeptical about its applications, especially the most common AI bots available (ChatGPT, Meta, Gemini, Grok, Glamdring, Wall-E, Co-Pilot, etc.). Here’s why:

1. Inaccuracies and Errors

AI bots often get things wrong. For instance, if you ask Gemini who Sarah O’Connor is, it will tell you she’s a truck driver interviewed for the Financial Times. In reality, she is a highly regarded FT journalist. Similarly, ChatGPT once informed me that drug dealer Howard Marks’ autobiography was titled “Thy Damnation Slumbereth Not” and even explained the reasoning behind the title. This is incorrect.

2. Fabrication of Information

AI can create completely fictitious information. Gemini once generated a non-existent act of parliament (“The Civil Justice Act 2004”). Additionally, a barrister nearly faced serious consequences for citing fictitious AI-generated case law in court.

3. Overconfidence in Capabilities

AI sometimes pretends to be capable of tasks it cannot perform. In a video by people professional Julie Drybrough, ChatGPT was asked to help create a presentation. It claimed it could do the job and repeatedly stated it was working on it, only to finally admit it was unable to complete the task.

4. Limited Knowledge Base

Large Language Learning Models, the foundation of AI, must learn from existing data. However, not all human knowledge is available on the internet. Many books, films, and novels are not digitized, and numerous museum and library archives have limited access. When faced with a question it doesn’t know the answer to, AI may either honestly admit its ignorance or revert to fabricating information.

5. Ethical and Environmental Concerns

AI development raises significant ethical and environmental issues. Meta AI has faced criticism for using illegal copies of copyrighted works to train its models. The environmental impact is also concerning, with a ChatGPT query consuming ten times the electricity of a standard Google search.

Implications for Business Management

Relying on AI for information on employment law is risky. Like Wikipedia, AI might provide correct information, but without prior knowledge, you cannot be certain of its accuracy. AI cannot replicate the nuanced understanding gained from years of experience in various industries, dealing with trade unions, disputes, and complex issues. It also fails to grasp cultural contexts, making it difficult to determine the most effective solutions for different company environments.

While some of these issues with AI may be resolved in the next 5 to 10 years, until then, if you seek support for HR issues, you can be assured that you will receive advice from an actual human, not a bot.


A group of people engaging with a futuristic holographic interface, focusing on digital technology and artificial intelligence concepts.

HR’s Donald Trump moment?

There has been a view for many years that the UK is governed by what has been termed the “good chaps” theory of government – that there are certain unwritten rules and conventions that are understood by all parties and which everyone works within, even if there is profound disagreement on the issue at hand. Some commentators argue that this has disappeared in recent years as certain politicians have wilfully disregarded these understandings in order to gain a particular advantage or objective. (It was also in evidence in the Donald Trump years in the US, where Trump would say or do things that shocked people, not necessarily by their intent or outcome but by the fact that he said or did them at all.)

The world of work in the UK had a similar Trump-type moment yesterday when P&O Ferries announced via Zoom that they were sacking 800 UK crew immediately and intended to replace them with cheaper foreign staff via an agency. Much of the shock and anger from politicians, HR professionals and others was not so much around the decision (other companies in the past have announced far bigger changes or lay-offs) but the fact that they did so in a way which broke all the unwritten rules of employment relations in the UK. Even if they can legally do something in a particular way, most companies would approach a decision like this with an understanding of how they would be expected to behave.

P&O are not the first company to break these norms – in fact it has always gone on, even in the days when trade unions were stronger. But they are a high-profile, well-established ‘household’ name, and consequently the expectation would be that they would do things ‘properly’.

An example of Donald Trump breaking the political norms by making personal comments about rival John McCain

What P&O have also done is shown how weak UK employment law is in protecting employees from an employer determined to behave in this manner. They will have factored in not only the cost of 800 potential unfair dismissal claims (all of which they are likely to lose) but also the fact that it will take 18 months to 2 years before a tribunal hearing takes place (and even then they could refuse to pay, meaning individuals would have to take further legal action to enforce their claims). By which time many ex-employees will have given up and the news story will have died down.

So what is to be done? More, and/or stronger, employment law is the cry from certain sections. But, as has been pointed out in response to calls for the UK to have a written constitution to resolve the reliance on ‘good chaps’, this wouldn’t solve all the problems – and new legislation would take time to go through parliament.

A better solution, in my view, would be for existing laws to be more easily applied – not only by making tribunals faster and their judgments more easily enforced, but by the use of a properly funded statutory body (similar to the Health and Safety Executive) with powers to hold bad employers to account. We already have a pretty impotent “Director of Labour Market Enforcement” within the Civil Service, so the basic structure is there.

But also, HR professionals need to stop living in their unitarist utopia and accept that employees often want different outcomes from the employment relationship. And so we need to be more open to negotiation, compromise and the role of trade unions or other employee representatives. That means going back to the ‘good chaps’ theory of employee relations – that there are unwritten rules that we all follow. P&O may be a particularly egregious example, but they are in many respects the ‘tip of the iceberg’ for modern HR and business practices.

The 12 things HR can do for your business

Last year, I published a post which outlined the 15 things that HR should do – at a minimum – for the people who work within a business. Although I’d argue that doing these things for workers has a positive impact on employers as well, a more sceptical businessperson might wonder if and how their company would benefit from HR. After all, why would you pay for something if you aren’t getting something in return? So here are my 12 reasons why a business would want HR:

1. We’ll make sure that not only do you comply with employment laws, but that we implement them in a way that fits the business strategy and culture

2. We’ll make sure that the business is able to get the right people, in the right number, at the right time.

3. We’ll advise you on the ‘people consequences’ of any business proposals, so that you are taking decisions on the future with full knowledge of all the issues (not just the financial ones)

4. When problems occur with individuals, or groups of employees, we’ll look to find sensible, legal and effective solutions to minimise the damage to the organisation

5. We’ll be your experts in the labour market, knowing what outside factors will have an impact on helping us to deliver – or which need to be overcome to deliver – point 2 above.

6. When changes happen, we’ll understand the best way to minimise disruption and achieve what you set out to do.

7. HR isn’t your business conscience – but we will remind you that you have ethical responsibilities (and normal human emotions) that need to be factored in

8. We’re not your police either – so if we need to put in policies, systems, or procedures, we’ll make sure they are there for a clear and understandable reason and that everyone understands the consequences of not complying

9. We’ll manage training and development, so that people in the business get the skills they need to do their jobs in a way that’s cost-effective.

10. We’ll use our specialist knowledge to support managers to manage people more effectively

11. If a problem needs a long-term solution, we won’t just offer you a quick fix

12. If there’s a new idea floating around, we’ll look for evidence that it will actually improve things before recommending you implement it

Human Resources by Nick Youngson CC BY-SA 3.0 Alpha Stock Images