“Why won’t Rangers sign a Catholic?”

Is it possible to have a blanket ban on employing people from a particular country? That question came up last week when it was reported that the Indian-owned franchises in cricket’s summer tournament, The Hundred, would not recruit Pakistani players due to the ongoing political tensions between the two countries.

Under the Equality Act, it is unlawful to discriminate on the grounds of ‘colour, nationality, or ethnic or national origins’. And as UK employers, the different teams are bound by UK employment law. So how can the alleged ban be legal?

UK employment law protects anyone eligible to work in the UK – not only UK citizens, but also citizens of Ireland, anyone with ‘indefinite leave to remain’ and anyone working here on a valid work visa. Sportspeople in the UK are employees like anyone else and are normally here on International Sportsperson visas.

When it comes to recruitment, an employer is not obliged to consider candidates who are not eligible to work in the UK, and is not required to advertise in any specific location. So in that sense, cricket teams in the UK are not required to consider players from any particular country. However, The Hundred operates on an ‘auction’ system where players apply centrally and the different teams bid for the players they want. It’s been reported that a number of Pakistani players have put their names into the auction, so they are in effect job applicants who need to be treated in the same way as applicants from Australia, South Africa, the West Indies and so on.

In practice, this is a largely theoretical argument. It is very difficult, if not impossible, for someone who is not eligible to work in the UK to bring a claim in an Employment Tribunal. And even if they could, it would be open to a team to defend an individual claim on the basis that the player didn’t have the right skill set (they were a batter and the team wanted a fast bowler, for example). However, if the player in question was a British player of Pakistani heritage, or someone with dual British/Pakistani nationality, the issue might be different.

It’s a very unwise step to have a blanket ban on people of a specific nationality, and doubly so for it to become widely known. Even if the teams in question can get away with it legally (which they probably can), there is the ethical question – especially in a sport which has faced serious accusations of racism in recent years. As the old saying goes, “it takes 20 years to build a reputation and five minutes to destroy it”.

For younger readers, the title of this post refers to a blanket ban that Rangers FC in Glasgow had against employing Roman Catholics or signing Catholic players – a policy which was only ended, to some controversy, as recently as 1989.


No, it’s not OK to call your boss a “Dickhead”

The media had a field day last week with a story that an Employment Tribunal had ruled that sacking an employee for calling her boss a “Dickhead” amounted to unfair dismissal. As always with the reporting of employment law stories, the old adage “never let the truth get in the way of a good story” applies here.

The case involved a Ms Herbert, who was sacked from the small construction business she worked for by its Operations Director. He was also Ms Herbert’s brother-in-law, and was married to the Managing Director, who was therefore her sister-in-law.

Most of the facts were disputed – including the date Ms Herbert was dismissed. The only fact that everyone agreed on was that at some point in the meeting where she alleged she was dismissed, she used the words “fucking dickheads” to describe the Operations Director and his wife.

The reason she won her case was that the company made a complete pig’s ear of dealing with the situation. In fact, they got pretty much everything wrong that they could have.

  • When the ‘dickheads’ comment was made, the Operations Director told Ms Herbert to ‘get out’ and that she was ‘sacked’
  • The company then tried to construct a case about Ms Herbert’s misconduct and hold a later disciplinary hearing on a series of alleged serious breaches of company rules. She refused to participate in this process on the basis that the company had already sacked her.
  • The context in which the comment was made was not enough to justify a gross misconduct dismissal – and therefore she was entitled to be paid notice. In fact, their own company policy said that use of ‘insulting and abusive language’ would only result in dismissal if a prior warning had been given.

There are three important takeaways for small businesses from this case – none of which concerns what Ms Herbert said:

  • Employment Tribunals deal with the facts in each individual case. This decision does not set a precedent in any way, and especially it doesn’t give a green light for employees to abuse their bosses without penalty.
  • To make a dismissal fair, you have to have a fair reason and follow a fair procedure. The judge did conclude that Ms Herbert’s behaviour could well have been a fair reason, but the Operations Director’s ‘spur of the moment’ decision was not in any way a fair process.
  • The case shows starkly how long the tribunal process takes – the events of this case happened in 2022, but the decision was not published until last week. Regardless of who is right or wrong, a claim that takes three years to resolve is not a sensible way to settle employment disputes (and as this article shows, this timescale is not untypical).

Why I won’t be using AI anytime soon

Every time I log on these days, I’m bombarded with articles about Artificial Intelligence (AI) – ranging from the breathlessly positive, hailing it as the most wonderful labour-saving thing ever (I’d send an avatar to a meeting too if I had the option!), to the apocalyptic, with some predicting, Terminator-style, that AI will wipe out humanity.

While I don’t fall into either category, I’m certainly a sceptic about its uses, particularly some of the most common AI bots that are available (ChatGPT/Meta/Gemini/Grok/Glamdring/Wall-E/Co-Pilot etc). Here’s why:

  • They get things wrong:

If you ask Gemini who Sarah O’Connor is, it will tell you she’s a truck driver who was interviewed for the Financial Times. In fact, she’s a highly regarded FT journalist. Meanwhile, ChatGPT informed me that drug dealer Howard Marks’ autobiography was called “Thy Damnation Slumbereth Not” and even explained to me how he’d arrived at that title. It’s not called that at all.

  • It makes stuff up:

Gemini created a totally fictitious Act of Parliament (“The Civil Justice Act 2004”). And a barrister nearly got themselves into very hot water with a judge for citing fictitious AI-generated case law in court.

  • It can’t do certain things (but pretends it can)

This video from people professional Julie Drybrough, recorded when she asked ChatGPT to help create a presentation for her, reveals it to be like an over-confident intern – claiming it could do the job and repeatedly saying it was doing it, before finally admitting that it was unable to complete the task.

  • It doesn’t know what it doesn’t know

Large Language Models (the basis of these AI tools) have to learn from something. However, despite what we might think, not all human knowledge is on the internet. There are still plenty of books, films and novels that aren’t available online, and many museum and library archives are not yet digitised, or have restricted access. So, ask a question that it doesn’t know the answer to and it may be honest and admit it doesn’t know – but equally it may revert to the point above and just make something up.

  • It comes with a lot of ethical and environmental concerns

Meta AI has been subject to a lot of criticism for using illegal copies of copyrighted works to train its AI (it is probably just unlucky to have been caught, since I can’t imagine that the other leading AI providers paid for their sources). If you don’t think that’s a problem, try walking out of Waterstone’s with a book you haven’t paid for, and use the defence of “I wasn’t stealing, I just wanted to read it to learn what it contained”.

The environmental consequences in terms of water and electricity use are only just becoming known, but one stat that stands out is that a ChatGPT query uses ten times as much electricity as a standard Google search.

And what does all this mean for managing people in a business?

Firstly, relying on AI for information on employment law is risky to say the least. Like Wikipedia, it might be correct, but unless you know already you can’t be certain that it’s giving you the right information.

Secondly, AI can’t possibly know the information in a human head. For example, much of my understanding of HR issues comes from nearly 40 years of experience across a variety of industries and sectors. AI cannot possibly replicate what I learned from the Merseybus bus cleaning dispute in 1995, or the competitive tendering process for leisure centres in Runcorn in 1990, or a TUPE transfer I managed between construction companies in 2004 – or how to deal with an employee charged with child abuse, or one who is terminally ill.

Finally, AI doesn’t understand culture. A solution that works in one company context may not be the most effective or appropriate in a different one, for lots of very valid reasons. AI might give you a range of solutions but can’t advise you on which one might be the best.

I’m sure that in 5 or 10 years, some of these issues with AI might be resolved. But until then, if you ask me to support your business with HR issues, you can be certain that you will be getting advice from an actual human, not a bot.

Just out of interest, I got Microsoft’s Co-Pilot AI (which is built into the latest versions of Word/PowerPoint/Excel etc) to rewrite this post for me. You can read what it came up with (minus the hyperlinks) here


Why I won’t be using AI (Co-Pilot’s version)


Title: The Impact of Artificial Intelligence: A Critical Examination

In today’s digital age, the prevalence of articles about Artificial Intelligence (AI) is overwhelming. From claims of AI being the ultimate labor-saving tool (who wouldn’t want to send an avatar to a meeting?) to apocalyptic predictions of AI wiping out humanity in a Terminator-style scenario, the discourse is varied and intense. While I don’t subscribe to either extreme, I am certainly skeptical about its applications, especially the most common AI bots available (ChatGPT, Meta, Gemini, Grok, Glamdring, Wall-E, Co-Pilot, etc.). Here’s why:

1. Inaccuracies and Errors

AI bots often get things wrong. For instance, if you ask Gemini who Sarah O’Connor is, it will tell you she’s a truck driver interviewed for the Financial Times. In reality, she is a highly regarded FT journalist. Similarly, ChatGPT once informed me that drug dealer Howard Marks’ autobiography was titled “Thy Damnation Slumbereth Not” and even explained the reasoning behind the title. This is incorrect.

2. Fabrication of Information

AI can create completely fictitious information. Gemini once generated a non-existent act of parliament (“The Civil Justice Act 2004”). Additionally, a barrister nearly faced serious consequences for citing fictitious AI-generated case law in court.

3. Overconfidence in Capabilities

AI sometimes pretends to be capable of tasks it cannot perform. In a video by people professional Julie Drybrough, ChatGPT was asked to help create a presentation. It claimed it could do the job and repeatedly stated it was working on it, only to finally admit it was unable to complete the task.

4. Limited Knowledge Base

Large Language Learning Models, the foundation of AI, must learn from existing data. However, not all human knowledge is available on the internet. Many books, films, and novels are not digitized, and numerous museum and library archives have limited access. When faced with a question it doesn’t know the answer to, AI may either honestly admit its ignorance or revert to fabricating information.

5. Ethical and Environmental Concerns

AI development raises significant ethical and environmental issues. Meta AI has faced criticism for using illegal copies of copyrighted works to train its models. The environmental impact is also concerning, with a ChatGPT query consuming ten times the electricity of a standard Google search.

Implications for Business Management

Relying on AI for information on employment law is risky. Like Wikipedia, AI might provide correct information, but without prior knowledge, you cannot be certain of its accuracy. AI cannot replicate the nuanced understanding gained from years of experience in various industries, dealing with trade unions, disputes, and complex issues. It also fails to grasp cultural contexts, making it difficult to determine the most effective solutions for different company environments.

While some of these issues with AI may be resolved in the next 5 to 10 years, until then, if you seek support for HR issues, you can be assured that you will receive advice from an actual human, not a bot.


Rearranging The Desks

One of the things that concerns many small businesses in relation to employment law is the issue of constructive dismissal. A perception has grown up that if a business makes any change an employee doesn’t like, the employee can resign and take the company to a tribunal. This perception isn’t helped by stories like this one from The Guardian, especially its clickbait headline “Senior staff can sue if given ‘low status’ desk, UK tribunal rules”.

Of course, the headline isn’t true and the story is far more complex. But it’s a useful one to look at since it covers a lot of issues that a small business might face when dealing with a change.

Firstly, what is constructive dismissal? It’s the principle that if an employer makes a significant detrimental change to an employee’s terms, conditions or position, the employee can consider that they have effectively been dismissed and resign on the spot. It is in many ways the reverse of gross misconduct, where an employee does something so serious that the employer considers dismissal without notice to be the appropriate response.

In some situations, constructive dismissal can be triggered by the ‘final straw’ in a series of changes. That’s in part what happened in this case.

The claimant, a Mr Walker, was the manager of a branch of an estate agents. He was moved without discussion to a smaller branch and given – again without discussion – a revised pay package, under which he would have to hit more demanding targets to achieve a similar level of salary. A couple of months later, he was required to return to his original branch in what was termed a joint managerial role, but which required him to sit with the majority of staff in the main office rather than at the manager’s desk. He considered this a demotion and felt it would be difficult and embarrassing to sit among staff he had formerly managed. When he raised this with his Director, she dismissed his concerns with a comment that she couldn’t believe someone of Mr Walker’s age would make a fuss about a desk (there was some dispute about whether the Director swore in this conversation). Mr Walker considered this the final straw, and resigned claiming constructive dismissal.

There are three key learning points from this for small businesses:

  • If you are making a change which has a negative impact on someone – even if they are in a senior role – discuss and (where possible) agree the way forward. Don’t just impose it.
  • Consider your company culture. In some businesses, where people sit would not be an issue. But if status is conveyed by office location, type of desk or anything else, taking that away will clearly indicate to people in the organisation that the individual’s status has been downgraded.
  • The estate agents had an external HR adviser who gave advice on how to handle the situation. The advice was not necessarily incorrect, but it was based on partial information (what the Director thought was important for her to know) and as a result made the situation worse. Always tell your HR adviser (for my clients, that means me) the full story, even the bits that you don’t think are relevant.

Constructive dismissal is actually very rare (in a 40-year HR career I’ve dealt with two cases, and I don’t think I am unusual in that). But you can make it rarer still by following the steps above.
