By Mocha Sprout

AI As A Gatekeeper Upholding Bias In Hiring

Many companies are turning to artificial intelligence to help hire the right people. AI promises to make hiring faster, cheaper, and fairer. But here’s the kicker: when it comes to fairness, AI might actually be making things worse.

Let’s break it down. Imagine you’ve got a machine learning how to pick the best candidates for a job. That sounds great, right? But what if this machine starts picking people based on their race or gender? Not so great. This isn’t just a what-if scenario. Bloomberg recently did some digging and found that OpenAI’s big-brain model, GPT-3.5, shows serious bias when it comes to hiring.

Here’s what happened. Bloomberg fed fake names and resumes to AI recruiting software. They chose names that, in the US, are often associated with certain racial or ethnic groups. The AI had to rank these resumes for different jobs, just like a human recruiter would. But instead of treating everyone equally, the AI played favorites based on race and gender. For example, it often shoved women, especially Black women, aside when it came to tech jobs.
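The core of an audit like Bloomberg’s is simple: hold the resume constant, swap only the name, and see whether the scores move. Here’s a minimal sketch of that idea in Python. The `toy_score` function and the name lists are made-up stand-ins, not the study’s actual model or name data; in a real audit you’d replace `toy_score` with a call to the hiring model under test.

```python
# Name-swap audit sketch: score one identical resume under names
# associated with different groups and compare the group averages.
# Everything here (scorer, names) is illustrative, not from the study.

def toy_score(resume_text: str, name: str) -> float:
    """Stand-in for the model under audit. This toy scorer ignores the
    name entirely, so an audit of it should find zero gap."""
    return (len(resume_text) % 10) / 10.0

NAME_GROUPS = {
    "group_a": ["Alice Example", "Amy Sample"],
    "group_b": ["Bella Example", "Bree Sample"],
}

def name_swap_audit(score_fn, resume_text, name_groups):
    """Return the mean score each group receives for the same resume."""
    means = {}
    for group, names in name_groups.items():
        scores = [score_fn(resume_text, name) for name in names]
        means[group] = sum(scores) / len(scores)
    return means

resume = "5 years of Python, BSc in CS, shipped two production services."
means = name_swap_audit(toy_score, resume, NAME_GROUPS)
gap = max(means.values()) - min(means.values())
print(means, gap)  # the name-blind toy scorer yields a gap of 0.0
```

A biased model would show a nonzero gap here even though every candidate submitted the exact same qualifications, which is what makes this kind of test so damning.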

You might think, “But it’s just a machine!” True, but this machine learns from data created by humans. Since humans aren’t perfect, our biases can sneak into the AI algorithms. So, when a company uses AI to sort through job applications, it might unknowingly let these biases decide who gets an interview and who doesn’t.

This isn’t just about hurting people’s feelings. It’s about fairness and opportunity. If AI systems keep sidelining certain groups, those people will have a harder time landing good jobs. That’s not just bad for them; it’s bad for everyone. Companies miss out on great talent, and industries like tech become less diverse and less innovative.

Why do companies use AI for hiring if it’s problematic? Here’s why: AI can review hundreds of resumes quickly, saving time and money. But as we’ve seen, cutting corners on fairness carries a price that’s too high to pay. Isn’t there a law against this kind of thing? Sort of. There are laws against hiring discrimination, but AI is a new player in the game. Determining who’s responsible when an AI system discriminates is tricky. Is it the company using the AI? The people who made it? The data the AI learned from? It’s a legal and ethical puzzle.

But here’s the good news: we can fix this. First, companies need to make sure their AI systems are learning from a wide variety of resumes, not just ones from a single group of people. They should also regularly check that their AI isn’t favoring particular groups over others.
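What does “regularly check” look like in practice? One common spot-check, borrowed from US employment guidelines, is the four-fifths rule: no group’s selection rate should fall below 80% of the highest group’s rate. Here’s a short sketch; the group names and counts are hypothetical numbers for illustration, not real hiring data.

```python
# Four-fifths rule check: flag any group whose selection rate is
# below 80% of the best-treated group's rate. Counts are made up.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: True/False}, False meaning the group fails."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top >= threshold) for g, rate in rates.items()}

# Hypothetical screening results from an AI resume filter:
# group_a passes 40 of 100 candidates, group_b only 18 of 100.
results = {"group_a": (40, 100), "group_b": (18, 100)}
print(four_fifths_check(results))  # group_b fails: 0.18/0.40 = 0.45
```

A check like this doesn’t prove an AI is fair, but running it on every batch of screening results is a cheap way to catch the most blatant skews before they quietly decide who gets an interview.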

Second, there must be a human touch. AI might be smart, but it doesn’t understand fairness or ethics. So, humans should have the final say in hiring decisions, not machines.

Lastly, we need rules. Just like there are rules for how companies can collect and use your data, there should be rules for using AI in hiring. This could help ensure everyone gets a fair shot, regardless of background.

Fixing AI bias won’t be easy, but it’s necessary. Everyone deserves a fair chance at a job they’re qualified for without a biased machine standing in their way. Plus, companies will be better off in the long run if they hire from a pool of diverse talent.

AI can lead to excellent outcomes in good hands and with the proper guidelines. However, if we’re not careful, it can cause unfairness and discrimination. We must take control and guide AI towards the right path.

Let’s Empower, Educate, and Elevate — Mocha Sprout style!

Remember… Slay What Ya Hear!® Change the Conversation; Change the Perspective!
