Three Best Practices for Tackling AI Bias in Recruitment

Written by michaelakuchie | Published 2022/12/13
Tech Story Tags: ai | artificial-intelligence | ai-trends | future-of-ai | future-of-work | work | recruiting | ai-top-story

TLDR: In 2014, Amazon began building an AI hiring system that was later found to prefer male candidates over their female counterparts. Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. This is a risky trend that, if unchecked, could greatly tarnish a firm's image and negate the good this technology symbolizes. To help you effectively handle AI-driven prejudice in recruitment, this article provides some pointers.

It's fair to say we have yet to experience the full range of artificial intelligence (AI) and its capabilities. Its impact, risks, and prospects still dominate research, and scientists keep finding new use cases for the technology. Most of us have already encountered AI in everyday situations, because many of the companies we patronize have steadily increased their adoption of it. 
Consider Siri, Apple's interactive personal assistant, which lets customers pull relevant information from various apps, dictate emails, and perform many other tasks on their iPhones, smartwatches, computers, and televisions. 
Brands are also leveraging chatbots to deliver a strong customer experience in a way that boosts sales and takes over repetitive tasks, freeing human employees for more engaging work. 
As adoption of AI-led tools has risen, so has enterprise investment. According to McKinsey's "The State of AI in 2022" report, 52% of respondents said that more than 5% of their digital budget went to AI, up from 40% in 2018. 
That is a striking improvement. 
In human resources, AI plays a critical role in helping corporations improve employees' satisfaction with their roles and complete tasks faster, saving both time and money. Companies can also refine recruitment by using AI-powered software to sift through thousands of applications and shortlist a handful of experienced candidates. 
However, there have been situations where the system favors a specific group or gender over others. This is a risky trend that, if unchecked, could greatly tarnish a firm's image and negate the good this technology symbolizes. To help you effectively handle AI-driven prejudice in recruitment, this article provides some pointers. 

AI in Recruitment: What You Need To Know 

Previously, people found job postings in the classifieds section of newspapers and responded with handwritten letters. Nowadays, anyone can access job postings online across a plethora of channels. 
Recruitment is a delicate task that companies prioritize, which explains the growing use of talent acquisition agencies. Using AI not only simplifies the process but also broadens the possibilities of automation in other areas. In Tidio's survey on the impact of AI on HR, nearly 67% of HR professionals said the technology has positively impacted the recruitment process. 
But then, how does bias enter the scene? 
In 2014, e-commerce giant Amazon opted to integrate AI into its hiring system. And while this was largely celebrated as a step in the right direction, since Amazon is a staunch automation advocate, the effort was discontinued following an awful discovery: the system preferred male candidates to their female counterparts. 
Why did this happen? A Reuters report revealed that...
“...Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.”
AI bias can manifest in several ways, with certain genders, groups, religions, and other affiliations often bearing the brunt. Microsoft's Tay chatbot, which spewed hateful messages on Twitter before Microsoft took it offline, is another example of how anomalies in the machine-learning process can have serious consequences. 

Three Best Practices for Mitigating AI Bias in Recruitment

Although it may not be possible to completely eradicate bias in AI models, there are strategies capable of reducing the odds of a future incident. 
Below are three must-have methods. 
1. Always Keep Humans in the Loop 
While there's a growing fear that smart tools will replace human workers, we should see this as more of a partnership and less of a dystopian takeover. Likewise, the notion that AI-led tools should run without human supervision simply because they are more efficient needs to be retired. 
Human-machine collaboration has, in fact, been shown to be more rewarding. A Harvard Business Review study of 1,075 companies across 12 industries found that firms pairing humans with AI recorded gains in speed, cost savings, and profit. 
Given this, firms should ensure that a human team consistently oversees the software used in recruitment; together, they can reduce the risk of favoritism. Those employees should also be drawn from a diverse pool so that every group is represented in the effort to reduce discrimination. A minimal sketch of what this kind of human-in-the-loop gate might look like follows. 
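
To make the idea concrete, here is a small, hypothetical Python sketch of a human-in-the-loop triage step. The threshold, field names, and queue labels are assumptions for illustration, not any vendor's actual API; the point is simply that no candidate is rejected by the model alone.

```python
from dataclasses import dataclass

# Illustrative cut-off for a "strong match"; a real system would tune this.
AUTO_ADVANCE_THRESHOLD = 0.80

@dataclass
class Candidate:
    name: str
    ai_score: float  # 0.0-1.0 relevance score from the screening model

def triage(candidates: list[Candidate]) -> dict[str, list[Candidate]]:
    """Split candidates into review queues; nobody is rejected without a human."""
    queues = {"advance_pending_review": [], "human_review": []}
    for c in candidates:
        if c.ai_score >= AUTO_ADVANCE_THRESHOLD:
            # Even high scorers get a final human confirmation before advancing.
            queues["advance_pending_review"].append(c)
        else:
            # Low scorers go to a diverse review panel instead of being auto-dropped.
            queues["human_review"].append(c)
    return queues

if __name__ == "__main__":
    pool = [Candidate("A. Rivera", 0.91), Candidate("B. Okafor", 0.47)]
    for queue, members in triage(pool).items():
        print(queue, [m.name for m in members])
```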
2. Conduct Regular Audits on the AI Models 
When AI algorithms aren't checked regularly for the likelihood of producing biased outcomes, a company's anti-bias effort can take a serious blow. By organizing periodic checks on the algorithms, corporations can find problems that keep the model from delivering fair results. Incomplete or inaccurate data should be corrected as soon as it is discovered to keep outcomes fair. The sketch after this paragraph shows one simple metric such an audit might track. 
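
As an example of what a periodic audit might compute, here is a hypothetical Python sketch that compares shortlisting rates across groups and applies the common "four-fifths" rule of thumb. The group labels and sample data are made up; a real audit would pull decisions from the recruitment system's own logs and would look at far more than a single ratio.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, shortlisted: bool) pairs from the hiring log."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in decisions:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratio(rates):
    """Lowest group's selection rate divided by the highest group's."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Made-up decision log: 40/100 men and 25/100 women shortlisted.
    sample = ([("men", True)] * 40 + [("men", False)] * 60
              + [("women", True)] * 25 + [("women", False)] * 75)
    rates = selection_rates(sample)
    ratio = impact_ratio(rates)
    print(rates, f"impact ratio = {ratio:.2f}")
    if ratio < 0.8:  # the common "four-fifths" rule of thumb
        print("Warning: possible adverse impact; investigate the model and its data.")
```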
3. Choose AI Recruitment Software Vendors with an Aversion to Bias 
Just as at a car dealership, you will find plenty of recruitment software options. To make the right decision, take time to learn about the mechanisms each vendor has put in place to tackle the various kinds of bias. 
You should also ask to see the system tested in various situations and observe how it performs; a simple paired-resume test, sketched below, is one way to do that. Aside from bias, check the software for scalability, pricing, and cost savings. The option that ticks all or most of the boxes is the winner. 
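
One way to probe a vendor's system is a paired (counterfactual) test: score two resumes that are identical except for a single gender-coded detail and flag any large gap. The Python sketch below is hypothetical; toy_score is a made-up stand-in for whatever scoring endpoint the vendor actually exposes, and the field names are assumptions.

```python
def toy_score(resume: dict) -> float:
    """Stand-in scorer: fraction of required skills present (not a real vendor API)."""
    required = {"python", "sql", "people-analytics"}
    return len(required & set(resume.get("skills", []))) / len(required)

def paired_gap(base: dict, field: str, value_a: str, value_b: str, score=toy_score) -> float:
    """Absolute score difference when only `field` changes between otherwise identical resumes."""
    return abs(score({**base, field: value_a}) - score({**base, field: value_b}))

if __name__ == "__main__":
    base_resume = {"skills": ["python", "sql"], "club": ""}
    gap = paired_gap(base_resume, "club", "women's chess club", "men's chess club")
    print(f"score gap: {gap:.2f}")  # a gap well above zero deserves an explanation from the vendor
```

In practice, you would build a batch of such pairs (names, pronouns, clubs, and so on) and agree with the vendor up front on a tolerance that any gap must stay within.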

A Candid Take on AI’s Future

The advancement of AI and its subsets is a good thing for all of us. From enabling cars to steer and park themselves (with active driver supervision) to sourcing and hiring qualified candidates in less time, AI offers benefits that go well beyond recruitment. But we should manage this technology purposefully so it does not repeat the blunders associated with manual, human-led processes. 
Dr. Lisa Palmer argues in a thoughtful article that the American state of Oklahoma should throw its weight behind "legislative and regulatory action that both support innovation and ensure that this critical technology serves humanity well." Perhaps the rest of us can learn from that and, as she recommends, "do more" to steer this technological revolution in the right direction. 




Written by michaelakuchie | I am wildly intrigued by the use cases of technology in automobiles.
Published by HackerNoon on 2022/12/13