Get started

Netflix’s Coded Bias documentary is well worth a watch if you haven’t seen it already. It has sparked quite the conversation about the future of AI and ML, especially when it comes to recruitment software.

Aparna Dhinakaran, Forbes AI columnist and Founder of Arize AI, also recently wrote a short response to Netflix’s Coded Bias entitled “Coded Bias: An Insightful Look At AI, Algorithms And Their Risks To Society” (link at the end of the article).

Dhinakaran highlights that AI/ML technology, specifically facial recognition software, is biased against marginalized communities. This affects “…who gets hired, what kind of medical treatment someone receives, who goes to college, the financial credit we get, and the length of a prison term someone serves.”  

Dhinakaran’s point is that business leaders need to implement AI tools and systems that are fair, ethical, and free of bias. But at what point do we reintroduce bias into the process if we’re putting technical ML decisions back into the hands of the humans we initially took them from?

Here, we will explore how better subjective hiring assessments (otherwise known as interviews) can complement automated, machine learning-based systems within hiring processes. In particular, we’ll examine whether subjective assessments such as technical interviews are scalable, useful, and fair when building diverse tech teams.

Offering people-powered bots with recruitment software

In short, Netflix’s Coded Bias suggests that although AI bots are intended to help avoid bias, they are developing and scaling biases of their own due to a lack of diverse inputs. So, how can we implement automated systems without these unintended consequences?

At Codility, we objectively test coding skills via CodeCheck, a mechanical evaluation of a candidate’s submitted code quality (i.e., its correctness and performance), and offer more traditional subjective assessments via CodeLive, our technical interviewing platform that lets interviewers evaluate candidates further.

Combining both objective and subjective assessments enables companies like Zalando to build a diverse recruiting process. Zalando aims to achieve a 40% female tech workforce by 2023 and has so far assessed over 26,000 candidates using our recruitment software.


Dual-assessments in technical recruitment

Coded Bias doesn’t outright say that we need to kick bots to the curb. However, it does highlight that humans and machines need to work together more closely than anyone expected. Even the very best automated hiring processes need to include both human and machine decision-makers, provided both are delivering quality, scientifically built assessments.

Mechanically scored, automated skills-based assessments are useful for determining whether someone is a technical fit for a team. They can help you quickly sift through candidates while keeping your talent pool diverse by eliminating bias in early-stage recruitment rounds.

However, these types of tests alone will not build diverse teams. They need to be developed, selected, and implemented thoughtfully (and paired with structured human evaluations) to protect and promote diversity, and ultimately to support hiring decisions based on a holistic view of “fit.”

Stephen Byrne, Lead Recruitment Manager at Dolby, commented that “Codility has created another lens for us to assess candidates.” The key wording here is “another lens,” not the only lens.

Read more like this: Is Your Technical Hiring Tool “Valid” for Your Recruitment Process?

Human intervention is vital in hiring

Ultimately, both types of assessment are necessary if you hope to scale an equitable candidate experience and diversify your teams.

Furthermore, despite advancements in tech, we still need human touchpoints in candidate experiences, not only to ensure we’re hiring fairly but also to ensure we’re doing so with empathy. If a recruitment process doesn’t end with a new hire, we at the very least hope to gain a brand ambassador, and that comes from building relationships and respect.

These tests should be created and managed by professionals and undergo rigorous (human) quality control to ensure they continue to do what we built them for.

At the same time, if we hope to build a well-rounded candidate experience, our recruitment processes should be optimized to enable human touchpoints, not to completely remove them. This way, we build positive relationships and candidate experiences while remaining fair and unbiased. This is what we’re doing with Codility’s recruitment tools. 

Lastly, our machines are only as smart as the data we feed them. Until we have enough well-rounded data for them to learn from, we need to keep adjusting the mechanics ourselves to ensure our hiring processes remain human.

Extra Reading: Coded Bias: An Insightful Look At AI, Algorithms And Their Risks To Society

Want more insights on building diverse engineering teams & tech hiring trends? Sign up for our newsletter and never miss a beat.

Ray Slater Berry is a freelance writer for Codility with over eight years of content, product, and positive initiative experience. He specializes in tech, travel, and employee wellbeing. Ray is also a published fiction writer; his first novel is Golden Boy.

Connect on LinkedIn.