
98% of global executives agree – artificial intelligence (AI) and foundation models will be a key element of their business strategies for the next three to five years. But with new regulations emerging around the world that seek to govern AI’s use in business (particularly in human resources technology), the future of AI is looking complicated, to say the least.

Does your business already use automation in your hiring decisions? If you do, you might be affected by incoming legislation such as NYC’s Local Law 144 (LL 144; set to be enforced on July 5, 2023) or the EU’s proposed Artificial Intelligence Act (AIA).

In this article, we’ll look at what these laws cover, how they might affect coding assessments, and some tips to ensure your hiring processes comply with AI legislation. 

Ready to navigate this new legal landscape and still leverage the power of AI in your hiring processes? Let’s dive in. 

4 Elements of AI Laws & How They Will Affect Hiring

NYC isn’t the only jurisdiction introducing AI-related laws: at least 17 US states – including Colorado, Illinois, Vermont, and Washington – introduced AI bills or resolutions between 2021 and 2022.

Even earlier, we saw legislation such as the Illinois AI Video Interview Act, the Maryland Facial Recognition Services Law, and the proposed federal Algorithmic Accountability Act shaping this landscape. Against this backdrop, NYC LL 144 represents the most sweeping US legislation on the topic to date.

Each state or jurisdiction may approach AI legislation differently, but four key elements are often present in these laws:

  1. Capacity for human decision-making versus completely AI-driven decisions: AI should not completely replace human decision-making in critical areas like hiring. There should be a balance between AI and human decision-making to ensure fairness, accuracy, and inclusivity.
  2. Ensuring transparency and consent around the use of AI: Job applicants should be informed if AI is being used in the hiring process and how it will be used. They should also be given the opportunity to consent to, or opt out of, AI-driven assessments or interviews.
  3. Preventing harm to individuals: AI should not result in discrimination, bias, or other negative consequences for individuals. AI algorithms should be regularly audited and tested for potential harm, and any negative impact on individuals should be minimized or eliminated.
  4. Ensuring accountability: There should be clear lines of accountability around the use of AI in hiring. Companies should be held responsible for the consequences of using AI, and individuals should have clear mechanisms for seeking redress if they feel they have been treated unfairly. That said, AI algorithms are often created by third-party vendors or developers with no direct relationship to the companies using them, so AI legislation will likely affect both end-user companies and algorithm creators. This way, regulatory bodies can help ensure that algorithms are designed with ethics and fairness in mind and that their creators have a stake in seeing the technology used responsibly and ethically.

These elements are designed to encourage a fair and inclusive job market for all and promote responsible and ethical use of AI in hiring processes.


Need some more guidance on new AI legislation and its impact on hiring? We recently held a webinar led by our I-O team and experts from Holistic AI and Nilan Johnson Lewis, all about the legal considerations of using AI in hiring decisions and what you can do to ensure fair and unbiased assessments. Check out the recording here.

What to Know About New AI Laws & Coding Assessments

So, how will AI legislation change the way you hire engineers? Much as GDPR reshaped privacy compliance, European regulatory concepts are finding their way into US legislation – in this case, rules governing the use of AI in hiring, driven by concerns around privacy, bias, and consumer protection.

One key aspect of LL 144 is the requirement that employers publicly display a summary of results from a yearly “bias audit,” conducted by an independent auditor, to ensure the automated employment decision tools (AEDTs) they use to screen candidates do not have a discriminatory impact on federally protected groups. Failure to comply with the regulation may result in penalties.

LL 144 requires employers to conduct a bias audit within one year prior to using an AEDT, and the audit must be performed by an independent auditor rather than by the vendor or its affiliates. The auditor assesses the AEDT’s potential impact based on sex, race, and ethnicity, using data from the employer’s historical use of the tool – or, if that sample size is too small, test data or historical data from other employers.

A summary of the audit must be made publicly available, including the procedure and data used, sample size, selection and impact ratios, and the AEDT’s distribution date. The summary must remain available for at least six months after the AEDT’s latest use and can be provided via an active hyperlink on a website.
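
To make the audit-summary arithmetic concrete, here’s a minimal sketch of how selection rates and impact ratios can be computed. The data and category labels below are hypothetical, and this is an illustration only – LL 144 requires the actual audit to be performed by an independent auditor:

```python
# Minimal sketch of the impact-ratio arithmetic reported in an LL 144
# bias audit summary. Categories and outcomes below are hypothetical;
# a real audit must be performed by an independent auditor on real data.
from collections import defaultdict

# Hypothetical historical AEDT screening outcomes: (category, was_selected)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for category, was_selected in records:
    totals[category] += 1
    selected[category] += int(was_selected)

# Selection rate = selected candidates / total candidates, per category
selection_rates = {c: selected[c] / totals[c] for c in totals}

# Impact ratio = a category's selection rate divided by the selection
# rate of the most-selected category (the top category scores 1.0)
top_rate = max(selection_rates.values())
impact_ratios = {c: rate / top_rate for c, rate in selection_rates.items()}

print(selection_rates)  # group_a: ~0.67, group_b: ~0.33
print(impact_ratios)    # group_a: 1.0, group_b: 0.5
```

In practice, an auditor would compute these ratios for each sex, race, and ethnicity category (and their intersections) and flag any category whose impact ratio falls well below that of the most-selected group.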

For more information on how to conduct a bias audit, check out our latest webinar.

Because AEDTs are widely used, this new regulation has left many employers uncertain about the legal considerations of using AI in hiring decisions and how to ensure fair and unbiased assessments. Questions also remain about whether LL 144 applies to top-of-the-funnel assessments whose output may be interpreted differently across individual hiring managers or recruiters.

Either way, reasonable concerns exist about AI- or machine learning (ML)-based hiring evaluations – more than we can fully address in this article.

In the assessment industry, it’s not uncommon for vendors to deploy AI when developing, delivering, and maintaining tests. It remains unclear whether and how this use of AI will be regulated by upcoming AI legislation, and not all legal experts agree on the matter. As such, this will be an interesting space to watch over the next six to twelve months. As legislation is enforced, vendors and end users alike must keep learning about its practical effects. Codility is closely monitoring this topic and will provide regular updates to our customers and partners.

You might also like: Codility on ChatGPT and the Future of Technical Assessments 

Make Sure Your Hiring Processes Are Compliant 

At Codility, we’re committed to the ethical use of AI in technical hiring. We don’t currently use AI or ML in our scoring process, nor do we use automated decision-making based on scores. This choice is intentional: we never make decisions that rely solely on AI. Instead, we view AI-generated data as just one piece of the puzzle – human involvement and interpretation remain critical.

As such, while AI/ML can offer value to selection assessments, it should not take the role of evaluator or final decision-maker in the hiring process.

If you’re looking to use AI ethically in your technical hiring processes, consider taking the following steps:

  1. Perform a bias audit: Companies should thoroughly evaluate the AI algorithms and data used for hiring to identify any potential biases or inaccuracies that could unfairly impact certain groups of candidates.
  2. Ensure transparency and explainability: Ensure that AI-based hiring decisions are transparent and explainable to candidates and recruiters. This means providing clear and concise explanations of how AI algorithms were used in the hiring process and how they arrived at specific decisions.
  3. Protect candidate privacy and security: Companies should ensure that candidate data are collected, stored, and used in compliance with relevant laws and regulations and that the data are kept secure and confidential.
  4. Consider human oversight: Keep human oversight in your AI hiring process to ensure that AI-based decisions are fair and unbiased. Recruiters should be involved to confirm that algorithms are working as intended and not producing unintended consequences (one way to structure this is sketched below).
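
As one illustration of what human oversight can look like in practice, here’s a minimal sketch of a screening flow in which an automated score affects review priority but never makes the final decision. The names, threshold, and routing logic are hypothetical and not a description of any particular product:

```python
# Hypothetical human-in-the-loop screening gate: the automated score
# informs the process, but every candidate reaches a human reviewer and
# no one is auto-rejected. Names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    assessment_score: float  # e.g., a coding assessment score, 0-100

def route_for_review(candidate: Candidate) -> str:
    # The score only affects review order; it never triggers a rejection.
    if candidate.assessment_score >= 80:
        return "priority human review"
    return "standard human review"

print(route_for_review(Candidate("Candidate A", 91.0)))  # priority human review
print(route_for_review(Candidate("Candidate B", 55.0)))  # standard human review
```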

Overall, the ethical use of AI in hiring requires a careful balance of technology and human oversight. By taking these steps, you can ensure that your AI-based hiring processes are fair, transparent, and compliant with ethical standards.

Final Thoughts 

If you’re part of the 90% of well-known companies that invest in AI today, do you know if your hiring processes comply with AI laws? If you’re unsure, now is the time to act. 

Make sure you know how these new laws will impact automated hiring processes already in place and how to balance the benefits and risks of using AI in your hiring decisions. 

Want to see Codility’s valid and AI-resistant task library for yourself? Book a demo with us today. 

Dr. Taylor Sullivan is the Senior Director of Product Insights and Principal I/O Psychologist at Codility. Her expertise spans a variety of areas including talent assessment and selection, learning and development, leadership, and credentialing and licensure.

Connect on LinkedIn