A law requiring employers in NYC to audit hiring algorithms for bias, among the first of its kind in the country, has already been watered down from its original iteration, and some advocates who pushed for the law worry it will be further diluted by business interests.
There has been growing scrutiny in recent years of the use of AI hiring tools and their potential for discrimination. Among the tools under examination are video tools that evaluate candidates’ facial expressions, gamified hiring tools, and screening software that makes recommendations based on resume data.
The U.S. Equal Employment Opportunity Commission began publishing guidance on the use of such tools in 2021, with particular focus on the many ways they can violate the Americans with Disabilities Act. The EEOC released a draft enforcement plan in January that outlined how it would regulate discrimination in AI hiring tools, and sued the company iTutorGroup for using software that excluded older candidates.
Other lawsuits have been popping up as well, including a lawsuit in California filed by a job candidate who alleges that Workday’s hiring software discriminated against him for being Black, disabled and over 40.
In NYC, Local Law 144 was passed in November 2021 with an intended start date of January 1, 2023. But confusion and disagreement over how the law would work and who would enforce it led the city to delay implementation to April 15, 2023 while it worked out the details and took input from the public. The city’s Department of Consumer and Worker Protection released the most recent version of the rules earlier this year, ahead of a January 23 hearing. The law prohibits employers from using an “automated employment decision tool” unless it has been subjected to a bias audit within the year before its use and the result of that audit is made public.
The rules clarify that an audit should compare the selection rates of candidates by gender and race, providing a chart with an example of how tools should be evaluated. Advocates for more stringent requirements and lobbyists for employers seem to agree that this version of the rules is not quite workable, though for different reasons. The city is still working with advocates, experts and business lobbyists to hammer out the final rules for April.
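The comparison the rules describe is similar in spirit to classic disparate-impact analysis in employment law: compute the share of applicants selected within each demographic category, then divide each category’s rate by the highest category’s rate to get an impact ratio. Here is a minimal sketch of that arithmetic, with hypothetical numbers and category labels; the binding methodology is whatever the DCWP’s final rules specify.

```python
# Illustrative selection-rate comparison in the style of disparate-impact
# analysis. All category labels and counts here are hypothetical.

applicants = {
    # category: (number selected, total applicants in category)
    "Category A": (120, 400),
    "Category B": (90, 400),
}

# Selection rate: share of applicants in each category who were selected.
selection_rates = {
    cat: selected / total for cat, (selected, total) in applicants.items()
}

# Impact ratio: each category's selection rate divided by the highest
# selection rate observed across categories.
highest_rate = max(selection_rates.values())
impact_ratios = {cat: rate / highest_rate for cat, rate in selection_rates.items()}

for cat in applicants:
    print(f"{cat}: selection rate {selection_rates[cat]:.2f}, "
          f"impact ratio {impact_ratios[cat]:.2f}")
```

Under the long-standing federal “four-fifths” rule of thumb, an impact ratio below 0.8 has traditionally flagged potential adverse impact; part of the dispute over the NYC rules is whether a single number like this can capture every form of bias.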
Advocates for workers and experts on AI bias want the final rules to cover other categories like disability, to broaden the scope of the law beyond job screening tools, and to close loopholes that would let employers wriggle out of a bias audit by claiming the tools merely augment the judgment of human hiring managers. They also want clearer guidelines on the data employers use to evaluate bias.
Business lobbyists, on the other hand, want the law mostly gutted. In emails sent to supporters of the legislation, they argued that an amendment should be passed to remove any requirement for a bias audit, contending that there is no widely accepted way to perform such an audit accurately. They are also concerned that because race is a voluntary category on employment forms, audits would only test the narrow selection of candidates who have offered this information willingly.
In a November 2022 email, Kathryn Wylde, CEO of the Partnership for New York City, a lobby for large employers seeking to influence civic policy, whose membership includes IBM, Citi, Morgan Stanley, American Express, Pfizer and more, asked supporters of the legislation to back an amendment removing any requirement for a bias audit. The group had already presented the amendments to city council members and was now asking proponents of the legislation to back them because “the city council is looking for some indication that the groups that advocated for its passage are ok with the amendments,” according to the email.
In addition to the lack of race data in resumes, Wylde wrote in the email to the bill’s supporters that “there are no standards established for such an audit,” so the requirement should be removed. “Would you be willing to help with amendments that respect the purpose of the law – to eliminate bias in hiring – but do not mandate actions that employers can not fulfill?” Wylde asked. The email was accompanied by a one-page document prepared by PFNYC which said “employers have concluded that amendments to LL144 are necessary to ensure that they can continue to use AI to help eliminate unconscious bias.”
PFNYC argued further in the document that all of the law’s requirements are already covered by the federal Equal Employment Opportunity Commission’s guidelines, so it would only add impossible-to-enforce requirements for AI tools, which the lobby says limit bias. (There is, of course, no evidence that algorithmic or AI job screening tools limit bias, and ongoing lawsuits suggest the contrary.) PFNYC also proposed removing a requirement that employers notify candidates about the use of AI tools 10 days ahead of time, on the basis that “This will impose delay and significant hardship on both many employers and job applicants.”
When reached for comment, Wylde told Motherboard that the email represents PFNYC’s current position on the legislation. “The legislation was enacted without employer input,” she said. “We agree that there needs to be a way to ensure that AI tools are unbiased. However, there are no accepted standards for an AI bias audit; they need to be developed. We don’t oppose the concept, but right now no one knows what would be involved in compliance. It is not good to pass laws where regulators have no clear guidelines for enforcement and employers have no clear way to comply,” Wylde said, adding that there should be a process to develop “reasonable standards” before the law is implemented.
Ridhi Shetty, policy counsel with the Center for Democracy & Technology, said that the limitations on race and ethnicity data and the lack of consensus on how to conduct bias audits should not preclude requiring them.
“I think there are certainly approaches to take now,” Shetty said, pointing out that the Center for Democracy & Technology published a set of standards for evaluating bias in hiring tools in December. Those standards call for audits of all automated decision-making tools used by employers, including those determining promotions and targeted job advertising. The suggested standards require employers to look at protected categories beyond race and sex, including disability, and generally advocate a more system-wide evaluation of a tool’s inputs and goals rather than just a numerical score.
Shetty said that many of the widely used rules for determining bias in hiring are outdated, having been created decades ago. “It is true that it’s going to be hard to standardize an approach to audit for bias in a way that’s going to work for all employers and for all kinds of tools and for all kinds of discrimination,” Shetty said. But that means audits should move away from purely quantitative examinations of selection rate by race and look more at what the tools are designed to do and whether those goals could have disparate impacts, according to Shetty.
“If you’re looking for certain kinds of personality traits… it’s important to scrutinize why those are the criteria that you consider to be related to the job,” Shetty said.
A qualitative audit wouldn’t necessarily require voluntary data on race, Shetty believes. “I would argue you don’t really need people to be sharing their race or their disability or any other protected characteristic during the application process for you to be able to examine a tool or its potential impact.”
Lobbyists for large companies are still pushing to remove audits altogether. But the law has already been watered down from its original iteration in 2020. When it was introduced by then-Council Members Laurie Cumbo and Alicka Ampry-Samuel, the original bill sought to address bias in hiring tools earlier in the process by putting the onus on software developers to test for bias before selling their tools to employers. The earlier bill also required bias audits to screen for all categories covered by the city’s human rights laws and the U.S. Equal Employment Opportunity Commission, which include disability and age.
After over a year of inaction, a revised bill was introduced on November 9, 2021 and passed almost immediately by the city council on November 10, 2021, at the behest of then-mayor Bill de Blasio. This version put all the onus on employers to make sure their tools are screened before use. While employers should share responsibility, this potentially leaves vendors of screening software off the hook, according to Shetty.
“When you see vendors increasingly performing the functions of what has been defined as an employment agency, it becomes trickier to hold them accountable,” Shetty told Motherboard.
The law as enacted requires screening for bias by race, ethnicity and sex, categories for which employers are already technically required to screen hiring tools, and leaves out other protected categories. The city first released draft rules in May 2022 and has since held three public hearings, the latest on January 23, with a plan to release new rules prior to implementation on April 15.
Shetty has no problem with the delayed timeframe for implementation as long as the city takes input from stakeholders seriously. She believes taking time to work out the details is important, particularly since the current version of the law was rushed to a vote in 2021. And while the law doesn’t include any explicit requirement to audit for bias against protected categories like age or disability, it could still be a powerful tool to address discrimination.
“It would be really useful to make sure that even if this law only focuses on race based or gender based discrimination, that it at least does that to the fullest extent possible,” Shetty said.