
New York City Starts to Regulate AI Used in Hiring Tools

A closely watched effort to root out potential bias in hiring and promotion software goes into effect Wednesday

Emil Lendof/The Wall Street Journal

By Lauren Weber
July 5, 2023 8:00 am ET


Employers are preparing for a new law going into effect Wednesday in New York City that will be the first in the nation to regulate the use of automation and artificial intelligence in hiring decisions.

The law, known as NYC 144, requires employers that use certain kinds of software to assist with hiring and promotion decisions—including chatbot interviewing tools and resume scanners that look for keyword matches—to audit those tools annually for potential race and gender bias, and then publish the results on their websites. 

There is growing public concern about the role that algorithms play in essential facets of people’s lives, from employment and education to life insurance and mortgage lending. Because algorithms are difficult, if not impossible, for most people to untangle and understand, legislators have focused on mandating transparency rather than on regulating the software itself. 


While NYC 144 is designed to root out indications of potential discrimination in employment decisions, “it’s really not an antibias law. It’s a public disclosure law,” said Erin Connell, who represents employers as a lawyer with Orrick, Herrington &amp; Sutcliffe. Legislators and industry groups across the nation are watching New York City as a test case for future technology regulation, she added.

Under the new law, employers will audit any software that plays a significant role in their hiring and promotion decisions and will publish so-called adverse impact ratios, which show whether a procedure has a disparate impact on a particular race or gender. 

Employers who hire New York City residents will have to publish the impact ratios on their websites, along with notice that they are using the tools and a way for job applicants to request an alternative to being screened or sorted by the software. Employers that don’t comply will pay penalties of up to $1,500 per violation, per day.

“There’s an assumption baked into this that if an employer sees the impact ratios, somehow indicating some bias, that they are then maybe going to do something about it,” said Victoria Lipnic, a former commissioner of the federal Equal Employment Opportunity Commission, and now a partner with labor consulting firm Resolution Economics. “That’s a fairly big assumption, but it is a first step.” 

[Video: At The Wall Street Journal’s CEO Council Summit in London, executives discuss the value of artificial intelligence as well as possible downsides to the widespread use of the technology.]

Reducing or removing bias from automated tools is possible because developers have more ways now to pretest their software and to program it more precisely, said Frida Polli, the founder of recruiting software platform Pymetrics, which is now owned by Harver. She advised the New York City Council on its bill, and supports better regulation of algorithms. 

“It does take more effort, and it forces you to modernize,” Polli said. “But it doesn’t make sense to keep using things that have disparate impact.”

Under the law, workers and job applicants can’t sue companies based on the impact ratios alone, but they can use the information as potential evidence in discrimination cases filed under local and federal statutes. A ratio—a number between 0 and 1—that’s closer to 1 indicates little or no bias, while a ratio of 0.3 shows, for example, that three female candidates are making it through a screening process for every 10 male candidates getting through.
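The arithmetic behind an impact ratio can be illustrated with a short sketch. This is a hypothetical example, not the law’s mandated audit methodology: it simply divides one group’s selection rate by the highest group’s selection rate, mirroring the 0.3 example above.

```python
# Illustrative sketch only -- hypothetical numbers, not NYC 144's
# prescribed audit procedure.

def selection_rate(passed: int, total: int) -> float:
    """Fraction of applicants in a group who make it through the screen."""
    return passed / total

def impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower group's selection rate to the higher group's.

    Values near 1 indicate little or no disparity between the groups;
    lower values indicate a larger disparate impact.
    """
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcome: 15 of 100 women pass, 50 of 100 men pass.
women = selection_rate(15, 100)  # 0.15
men = selection_rate(50, 100)    # 0.50

print(round(impact_ratio(women, men), 2))  # prints 0.3
```

With these made-up numbers, women pass at 0.3 times the rate of men, the same proportion as the article’s example of three female candidates advancing for every ten male candidates.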

A low ratio doesn’t automatically mean that an employer is discriminating against candidates, Lipnic said. According to longstanding law, disparate impact can be lawful if a company can show that its hiring criteria are job-related and consistent with business necessity. 

For example, Blacks and Hispanics have lower college graduation rates than whites and Asian-Americans, and if an employer can show that a college degree is a necessary requirement for a job and therefore its system screens out a higher share of Hispanic candidates because fewer of those applicants have degrees, the employer can defend its process.

Employers began heavily relying on software for recruiting over the past two decades as the technology became more sophisticated and as online job applications became the norm for job seekers. Inundated with résumés and applications—especially after unemployment jumped following the financial crisis of 2008-2009—employers invested in technology to help them rapidly screen and sort candidates.

Job seekers have long disliked these tools, which sometimes screen people out for having gaps in their résumés or for failing to include keywords related to imprecise job requirements. A 2021 study by Harvard Business School professor Joseph Fuller found that automated decision software excludes more than 10 million workers from hiring discussions.

Legislators also have grown increasingly concerned about the potential for bias baked into the software. Laws regulating the tools or mandating disclosures have been proposed in a number of cities and states, including Washington, D.C., California and Connecticut, and the White House has laid out a “Blueprint for an AI Bill of Rights.” Concerns have increased in recent months because of the release of powerful new generative AI tools such as ChatGPT.

Algorithmic discrimination has been documented in a number of cases. In one widely cited example, nearly a decade ago Amazon developed a recruiting algorithm that analyzed 10 years of applications from top software engineers and then looked for candidates whose backgrounds resembled theirs. But because most applicants over those years were men, the algorithm was quick to downgrade female applicants who used certain terms on their résumés or graduated from some women’s colleges. 

Amazon stopped using the program after it realized what was happening. “This project was only ever explored on a trial basis, and was always used with human supervision,” an Amazon spokesman said.

NYC 144 passed the New York City Council in 2021 and was delayed for nearly two years while the council considered public comments, including opposition from many employers and technology vendors. BSA, an organization representing large software companies including Microsoft, Workday and Oracle, lobbied to reduce the reporting requirements and narrow the scope of what kinds of uses would be subject to an audit. 

Some of the organization’s requested changes were made, but the group still objects to elements of the current law and plans to work with policy makers in the U.S. and around the world “to build consensus-driven approaches to risk management and models for regulation that are more constructive and durable than what we see in NYC 144,” Aaron Cooper, a vice president at BSA, said in an emailed statement.

Write to Lauren Weber at [email protected]
