Six months after the implementation of New York City’s artificial intelligence (AI) hiring law, only 18 of 391 employers analyzed by researchers at Cornell University have complied with its requirements.
The law, known as Local Law 144, requires companies to disclose how algorithms influence their hiring decisions, with the goal of identifying and eliminating any potential bias in the employment process, The Wall Street Journal (WSJ) reported Monday (Jan. 22).
Researchers also noted that finding the required audit reports on company websites was often challenging and time-consuming, suggesting the law may be of limited value to job seekers, according to the report.
One of the reasons for the low compliance rate is the broad discretion given to employers in determining whether they fall within the scope of the law, per the report.
While the original proposal aimed to cover most automated employment decision tools, the final version limited the law’s application to companies whose tools substantially assist or replace human decision-making, according to the report. Some employers claim their tools do not meet that definition.
Another challenge in implementing the law is the lack of race and gender data for most job applicants, the report said. Many employers do not track this information.
Enforcement of the law is complaint-driven, and so far, the New York City Department of Consumer and Worker Protection, the agency responsible for overseeing the law, has received no complaints, according to the report.
Some experts believe that without enforcement actions, many employers may choose to ignore the law altogether, the report said. Additionally, there is no mechanism in place to verify the accuracy of the disclosed information.
Despite the challenges and limited compliance, the implementation of this AI hiring law in New York City is seen as a significant step toward greater regulation of technology in employment processes, per the report. Similar rules are being considered in Washington, D.C., New York state and the European Union. The Equal Employment Opportunity Commission has also identified technology-related discrimination as a strategic priority.
The use of AI in the employment sector has become widespread, CPI, a PYMNTS Company, reported in April. Automated Employment Decision Tools (AEDTs) use AI technology to match job requirements with relevant keywords in resumes and, in some cases, consider emotional response as a factor in the screening process.
Also in April, four federal agencies released a joint statement saying that decisions companies make using AI must still adhere to the law. The agencies said they will monitor the use of AI technology and enforce their collective authorities.