Last week’s White House release of new guidelines for the use of AI tools in the workplace—dubbed the AI Bill of Rights—and New York City’s new law that mandates companies audit their AI tools for bias could have a profound impact on HR leaders and the technologists who serve them. Although the announcement from the White House Office of Science and Technology Policy did not include any proposed legislation, vendors of HR tools that use artificial intelligence are expressing support for the AI Bill of Rights blueprint while warning that greater government oversight could be a reality.


According to Robust Intelligence, a company that tests machine learning tools, the new guidelines focus on “protecting fundamental rights and democratic values, which is a core concern across all industries.”

The concept of AI oversight has gathered steam in the last few years, and legislation appears to be inevitable. “Governing bodies in the U.S. have started to pay more attention to the ways AI is influencing decision-making and industries, and this won’t slow down with international AI policy on the rise as well,” said Yaron Singer, CEO and co-founder of Robust Intelligence.

ADP, one of the largest HCM solution providers thanks to its payroll solution that compensates 21 million Americans each month, took AI tools seriously enough to form an AI and Data Ethics Board in 2019. The board regularly monitors and anticipates changes to regulations and how AI is used. 

“Our goal is to swiftly adapt our solutions as technology and its implications evolve,” said Jack Berkowitz, ADP’s chief data officer, in a statement. “We’re committed to upholding strong ethics, not just because we believe it gives us a competitive advantage, but because it’s the right thing to do.”

Industry observers note that legislation like New York City’s recently passed AI bias audit law, which mandates bias audits for all AI tools used by employers in the city starting on Jan. 1, 2023, could spread to other jurisdictions.

“Looking specifically to the HR space, the NYC AI hiring law requiring a yearly Bias Audit, another first of its kind, illustrates the start of broader adoption of enforced laws of automated employment decision tools,” said Robust Intelligence’s Singer. “The Equal Employment Opportunity Commission has been more vocal and active surrounding the use of AI in the employment space and will continue to increase their work on a federal level.” 
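The article does not spell out what such an audit measures, but one widely used check in employment contexts is the selection-rate impact ratio associated with the EEOC’s four-fifths rule: each group’s selection rate divided by the highest group’s rate. The Python sketch below is a hypothetical illustration of that arithmetic only, not the methodology prescribed by the NYC law or by any vendor quoted here; the sample data, group labels and the 0.8 flag threshold are assumptions for the example.

```python
# Illustrative only: a simplified impact-ratio check of the kind a bias audit
# might include. Data, group labels and the 0.8 threshold (EEOC "four-fifths
# rule") are assumptions for this sketch, not requirements of the NYC law.
from collections import Counter

def impact_ratios(records):
    """records: iterable of (group, selected) pairs, where selected is a bool."""
    totals, picks = Counter(), Counter()
    for group, selected in records:
        totals[group] += 1
        if selected:
            picks[group] += 1
    rates = {g: picks[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    # Impact ratio: each group's selection rate relative to the highest rate.
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening outcomes from an automated resume-scoring tool.
sample = ([("A", True)] * 40 + [("A", False)] * 60
          + [("B", True)] * 25 + [("B", False)] * 75)

for group, ratio in impact_ratios(sample).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

A real audit would go further, examining which categories are compared, whether sample sizes support the comparison and how tool scores feed into actual hiring decisions.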

Calling the AI Bill of Rights helpful, Eric Sydell, executive vice president at recruiting tech vendor Modern Hire, notes that municipal, state and federal governments are working on their own AI guidelines. 


“Hopefully the White House’s work will serve to inform and guide lawmakers in creating useful and helpful laws and regulations on AI technologies,” he says. 

According to Hari Kolam, CEO of Findem, an AI-driven recruitment company, the New York City law and the White House guidelines will prompt a shift toward people using technology-enabled decision-making tools instead of technology making the actual decisions.

The HR tech industry has been moving toward automation, building a “black-box system” that learns from information and makes decisions autonomously, Kolam wrote in an email interview. “The accountability of wrong decisions was delegated to the algorithms. This [NYC] legislation essentially establishes that the accountability for people’s decisions should fall onto people,” he said. “The bar for tech providers will be a lot higher to ensure that they are enablers for decision-making.”

AI solution providers will have a role to play if these guidelines become law in the U.S., predicts Sydell. 

“The AI Bill of Rights offers principles for the design of AI systems, and these principles align with those of ethical AI developers,” Sydell said. “In particular, the principles help to protect individuals from AI tools that are poorly or unethically developed, which could therefore do them harm.” 

While Sydell believes that internal and external audits of AI tools will become more commonplace, he also predicts that the new guidelines will affect how these tools are built and updated. Transparency and what he calls “explainability” will be key factors in how AI-powered solutions for HR leaders are designed.

“The onus will be on vendors to demonstrate how products enhance the decision-making of HR practitioners by providing them with the right data and framework at the right time,” he says.

This means that AI providers will have to audit their own tools as well, suggests Kolam.

“Technology can’t be perfect, and algorithms need to be continuously audited against reality and fine-tuned.”
