New human resources tools powered by artificial intelligence promise to revolutionize many aspects of people management. Simultaneously, a maturing regulatory environment is rapidly reshaping risk/reward calculations.

So, how can HR leaders and executives successfully navigate this new terrain?

  1. Take ownership: There are no shortcuts (yet)

First, the good news: the European Data Protection Board recently approved criteria for a common European Data Protection Seal. More certifications will likely emerge in other parts of the world.

However, until official schemes solidify, most seals currently touted on vendor websites warrant healthy skepticism. Even gold-standard security certifications, such as ISO, do not (yet) fully assess privacy compliance.

Moreover, the General Data Protection Regulation (GDPR) emphasizes that certifications will “not reduce the responsibility of the controller or the processor for compliance.” Strong indemnification clauses can mitigate vendor risk, but contracts alone are insufficient. Meanwhile, California’s privacy agency warns that if a business “never enforces the terms of the contract nor exercises its rights to audit” vendors, it will not have a strong defense if a vendor misuses data.

Accordingly, leaders need a proactive compliance mindset when selecting and managing vendors.

  2. Learn general privacy principles

Major privacy laws have established common principles. Generally, companies must:

  • Handle personal data fairly, in ways that people would reasonably expect.
  • Communicate transparently about how and why personal data will be processed.
  • Collect and use personal data only for specifically identified purposes.
  • Update notices (and possibly seek fresh consent) if purposes change.
  • Minimize the scope of personal data processed.
  • Take reasonable steps to ensure data accuracy.
  • Implement mechanisms for correction and deletion.
  • Limit how long personal data is kept.
  • Adopt appropriate security measures.
  3. Plan ahead and involve key stakeholders

Consider fundamental questions early on: What problem(s) is your company trying to solve? What personal data is actually needed for that purpose? Could alternative solutions meet goals while minimizing privacy and security risks?

HR, legal and IT are core stakeholders in such discussions. Affinity groups can also ensure alignment with company values and facilitate inclusive buy-in. Increasingly, employees must be notified about productivity monitoring or surveillance. In Germany, employees must be consulted as stakeholders.

GDPR limits cross-border data transfers, so if your company has EU offices, ask non-EU vendors about transfer compliance and whether servers (and technical support) can be localized.


Ongoing project management is another success factor. New initiatives are prone to pivots, so periodic reviews should benchmark changes against the initial assessments. Retention practices also need oversight: core employment records, such as names and payroll details, must be kept for a reasonable time after employment ends, but other types of personal data should be deleted sooner.

  4. Remember that even ‘good’ purposes require risk assessments

Several major privacy laws require risk assessments. Notably, “good” purposes—such as wellness, cybersecurity, or diversity, equity, and inclusion—are not exempt from such mandates.

Why? Keeping any personal data poses risks of misuse. Risk assessments ensure projects are designed with privacy in mind and encourage alternative strategies such as implementing a test phase (limited by geographic areas or types of personal data) or anonymizing survey data.

  5. Consider unique AI requirements, including human oversight

AI tools often handle data about race, sex, religion, political opinions or health status. Such sensitive personal data receives extra protection under privacy laws.

Important questions unique to AI projects include:

  • What personal data will “train” the AI?
  • What quality control measures will detect and prevent bias?
  • How will humans oversee AI decisions?
  • What level of transparency can vendors provide about AI logic?

Some tools have been plagued by bias. If an algorithm is trained on resumes of star employees, non-diverse samples may generate irrelevant correlations that reinforce existing recruitment biases.

Under several major privacy laws, employers cannot rely solely on AI to make important employment decisions. Automated decisions trigger rights to human oversight and explanations of AI logic.

Missteps can be costly. Overeager people analytics has yielded record GDPR fines. Moreover, AI use may be scrutinized by multiple government agencies.

Vendors are optimistic that tools can be improved and even prevent human bias. New technologies often undergo hype cycles that ultimately yield reliable value. But at this stage, thoughtful evaluation remains important.

  6. Seek future-focused vendors

More regulatory developments are looming:

  • The EU is developing new AI rules. Stricter requirements, and higher fines, would apply to “high-risk” applications such as ranking job applications, conducting personality tests, using facial recognition and monitoring performance. Some exploitative uses would be prohibited, and employers would be liable for their use of AI tools.
  • In California, starting in 2023, employees will have GDPR-like privacy rights. California is also expected to issue detailed regulations on AI transparency.
  • The White House’s AI guidelines, although nonbinding, also signal future policy directions.

Ask vendors how they would adapt to such regulatory changes. Active vendor engagement will be critical to successfully navigating the new world of HR tech.

The post 6 top tips to reduce risk with your HR tech appeared first on HR Executive.