
The Role of Artificial Intelligence In Employment Decisions Part 2

While AI is poised to bring significant benefits to the HR world, it is not without its share of real-world issues. This article focuses on some of the potential pitfalls and on recent regulatory efforts to control the use of AI.

Considerations Before Implementing an Artificial Intelligence Solution

One of the primary concerns with AI is an element of distrust. Like any disruptive technology, people’s greatest concern is “how is it going to affect me?” Since AI in the workplace is going to affect both Employers and Employees, everyone is concerned about how their job and the overall work environment will change as a result of AI. While alarmists worry that the intent of AI is the wholesale replacement of humans in the workplace, that is most certainly not the intended or likely outcome. On the other hand, just as with the introduction of the personal computer to the workplace, some jobs will be eliminated, some new ones created, and many others modified. The question then becomes the same as it has always been: how best to manage a disruptive change in the work environment.

For Employers, the best approach is an open discussion with employees about how and when new AI tools will be implemented in the workplace. While it is not necessary to have absolute answers, Employers should be prepared to discuss questions such as:

  • What impact will technological changes have on employee workloads?
  • Will jobs be lost due to automation?
    • If so, how many?
    • Will retraining for other positions be available?
    • Will there be exit packages for those displaced?
  • Is work going to become easier or harder?
  • Will there be more or less work?
  • Will performance be evaluated by AI or by a supervisor?

Providing an understanding of the planned use of new AI tools, the type of data collected, and how that data will be used may ease employees’ physical and psychosocial stress. Involving employees in the implementation and evaluation of the AI processes will increase the confidence of the overall workforce, because they can see that decisions are being made fairly, accurately, and honestly in the best interest of the company as a whole, not by management at the expense of employees. Offering training and appropriate reskilling to employees who will interact with the AI tools will be necessary. Finally, a demonstration period in which the AI output is compared with the prior manual process, and any apparent discrepancies are researched and understood before the manual process is retired, will allow for smoother integration of AI into the workplace.
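As a minimal, hypothetical illustration of what such a demonstration period might track, the Python sketch below compares an AI tool’s recommendations against the decisions the existing manual process produced and flags diverging cases for human review. The record fields and sample data are assumptions for illustration only, not any particular vendor’s format.

```python
# Hypothetical sketch: compare AI recommendations with prior manual decisions
# during a demonstration period. Field names and sample data are illustrative only.
from dataclasses import dataclass


@dataclass
class CandidateReview:
    candidate_id: str
    manual_decision: str    # outcome of the existing manual process, e.g. "hire" / "reject"
    ai_recommendation: str  # recommendation produced by the AI tool under evaluation


def find_discrepancies(reviews):
    """Return the cases where the AI tool and the manual process disagree."""
    return [r for r in reviews if r.manual_decision != r.ai_recommendation]


def agreement_rate(reviews):
    """Share of cases where the AI recommendation matched the manual decision."""
    if not reviews:
        return 0.0
    matches = sum(1 for r in reviews if r.manual_decision == r.ai_recommendation)
    return matches / len(reviews)


if __name__ == "__main__":
    sample = [
        CandidateReview("A-101", "hire", "hire"),
        CandidateReview("A-102", "reject", "hire"),
        CandidateReview("A-103", "hire", "reject"),
    ]
    print(f"Agreement rate: {agreement_rate(sample):.0%}")
    for case in find_discrepancies(sample):
        print(f"Review needed: {case.candidate_id} "
              f"(manual={case.manual_decision}, AI={case.ai_recommendation})")
```

Each flagged case can then be researched and explained before the manual process is retired.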

Employers should understand that implementation of AI processes in the HR environment will change their HR liability. While AI-based employment decisions will greatly reduce liability for individual discriminatory actions, any unknown or undiscovered bias incorporated into the AI’s decisions through its ability to self-learn from prior human examples, called Machine Learning Bias or Artificial Intelligence Bias, could leave management with an unrecognized discriminatory process that makes all of its HR decisions suspect. If the data used to “teach” the AI is biased by the designer’s implicit biases, then the AI will likely be biased. AI tools are unlikely to intentionally discriminate, but if past workplace decisions have been discriminatory – for example, if members of a non-protected class have historically been hired at a higher rate than members of a protected class – then the AI will have learned that those decisions are the appropriate ones to make, and its subsequent hiring recommendations will be discriminatory. Employers should protect themselves by (1) routinely validating the AI output against EEO mandates and (2) obtaining indemnification from the third-party software provider for unrecognized discriminatory processes.
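One way to put routine validation into practice is the “four-fifths” (80%) rule of thumb from the Uniform Guidelines on Employee Selection Procedures, under which a selection rate for one group that is less than 80% of the rate for the most favorably treated group is generally taken as initial evidence of adverse impact. The Python sketch below is a minimal, hypothetical illustration of that calculation; the group labels and counts are made up, and a real validation program should be designed with counsel and appropriate statistical expertise.

```python
# Hypothetical sketch of a four-fifths (80%) rule check on selection rates by group.
# Group labels and counts are illustrative assumptions, not real data.

def selection_rates(outcomes):
    """outcomes: {group: (selected, applicants)} -> {group: selection rate}."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}


def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` of the highest group's rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: rate / highest < threshold for group, rate in rates.items()}


if __name__ == "__main__":
    # Illustrative counts only: (selected, total applicants) per group.
    outcomes = {
        "group_a": (48, 100),
        "group_b": (30, 100),
    }
    rates = selection_rates(outcomes)
    for group, flagged in adverse_impact_flags(outcomes).items():
        status = "potential adverse impact" if flagged else "within threshold"
        print(f"{group}: selection rate {rates[group]:.0%} -> {status}")
```

In this illustrative data, group_b’s selection rate is 62.5% of group_a’s, so it would be flagged for further review.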

State and Federal Legislation

Because of the great uncertainty about how AI is likely to impact the workplace, there has been a significant level of Federal and state legislative and regulatory activity to address the real and perceived concerns with AI and to specify how AI technologies may lawfully be introduced into the workplace. Some examples with broad impact are:

  • The Equal Employment Opportunity Commission (EEOC) considers an employer’s use of an AI algorithmic decision-making tool to be a “selection procedure” for making decisions about whether to hire, promote, terminate, or take similar actions under the EEOC’s 1978 Uniform Guidelines on Employee Selection Procedures, issued under Title VII of the Civil Rights Act of 1964. Title VII makes it illegal to discriminate against a person based on protected characteristics (e.g., race, color, religion, sex, pregnancy, sexual orientation, gender identity, or national origin).
  • The EEOC also takes the position that use of an AI algorithmic decision-making tool violates Title VII if it causes an adverse impact, that is, if use of the tool results in a selection rate for individuals with protected characteristics that is substantially lower than the selection rate for another group.
  • Since 2020, Maryland has required employers to notify applicants when facial recognition tools are used during an interview and to obtain the applicant’s consent via a signed waiver.
  • Since 2021, Illinois has had a similar consent requirement for applicant-submitted interview videos. The company must notify the applicant that AI is being used, explain how the AI works, and explain what characteristics the AI evaluates.
  • Since 2021, New York City has required employers to conduct a bias audit of an automated employment decision tool (AEDT) prior to using the tool.

This is just a representative sample of legislative and regulatory actions being taken. More are being introduced on a regular basis, and AI users are cautioned to check for actions taken within their own states.

Conclusion

With an insights-driven, inclusive approach, companies can use AI tools to make more informed decisions and tackle talent issues by sourcing and hiring candidates who are better matches in less time. This drives more equitable talent management practices by discovering underrepresented talent and surfacing candidates with the capabilities to drive an organization further. Managers have the power to find and promote current employees who are great matches for openings, and AI tools can also identify areas where employees need upskilling or reskilling. These capabilities are likely to reduce churn and burnout, save money, and empower people to grow their careers. On the other hand, AI, like any disruptive new technology, brings with it effects that need to be recognized and managed in order for its implementation to succeed.

***********

C2 is a Professional Employer Organization (“PEO”) that provides outsourced HR services to businesses across a variety of service industries, with a focus on federal government contractors. Utilizing our PEO model allows our clients to transfer the responsibilities and liability of payroll, benefits administration, employee onboarding, and employee relations to C2 and to focus their attention on satisfying their clients and growing their business. C2 blog posts are intended for educational and informational purposes only.

More information about C2’s PEO and other related HR services is available at www.c2essentials.com.