How To Be “Smart” About Using Artificial Intelligence In The Workplace

Artificial intelligence (AI) is undoubtedly revolutionizing the workplace. More and more employers rely on algorithms or automated tools to determine who should be interviewed, hired, promoted, compensated, disciplined, or fired. If properly designed and implemented, AI can help workers find jobs, match employers with valuable workers, and improve diversity, access, and inclusion in the workplace; used correctly, AI tools can also make the hiring process faster and more efficient while reducing conscious and unconscious bias. Yet AI has also created new risks of employment discrimination, particularly when designed or used improperly, and it has become a focus of targeted efforts by federal and state enforcement agencies and legislators. Employers need to be smart, transparent, and knowledgeable about how they use AI in the workplace.

EEOC’s AI Challenges – Recruiting and Hiring

The use of artificial intelligence in the workplace has been on the radar of federal regulators such as the U.S. Equal Employment Opportunity Commission (EEOC) for several years. The EEOC now intends to develop technical assistance, guidance, audit tools, or other parameters for the development, understanding, and responsible use of artificial intelligence. AI is a “high priority” topic in the EEOC’s 2023-2027 Strategic Enforcement Plan (SEP), which we recently summarized. For the first time, the EEOC said it will take into account “employers’ increasing use of automated systems, including artificial intelligence or machine learning,” in hiring and other employment decisions. The SEP reaches employers that “use software that incorporates algorithmic decision-making or machine learning, such as artificial intelligence; use automated recruitment, selection or production or performance management tools; or other existing or emerging technology tools used to make employment decisions.” It cautions employers to be intentional and careful when using new technology to aid decision-making.

One of the EEOC’s top priorities is removing barriers to recruitment and hiring. According to the EEOC, the use of artificial intelligence can cause discrimination in hiring and employment in three ways:

  • First, artificial intelligence may be used unlawfully to “target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely affect protected groups.” The EEOC has cited examples in which employers allegedly programmed their application software to automatically reject applicants over a certain age.
  • Second, the use of artificial intelligence can result in “restricted application processes or systems, including online systems that are difficult to access for people with disabilities or other protected groups.” The EEOC previously issued guidance, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022), to help private employers comply with the Americans with Disabilities Act (ADA) when using AI. The EEOC states that ADA liability can arise in three situations: (i) the employer fails to provide a reasonable accommodation necessary for the individual to receive a fair and accurate assessment; (ii) the tool screens out an individual with a disability even though that individual can perform the job with a reasonable accommodation; and (iii) the tool violates the ADA’s restrictions on disability-related inquiries and medical examinations.
  • Third, there is a risk that artificial intelligence tools will “disproportionately affect workers based on their protected status.” Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act (ADEA), and the ADA have long prohibited selection procedures, such as pre-employment tests, interviews, and promotion exams, that discriminate against workers based on their protected status. The EEOC intends to prioritize applying these long-standing hiring and selection rules to screening tools that use artificial intelligence, and to develop more modern criteria and guidance.

What’s next?

On January 31, 2023, the EEOC held a public hearing, “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.” The hearing was part of the EEOC’s AI and Algorithmic Fairness Initiative, through which the Commission seeks to ensure that workplace technologies are used in a way that advances accessibility, diversity, equity, and inclusion. The EEOC ultimately seeks to “guide employers, employees, job applicants, and vendors to ensure that these [AI] technologies are used fairly and consistently with federal equal employment opportunity laws.” According to EEOC Chair Charlotte A. Burrows, the Commission is still evaluating exactly how to do this; it is interested in collecting additional data on the use of AI tools and aims to educate stakeholders and combat algorithmic discrimination.

The EEOC is not the only federal agency focused on technology in the workplace. We have previously warned about the NLRB General Counsel’s plan to crack down on electronic surveillance in the workplace based on concerns that it interferes with workers’ rights to engage in protected concerted activity. If successful, the General Counsel’s initiative could slow the growth of “smart” workplaces in the United States.

Many states have already addressed workplace surveillance and other privacy concerns stemming from the use of technology, and we expect this trend to continue. Some states and local governments are implementing or proposing robust laws and regulations targeting automated employment decision tools that could further change the legal landscape. As a result, it is important for employers to be aware of state and local laws regarding the use of artificial intelligence, electronic surveillance and other technologies.

Employer AI Best Practices

Ultimately, as technology continues to advance, legal issues and potential liability related to the use of artificial intelligence in employment decision-making will continue to emerge. While the legal landscape remains unsettled, there are a number of best practices employers can follow to manage the risks of AI tools.

(1) Know your data. Employers should exercise caution when processing, using, or modifying data used to train or operate AI tools that make employment decisions. Incomplete or erroneous data can skew an AI tool’s machine learning. Ask vendors about the technology they use, and make sure you understand the algorithms and mechanics behind the automated process. A minimal data-quality check is sketched below.
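As a rough illustration of what “knowing your data” can mean in practice, the following Python sketch flags missing values and under-represented groups in a training dataset before it is handed to a model. The file name and column names ("applicants.csv", "gender", "race") are hypothetical assumptions for illustration, not any particular vendor’s schema.

```python
# Hypothetical data-quality check before training or auditing a hiring model.
# File and column names are illustrative assumptions only.
import pandas as pd

def summarize_training_data(df: pd.DataFrame, protected_cols: list[str]) -> None:
    """Flag basic data problems that can skew what an AI tool learns."""
    # 1. Missing or erroneous values: incomplete records can bias training.
    missing = df.isna().mean()
    missing = missing[missing > 0].sort_values(ascending=False)
    if missing.empty:
        print("No missing values detected.")
    else:
        print("Share of missing values per column:")
        print(missing.to_string())

    # 2. Representation: a severely under-represented group is a warning sign
    #    that the tool may perform poorly (or unfairly) for that group.
    for col in protected_cols:
        print(f"\nDistribution of '{col}':")
        print(df[col].value_counts(normalize=True).round(3).to_string())

if __name__ == "__main__":
    applicants = pd.read_csv("applicants.csv")  # hypothetical training file
    summarize_training_data(applicants, protected_cols=["gender", "race"])
```

A check like this will not catch every problem, but it makes the first question to a vendor concrete: what data was the tool trained on, and who is missing from it?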

(2) Disclose the tool and methodology. Be transparent with applicants and employees about how AI will be used; transparency increases trust and credibility and, in turn, the benefits of AI systems.

(3) Conduct periodic audits. Monitor and audit AI applications and processes to proactively detect both intentional misuse and unintended discriminatory outcomes; one common statistical screen is sketched below.
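One widely used starting point for such an audit is the “four-fifths rule” of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the rate for the highest-scoring group is a common red flag for adverse impact. The Python sketch below shows the arithmetic; the group names and counts are hypothetical.

```python
# A minimal sketch of the EEOC "four-fifths rule" screen for adverse impact.
# All group names and counts below are hypothetical.

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> None:
    """outcomes maps group name -> (number selected, number of applicants)."""
    rates = {group: sel / total for group, (sel, total) in outcomes.items()}
    top_rate = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / top_rate  # "impact ratio" relative to the top group
        flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")

if __name__ == "__main__":
    # Hypothetical results from an AI resume-screening tool.
    four_fifths_check({
        "group_a": (48, 100),  # 48% selected (highest rate)
        "group_b": (33, 100),  # 33% selected -> ratio 0.69, flagged
    })
```

Note that the four-fifths rule is a rule of thumb rather than a legal safe harbor; smaller disparities can still support a disparate impact claim, and formal statistical testing is often warranted for larger applicant pools.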

(4) Exercise human control. Consider the points at which humans should be involved in the employment decision-making process. Employers should designate a team to review the processes and outputs of AI tools to ensure that they serve legitimate goals and avoid discriminatory outcomes.

(5) Review vendor contracts. Carefully review agreements with vendors that provide automated decision-making systems to ensure that responsibility for the integrity and validity of the AI tool is clearly allocated to the vendor.

A final note: this is only a brief overview of the interplay between artificial intelligence and anti-discrimination laws, and the area is a moving target.
