The Ethics of Using Artificial Intelligence in Hiring

Business owners place high value on systems that help them streamline operations, improve efficiency, and reduce tedious workloads. It’s no wonder that the popularity of Artificial Intelligence (AI) has exploded in recent years, as developers continue to discover new applications for the technology.

With regard to hiring, AI systems can improve an organization’s ability to screen and identify suitable applicants. More to the point, AI can potentially get the job done faster than before, and without the excessive recruiting hours employers would otherwise pay for.

AI hiring systems might benefit applicants, too. Job candidates could receive data-driven feedback on their strengths, weaknesses, and potential career paths, as AI can assess everything from a candidate’s linguistic skills to their emotional state.

These non-human tools can potentially provide valuable data, but ethical questions loom. Can we really trust bots to analyze human behavior? Are the results fair? Or, could the technology carry negative consequences and impact equal opportunity among job applicants? After all, AI hasn’t been around long, nor has it been tested over many decades. Can we trust AI as much as other methods of data gathering?

Federal laws, such as the Americans with Disabilities Act (ADA), along with numerous state laws, protect the privacy of job applicants. Employers are prohibited, or at least strongly discouraged, from asking questions about family status, political orientation, sexual orientation, pregnancy, physical disability, mental illness, and other highly personal matters. Yet new AI technologies may be capable of discerning this information without the job candidate’s consent.

On the other hand, psychometric personality evaluations have long been accepted as job-screening tools, provided they do not adversely impact protected groups (based on age, gender, sexual orientation, religion, and so on). In many cases, personality assessments can accurately predict an applicant’s suitability for, and potential success in, certain jobs.

And what about AI systems that scan an applicant’s social media activity? Is that ethical when social media users never agreed to have their content used in this way? Ostensibly, people posting on social media are not doing so for the purpose of applying for employment, and they could therefore argue that they did not consent to having their content evaluated in that light.

Numerous cases have already demonstrated the ability of AI systems to develop biases, especially with regard to race and gender. Amazon, for example, abandoned an automated talent-search program after discovering the algorithm had begun to favor candidates of one gender over another.

Emerging technologies already blur the line between public and private life, and we have no reason to believe this trend will reverse. As developers continue to hone their products, employers will gain increasing access to private information about applicants and employees. There are no easy answers yet to the ethical challenges posed by AI technology, but the law will surely take up the matter in our nation’s courtrooms. Until then, consult with our business planning attorneys about your ethical and legal questions regarding your organization’s use of technology.
