
By Asif Upadhye – HR Technologists

Artificial Intelligence is taking the world of Human Resources by storm. However, HR has always been a field where new technology meets some skepticism. With AI promising to boost HR productivity by leaps and bounds, here are five possible pitfalls to avoid.

The use of artificial intelligence in HR is a rising trend, spanning everything from recruiting and talent acquisition to payroll and reporting. For the many administrative tasks that are repetitive and eat into much of an employee’s time, AI is being heralded as a rescuer. And it’s not just time that gets saved: repetitive, monotonous tasks often suffer from poor execution when carried out manually over long periods, whereas AI promises greater consistency, accuracy, and precision. With quality data and rapid computation, AI can look like the answer to everything.

However, just like any new technology, the use of AI needs to be viewed multi-dimensionally before incorporating it into work processes. The more aware we are of possible pitfalls, the more we can optimize machine learning and artificial intelligence to our advantage.

Here are five mistakes HR departments should avoid in order to use AI to its full potential, freeing HR professionals to focus on strategy and ideation.

Not respecting the importance of bias elimination

In 2017, Amazon had to stop using an AI tool that aided in hiring because it was found to be biased against female applicants. Though the hiring process seemed smoothly mechanized, a deep dive showed that the models had used machine learning to find patterns in resumes sent to the company over the previous ten years. Given the male-dominated nature of the tech sector, most of those resumes came from men. The system therefore downgraded profiles submitted by women by catching terms containing the word “woman” or “women”, effectively treating profiles devoid of these words as the most desirable.

The key in such cases is to exclude certain data from consideration: put in place a system where diversity attributes like gender are never fed to the machine-learning program as parameters, keeping the focus solely on job-related metrics. The whole concept of AI is that humans are training technology to behave like humans, and if humans have harbored biases for centuries, it is only natural that these get unconsciously taught to artificial intelligence programs. That is why it is advisable to be extra conscious of human biases and leave no room for the machine-made errors that follow from them.
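As a rough illustration of what “excluding certain data” can look like in practice, the sketch below simply never passes protected attributes to the model. It is a minimal example under assumptions, not a production pipeline: the file name, column names, and model choice are all hypothetical.

    # Minimal sketch: protected attributes are dropped before training so the
    # model only ever sees job-related fields. All names here are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    applicants = pd.read_csv("applicants.csv")   # hypothetical applicant data

    PROTECTED = ["gender", "age", "ethnicity"]   # diversity attributes to exclude
    X = applicants.drop(columns=PROTECTED + ["hired"])  # job-related features only
    y = applicants["hired"]                      # historical hiring outcome

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("Held-out accuracy:", model.score(X_test, y_test))

Dropping the columns is only a starting point; other fields can act as proxies for the excluded ones, so hiring outcomes still need to be audited across groups.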

Assuming that AI can take over purely human roles

While narrowing down a lengthy list of applicants to a more manageable shortlist based on certain parameters is definitely a role that can be transferred to artificial intelligence, it is unreasonable to expect AI to carry out more human roles than this. A case in point is the AI tool released by Faception, an Israeli start-up. The company claimed that the tool could analyze a candidate’s facial structure and, from it, gauge their IQ, personality type, and even their tendency toward violent behavior. On its website, it showed various facial structures labeled “high IQ”, “academic researcher”, “professional poker player”, and “terrorist”.

This is where claims about technology’s potential go from ridiculous to unquestionably discriminatory. Personality tests, personal interviews, group discussions, and psychometric evaluations are all just as relevant and necessary today as they were a decade ago. This must never be overlooked: HR decisions are about people and should remain people-driven.

Not valuing the privacy and consent of employees

AI can most certainly be a force that effects positive change in the workplace. From monitoring workplace issues like interpersonal relations and engagement levels to keeping track of individual stress levels and productivity, AI tools can not only alert leaders to their team’s general morale but can also predict incidents of data theft and cyber-attack by flagging unusual activity.

As part of the Human Behaviour Change Project, a Knowledge System is under development that will provide all of this information and also help determine the best interventions for reversing behavioral changes that adversely affect a team or an individual. There is a lot of good here, but from an employee’s point of view there is also a lot of potential invasion of privacy. Such processes require utmost transparency, and their implementation should rest on the consent of the workforce. It is also imperative to put extra security measures in place so that information stays confidential and is accessible only to authorized personnel. And if individuals have access to their own data, the system must be able to mask it so that no one else can view the same records.
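One simple way to think about that masking requirement is an access check that returns real values only to the record’s owner or an authorized role. The sketch below is purely illustrative; the record fields, role names, and masking rule are assumptions, not a description of any particular product.

    # Minimal sketch of per-employee masking: real values are returned only to
    # the record's owner or an authorized role. Fields and roles are assumed.
    from dataclasses import dataclass, replace

    @dataclass
    class WellbeingRecord:
        employee_id: str
        stress_score: float
        engagement_score: float

    def view_record(record: WellbeingRecord, requester_id: str, requester_role: str) -> WellbeingRecord:
        if requester_id == record.employee_id or requester_role == "hr_admin":
            return record
        # Everyone else gets a masked copy rather than the raw scores.
        return replace(record, stress_score=float("nan"), engagement_score=float("nan"))

    rec = WellbeingRecord("emp-001", stress_score=0.72, engagement_score=0.41)
    print(view_record(rec, requester_id="emp-002", requester_role="manager"))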

Not laying emphasis on sizeable, good quality data sets

Managing and implementing AI in HR processes calls for caution, as well as considerable skill and expertise. The quality of the data and the size of the data set both matter greatly. Just as sound statistical inference requires samples of significant size, so does data analysis: the more data a model has to learn from, the more reliable its decisions tend to be.

Additionally, feeding a large amount of data into machine-learning models helps cover a wider range of scenarios, side-stepping gaps in decision-making that could cause serious problems later.
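One way to see the effect of data volume is a learning curve: train the same model on progressively larger slices of the data and watch how validation accuracy changes. The sketch below uses synthetic data purely to illustrate the shape of that curve; the figures carry no HR meaning.

    # Rough sketch: the same model trained on progressively larger slices of
    # synthetic data, showing how validation accuracy varies with data volume.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import learning_curve

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    sizes, _, val_scores = learning_curve(
        LogisticRegression(max_iter=1000), X, y,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

    for n, score in zip(sizes, val_scores.mean(axis=1)):
        print(f"training examples: {int(n):5d}  cross-validated accuracy: {score:.3f}")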

Not up-skilling before widespread implementation of AI processes

The most obvious, but perhaps the most important point – HR professionals need to become more data-fluent. It is important for those in Human Resources to view their function as data-driven. Implementing a range of AI-backed processes without the requisite training, insight, and guidance from domain experts can not only leave the HR department in the lurch but can also lead to faulty use of AI tools, distorting decision-making and costing the organization dearly, not to mention a whole host of potential employees.

Not just that: poorly managed AI systems succumb to the very same human biases they are designed to side-step. Starting small is therefore advisable. Once preliminary training, acclimatization to the rapid digitization of business, and expert advice have been absorbed, the HR department will be ready to incorporate more sophisticated tools.

We are living through an era of humans teaching machines to be more human. Amid the sometimes overwhelming irony of it all, it is important to remember that the machines will do their job well only if we do ours.
