Automation for the sake of automation is of no value. To be useful, a tool has to solve a real problem or make some aspect of work easier, faster, or of higher quality than it was without the tool.
As more and more technology and A.I. are applied to recruitment, I worry that we often use them only because they are “cool,” or because we believe, without any proof, that A.I. is better at something than people are.
As most of you know, I am a strong proponent of incorporating technology into our inefficient recruitment processes. But I am concerned that we are starting to use technology in ways that appear useful, but that make a simple problem more complicated, promise to do something they cannot do, or infringe on candidate rights.
What are we really achieving when we apply A.I. to analyzing people’s facial reactions? How critical to an employment decision is delving into a candidate’s social life, or superficially assessing their cultural fit or personality? No one is perfect, and many of these tools bring out our imperfections without balancing them against positive characteristics.
What is the net gain from using these kinds of tools? Are the candidates they judge “better” actually performing better? Where is the data that supports their use? And most importantly, what are we losing in trust and relationships?
Being a good recruiter requires creating relationships, nurturing those relationships with conversation, sharing information, and building community. It means helping a candidate find the right opportunity and helping the hiring manager find a competent worker.
Any piece of technology that helps us do that is useful. Community- and engagement-building tools are good examples. Job boards, referral tools, and tools that combine A.I. with human intelligence are also great. So is any tool that gives candidates information about their skill level or matches their skills to available opportunities.
But we need to carefully examine and filter out those that infringe on privacy, work in clandestine ways, or that apply judgements that are superficial or not based on good science. Many tools make bold claims that are not substantiated with any empirical evidence and can potentially be harmful to a candidate’s reputation or ability to find a good opportunity.
I have developed a few rules of thumb that I apply when assessing a particular tool or product. They are in no particular order.
#1. Are its outputs unbiased (or as unbiased as possible)? What is the evidence?
#2. If it is being used for assessment, is it clear to a candidate what is being assessed and how it is being assessed?
#3. If it is assessing a candidate, are the results available to the candidate and does the candidate have the right to discuss them?
#4. Is it built on sound, generally accepted science? Would the majority of experts agree with the algorithms it uses?
#5. Does it foster community and help expand conversation?
#6. Does it make it easier for a candidate to apply or to get answers to questions?
#7. Does it educate or coach the candidate?
#8. Does it make it easier for a recruiter to communicate with a candidate?
#9. Does it foster trust and help strengthen a relationship?
#10. Does it ease communication and connect interested people together?
#11. Does it do something a recruiter cannot do as well?
#12. Is the vendor willing to work with you in a collaborative way?
I suggest that you always conduct pilots and extensive trials, verifying all the data used and produced. Measure the validity of the results before committing to a wide-scale implementation.
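As a rough illustration of what “measuring validity” in a pilot could look like, the sketch below runs two basic checks: the EEOC’s “four-fifths rule” heuristic for adverse impact in selection rates, and a simple correlation between tool scores and later job performance. This is a minimal sketch with hypothetical numbers, not a substitute for a proper validation study by an industrial-organizational psychologist.

```python
def selection_rate(selected, total):
    """Fraction of a group's candidates the tool advanced."""
    return selected / total

def four_fifths_ratio(rate_a, rate_b):
    """EEOC 'four-fifths rule' heuristic: the lower group's selection
    rate divided by the higher group's. Values below 0.8 are a common
    red flag for adverse impact (a screening heuristic, not proof)."""
    low, high = sorted((rate_a, rate_b))
    return low / high

def pearson_r(xs, ys):
    """Pearson correlation between tool scores and later performance
    ratings -- a rough proxy for predictive validity."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical pilot data: group A had 30 of 100 candidates advanced,
# group B had 50 of 100.
ratio = four_fifths_ratio(selection_rate(30, 100), selection_rate(50, 100))
print(round(ratio, 2))  # 0.6 -> below 0.8, warrants a closer look
```

If a vendor cannot walk you through numbers like these from your own pilot, that alone is worth noting against rule #1.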
Let’s try to use better judgement in both creating and implementing new tools. And let’s try to live by a set of rules that guide our decisions and make it easier to build trust with candidates and credibility with hiring managers.