Every thought leader since W. Edwards Deming (1900-1993), the American engineer, statistician, professor, author, lecturer, and management consultant, has extolled the virtue of measuring whatever we need to improve. I recently read “7 Recruiting Metrics You Should Really Care About” by Paul Slezak, which suggests seven metrics for measuring a recruiter’s performance. Many more articles and performance-scorecard templates have been published. What we need are simple metrics that can be ascertained quickly, without investing in specialized software. In this post, we explore two such metrics for measuring a software recruiter’s performance and ways of improving it. By a software recruiter we mean one who specializes in hiring software professionals.
Our focus is the recruitment process as a whole, including recruiters, hiring managers, other members of the interview panels, recruitment consultants and agencies, and candidates. The combined effect of their individual behaviors produces the inefficiencies of the recruitment process.
Here are some typical characteristics of the software job-seekers’ market. Candidates often claim more in their resumes than their actual “hands-on” experience warrants. Recruiters, particularly those who are expert at Boolean search, rely on what is claimed in the resume. They base their search on keywords and extrapolate an individual’s capabilities from the companies worked for and the schools attended. The best way to separate substance from hype is a short telephone conversation: just a few questions will reveal where a candidate’s real strengths lie and which claims should be discounted.
Let us now introduce two metrics for measuring the efficiency of a source of candidates, such as a recruiter or an agency.
Recall of a source measures its reliability, that is, how much of the total population of suitable candidates it covers. This is hard to measure directly because we do not know the “total population” of suitable candidates who are currently looking for a change. As a proxy, we can replace the “total population” with the “total known number”: the sum of suitable candidates obtained from all sources, including employee referrals, direct applicants, agencies, and recruiters.
Precision of a source measures the number of suitable candidates it sourced as a percentage of the total number of candidates it sourced. It shows what percentage of the sourced candidates were useful and what percentage of the sourcing effort was “waste”. It is easily measured as the ratio of candidates found worthy of a second interview to the total number of resumes coming from the source.
Candidates who were sourced but not found suitable are called false positives; the effort spent interviewing them is wasted and needs to be minimized. Candidates who were suitable but were not sourced are called false negatives; they indicate lower reliability of the source in terms of its ability to find suitable candidates.
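To make these definitions concrete, here is a minimal sketch in Python of how the two metrics could be tallied per source. The source names and figures are purely illustrative assumptions, not data from a real hiring pipeline.

```python
# Minimal sketch: computing recall and precision per source.
# All source names and numbers below are illustrative assumptions.

sources = {
    # source: (resumes submitted, candidates worth a second interview)
    "agency_a":    (120, 12),
    "recruiter_b": (40, 10),
    "referrals":   (15, 6),
}

# Proxy for the "total population" of suitable candidates:
# the total known number, summed across all sources.
total_known_suitable = sum(suitable for _, suitable in sources.values())

for name, (submitted, suitable) in sources.items():
    false_positives = submitted - suitable              # sourced but unsuitable
    false_negatives = total_known_suitable - suitable   # suitable but missed by this source
    precision = suitable / submitted
    recall = suitable / total_known_suitable
    print(f"{name:12s} precision={precision:.0%} recall={recall:.0%} "
          f"FP={false_positives} FN={false_negatives}")
```

With numbers like these, the agency’s high volume gives it the best recall but the worst precision, which is exactly the trade-off discussed below.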
The main reason for false positives is that many recruiters and agencies are singularly focused on improving recall. Their intent is to improve the probability of finding a match by sourcing as many resumes as possible. This “spray and pray” approach results in a lot of wasted effort interviewing false positives.
By contrast, if a recruiter applies a filter, such as a preliminary phone-screening round, and reduces the total number of candidates submitted, it will reduce false positives and improve precision. The upside is a better deal for hiring managers: less interviewing, better results.
Most hiring managers believe that recruiters cannot really do any technical screening. Recruiters do keyword-based searches without going deeper to find out whether the candidate really has the relevant technical skills. The result is a communication gap between recruiters and hiring managers: hiring managers do not think that feedback any more detailed than “technically unsuitable” would be understood by the recruiters.
We believe recruiters can be trained to do preliminary technical screening. Some amount of guidance, in the form of technical questions that weed out obviously unsuitable candidates, can improve the recruiters’ ability to judge.
More meaningful feedback, delivered more frequently, will improve precision and reduce wasted effort and interviewing fatigue. Smaller batch sizes help get feedback early, so the technical filtering can be corrected before the next batch is sourced. Baby steps of small batches, each one improving precision iteratively, seem like the way we should hire technical talent.
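As a hypothetical illustration of this small-batch loop, the sketch below tracks precision batch by batch and flags when the screening questions need tightening. The batch figures and the target threshold are assumptions, not recommended values.

```python
# Hypothetical sketch of the small-batch feedback loop: compute precision
# after each batch and flag when the technical filter needs tightening.
# The batch numbers and the target threshold are illustrative assumptions.

batches = [
    # (resumes submitted in the batch, candidates worth a second interview)
    (10, 2),
    (10, 4),
    (10, 6),
]

PRECISION_TARGET = 0.5  # assumed target agreed with the hiring manager

for i, (submitted, suitable) in enumerate(batches, start=1):
    precision = suitable / submitted
    status = "OK" if precision >= PRECISION_TARGET else "tighten screening questions"
    print(f"batch {i}: precision={precision:.0%} -> {status}")
```

The point is not the code but the cadence: each small batch yields a precision number and a corrective action before the next batch goes out.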