Measuring and Improving Software Recruiters’ Performance

Every thought leader since Deming has extolled the virtue of measuring whatever we need to improve. I recently read this article; it suggests seven metrics to measure a recruiter's performance, and many more articles and performance-scorecard suggestions have been published. What we need are simple metrics that can be ascertained quickly, without investing in specialized software. In this article we explore two such metrics for measuring software recruiters' performance and ways of improving it. (By definition, a software recruiter specializes in hiring software professionals.)

In this post we focus on the recruitment process as a whole: recruiters, hiring managers, other members of interview panels, recruitment consultants and agencies, and candidates. The combined effect of their individual behaviors produces the inefficiencies of the recruitment process.

Tall Claims

Here are some typical characteristics of the software job-seekers' market. Candidates often claim far more in their resumes than their real hands-on experience. Recruiters, particularly those who are experts at Boolean search, rely heavily on what the resume claims: they base their search on keywords and extrapolate an individual's capabilities from the companies worked for and the schools attended. The best way to separate substance from hype is a short telephone conversation. Just a few questions will have candidates themselves revealing where their real strengths lie and what should be ignored.

Spray & Pray

At this stage let us introduce two metrics to measure the efficiency of a source of candidates, such as a recruiter or an agency.

Recall of a source measures its reliability, or the spread of its coverage of the total population of suitable candidates. This is tough to measure directly because we don't know the total population of suitable candidates currently looking for a change. As a proxy, we can replace the "total population" with the "total known number": the sum of suitable candidates sourced from all sources, including employee referrals, direct applicants, agencies, and recruiters.

Precision of a source measures the number of suitable candidates sourced as a percentage of the total number of candidates sourced. It shows what share of the sourced candidates was useful and what share of the sourcing effort was waste. It is easy to measure: take the ratio of candidates found worthy of a second interview to the total number of resumes coming from the source.

Candidates sourced but not found suitable are called false positives; the effort spent interviewing them is wasted and needs to be minimized. Similarly, candidates who were suitable but were not sourced are called false negatives; they indicate lower reliability of the source, that is, a weaker ability to find suitable people.
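To make these two metrics concrete, here is a minimal sketch in Python. The source names and counts are invented for illustration; in practice the numbers would come from your applicant-tracking records.

```python
# Minimal sketch: precision and recall per candidate source.
# All names and counts below are invented for illustration.

# For each source: resumes received, and how many of those candidates
# were found worthy of a second interview ("suitable").
sources = {
    "Agency A":           {"sourced": 120, "suitable": 12},
    "Recruiter B":        {"sourced": 40,  "suitable": 10},
    "Employee referrals": {"sourced": 15,  "suitable": 8},
}

# Proxy for the unknowable "total population" of suitable candidates:
# the total number of suitable candidates known across all sources.
total_known_suitable = sum(s["suitable"] for s in sources.values())

for name, s in sources.items():
    precision = s["suitable"] / s["sourced"]       # useful share of resumes sent
    recall = s["suitable"] / total_known_suitable  # coverage of the known suitable pool
    print(f"{name:<20} precision={precision:.0%}  recall={recall:.0%}")
```

On these invented numbers the agency has the highest recall but by far the lowest precision, which is exactly the spray-and-pray pattern discussed next.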

The main reason for false positives is that many recruiters and agencies focus single-mindedly on improving recall. Their intent is to raise the probability of finding a match by sourcing as many resumes as possible. This "spray and pray" approach results in a lot of wasted effort interviewing false positives.

Telephonic Round

If, on the contrary, a recruiter applies a filter and reduces the total number sourced through a preliminary telephonic round, it will reduce false positives and improve precision. The upside is a better deal for hiring managers: less interviewing, better results.

A majority of hiring managers believe that recruiters can't really do any technical screening. Recruiters do keyword-based search without going deeper to find out whether a candidate actually has the relevant technical skills. This creates a communication gap between recruiters and hiring managers: hiring managers don't believe that feedback any more detailed than "technically unsuitable" would be understood by recruiters.

We believe that recruiters can be trained to do preliminary technical screening. Guidance in the form of technical questions that weed out obviously unsuitable candidates can improve recruiters' ability to judge.

Small Batches

More meaningful feedback arriving more frequently will improve precision and reduce wasted effort and interviewing fatigue. Smaller batch sizes help get early feedback, which translates into corrective action in the form of improved technical filtering. Baby steps of small batches, each one improving precision iteratively, seem like the way we should hire technical talent; the sketch below illustrates the idea.
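As a rough sketch of that iteration (the batch size, threshold, and outcomes are all assumed values, not data from any real pipeline), one could compute precision batch by batch and tighten the screening questions whenever it dips:

```python
# Illustrative sketch: track precision over small batches of candidates
# and flag when the technical filter needs tightening. The batch size,
# threshold, and outcomes are assumed values.

BATCH_SIZE = 5
PRECISION_THRESHOLD = 0.4  # below this, revisit the screening questions

# 1 = candidate cleared the technical interview, 0 = false positive.
outcomes = [1, 0, 0, 0, 0,  0, 1, 1, 0, 0,  1, 1, 1, 0, 1]

for i in range(0, len(outcomes), BATCH_SIZE):
    batch = outcomes[i:i + BATCH_SIZE]
    precision = sum(batch) / len(batch)
    action = ("tighten screening questions"
              if precision < PRECISION_THRESHOLD else "keep going")
    print(f"batch {i // BATCH_SIZE + 1}: precision={precision:.0%} -> {action}")
```

The point is not the particular numbers but the cadence: each small batch yields a precision reading early enough to correct course before the next batch.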


Dos and Don'ts of Lean Startup: Top Takeaways from The Lean Startup Conference 2014

Lean Startup Conference 2014

Main Takeaway: Continuous Experimentation Well Beyond the Startup Stage

Contrary to the generally held belief that lean startup principles advise experiments only in the early stages of a startup, many speakers at the conference showed how they experiment continuously at all stages of their ventures.

Eric Ries said: "Product-market fit and experimentation is not a one-time activity. It's a continuous flow of activities. There are no discrete big jumps! Think of these steps as a continuous flow that lets you go back if an experiment fails."

Hiten Shah of Kissmetrics reiterated that a meaningful metric leads to a hypothesis and then to an experiment to validate it. Startups should always be A/B testing. Empirically, one out of five tests succeeds; strive to win 1.67 out of five (a one-in-three hit rate).

A/B testing helps not only at different stages of a startup but also across many activities: website traffic, app installs, welcome emails, web/mobile onboarding, email digests, triggered notifications, and re-engaging dormant or churned users.
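As one way to act on "always be A/B testing", here is a minimal sketch of a two-proportion z-test on invented conversion numbers. It is a generic statistical check, not anything specific that Kissmetrics uses.

```python
# Minimal A/B test sketch: two-proportion z-test on invented data.
from math import erf, sqrt

def ab_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    in conversion rates between variant A (control) and variant B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tails
    return z, p_value

# Invented example: a reworded welcome email (B) vs. the current one (A).
z, p = ab_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z={z:.2f}, p={p:.3f} -> {'ship B' if p < 0.05 else 'keep testing'}")
```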

Des Traynor also said that continuous feedback is more valuable than one-time, event-driven feedback.

Experiments helped even established brands like Rally, Google, and Vox Media validate hypotheses at later stages of their product lifecycles:

  • Rally launched a dummy brand, waffle.io, targeted at developers, to shield the parent brand from impressions created by the experiments. In the end Rally decided to keep both brands.
  • Google's AdSense team validated partner problems using Lean Startup principles. Blair Beverly said they faced a problem with new projects scaling too early and failing because there was no historical data to go by. He got coworkers on the AdSense team to use The Lean Startup: they scheduled office time to read the book, staying helpful rather than pushy, and handed out a reading guide with questions. In the end they identified three hypotheses and put together templates such as the partner-problem hypothesis. People felt good about invalidating their own hypotheses, since it saved them work that would otherwise have been wasted.
  • Vox Media launched Vox.com in nine weeks, using analytics to guide customer validation. Melissa Bell got her co-founders and others from Vox Media into the same room to get everyone on the same page about her vision. Much of the editorial staff came from The Washington Post, whereas Vox was an agile technology company. They used card stacks for flexibility, but editors found them difficult to navigate, so analytics were used to solve the problem. Vox.com now has 22 million users and delivers content to users where they are: on social channels such as Facebook or YouTube rather than only on its own URL.

Lean Startup: Dos & Don'ts

Max Ventilla

  • Pivoting statistics: 80% of failures didn't pivot, 65% of successes pivoted, but 85% of huge successes (>$1B exits) didn't pivot. Those who didn't pivot felt that evolution is safer than betting on intelligent design.
  • Eat your own dog food: use your product to solve your own problems. If you don't, you are at an enormous disadvantage.
  • Invert the org chart: customers and the customer-facing team should be on top. They should be heard, not told what to do.
  • Force yourself, at the earliest possible moment, to pretend to be what you want to be, so you learn whether it's worth being. Landing pages, concierge tests, and Wizard of Oz tests are ways to pretend.
  • Don't speed up for its own sake. For startups, not going fast enough is not the main risk; the false summit is the reality. A startup's journey is slow, like a mountaineer's: a new goal appears once you have reached what seemed like the ultimate one.

Grace Ng

  • According to Grace Ng, the success criterion for any experiment is the weakest outcome that will give you enough confidence to move forward.
  • Testing the riskiest assumption on the buy side of a two-sided marketplace can be deceptive in a sellers' market: sellers may not automatically follow even if you find many buyers.
  • A validated hypothesis doesn't necessarily lead to a viable business. Grace Ng tested the hypothesis that birdwatchers would post photos to ask questions; the hypothesis was valid, but the problem turned out to be too small, not a big pain point.
  • Don't validate the solution before validating the problem. As in the case above, the problem was not big enough even though the solution was right.

Eric Ries

  • When it takes too long to learn because end results take time, use a proxy metric such as the number of likes, or start a cohort.
  • Don't depend on one experiment to determine product-market fit. Keep testing and validating along the way as you grow; growing too fast by taking product-market fit for granted is dangerous.
  • Don't be misled by corporate America's habit of underinvesting or overinvesting. "All hands on deck" sounds great but is surely a sign of overenthusiasm.
  • Avoid handing off innovation between silos. Handoffs kill innovation: what is learned in one silo can't be handed off to another.
  • Don't add features for their own sake. It's better to err on the side of being too minimal, to get early feedback and learning; it's easy to add a missing feature later.
  • Pay more attention to paid users' feedback than to free users'. Free users ask for more; paid users ask for better.
  • Don't use vanity metrics. Eric's law: at any time, no matter how badly you are doing, there is at least one Google Analytics graph that's up and to the right.

Joanne Molesky

  • Just as you go through build-measure-learn cycles for your product, you should go through build-measure-learn cycles for process compliance.
  • Beware of developers' tendency to focus on how to do things rather than on outcomes. Developers tend to ignore security because they are dazzled by technology, focusing on doing things faster, not safer. Security testing, threat models, and risk metrics should be included right from the beginning, not bolted on at the end.

Dan Milstein

  • Don't take idle pleasantries as positive feedback. People tend to be polite and cordial even when they are not interested at all.
  • Don't choose to see only what fits a narrative that sounds good and makes you look awesome; that is self-deception. Realize that a startup is a series of unpleasant encounters with reality.
  • Don’t own a plan. Own questions. Plans will change.

Hiten Shah

  • Test small changes: Google sign-on and changes to verbiage improved acquisition by 314% for KissMetrics.

Brant Cooper

  • Don't ask the two questions that kill breakthrough innovation: what is the ROI, and when do we get it? To answer them we have to look at existing markets, which kills innovation (the innovator's dilemma). We need to build cultures and safety nets for innovators.

Conclusion

Most of these takeaways and dos and don'ts are common sense for any practicing entrepreneur. According to Eric Ries, the Lean Startup process is more widely practiced than talked about. Most entrepreneurs are agents of long-term change and don't think The Lean Startup is a big deal. As with most profound thoughts, it seems obvious once it is well thought through, well organized, and well presented.