Measuring and Improving Software Recruiters’ Performance

Every thought leader since Deming has extolled the virtue of measuring whatever we need to improve. I recently read an article that suggests seven metrics to measure a recruiter’s performance, and many more articles and performance-scorecard templates have been published. What we need are simple metrics that can be quickly ascertained without investing in specialized software. In this article we explore two such metrics for measuring software recruiters’ performance and ways of improving it. By definition, a software recruiter specializes in hiring software professionals.

In this post we will focus on the recruitment process as a whole, including recruiters, hiring managers, other members of the interview panels, recruitment consultants and agencies, and candidates. The combined effect of their individual behaviors produces the inefficiencies of the recruitment process.


Here are some typical characteristics of the software job-seekers’ market. Candidates often claim far more in their resumes than their real “hands-on experience”. Recruiters – particularly those who are experts at Boolean search – rely heavily on what is claimed in the resume: they base their search on keywords and extrapolate an individual’s capabilities from the companies worked for and the schools attended. The best way to separate substance from hype is a short telephonic conversation. Just a few questions will have candidates themselves telling you where their real strengths lie and what should be ignored.


At this stage let us introduce two metrics to measure the efficiency of a source such as a recruiter or an agency providing candidates.

Recall of a source measures its reliability, or the spread of its coverage of the total population of suitable candidates. This is tough to measure because we don’t know the total population of suitable candidates who are currently looking for a change. As a proxy, we can replace the “total population” with the “total known number”: the number of suitable candidates sourced from all sources combined, including employee referrals, direct applicants, agencies and recruiters.

Precision of a source measures how many suitable candidates it provided as a percentage of the total number of candidates it provided. It shows what percentage of the sourced candidates were useful and what percentage of the sourcing effort was “waste”. It is easily measured as the ratio of candidates found worthy of a second interview to the total number of resumes coming from the source.

Candidates sourced but not found suitable are called false positives – the effort spent interviewing these candidates is wasted and needs to be minimized. Similarly, candidates who were suitable but were not sourced are called false negatives – indicating lower reliability of the source in terms of its ability to find suitable candidates.
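The two metrics reduce to simple counts. Here is a minimal sketch in Python; the function name and the sample numbers are invented for illustration:

```python
# Illustrative sketch: precision and recall of a single sourcing channel.
def source_metrics(suitable_from_source, total_from_source, suitable_all_sources):
    """Precision: share of this source's candidates found suitable.
    Recall: share of all known suitable candidates that this source found."""
    precision = suitable_from_source / total_from_source
    recall = suitable_from_source / suitable_all_sources
    return precision, recall

# Hypothetical numbers: an agency sent 50 resumes, 10 were called for a
# second interview; 25 suitable candidates came from all sources combined.
precision, recall = source_metrics(10, 50, 25)
print(precision)  # 0.2 -> 80% of the interviewing effort was "waste"
print(recall)     # 0.4 -> the agency covered 40% of known suitable candidates
```

A recruiter chasing recall alone pushes the denominator of precision up; the phone-screen filter described below pushes it down.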

The main reason for false positives is that many recruiters and agencies are singularly focused on improving recall. Their intent is to improve the probability of finding a match by sourcing as many resumes as possible. This “spray and pray” approach results in a lot of wasted effort interviewing false positives.


On the contrary, if a recruiter applies a filter – a preliminary telephonic round – and reduces the total number sourced, it will reduce false positives and improve precision. The upside of this approach is a better deal for hiring managers, who do less interviewing yet get better results.

The majority of hiring managers believe that recruiters can’t really do any technical screening. Recruiters do keyword-based search, without going deeper to find out whether the candidate really has the relevant technical skills. This results in a communication gap between recruiters and hiring managers: hiring managers don’t think that feedback any more detailed than “Technically Unsuitable” would be understood by the recruiters.

We believe that recruiters can be trained to do preliminary technical screening. Some amount of guidance in the form of technical questions that weed out obviously unsuitable candidates can improve the recruiters’ ability to judge.


If more meaningful feedback comes more frequently, it will improve precision and reduce wasted effort and interviewing fatigue. Smaller batch sizes would help get early feedback, resulting in the corrective action of improved technical filtering. Baby steps of small batches, each one improving precision iteratively, seem like the way we should hire technical talent.



Top Takeaways from Nasscom Product Conclave 2014

Insights into startup ecosystems of the US and Israel

The technorati of India descended on Bangalore for the annual Nasscom Product Conclave 2014 on 30th and 31st October. Here are some top takeaways from the conference, with a few from the Pune Connect event that happened on 8th November.

New startups are being launched at a feverish pace in India. India has 3,100 startups, taking it to #3, ahead of Israel, which has only 1,000. Technologies and infrastructure to build software products have become available, and the domestic market has grown significant enough to take note of. Devices at the edge and powerful technologies at the back end are throwing up unprecedented opportunities for startups to innovate. App-to-app communication is exceeding browsing traffic. John McIntyre and Zack Weisfeld presented the evolution of the startup ecosystems in Silicon Valley and Israel.

Startup Ecosystem

Strong universities, which acted as feeders, and the presence of prominent MNCs provided the infrastructure needed for healthy startups in Israel. A few initial successes provided the much-needed boost for startup activity to take off. Military spending and a lenient tax regime from the Government helped. The Israeli Government also promoted VCs and provided exit routes.

The history of Silicon Valley is similar in the role played by the US Government, World War II, and electronic-warfare research at MIT, Harvard and Stanford. John McIntyre said that Silicon Valley is a state of mind. “Free flow of people and ideas is natural. The team you build is more important than the idea itself. There is no stigma attached to failure – you have to fail and reinvent to finally succeed. Innovation happens when you address customer desire in a financially viable product that is technically feasible. Silicon Valley is a melting pot where the magic happens because of the diversity of people.”

India is following in the footsteps of these countries by starting a Government-funded innovation – the Aadhar card program. 700 million cards were issued in 4 years by a team of 20+ developers. Aadhar has developed an API for authentication and KYC (Know Your Customer) which is being consumed by about 500 independent developers. The Aadhar team showed some innovations that will drive the future roadmap. One of them, developed at the MIT Media Lab, was an app that does iris scans using the 1.2-megapixel camera and retina display available in some mobile phones today. Soon Aadhar could make one-click two-factor authentication (like Apple Pay) possible in rural India!

Like the App Store and Google Play, many other platforms – Salesforce, Facebook, LinkedIn and Azure among them – have their own ecosystems of apps. Aadhar could become one such ecosystem.

Dhiraj Rajaram of Mu Sigma cautioned that we shouldn’t get carried away by the hype associated with product startups and should look seriously at services. Services can dynamically provide solutions on the fly to problems as they arise, whereas static products solve only the specific problems they are meant to solve. Tarken Maner also pointed out that of the $3.1 trillion global IT market, only $1.2 trillion is accounted for by hardware and software products – the balance of $1.9 trillion is accounted for by services.

Tips on business and marketing

Business applications want to abstract trust brokering to aggregators of services like Ola Cabs or Flipkart. Promod Haque said that app-to-app communication is exceeding browsing traffic. As users demand mobile-first, some applications are moving to mobile-only. Zomato scrapped their web interface, built a mobile-only app, and only after 6 months moved to build a desktop app. Omnichannel seems to be catching on – it not only accounts for various form factors but integrates digital and physical channels of conducting business. Users get a seamless experience across multiple channels – they can start in a new channel from where they left off in an old one. Tarken Maner said that you can strategically use the channel to differentiate, just the way you traditionally used customer profile or product features to differentiate. B. V. Jagdeesh said that as business applications start to look more like consumer apps, the B2B market provides more opportunities than B2C. Once you acquire 20 customers in the B2B market, you are safe to start building your business on that foundation. Though B2C appears more attractive, sustainable customer acquisition in large numbers makes it harder.

Dhaval Patel of Kissmetrics described how their company scaled its outbound marketing communication. He said that they focused on low-cost channels like Twitter and stayed away from paid conversions. They focused on creating content that their customers loved. He advised startups to join professional groups on social media like LinkedIn to study others’ content, including competitors’ content, and add a new twist to put across a different point of view. Once the content is up, it can be promoted first by e-mail and then by social media campaigns. E-mail and social media are complementary tools and need to be used in conjunction.

Campaigns need to be measured by studying sharing and social-engagement metrics. Qualaroo is a great tool for asking questions of visitors. Vanity metrics can kill ROI. Metrics become meaningful only when they reach the high thousands. Kissmetrics published over 50 infographics and received more than 20k comments. Infographics get hundreds of shares on LinkedIn, Facebook and Twitter.

Dhaval advised startups to “treat content creation as customer service. Measure and optimize your content. Do A/B testing, and stick to a regular schedule to publish content. Images are very important for making people click on content. Create content that teaches. Blogs are cost-effective – e.g. Kissmetrics’ cost per sign-up is as low as $7. Always position top content in the left panel so that it’s easy to find.”

Product Tips

Aakrit Vaish, co-founder of Haptik Inc, said that mobile-first is not just a business strategy – it changes the way we build and use applications. He said that everyone at Haptik uses a low-bandwidth 2G connection so that they can live the experience of an average user. One should use the mobile web if the use case starts in the browser, e.g. with a Google search – this way the user can reach your application in 1 click instead of the 6 needed to download and install an app. Building an app makes more sense if one is leveraging native capabilities like geo-location or push notifications. He said users download and install a number of free apps which they eventually delete.

Omni-channel means unification of the web, mobile and in-store experience – a user switching channels starts where he left off. Lowe’s, essentially a brick-and-mortar company, now offers an omni-channel experience to its customers. Associates who walk the floors of Lowe’s stores can capture conversations about products and share them so that information is not lost. Product-locator kiosks placed at prominent locations in the stores give stock positions. Lowe’s planned ahead for iOS 8 and launched Touch ID. They armed their associates with 42,000 mobile phones, not only for better operations but for a better connection with customers. With more than 500K products online, Lowe’s is a good example of the digital-physical blur. Tesla is another example – it’s more software than car.

Ramesh Raskar of the MIT Media Lab shared his advice on how to invent. He explained it with his idea hexagon and some examples. The hexagon has a question at the center – “Given X, what’s next?” – and the 6 corners show ways of inventing based on the current state X.

Idea Hexagon


  1. Xd – Add a new dimension. E.g. Flickr shares photos; YouTube shares videos.
  2. X+Y – Pair X with Y; the more dissimilar Y is, the better. E.g. retina display for eye checkups.
  3. Xv – Given a hammer, get all nails. E.g. use the mobile phone as a camera.
  4. ~X – Do exactly the opposite. E.g. reverse auctions, toll-free calls.
  5. X++ – Add an adjective like faster, cheaper, cooler or more democratic to X. E.g. Skype for cheaper international calls.
  6. X^ – Given a nail, get all hammers. E.g. LensBricks – an appstore for cameras.

Tips on culture

Employees are demanding more freedom from enterprises. InMobi has granted this freedom to bring about a cultural change in the company. They have stopped using the traditional way of hiring – they now follow “Hiring 2.0”, hiring the best teams at hackathons they conduct. Employees built out their office to suit their liking instead of standard cubicles.

Naveen Tewari said that “You can get 100X the valuation if you get the culture right. Culture is proving to be the disruptive differentiator.” He defined culture as the experiences that the company gives to its customers and employees. Change, innovation, fast failure and learning, and fast iterative growth are difficult to implement without the right culture. InMobi has implemented an open-door policy for employees, who can leave to do their own startup and come back if they fail. They focused on growing instead of managing people. They did away with the performance appraisal system. Connecting with families, including grandparents, and also with ex-employees built the company’s soul.

Jim Ehrhart repeated what was said in an earlier post – the boundaries of enterprises are blurring as we move from workforce to crowdsourcing. IT no longer has the tight grip on what people do that it used to have. Employees want to use apps for everything they do. Many enterprises are planning to build their own enterprise app stores.

Resume Ranking using Machine Learning – Implementation

In an earlier post we saw how ranking resumes can save a lot of the time spent by recruiters and hiring managers in the recruitment process. We also saw that it lends itself well to lean hiring by enabling the selection of small batch sizes.

Experiment – Manually Ranking Resumes

We had developed a game for ranking resumes by comparing pairs, with a reward for the winner. The game didn’t find the level of acceptance we expected, so we had the ranking done by a human expert. It took half a day for an experienced recruiter to rank 35 resumes. Very often the recruiter asked which attribute was to be given higher weightage – experience, location, communication or compensation?

These questions indicate that every time we judge a candidate by his resume, we assign some weightage to various profile attributes like experience, expected compensation, possible start date etc. Every job opening has its own set of weightages, which are implicitly assigned as we compare the attributes of a resume with the requirements of the job opening.

So the resume ranking problem essentially reduces to finding the weightages for each of the attributes.

Challenge – Training Sets for Standard Ranking Algorithms

There are many algorithms to solve the ranking problem. Most fall under the class of “supervised learning”, which needs a training set consisting of resumes graded by an expert. As we saw earlier, this task is quite difficult, as the grade depends not only on the candidate profile but also on the job requirements. Moreover, we can’t afford the luxury of a human expert training the algorithm for every job opening. We have to use data that is easily available without additional effort. We do have some data for every job opening, as hiring managers screen resumes and select some for interview, and it’s easy to extract this data from any ATS (Applicant Tracking System). Hence we decided to use logistic regression, which predicts the probability of a candidate being shortlisted based on the available data.

We have seen that logistic regression forecasts this probability using weightages for the various attributes, learned from which resumes were shortlisted or rejected in the past. The probability in our case indicates whether the candidate is suitable. We use this number to rank candidates in descending order of suitability.
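The ranking step itself is mechanical once the weightages are known. Below is a minimal sketch (not our production code): given an already-learned weightage vector theta, score each resume with the sigmoid and sort descending. The feature values and weightages are made up for illustration.

```python
# Sketch: rank resumes by predicted probability of being shortlisted.
import numpy as np

def rank_resumes(X, theta):
    """X: (n_resumes, n_features) with a leading column of 1s (intercept).
    Returns (indices ordered most->least suitable, probabilities)."""
    z = X @ theta
    prob = 1.0 / (1.0 + np.exp(-z))   # sigmoid -> P(suitable)
    return np.argsort(-prob), prob    # descending by probability

theta = np.array([-1.0, 0.8, -0.3])   # intercept, experience, pay (invented)
X = np.array([
    [1.0, 5.0, 2.0],   # resume 0
    [1.0, 1.0, 4.0],   # resume 1
    [1.0, 3.0, 1.0],   # resume 2
])
order, prob = rank_resumes(X, theta)
print(order)  # [0 2 1] -> resume 0 ranks first
```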

Available Data

In our company we had access to data on the following 13 attributes for about 3,000 candidates screened for about 100 openings over the last 6 months.

1) Current Compensation, 2) Expected Compensation, 3) Education, 4) Specialization, 5) Location, 6) Earliest Start Date, 7) Total Experience, 8) Relevant Experience, 9) Communication, 10) Current Employer, 11) Stability, 12) Education Gap and 13) Work Gap.

We needed to quantify some of these attributes, like education, stability and communication. We applied our own judgment and converted the textual data to numbers.

Data Cleaning

We were unsure whether we would get consistent results, as we were falling short of historical resume data. We ignored openings that had 10 or fewer resumes screened. On the other hand, we also discovered a problem with large training sets – particularly for job openings that drag on and remain open for long, as these are likely to have had changes of requirements. As we learned later, consistent accuracy was obtained for job openings whose training sets were in the range of 40 to 80 resumes.

Running Logistic Regression

We had listed 22 openings for which several hundred resumes were presented to the hiring managers over the last 6 months. We have a record of interviews scheduled based on the suitability of the resumes. We decided to use 75% of the available data to train (the training set) and 25% to test (the test set) our model. The program was written to produce the following output:

  • A vector of weightages for each of the 13 attributes
  • A prediction of whether each resume in the test set would be “Suitable” or “Unsuitable”

The result was evaluated by how accurate the prediction was. Accuracy is defined as:

Accuracy = (True Positives + True Negatives)/ (Total # of resumes in the Test Set)

where “True Positives” is the number of suitable resumes correctly predicted to be suitable, and “True Negatives” is the number of unsuitable resumes predicted as such. We achieved an average accuracy of 80%, ranging from 67% to 95%.
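The accuracy computation above is straightforward to express in code. A sketch with hypothetical labels and predictions (1 = suitable, 0 = unsuitable):

```python
# Accuracy = (true positives + true negatives) / total test-set size.
def accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return (tp + tn) / len(y_true)

y_true = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # hypothetical test-set labels
y_pred = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]   # hypothetical predictions
print(accuracy(y_true, y_pred))  # 0.8 -> one false negative, one false positive
```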

Efforts to improve accuracy

Pay vs Experience – Plot of Suitable Candidates

A plot of suitable and unsuitable resumes on experience vs pay didn’t show any consistent pattern. The suitable resumes tended to be from highly paid individuals with lower experience, which is somewhat counterintuitive. Beyond this, the suitable resumes tended to cluster closer to the center of the graph than the unsuitable ones.

Given the nature of the plot, the decision boundary would be non-linear – probably a quadratic or higher-degree polynomial. We decided to test a 6th-degree polynomial, creating 28 features from the 2 main attributes, viz. experience and pay. We ran the program again with these 28 polynomial features plus the remaining 11 attributes, a total of 39. This improved the accuracy from 80% to 88%. We achieved 100% accuracy for 4 job openings.
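The count of 28 comes from enumerating every monomial x1^i · x2^j with i + j ≤ 6, including the constant term. A quick check:

```python
# Enumerate all monomial exponent pairs (i, j) with i + j <= degree
# for a polynomial in two variables (here: experience and pay).
def polynomial_terms(degree):
    return [(i, total - i) for total in range(degree + 1)
                           for i in range(total + 1)]

terms = polynomial_terms(6)
print(len(terms))  # 28 terms, including the constant term (0, 0)
```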

Regularization had no impact on accuracy, hence we didn’t use a cross-validation set for testing various values of the regularization parameter.

The values of the weightages, or parameters, varied slightly every time we ran the minimization of the cost function. This indicates that the model found a new minimum in the same vicinity on every run, even with no changes to the training-set data.

Some Observations

Varying Weightages for Candidate Profile Attributes

Taking a close look at the chart above, we observe the following:

  • One job opening gives an extremely negative weightage to “Current Compensation” – meaning candidates earning well are not suitable – while it’s just the opposite for most other job openings.
  • The C++ Developer position assigns a positive weightage to “Total Experience” but a negative weightage to “Relevant Experience”. The requirement was for a broader skillset beyond just C++.

We can go on verifying the reasons for what turns out to be a fairly distributed set of weightage values across the attributes. Each job opening involves a pretty much independent assessment of the resumes and candidates.

As expected, we observed that accuracy generally increases with the sample size, i.e. the size of the training set. As mentioned earlier, accuracy was low for job openings that remained open for long and whose selection criteria underwent change.

Catch-22 of Hiring

There is an inherent conflict at the point of basic information acquisition in the hiring process. The question is: how much information should the candidate be required to fill in while uploading his resume? Too much increases his or her work; on the other hand, if minimal information is acquired, hiring managers are left with a whole lot of resumes and very little information. It’s frustrating for hiring managers to read a number of unsuitable resumes before getting to one that is suitable.

Catch-22 Situation in Hiring


To elaborate, let me take the example of my company. We were getting hardly any interesting resumes from our website, so we did away with the lengthy process and made it very simple. Now we have an Apply button in front of each opening on the careers page; all a candidate needs to do is upload his latest resume. But simplifying the process resulted in a whole lot of resumes being uploaded, and now our HR executives spend a significant amount of their time managing resumes. Our hiring funnel in the chart below shows more than 99% of resumes being filtered out to make fewer than 1% offers.

Talent Acquisition Funnel


Should we switch back to our old “elaborate” process? Will the “elaborate” process and form-filling ensure that hiring managers get what they want? The reality, as we learned from experience, is quite the opposite. Really interesting candidates don’t bother to go through the ordeal of “registering” and uploading their resumes. And those who appear to be interesting are often merely that – they “hype up” their resumes to make themselves appear interesting.

I had this problem on my mind when I attended a day-long event focused on applications of machine learning.


Of all the talks, I was most inspired by one by Nilesh Phadke of BMC Software. He demonstrated applications developed for IT support – far away from the world of hiring. However, I felt that the problem I had on my mind could be solved by applying the same machine learning approach.

Information Extraction for Filling up Forms

To automate any workflow, one needs to enter long forms about entities – be it a support ticket or a new candidate for a company. Long forms demotivate users and introduce delay. There is also a tendency to skip non-mandatory fields even when the information is available.

Nilesh demonstrated “formless incident creation”, where the user was allowed to type a complaint into a single text field. As he filled in details, a fuzzy matching algorithm matched the correct entities to the words he was typing, in real time. Not only did it complete the needed form, it also searched and found similar past incidents from a myriad of templates of typical incidents.
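BMC’s actual implementation wasn’t shared, but the core idea of fuzzy matching free text to known entity values can be sketched with Python’s standard difflib. The incident categories below are invented for illustration:

```python
# Toy sketch of fuzzy entity matching: map a free-typed complaint to the
# closest known category, tolerating typos.
import difflib

CATEGORIES = ["network outage", "password reset", "printer failure", "disk full"]

def match_category(typed_text, categories=CATEGORIES):
    """Return the closest known category for free text, or None if nothing is close."""
    matches = difflib.get_close_matches(typed_text, categories, n=1, cutoff=0.4)
    return matches[0] if matches else None

print(match_category("pasword resett"))  # "password reset" despite the typos
```

A production system would match individual words and phrases against many entity fields at once, but the principle is the same.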

Machine Learning Approach to Catch-22 of Hiring

I was immediately reminded of the catch-22 situation in hiring. Most of the information is already present in the resume being submitted. Can we not use an information extraction algorithm to automate this tedious task? Can we have different templates for Developers, QA Engineers, IT Support Engineers and Project Managers? Can we use the extracted information to synthesize micro-resumes – short summaries of less than 500 characters – to help hiring managers quickly read resumes on their mobile devices?

This gave a new direction of thinking to resolve the catch-22 situation. Here is the set of tools that we plan to use:

  • Fuzzy matching – Solr
  • Search – Lucene
  • Natural language processing – OpenNLP
  • Analytics for unstructured text – UIMA

Please stay tuned for updates on how we use machine learning for formless resume acquisition and more efficient search by hiring managers.



Ranking Resumes using Machine Learning

In a recent article we saw how ranking resumes can help us keep WIP within limits to improve efficiency. We also saw an interesting way of achieving this by playing a mobile game. In this article we will see how machine learning can be applied to rank resumes.


This article covers a quick-and-dirty way to get started; it is in no way the ultimate machine learning solution to the resume ranking problem. What I did here took me less than a day of programming. It could serve as an example for students of machine learning.

Problem Formulation

We train the machine learning program using a “training set” of resumes that have been pre-screened by a human expert. The resume ranking problem can then be seen as a simple classification problem: we are classifying resumes into suitable (y=1) or unsuitable (y=0). A standard algorithm for such binary classification problems is logistic regression.


Sigmoid Function Showing Probability of a Resume being Suitable

We know the predictor function yields a value that lies between 0 and 1, as shown in the diagram above. The predictor or hypothesis function hθ(X) is expressed as:

hθ(X) = 1/(1 + e^(-z)), where z = θᵀX

where X is a vector of features like experience (x1), education (x2), skills (x3), expected compensation (x4) etc., which decide whether a resume is suitable. The first feature x0 is always equal to 1.
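The hypothesis function above transcribes directly into code. A minimal NumPy sketch, with invented feature and parameter values:

```python
# Compute h_theta(x): the probability that a resume with features x is suitable.
import numpy as np

def h(theta, x):
    z = np.dot(theta, x)              # z = theta^T x
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid squashes z into (0, 1)

x = np.array([1.0, 4.0, 2.0, 3.0, 1.5])       # x0=1, then made-up features
theta = np.array([-3.0, 1.0, 0.2, 0.1, -0.5]) # made-up weightages
p = h(theta, x)
print(p)  # ~0.72: this resume is more likely suitable than not
```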

Features & Parameters


hθ(X) can also be interpreted as the probability of the resume being suitable given X and θ. So the resume ranking problem is essentially solved by evaluating hθ(X) for each resume, with the resume yielding the highest value of hθ(X) getting the top rank.

With this prior knowledge of logistic regression, we have to find θ by studying a training set of resumes, some of which were selected as suitable, the remaining ones being unsuitable.

Simplification of the problem

To further simplify the problem, let us not bother about all the attributes like experience, education, skills, expected compensation, notice period etc. while ranking the resumes. As we saw in an earlier post, we need to worry only about the top constraints. We selected as top constraints those that address “must have” features that are “hard to find”. Another benefit of limiting ourselves to these top constraints is that they can be quickly and easily evaluated by recruiters in short telephonic conversations with the candidates. This makes the process more efficient, as it precedes and serves as a filter before the preliminary interview by the technical panel.

Decision Boundary

A training set is a set of resumes already known to be suitable or unsuitable based on past decisions taken by the recruiters or hiring managers. Let us plot the training set for a particular opening based on past records. For the purpose of this article, say resumes are ranked on the basis of only 2 top constraints, viz. relevant experience (x1), expressed in number of years, and expected gross compensation per month (x2). The plot would look somewhat like the one below.

Decision Boundary



If you draw a 45° line cutting the x1 axis at x1=3, it divides the training set so that every point below the line represents a suitable resume and every point above it represents an unsuitable one. This line, in machine learning terms, is called the decision boundary. All the points on this line represent resumes whose probability of being suitable is 0.5. This is also where z=0, as we saw in the diagram of the sigmoid function above.


This point on the sigmoid function is where

z = 0

Replacing z with θᵀX:

θᵀX = 0

-3 + x1 + x2 = 0   – the Decision Boundary
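A quick numeric check of this boundary equation, with sample points invented for illustration: z = -3 + x1 + x2 is zero on the line, negative below it (the suitable side in the plot above) and positive above it.

```python
# Evaluate z for the decision boundary -3 + x1 + x2 = 0.
def z(x1, x2):
    return -3 + x1 + x2

print(z(1.5, 1.5))  # 0.0 -> exactly on the boundary, P(suitable) = 0.5
print(z(1.0, 1.0))  # negative -> below the line (suitable side)
print(z(3.0, 2.0))  # positive -> above the line (unsuitable side)
```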

Gradient Descent 

Though we have visually plotted the decision boundary, it may not be the best fit for the training-set data. To get the best fit we can use gradient descent to minimize the error represented by the following cost function:

J(θ) = -(1/m) Σᵢ₌₁ᵐ [ y(i) log(hθ(x(i))) + (1 - y(i)) log(1 - hθ(x(i))) ]

where m is the number of instances in the training set and x(i) is a vector representing x0, x1, x2 for the ith instance in the training set of resumes. y(i) takes the value 1 if the ith instance was suitable and 0 otherwise. We minimize J(θ) by finding the value of θ that minimizes this error function, where θ is a vector of θ0, θ1 and θ2.

We can minimize J(θ) by iteratively replacing θ with new values as follows. Each iteration is a step of length α down the slope, until we reach the minimum where the slope is zero.

θj := θj - (α/m) Σᵢ₌₁ᵐ (hθ(x(i)) - y(i)) xj(i)
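Our implementation was in Octave, but the same batch update can be sketched in NumPy. The toy training set below (single feature x1 = relevant experience, separable at x1 = 3) and the learning rate are invented for illustration:

```python
# Minimal batch gradient descent for logistic regression.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iterations=5000):
    """Repeatedly apply theta_j := theta_j - (alpha/m) * sum((h - y) * x_j)."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iterations):
        grad = X.T @ (sigmoid(X @ theta) - y) / m  # vectorized gradient
        theta -= alpha * grad
    return theta

# x0 = 1 (intercept), x1 = relevant experience in years.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 4.0], [1.0, 5.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = gradient_descent(X, y)
preds = (sigmoid(X @ theta) >= 0.5).astype(int)
print(preds)  # the fit separates the data: [0 0 1 1]
```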


We wrote the code to execute this in Octave, as it is a mature, well-tested environment for machine learning algorithms and vector algebra. Libraries are available in Python and Java to build a more robust, “production grade” implementation.

Limitations and roadmap for further work

The logistic regression algorithm is useful only if you have a reasonably large training set – at least 25 to 30 resumes. We also need the selection criteria to stay the same for the algorithm to work – hence you can’t reuse training sets across different job positions. There are some “niche” positions where it’s impossible to find enough resumes; it is both difficult and unnecessary to implement machine learning in such cases.

There are many to-dos before this program can be made really useful. We need to use more features – particularly the “must have” ones. We also need more iterations of gradient descent with different values of α. Lastly, we need more resumes in the learning set to be able to break it down further into a training set, validation set and test set.


It’s particularly challenging to rank 20 or more resumes even when the ranking is based on only 2 or 3 attributes. Recruiters often skip this step as it tends to be tedious, and end up wasting a lot of the hiring managers’ time. It’s an error-prone process if a junior recruiter is assigned the task. By automating resume ranking, we hope to avoid human error. We also hope to get early feedback and an improved understanding of the important attributes or top constraints by limiting the shortlist to the top 3 resumes. Lastly, it takes a few seconds for this crude machine learning program to rank 20 resumes – something that would take an experienced recruiter 10 minutes.