5 Problems with The Ladders' 6-Second Resume Study
November 14, 2014

The Ladders' six-second resume study is frequently cited around the web. But is it methodologically sound? We took a close look, and the answer may surprise you.

I know you've heard this one before: hiring managers take an average of only six seconds to glance over your resume before deciding to keep or trash it. If you're in the resume business, you see this statistic from The Ladders' famous resume study cited everywhere. You've probably even cited it a few times yourself. I know I have.

Then it struck me: has anyone taken a close look at the study's methodology to see if it has scientific merit? I decided to examine that methodology in detail, to see whether the study could be improved and whether its conclusions were correct. The result? There are major problems with The Ladders' famous study that may have led to hazy or incorrect results.

Allow me to preface this post by saying that it's admirable that The Ladders went to the effort of bringing a scientific lens to the hiring process and some objectivity to the table. That should be applauded and appreciated. However, it is also important not to accept the results of any study at face value. Conclusions should be peer-reviewed and tested for accuracy, and constructive criticism should be offered to improve future studies. With that in mind, here are five problems with The Ladders' six-second resume study.

1. The study provides too few important methodological details

This is a major issue throughout the study. Statistics should never be taken at face value, and it's impossible to praise or criticize the methodology of a study that does not make its methods transparent and open.

Here's the biggest missing detail: were the recruiters told in advance whether they were viewing professionally rewritten or original resumes? If they were told, it would bias the results in favor of the professionally rewritten samples. This would be like judging brownies while being told in advance which ones were baked by Martha Stewart and which by a twelve-year-old. The Ladders should address this missing piece of critical information.

2. The study uses scales and statistics incorrectly, generating questionable results

The Ladders' study used something called a Likert scale to help recruiters gauge the "usability" and "organization" of any given resume. A Likert scale is the familiar Agree/Disagree survey scale; I'm sure you've filled one in several times in your life. Using a Likert scale was a good choice for this study: used correctly, it could act as a strong indicator of the comparative strength of professionally written resumes. Unfortunately, The Ladders' study only gets it half right.

What the study got right

Recruiters were asked to rate the "usability" and "organization" of resumes on a numerical scale from 1 to 7 (instead of the classic Agree-Disagree wording), with 1 representing the least usable/organized resume and 7 the most. Because the scale is numerical, The Ladders calls it a "Likert-like" scale rather than a Likert scale. Here's where the study gets sloppy.

What the study got wrong

The Ladders claims that professionally rewritten resumes were given an average "usability" rating of 6.2, versus 3.9 before the rewrite, and calculates this as a 60% increase in usability. You can't do that with a Likert scale (or a Likert-like scale). Consider it this way: make a list of three movies, assigned values 1-3.

Your favorite movie (1)
A movie that you like (2)
A movie that you sort of like (3)

What's the percentage difference between the movie you sort of like and the movie you like? How about between the movie you like and your favorite movie? Are the intervals between them even? For me, they aren't; it's difficult even to choose between my favorite movies most of the time. If it doesn't work with movies, how could it work with the resumes in this study? Just because you assign your opinion a numerical value does not mean you can also assign a percentage interval.

Again, let me be clear: the results from the Likert-like scale probably do reveal that professionally written resumes were better organized and more usable than the originals, but that difference cannot be turned into a percentage (at least not with this kind of statistical test).
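To make the problem concrete, here is a minimal sketch in Python. The individual ratings are invented, chosen only so that the group means match the study's reported 3.9 and 6.2 (The Ladders never published raw data). Any order-preserving relabeling of an ordinal scale describes exactly the same opinions, yet it changes the "percentage improvement" completely; a rank-based test such as Mann-Whitney is a defensible alternative:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical 1-7 ratings for ten resumes before and after a rewrite.
# Invented for illustration; chosen to reproduce the study's means.
before = np.array([3, 4, 4, 3, 5, 4, 4, 3, 5, 4])   # mean 3.9
after  = np.array([6, 6, 7, 5, 6, 6, 7, 6, 6, 7])   # mean 6.2

pct = (after.mean() - before.mean()) / before.mean() * 100
print(f"Apparent improvement: {pct:.0f}%")           # ~59%, the study's "60%"

# Squaring the labels preserves their order exactly (same opinions),
# but the "percentage improvement" more than doubles:
pct_sq = (np.mean(after**2.0) - np.mean(before**2.0)) / np.mean(before**2.0) * 100
print(f"Same data, relabeled:  {pct_sq:.0f}%")       # ~147%

# A rank-based test asks only whether rewritten resumes tend to be
# rated higher, which is all that ordinal data can support.
stat, p = mannwhitneyu(after, before, alternative="greater")
print(f"Mann-Whitney U = {stat}, p = {p:.4f}")
```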
3. The study uses unclear language and words that are not defined

Let's take the study's claims piece by piece:

"Professionally prepared resumes also scored better in terms of organization and visual hierarchy, as measured by eye-tracking technology. The 'gaze trace' of recruiters was erratic when they reviewed a poorly organized resume, and recruiters experienced high levels of cognitive load (total mental activity), which increased the level of effort to make a decision."

First, it's unclear what the study means by "cognitive load" or "total mental activity". Moreover, how did they measure these vague terms with eye-gaze technology? Without a transparent methodology and clear definitions, it's impossible to comment on these terms or to determine whether the study is accurate.

Second, how does one measure whether a "gaze trace" is erratic? There may well be ways to measure this kind of thing statistically, but it's hard to know whether the conclusion has any merit when the study summarizes the math in its own words without showing any of the computations.

Third, the Likert scale is misused once again in this section to create the illusion of a hard statistic: "[Professional resumes] achieved a mean score of 5.6 on a seven-point Likert-like scale, compared with a 4.0 rating for resumes before the re-write, a 40% increase." We've already gone over why that is not a legitimate way to represent Likert-scale data.
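To be fair, "erratic" can be given a precise, checkable meaning; the study just never does so. Here is one hypothetical definition, sketched in Python: the mean saccade (jump) length between consecutive fixations. This is not The Ladders' method, which was never published; it only shows what a transparent, reproducible definition could look like:

```python
import numpy as np

def mean_saccade_length(fixations: np.ndarray) -> float:
    """Average jump distance (in pixels) between consecutive fixations.

    `fixations` is an (n, 2) array of (x, y) screen coordinates in the
    order they occurred. Larger values mean longer, more scattered jumps.
    """
    jumps = np.diff(fixations, axis=0)   # (n-1, 2) displacement vectors
    return float(np.linalg.norm(jumps, axis=1).mean())

# A tidy top-to-bottom read versus a scattered one (made-up coordinates):
orderly   = np.array([[100, 50], [110, 120], [105, 190], [115, 260]])
scattered = np.array([[100, 50], [400, 300], [120, 90], [380, 40]])

print(mean_saccade_length(orderly))    # ~70 px: short, regular jumps
print(mean_saccade_length(scattered))  # ~335 px: long, irregular jumps
```

A study that defined "erratic" this concretely could be checked and reproduced, which is exactly the transparency being asked for here.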
4. Industry HR experts don't agree

We interviewed seasoned human resources experts about resume screening: how long they spend on a resume on average, and what they think of the six-second rule. Here are a few of the responses:

Matt Lanier, Recruiter, Eliassen Group: "I always go back and forth on the whole six seconds theory. I can't really put an average time on how long I look at each one; for me, it really depends on how a resume is constructed. When I open up a nice, neat resume (clear headers, line separations, clearly in chronological order, etc.), I am more likely to go through each section of the resume. Even if the experience is not that great, having a resume that looks professional and reads well will cause me to spend more time examining it."

Kim Kaupe, Co-Founder, ZinePak: "Once I narrow down candidates from the cover letter filter, I will spend 10-15 minutes reviewing individual resumes."

Glen Loveland, HR Manager, CCTV: "The six-second rule? It varies company to company. Here's what I'll say: recruiters will spend less time reading a résumé for an entry- or junior-level role. Positions that are more senior will be reviewed quite carefully by HR before they pass them on to the hiring manager."

Heather Neisen, HR Manager, Technology Advice: "Initially, an average resume takes 2-3 minutes for me to scan."

Sarah Benz, Lead Recruiter, Messina Group, says the average time spent on the initial resume review is 15 seconds; if she sees a good skill match, she will spend two to three minutes reading further.

Josh Goldstein, Co-Founder, Underdog.io: "We spend, on average, 2:36 per application. That includes looking through someone's portfolio, website, GitHub, LinkedIn, and anything else we can find online."

Michelle Burke, Marketing Supervisor, WyckWyre: "Our hiring managers honestly spend time looking through resumes. They value every application that comes in and want to hire as many people as needed, rather than screen through applications and end up with no one."

5. The study makes conjectures without data to back them up

The study needs to be more careful about conjecture and speculation, or give better reasoning to support its claims. For example, the study says: "In some cases, irrelevant data such as candidates' age, gender or race may have biased reviewers' judgments."

While that is not necessarily an incorrect hypothesis, it is pointless to include unless The Ladders can back it with actual data. If they are speculating, they need to be clear about that, or else be clear about the data that substantiates the claim. Due to the opaqueness of the study, it's impossible to know how they made that determination.

Here are two other areas where critical information is missing.

First, we don't know why The Ladders chose a sample of 30 people. This matters because ordinal data (i.e., the Likert-scale data used in this study) generally requires a larger sample size to detect a given effect than interval, ratio, or cardinal data does. So is 30 people enough for this study? And if The Ladders did not set a clear rule in advance for how large a sample to recruit, they could in theory keep adding or dropping participants until they got the result they wanted. I'm not accusing The Ladders of doing this; it's just another example of why study methodology should be transparent and open. Results carry less meaning unless they can be examined.

Second, we don't know if the differences were statistically different from zero. Put simply, we can't tell whether the results reflect sheer randomness or a real underlying difference. To settle that, the study needs to report test statistics: z scores or t scores, Pearson's r or Kendall's tau, and so on. These are the statistician's tools for inferring whether an observed difference reflects something real or is likely to be random noise. The study reveals no such information. (The simulation below shows how these two missing pieces, sample size and significance testing, interact.)
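Here is a rough Monte Carlo sketch of that interaction. The two rating distributions are invented (again, no raw data was published); the simulation estimates how often a Mann-Whitney test on 30 ratings per group would detect the assumed shift at p < 0.05. If that estimated power is low, a sample of 30 can easily miss a real effect, or turn noise into an apparent one:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Invented 1-7 rating distributions: "after" is shifted up by roughly
# three-quarters of a point relative to "before".
scale    = np.arange(1, 8)
p_before = np.array([.05, .10, .25, .30, .20, .07, .03])
p_after  = np.array([.02, .05, .13, .25, .30, .15, .10])

n, trials, hits = 30, 2000, 0
for _ in range(trials):
    a = rng.choice(scale, size=n, p=p_before)   # 30 "before" ratings
    b = rng.choice(scale, size=n, p=p_after)    # 30 "after" ratings
    _, p = mannwhitneyu(b, a, alternative="greater")
    if p < 0.05:
        hits += 1

# Fraction of simulated studies that would detect this shift:
print(f"Estimated power with n={n}: {hits / trials:.0%}")
```

A study that justified its sample size with a calculation like this, and reported the resulting p-values, would let readers judge the results for themselves.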
6. The study doesn't answer The Big Question: does any of this even matter?

Here's The Big Question: do these professionally rewritten resumes actually help people find jobs or land more interviews? While it's interesting to find out that professionally written resumes get better usability ratings and "more orderly eye gaze", the important question is whether people are getting interviews and jobs because of these resume qualities. The most meaningful statistic would be how large an improvement in gaze orderliness and resume ratings is required to move a resume from "no interview/not hired" to "interview/hired". (Granted, that would be a complex problem to work out, affected by a large number of variables.) But without knowing whether there was a meaningful difference in hiring and interview outcomes, it's hard to know if the increases reported by the study are large or small, important or unimportant.
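Once outcome data existed, the shape of that analysis would be standard. The sketch below fits a logistic regression of interview outcome on resume rating; every number in it is synthetic and serves only to illustrate the kind of question the study could then answer, such as the estimated interview probability at each rating level:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for the data the study never collected: each
# resume's 1-7 rating paired with whether it led to an interview.
ratings = rng.integers(1, 8, size=200).reshape(-1, 1)
# Assume, purely for illustration, that the true interview odds rise
# smoothly with the rating:
p_true = 1 / (1 + np.exp(-(ratings.ravel() - 4.5)))
interviewed = rng.random(200) < p_true

model = LogisticRegression().fit(ratings, interviewed)
for r in (3, 5, 7):
    prob = model.predict_proba([[r]])[0, 1]
    print(f"rating {r}: estimated P(interview) = {prob:.2f}")
```

With real data in this shape, one could finally say whether a jump from 3.9 to 6.2 in "usability" moves the needle on hiring at all.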
Again, I think it's great that The Ladders went to the effort of producing a resume and recruiting study. However, we should remember that bulletproof studies are hard to design, and that statistics can be unintentionally misleading and tricky to interpret. Hopefully this analysis can serve as a jumping-off point for a new and improved study, one that may reveal some surprising information about how recruiters actually behave and how to help people find jobs more efficiently.

We reached out to The Ladders for comment on this article and requested their in-depth methodology for further review; we have not yet received a response.