A Review of
Incentivized Resume Rating: Eliciting Employer Preferences without Deception
IRR Helps Study Discrimination in the Labor Market
Incentivized resume rating is an experimental technique for evaluating employer preferences without the use of deception.
Introduction
This study uses incentivized resume rating (IRR) to examine employer preferences for candidates graduating from an Ivy League university. The IRR method incentivizes employers to rate hypothetical candidate profiles by matching them to real job seekers based on their reported preferences. Kessler, Low, and Sullivan find that employers highly value prestigious work experience during the summer before senior year and additional work experience during the summer before junior year. Using the rating data, the authors find that employer preferences are relatively stable across the distribution of candidate quality.
Although the study found no evidence that employers are less interested in female or minority candidates on average, the authors found evidence of employer discrimination against white women and minority men in STEM roles. They also found evidence of lower returns to prestigious internships for all women and minorities. The authors suspect that this could result from employers expecting other firms to have a positive preference for diversity, even if they do not display this preference themselves. This assumption produces distorted beliefs about aggregate preferences for diversity that hurt female and minority candidates in the job market: the mistaken notion that these candidates will surely receive offers because they are diverse means that each firm is less likely to hire them than would otherwise be the case.
Policymakers and economists have a vested interest in understanding employer preferences and the role of discrimination in the labor market. Correspondence audit studies, including resume audit studies, have commonly been used to research these topics. However, these audit studies’ use of deception raises questions about real-world validity if fake resumes systematically differ from real resumes. Additionally, many audit studies use callback rates (i.e., the rates at which employers call fake candidates) as a measure of employer interest, even though callback rates are also likely a function of the employer’s expectation that a given candidate will accept an offer. IRR, by contrast, avoids deception, elicits rich information about a single employer’s preferences across multiple resumes, and enables researchers to randomize many candidate characteristics simultaneously.
Judd Kessler is an Associate Professor of Business Economics and Public Policy at the University of Pennsylvania researching public economics, behavioral economics, and market design. Corinne Low is an Assistant Professor of Business Economics and Public Policy at the University of Pennsylvania studying development economics, experimental economics, and gender. Colin D. Sullivan is a postdoctoral fellow in the Department of Economics at Stanford University where he studies labor markets, organ markets, and other matching markets, with a focus on eliciting preferences through incentive design.
Methods and Findings
The authors use IRR to study the preferences of employers hiring graduating seniors through on-campus recruiting, in partnership with the University of Pennsylvania’s Career Services office. As part of the study design, employers were required to rate hypothetical candidates on a 10-point Likert scale along two dimensions: their interest in hiring the candidate and the likelihood that the candidate would accept a job offer if given one.
Through their partnership with Career Services at the University of Pennsylvania, the authors recruited 72 employers during Fall 2016 and Spring 2017. Each employer rated interest and likelihood of acceptance for 40 unique, hypothetical resumes with randomly assigned demographic and human capital characteristics (e.g., GPA, major, work experience, skills). The study found that these employers value higher GPAs as well as the prestige, quality, and quantity of summer work experience.
● The study did not find evidence of discrimination on average, which the authors believe may reflect the highly specific pool of employers and candidates engaged in on-campus recruiting at the University of Pennsylvania.
● There was evidence of a large, statistically significant preference for white males over white females and minority males among employers looking to hire STEM candidates. This discrimination is likely a function of implicit bias: the differential preferences were significantly larger in the latter half of each block of ten resumes, suggesting that reviewer fatigue amplifies the magnitude of the bias.
● The researchers find negative interaction effects between race, gender, and internship prestige on employer interest, indicating that non-white and non-male candidates benefit less from prestigious internships. This is potentially due to reviewer beliefs that other employers advantage diverse candidates, making a prestigious internship a weaker signal of quality for candidates from underrepresented groups.
Conclusions
This study provides proof of concept that the IRR methodology can be used to understand labor market trends and employer preferences without the use of deception. Because IRR elicits richer preference information than binary callback decisions, it can provide more insight into employer hiring preferences and beliefs about the likelihood of job acceptance. Because IRR enables access to employers who recruit proactively, and who are therefore inaccessible to resume audit studies, it also allows researchers to gather preference data from a new subject pool. Additionally, researchers using IRR can randomize many candidate characteristics independently and simultaneously, which helps explore how employers respond to interactions of candidate characteristics. Finally, IRR allows researchers to collect supplemental data about research subjects, which can be correlated with subject preference measures and enables researchers to better understand the pool of employers.
However, because the IRR method informs subjects that responses will be used in research, it can be vulnerable to experimenter demand effects. For this reason, IRR may be less well-equipped to identify explicit biases because we cannot guarantee that employers treat hypothetical resumes as they would real job candidates. However, as a measure of employer preference and implicit bias, IRR is a useful tool for future research.