Authors
Djurre Holtrop, Ward Van Breda, Janneke Oostrom, Reinout De Vries
Publication date
2019
Conference
EAWOP 2019
Description
INTRODUCTION/PURPOSE: Some assessment companies already apply automated text analysis to job interviews. We aimed to investigate whether text-mining software can predict faking in job interviews. To our knowledge, this is the first study to examine the predictive validity of text-mining software for detecting faking.

DESIGN/METHOD: 140 students from the University of Western Australia were instructed to behave as applicants. First, participants completed a personality questionnaire. Second, they were given 12 personality-based interview questions to read and prepare. Third, participants were interviewed for approximately 15-20 minutes. Finally, participants were asked to—honestly—indicate to what extent they had faked verbally (α=.93) and non-verbally (α=.77) during the interview. The interview transcripts (M[words]=1,755) were then automatically analysed for personality-related words (using a program called Sentimentics) and 10 other hypothesised linguistic markers (using LIWC2015).

RESULTS: Overall, the results showed very modest relations between verbal faking and the text-mining programs' output. More specifically, verbal faking was related to the linguistic categories 'affect' (r=.21) and 'positive emotions' (r=.21). Altogether, the personality-related words and linguistic markers predicted a small amount of variance in verbal faking (R²=.17). Non-verbal faking was not related to any of the text-mining programs' output. Finally, self-reported personality was not related to either faking behaviour.

LIMITATIONS/PRACTICAL IMPLICATIONS: The present study shows that linguistic analyses …
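The reported statistics (Pearson r for single markers, R² for the combined set) can be illustrated with a minimal sketch. Note that the data below are randomly generated for demonstration only and are not the study's data; the marker names and effect sizes are hypothetical.

```python
import numpy as np

# Simulated stand-in data: NOT the study's data.
rng = np.random.default_rng(0)
n = 140  # sample size matching the study
markers = rng.normal(size=(n, 3))  # e.g. counts for 'affect', 'positive emotions', ...
faking = 0.2 * markers[:, 0] + rng.normal(size=n)  # self-reported verbal faking score

# Pearson correlation between one linguistic marker and verbal faking
r = np.corrcoef(markers[:, 0], faking)[0, 1]

# Multiple regression R^2: proportion of faking variance all markers jointly explain
X = np.column_stack([np.ones(n), markers])      # add intercept column
beta, *_ = np.linalg.lstsq(X, faking, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((faking - pred) ** 2) / np.sum((faking - faking.mean()) ** 2)
print(f"r = {r:.2f}, R^2 = {r2:.2f}")
```

Because the single marker is also one of the regression predictors, the multiple-regression R² can never fall below that marker's squared zero-order correlation.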