
Dr Xander van Lill

The Value of Personality in Selection



Introduction

Even though the debate on the use of personality in selection seemingly reached a resolution in 2007, criticisms of the predictive validity of off-the-shelf measures of personality still emerge from time to time.


Points of critique expressed in practice include claims that these measures are (1) designed to describe a theory rather than predict future behaviour, (2) developed for academic rather than business application, and (3) unsupported by evidence that they can impact business results.

We have previously discussed the history of personality assessments in the workplace (JvR Africa Group, 2012); in this paper, we instead focus on addressing the criticisms that personality assessments face from time to time.


We will do this by reflecting on the conclusions by Morgeson, et al. (2007), as well as the counterarguments by Ones, et al. (2007) and Tett and Christiansen (2007).

Even though off-the-shelf measures of personality have distinct advantages in selection processes, it is important to acknowledge that the field of personality psychology is not stagnant and that research in the field continues to provide an ever more nuanced perspective on the use of personality in selection.

We consider a few more nuanced perspectives on personality that might increase predictive validity in selection.


Historical overview

Among the first authors to criticise the use of personality assessments in selection were Guion and Gottier (1965).

In response to the proliferation of personality assessment in selection processes, Morgeson, et al. (2007) convened a panel discussion with Michael Campion, Robert Dipboye, John Hollenbeck, Kevin Murphy, and Neal Schmitt to reinvigorate a healthy scepticism about the use of personality assessments in selection. The findings of this panel discussion, as well as the comments of these scholars, were captured in Morgeson, et al.'s (2007) article but were soon rebutted by Ones, Dilchert, Viswesvaran, and Judge (2007), and by Tett and Christiansen (2007).

The debate in 2007 will take central focus in this paper; however, it is important to note that other authors have also made important contributions to the broader discussion, such as the publication by Hogan, Barrett, and Hogan (2007). This paper is, therefore, by no means a complete literature review, but rather an attempt to highlight the points of critique in a very specific debate, while also proposing a way forward.

The credibility of the business is also improved when applicants perceive the process to be fair (Bauer et al., 2020). There is some evidence that fairness in the selection process has a positive influence on job performance (Konradt et al., 2016).

Points of debate

As mentioned previously, this paper is by no means a thorough review of the debate on the use of personality assessment in selection.


The bibliographies and citations of Morgeson, et al. (2007), Ones, et al. (2007), and Tett and Christiansen (2007) are extensive enough to justify their own review. Rather, the intention is to provide a snapshot of the conclusions reached by Morgeson, et al. (2007) and Ones, et al. (2007). Tett and Christiansen (2007) will be referenced in conjunction with Ones, et al. (2007) to outline the complexity of the issues raised.

If you are interested in a more thorough review, you are encouraged to read the full-length articles, browse through the bibliography for further references, and conduct a search on subsequent citations of the authors’ articles.

3.1 (Non) effects of faking on self-report measures of personality

Based on the panel discussion with, and the comments of, Michael Campion, Robert Dipboye, John Hollenbeck, Kevin Murphy, and Neal Schmitt, Morgeson, et al. (2007) reach three conclusions about faking in self-report measures of personality, two of which are quoted below.


“(b) Faking or the ability to fake may not always be bad. In fact, it may be job-related or at least socially adaptive in some situations” (Morgeson, et al. 2007, p. 720).

“(c) Corrections for faking do not appear to improve validity. However, the use of bogus items may be a potentially useful way of identifying fakers” (Morgeson, et al. 2007, p. 720).

Ones, et al. (2007) counter point (a) in Morgeson, et al.'s (2007) article by highlighting evidence from Hough (1998) that supports the criterion-related validity of personality measures in high-stakes situations, such as selection. Ones, et al. (2007) further indicate that the construct validity of self-report measures of personality remains consistent across selection and non-selection situations, suggesting that social desirability does little to distort results on personality measures (Bradley & Hauenstein, 2006; Robie, Zickar, & Schmit, 2001).

In disagreement with point (b), Ones, et al. (2007) further argue, based on the meta-analytical evidence of Li and Bagger (2006) and others, that social desirability scales are not predictive of job performance. Ones, et al. (2007) also advise against the use of faking measures, since the cumulative evidence suggests that they do not maximise the prediction of performance (Schmitt & Oswald, 2006). Tett and Christiansen (2007) concur with Ones, et al. (2007) on the use of scales of faking to predict performance. A direct quotation of the relevant conclusion from the debate by Ones, et al. (2007) is provided below.

“4. Faking does not ruin the criterion-related or construct validity of personality scores in applied settings” (Ones, et al. 2007, p. 1020).

Tett and Christiansen (2007) further outline the complexity of research on faking by critiquing the traditional way in which social desirability was determined, and conclude that:

“8: Past research suggesting that faking does not affect personality test validity under true applicant conditions is uninformative to the degree it relies on social desirability measures” (Tett & Christiansen, 2007, p. 982).

“9: Past research suggesting that faking does not affect personality test validity is uninformative to the degree it relies on statistical partialing techniques” (Tett & Christiansen, 2007, p. 982).

“10: Applicant faking attenuates personality test validity but enough trait variance remains to be useful for predicting job performance” (Tett & Christiansen, 2007, p. 984).

“12: It has not been shown that faking indicates social competence or that faking predicts future job success. Claims that faking may be desirable, even for some jobs, are premature.”

A study conducted by Odendaal (2015) suggests that the validity and fairness of social desirability scales in South Africa should be seriously questioned. Odendaal (2015) provides evidence that social desirability scales measure faking differently across cultural and language groups. As a result, discrimination might occur based on factors that are unrelated to the requirements of the job, thereby adversely impacting black job applicants.

Summary of POINT 3.1

As per the arguments of Ones, et al. (2007) and Tett and Christiansen (2007), we conclude that faking, insofar as it is assessed using social desirability measures, does not negatively impact the accuracy with which personality predicts job performance. In fact, inferences based on social desirability scales could cause harm in selection processes (Odendaal, 2015).

However, we remain open to ongoing research on novel ways to determine the impact of applicant faking, such as a recent study by Dunlop, et al. (2019), which indicates that overclaiming might occur more often in questionnaires that contain job-relevant rather than job-irrelevant content. Such studies might provide invaluable guidelines to refine and improve existing ways of measuring personality.


"...the validity and fairness of social desirability scales to detect applicant faking in the operational setting should be seriously questioned"
(Odendaal, 2015, p. 11)

3.2 Predictive validity of self-report personality measures

Morgeson, et al. (2007) take a dim view of the predictive validity of self-report measures of personality. They state:

“(d) We must not forget that personality tests have very low validity for predicting overall job performance. Some of the highest reported validities in the literature are potentially inflated due to extensive corrections or methodological weaknesses” (Morgeson, et al. 2007, p. 720).

In quantitative meta-analytical summaries, Ones, et al. (2007) found practically meaningful relationships between personality and (1) job performance, (2) leadership effectiveness, (3) entrepreneurship, and (4) work motivation and attitudes. A meta-analysis of South African studies reaffirmed the Big Five personality traits as important predictors of job performance and, based on the invariance of the personality-performance relation across countries, suggested that this relation is culturally universal (Van Aarde, Meiring, & Wiernik, 2017).

Ones, et al. (2007) also investigated the predictive validity of conscientiousness, including its facets, in the selection process and found that it was on par with other frequently used predictors. Ones, et al. (2007) finally indicated that, if one were to properly review the literature on personnel psychology (Barrick & Mount, 1991), the corrections used in meta-analyses of personality measures are more conservative than those used in meta-analyses of alternative selection measures.
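To make concrete what such corrections involve, the formulas below are the classical psychometric corrections for criterion unreliability and direct range restriction. They are standard in the meta-analytic literature and illustrate the general approach rather than the exact procedure of any one study cited here:

\[
\rho = \frac{r_{xy}}{\sqrt{r_{yy}}}, \qquad
r_c = \frac{u\,r_{xy}}{\sqrt{1 + r_{xy}^{2}\,(u^{2}-1)}}, \qquad
u = \frac{SD_{\mathrm{unrestricted}}}{SD_{\mathrm{restricted}}}
\]

where \(r_{xy}\) is the observed validity coefficient, \(r_{yy}\) the reliability of the criterion measure, and \(u\) the degree of range restriction in the applicant sample. For example, an observed validity of \(r_{xy} = .20\) with a criterion reliability of \(r_{yy} = .52\) yields a corrected validity of \(\rho = .20/\sqrt{.52} \approx .28\). How liberally such corrections are applied is precisely what Morgeson, et al. (2007) question, and what Ones, et al. (2007) defend as conservative relative to corrections used for other predictors.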

Direct quotations on the relevant conclusions from the debate by Ones, et al. (2007) are provided below.

“1. Personality variables, as measured by self-reports, have substantial validities, which have been established in several quantitative reviews of hundreds of peer-reviewed research studies” (Ones, et al. 2007, p. 1020).

“2. Vote counting and qualitative opinions are scientifically inferior alternatives to quantitative reviews and psychometric meta-analysis” (Ones, et al. 2007, p. 1020).

“3. Self-reports of personality, in large applicant samples and actual selection settings (where faking is often purported to distort responses), have yielded substantial validities even for externally obtained (e.g. supervisory ratings, detected counterproductive behaviours) and/or objective criteria (e.g. production records)” (Ones, et al. 2007, p. 1020).

Tett and Christiansen (2007) outline the complexity of predicting job performance from personality in meta-analyses and argue that critiques of the use of personality in selection are often based on inaccurate inferences. They conclude that:

“1: Mean r between personality tests and job performance measures underestimates the potential validity of personality tests by ignoring variability in the strength and direction of population validity coefficients” (Tett & Christiansen, 2007, p. 973).

“2: Mean r between personality tests and job performance measures underestimates the potential validity of personality tests by ignoring the value of confirmatory over exploratory research strategies, failing to reflect how personality tests are used and should be used in actual selection practice” (Tett & Christiansen, 2007, p. 974).

“3: Mean r between personality tests and job performance measures, even when derived using a confirmatory strategy, underestimates the potential validity of personality tests by ignoring the added value of personality-oriented job analysis” (Tett & Christiansen, 2007, p. 975).

“4: Mean r between personality tests and job performance measures underestimates the potential validity of personality tests by ignoring the value of narrow over broad trait and criterion measures” (Tett & Christiansen, 2007, p. 976).

“5: Mean r between personality tests and job performance measures underestimates the potential validity of personality tests by ignoring incremental validity expected from combining scores on multiple trait measures” (Tett & Christiansen, 2007, p. 977).

“6: Personality test validity in predicting job performance can be expected to improve over currently available estimates in light of untapped theory and corresponding developments in job analysis methods targeting the situations in which specific traits are expressed and then evaluated as job performance” (Tett & Christiansen, 2007, p. 978).

“7: Personality test validity in predicting job performance can be expected to improve over currently available estimates in light of possible interactions among traits in their relations with relevant workplace criteria” (Tett & Christiansen, 2007, p. 979).


Summary of POINT 3.2

In agreement with the meta-analytical findings of Ones, et al. (2007) and Van Aarde, et al. (2017), we conclude that personality is a valid predictor of job performance and can, therefore, be used for selection purposes.

We would also like to point out, in agreement with one of the panellists listed in Morgeson, et al.'s (2007) article, that practitioners should start to think in a more nuanced way about the job criteria used to infer the predictive validity of personality measures. This issue will receive more attention later in this paper (Schmitt, 2014). In agreement with Schmitt (2014), we also acknowledge that, given the lack of correlation between cognitive ability and personality, their combination might be a powerful predictor of job performance (Schmidt & Hunter, 1998).

3.3 Off-the-shelf vs home-grown assessments of personality

Morgeson, et al. (2007) reach two conclusions about the use of off-the-shelf assessments in selection processes, which are quoted below.

“(e) Due to the low validity and content of some items, many published self-report personality tests should probably not be used for personnel selection. Some are better than others, of course, and when those better personality tests are combined with cognitive ability tests, in many cases validity is likely to be greater than when either is used separately” (Morgeson, et al. 2007, pp. 720-721).

“(f) If personality tests are used, customized personality measures that are clearly job-related in face valid ways might be more easily explained to both candidates and organisations” (Morgeson, et al. 2007, p. 721).

Ones, et al. (2007) provide a rebuttal to points (e) and (f) in Morgeson, et al.'s (2007) article, stating that it is unclear why home-grown (or more customised) assessments would necessarily lead to higher predictive validity. In this respect, Ones, et al. (2007) argue that home-grown assessments can be lacking in construct and criterion-related validity unless considerable resources are invested in the construction of the assessment, thereby matching the psychometric properties of established personality assessments.

Furthermore, off-the-shelf personality assessments might also have more extensive norm groups for comparison (Ones, et al. 2007). A direct quotation of the relevant conclusion from the refutation by Ones, et al. (2007) is provided below.

“6. Customized tests are not necessarily superior to traditional standardised personality tests” (Ones, et al. 2007, p. 1020).

In agreement with Ones, et al. (2007), we believe that established measures of personality have a competitive advantage in terms of the sheer amount of evidence that supports their validity. That does not preclude the development of new measures, but it cautions against potential errors in judgement when generalisations are made based on smaller samples (Ones, Viswesvaran, & Schmidt, 2016). This might be a particularly important issue given the replication crisis in which psychology currently finds itself (Ones, et al. 2016).


3.4 An evaluation of alternatives

In an evaluation of alternatives, Morgeson, et al. (2007) reach two conclusions:

“(g) Future research might focus on areas of the criterion domain that are likely to be more predictable by personality measures” (Morgeson, et al. 2007, p. 721).

“(h) Personality constructs certainly have value in understanding work behaviour, but future research should focus on finding alternatives to self-report personality measures. There is some disagreement among the authors in terms of the future potential of the alternative approaches to personality assessment currently being pursued” (Morgeson, et al. 2007, p. 721).

Ones, et al. (2007) refute the alternatives raised in Morgeson, et al.'s (2007) article, namely the use of conditional reasoning tests (implicit measures of the extent to which individuals use justification mechanisms to rationalise their behaviour) and ipsative scale formats. Ones, et al. (2007) indicate that the predictive validities of conditional reasoning measures are at best comparable with those of personality assessments and, given the added cost of developing these measures, might not be a feasible solution for selection.

From a classical test theory perspective, scores derived from ipsative scales impose certain psychometric difficulties in terms of reliability estimation and threats to construct validity (Brown & Maydeu-Olivares, 2013). Ones, et al. (2007), however, acknowledge the potential of item response theory for deriving normative scores from ipsative scales in the future. The findings of Brown and Maydeu-Olivares (2013) suggest that Thurstonian item response theory could be used to perform inter-individual comparisons based on ipsative scales.

Finally, Ones, et al. (2007) recognise the incremental validity of others’ ratings of personality traits in predicting job performance. Direct quotations on the relevant conclusions from Ones, et al. (2007) are provided below.

“7. When feasible, utilising both self- and observer ratings of personality likely produces validities that are comparable to the most valid selection measures” (Ones, et al. 2007, p. 1020).

“8. Proposed palliatives (e.g. conditional reasoning, forced-choice ipsative measures), when critically reviewed, do not currently offer viable alternatives to traditional self-report personality inventories” (Ones, et al. 2007, p. 1020).

Summary of POINT 3.4

The research on the use of personality in selection is not stagnant and it is important to consider various alternatives that might help practitioners to increase the predictive validity of personality assessments. In the section to follow, we will consider alternatives to ensure the appropriate prediction of job performance from established measures of personality.


The way forward

In response to the first point of critique expressed in the introduction, we believe that it is dangerous not to follow a theory-driven approach.

However, we affirm the notion that well-reasoned conclusions can emerge from top-down (theories developed by academics, which are tested in practice) or bottom-up (insights that emerge from practice and lead to theory development) approaches to theory development (Latham & Locke, 2006; McAbee, Landis, & Burke, 2017; Spector, Rogelberg, Ryan, Schmitt, & Zedeck, 2014). Both approaches can be useful, firstly, to construct psychological measures and, secondly, to inspect the predictive validity of personality traits for job performance.

In fact, a well-reasoned conclusion is probably more likely to help decision makers explain why using one construct in predicting job performance is fairer than using another. When opportunistic relationships between predictor and outcome variables are derived from one-shot correlational studies, especially when there are insufficient resources to replicate findings or no logical reasons to expect the relationship, spurious conclusions can be drawn from arbitrary relationships, a fear that has been reiterated with the emergence of big data methods (McAbee, et al. 2017; Wax, Asencio, & Carter, 2015).

However, even with well-designed big data research projects, the scientist and practitioner still require some theoretical orientation to help clients make sense of people-based findings (McAbee, et al. 2017). In summary, as phrased by the late Professor Kurt Lewin (1952, p. 169), there is “nothing so practical as a good theory”.

Point 2 of the critique raised in the introduction is in contrast with the scientist-practitioner model, which recognises the importance of basing practice on scientific findings (Briner & Rousseau, 2011). Improving performance in the workplace does not have to exclusively depend on scientists or practitioners, as both parties could contribute in meaningful ways to the improvement of theory and practice when using personality in the workplace (Latham, 2019).

We also disagree with point 3 of the critique expressed in the introduction. The meta-analytical study conducted by Ones, et al. (2007) provides strong evidence that off-the-shelf measures of the Big Five model of personality add value to selection processes (Dilchert, Ones, & Krueger, 2019).

As concluded in point (e) of Morgeson, et al.'s (2007) article, we agree that not all off-the-shelf personality assessments should be thrown out with the proverbial bathwater, but that practitioners should be cautious about what they purchase for selection purposes.

Furthermore, the research conducted on the added value of personality to selection processes is not stagnant and developments in research must be considered.

In this respect, the too-much-of-a-good-thing effect, interaction effects between personality traits, the focus on broad versus narrow traits, the influence of contextual factors, and the importance of others’ ratings have to be taken into consideration by practitioners who are serious about using personality assessments for selection.

4.1 The too-much-of-a-good-thing effect

The too-much-of-a-good-thing effect dispels the notion that excess in a psychological characteristic is always a good thing (Pierce & Aguinis, 2013). For example, Whetzel, McDaniel, Yost, and Kim (2010) reported a curvilinear relationship between conscientiousness and job performance, where very high levels of conscientiousness had unintended negative consequences for job performance.

However, Le, et al. (2011) indicated that the inflexion point at which the relationship between conscientiousness and job performance turns downward depends on the complexity of the task. Tasks that are less complex require greater speed and, therefore, less persistence and dutifulness.

By contrast, when tasks are more complex, greater accuracy is required, in which case being more dutiful and persistent will count in an employee’s favour (Le, et al. 2011). Hogan and Hogan (2009) also provide compelling evidence on areas where excesses in eleven personality traits could lead to derailment in the workplace.

Excesses in these traits are linked to personality disorders (Hogan & Hogan, 2001); the eleven traits are labelled excitable, sceptical, cautious, reserved, leisurely, bold, mischievous, colourful, imaginative, diligent, and dutiful.

The findings caution practitioners not to assume that the relationship between “good” traits and job performance is straightforward, but to consider the nature of the role and the unintended negative effects of excessive levels of what might be assumed to be “preferable” traits.

These findings also bring to light that the techniques used to investigate the relationship between personality and job performance should account for non-linear relationships by, for example, conducting polynomial regressions, modelling moderation effects, or using artificial neural networks (Minbashian, Bright, & Bird, 2010).
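As a minimal sketch of what such a polynomial regression might look like in practice, a quadratic term can be added to an ordinary least squares model to test for an inverted-U relationship. The data and effect sizes below are simulated purely for illustration and are not drawn from the studies cited above:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 500

    # Simulated standardised conscientiousness scores and a performance
    # criterion with a built-in inverted-U shape plus noise (illustrative only).
    consc = rng.normal(size=n)
    performance = 0.4 * consc - 0.15 * consc**2 + rng.normal(scale=0.8, size=n)

    # Fit a linear model and a quadratic (polynomial) model.
    linear = sm.OLS(performance, sm.add_constant(consc)).fit()
    quadratic = sm.OLS(performance,
                       sm.add_constant(np.column_stack([consc, consc**2]))).fit()

    # A significant negative quadratic coefficient signals a
    # too-much-of-a-good-thing pattern; the fitted turning point
    # (the "inflexion point" of Le, et al. 2011) lies at -b1 / (2 * b2).
    b0, b1, b2 = quadratic.params
    print(f"Linear R2: {linear.rsquared:.3f}, quadratic R2: {quadratic.rsquared:.3f}")
    print(f"Quadratic term p-value: {quadratic.pvalues[2]:.4f}")
    print(f"Estimated turning point: {-b1 / (2 * b2):.2f} SDs")

Consistent with Le, et al.'s (2011) argument, this turning point would be expected to shift with task complexity rather than sit at a fixed level of the trait.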

4.2 Interaction effects between personality traits

Building on the complexity of the relationship between personality and job performance, it is important to note that traits do not predict performance in isolation (Ones, et al. 2007). For example, where cooperation with others is required, the interaction effect of conscientiousness and agreeableness provides a clearer picture as to why some individuals, who have the same levels of conscientiousness, perform better than others in interpersonal settings such as teams (Witt, Burke, Barrick, & Mount, 2002).

Practitioners’ selection decisions should, therefore, not be based on a single trait from a personality questionnaire, but should consider the evidence that supports the interaction of several traits that might be relevant to the requirements of a role.
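Such an interaction is typically tested with a moderated regression that includes a product term. The sketch below is illustrative only, with simulated data and invented effect sizes; it is not a re-analysis of Witt, et al. (2002):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 500
    df = pd.DataFrame({
        "consc": rng.normal(size=n),   # standardised conscientiousness
        "agree": rng.normal(size=n),   # standardised agreeableness
    })

    # Performance depends on conscientiousness more strongly when
    # agreeableness is high (an invented interaction, for illustration).
    df["performance"] = (0.3 * df["consc"] + 0.1 * df["agree"]
                         + 0.2 * df["consc"] * df["agree"]
                         + rng.normal(scale=0.8, size=n))

    # 'consc * agree' expands to both main effects plus their product term;
    # the consc:agree coefficient estimates the interaction effect.
    model = smf.ols("performance ~ consc * agree", data=df).fit()
    print(model.params)
    print(model.pvalues["consc:agree"])

A significant product term implies that the validity of conscientiousness should be interpreted conditionally on agreeableness, rather than in isolation.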

4.3 Broader versus narrower traits

An emerging perspective, namely the proposition of meta-traits, suggests that the Big Five traits of personality could be reduced to two meta-traits, which can be arranged in a circumplex model (Strus & Cieciuch, 2017, 2019; Strus, Cieciuch, & Rowiński, 2014).

According to Strus and Cieciuch (2019), the circumplex model provides a more comprehensive integration of personality traits, making its applications more specific and dynamic. The two meta-traits that are differentiated are alpha (stability) and beta (plasticity) (Strus & Cieciuch, 2017, 2019; Strus, et al. 2014).

Alpha reflects social self-regulation, or the stability of employees’ emotional, motivational, and social functioning (Strus & Cieciuch, 2017, 2019; Strus, et al. 2014). Beta, in contrast, encapsulates the covariance between openness to experience and extroversion and reflects an employee’s plasticity (Strus & Cieciuch, 2017, 2019; Strus, et al. 2014).

Plasticity, in this context, refers to employees’ proclivity to explore and voluntarily engage in new experiences (Strus & Cieciuch, 2017, 2019; Strus, et al. 2014). Integrity, which, like the meta-trait of stability, is a compound trait consisting of conscientiousness, neuroticism, and agreeableness (Ones, Viswesvaran, & Dilchert, 2005), is reported to be a strong predictor of job performance (Ones, et al. 2007; Schmidt & Hunter, 1998).

On the other end of the continuum, there is a call for narrowing the use of traits in predicting performance (Anglim & O’Connor, 2019).

Narrow traits refer to the facets of which a factor such as conscientiousness is composed, for example, order, self-discipline, dutifulness, effort, and prudence (Anglim & O’Connor, 2019; Taylor & de Bruin, 2013). There is evidence that narrow traits offer enhanced predictive validity, especially when narrow aspects of job performance are predicted (Anglim & Grant, 2014, 2016; Dudley, Orvis, Lebiecki, & Cortina, 2006; Pletzer, Oostrom, Bentvelzen, & de Vries, 2020).

Given the opposing need for both broadening and narrowing traits at the same time, the wisdom of Hough, Oswald, and Ock (2015) might provide a meaningful and practical way to discern what is required in the selection process.

In this respect, it is important that practitioners determine (1) the breadth of the measure of job performance, (2) how much time is available for assessments, and (3) whether the organisation has become more reliant on traditional representations of personality. Subsequently, the practitioner can strike a balance as to whether broader or narrower traits of personality should be measured (Hough, et al. 2015). The ideal, however, remains to measure as many aspects of personality as possible, in conjunction with clearly defined aspects of performance, at as detailed a level as possible (He, Donnellan, & Mendoza, 2019). In doing so, the practitioner can move between different levels of abstraction when making selection decisions.
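One practical way to weigh broad against narrow traits is to test the incremental validity of facets over their parent factor, in the spirit of the facet-level work cited above. The sketch below again uses simulated data with invented facet names and loadings; comparing adjusted R² guards against the facet model's advantage of simply having more predictors:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame(
        rng.normal(size=(n, 3)),
        columns=["order", "self_discipline", "dutifulness"],  # illustrative facets
    )

    # Broad conscientiousness as the unit-weighted mean of its facets.
    df["consc"] = df[["order", "self_discipline", "dutifulness"]].mean(axis=1)

    # The criterion loads mainly on one facet (invented for illustration).
    df["performance"] = (0.4 * df["self_discipline"] + 0.1 * df["order"]
                         + rng.normal(scale=0.8, size=n))

    broad = smf.ols("performance ~ consc", data=df).fit()
    narrow = smf.ols("performance ~ order + self_discipline + dutifulness",
                     data=df).fit()

    # Incremental validity of the facets over the broad factor.
    print(f"Broad adj. R2: {broad.rsquared_adj:.3f}")
    print(f"Facet adj. R2: {narrow.rsquared_adj:.3f}")
    print(f"Delta adj. R2: {narrow.rsquared_adj - broad.rsquared_adj:.3f}")

A practically meaningful gain in adjusted R² for the facet model would support measuring at the narrower level for that particular criterion.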

4.4 Contextual factors

In the past, personality and its relationship with job performance were viewed as constant across different situations (Huang & Ryan, 2011). However, as with the relationship between cognitive ability and job performance, the characteristics of the situation might influence the relationship between personality and job performance (Hough & Oswald, 2008).

In the case of the relationship between cognitive ability and job performance, the complexity of tasks (situational variable) has a practical meaningful effect on the strength of the relationship (Schmidt & Hunter, 1998). When tasks are more complex, the relationship between cognitive ability and job performance is stronger (Schmidt & Hunter, 1998).

In personality, for example, the relationship between conscientiousness and job performance might be higher for more autonomous jobs (Barrick & Mount, 1993; Hough & Oswald, 2008). Rather than viewing personality as static, it might be meaningful for practitioners to consider the ways in which situations can activate personality traits to the benefit or detriment of job performance in an organisation (Hough & Oswald, 2008; Tett & Burnett, 2003; Tett & Christiansen, 2007).


It has also been found that when items in a questionnaire are rephrased to be more context-specific, the predictive validity of personality measures increases (Holtrop, Born, De Vries, & De Vries, 2014; Holtrop, Born, & De Vries, 2014). For example, instead of phrasing an item as “I follow through with my plans”, an item can be contextualised by phrasing it as “I follow through with my plans at work”.

4.5 The importance of others’ ratings

Hogan and Sherman (2020) acknowledge that self-reports contain identity claims; however, they also indicate that well-constructed self-report items can be highly correlated with important reputation-based outcomes in the workplace.

According to Hogan and Sherman (2020), identity refers to a person’s imagined self, which is loosely based on reality. By contrast, reputation refers to the “you” that other people know (Hogan & Sherman, 2020), which is purported to be a more important factor for productive social life in organised groups.

Oh, Wang, and Mount (2011) and Connelly and Ones (2010) indicate that, if it can be established that another person knows an employee well, then others’ ratings add incremental validity to the prediction of job performance. However, it might be difficult to obtain others’ ratings in selection processes where candidates are recruited from outside the organisation.

Creative ways around this challenge have to be found, such as asking family members or very close friends to complete personality assessments based on their observations of the person in question (Connelly & Ones, 2010).

Conclusion

In this paper, we revisited the debate between Morgeson, et al. (2007), Ones, et al. (2007), and Tett and Christiansen (2007) in order to determine the value of off-the-shelf measures of personality in selection processes.


It was evident, from meta-analytical research (Ones, et al. 2007; Van Aarde, et al. 2017), that personality plays a meaningful role in predicting performance and can, therefore, be a valuable part of selection processes.

However, it is acknowledged that research on personality in the workplace is not stagnant. Subsequently, (relatively) new developments in the field of personality psychology were considered, namely the too-much-of-a-good-thing effect, the interaction of traits, the focus on broader versus narrower personality traits, the influence of situational variables, and the importance of others’ ratings. In summary, we believe that personality measures add valuable and essential information in selection.

However, given the dynamic nature of personality research, we should continue to think in an ever more nuanced way about the measurement of personality and its ability to predict job performance.

References

Anglim, J., & Grant, S. (2016). Predicting psychological and subjective well-being from personality: Incremental prediction from 30 facets over the Big 5. Journal of Happiness Studies, 17(1), 59–80. https://doi.org/10.1007/s10902-014-9583-7

Anglim, J., & Grant, S. L. (2014). Incremental criterion prediction of personality facets over factors: Obtaining unbiased estimates and confidence intervals. Journal of Research in Personality, 53, 148–157. https://doi.org/10.1016/j.jrp.2014.10.005

Anglim, J., & O’Connor, P. (2019). Measurement and research using the Big Five, HEXACO, and narrow traits: A primer for researchers and practitioners. Australian Journal of Psychology, 71(1), 16–25. https://doi.org/10.1111/ajpy.12202

Barrick, M. R., & Mount, M. K. (1991). The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44(1), 1–26. https://doi.org/10.1111/j.1744-6570.1991.tb00688.x
