# Social Accuracy Model (SAM)

A “brief” overview of SAM based on Biesanz (2021).

Jeremy Biesanz
2022-06-26

The social accuracy model (SAM) estimates individual differences in aspects of interpersonal perception. SAM is a componential model of accuracy that breaks assessments of accuracy into different elements (components) with specific and hopefully useful interpretations. Perceiver and target effects refer to estimates of accuracy components for each perceiver (averaged across targets) and each target (averaged across perceivers) and are analogous to main effects in ANOVA. Dyadic effects are simply the residual effect left over for each dyad after accounting for the perceiver and target average effects. For instance, Percy may have a large perceiver effect in that she is generally accurate in her perceptions of the personality of others and thus high in perceptive accuracy. Taylor may have a large target effect in that they are generally perceived accurately by others.

Conceptually, SAM represents an integration of Cronbach’s (1955) componential approach to assessing accuracy with Kenny’s social relations model (Kenny, 1994; Kenny & La Voie, 1984). To do so, SAM adopts Funder’s (1995) realistic accuracy model (RAM) for modeling accuracy in that the goal is to assess the validity of the perceiver’s impressions. Initial elements of SAM can be seen in earlier work that adapted Cronbach’s components of accuracy (e.g., Biesanz, West, & Graziano, 1998; Biesanz & West, 2000; Biesanz, West, & Millevoi, 2007). Biesanz (2010) provided the first overview of the social accuracy model as a general model that can provide answers to questions, such as those raised in Funder (1995), that cannot be addressed through other frameworks or models. Since then, SAM has been used extensively to examine many different research questions in our lab as well as by others.

### The Basic Social Accuracy Model

Only two measures are needed for the social accuracy model: perceiver ratings/impressions of the target and target validity measures. These measures are needed for a set of different items or traits; larger sets of items will produce more reliable results. In Equation (1), perceiver ratings are $$Y_{pti}$$, which indicate perceiver $$p$$’s rating of target $$t$$ on item $$i$$. The target validity measure for target $$t$$ on item $$i$$ is $$V_{ti}$$. Validity measures should be as reliable as possible (e.g., averaged across multiple sources such as peer and self-reports). These are the only two assessments needed for a social accuracy model analysis.

However, conducting the analysis requires separating the validity measure into two different components: the average person’s validity measure and how different the target is from the average person. First, the average validity measure across targets is determined for each item $$(Vmean_{i} = \bar{V}_i)$$ by calculating the mean for each item across the $$t$$ targets on $$V_{ti}$$. This provides the mean validity profile, which is the profile of the average target. The validity profile of the average target, $$Vmean_{i}$$, is then used to center the target validity measures within items by calculating $$Vc_{ti} = V_{ti} - Vmean_{i}$$. The centered validity measures for each target and the average validity profile are then used to predict the perceivers’ ratings as shown in Equation (1).

All that is required to actually estimate Equation (1) is a research design where multiple perceivers evaluate multiple targets across a large set of items. This could be a round-robin design where each perceiver meets and evaluates every target, a half-block design where a set of perceivers all evaluate a common set of targets (e.g., all of the perceivers watch and evaluate the same 10 targets in videos), or various hybrids of these designs. Formally this is a crossed-random-effects model and can be estimated with standard multilevel model software (e.g., West et al., 2011).
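The centering step can be sketched in a few lines. Below is a minimal numpy illustration with made-up validity scores for three targets on three items; the variable names mirror the notation above:

```python
import numpy as np

# Hypothetical validity measures: rows are targets, columns are items.
V = np.array([
    [4.0, 2.0, 5.0],   # target 1
    [3.0, 3.0, 4.0],   # target 2
    [5.0, 1.0, 3.0],   # target 3
])

# Mean validity profile (Vmean_i): the profile of the average target.
Vmean = V.mean(axis=0)          # item means: [4.0, 2.0, 4.0]

# Center each target's validity scores within item: Vc_ti = V_ti - Vmean_i.
Vc = V - Vmean

# By construction, each item's centered scores sum to zero across targets.
assert np.allclose(Vc.sum(axis=0), 0.0)
```

`Vc` carries each target’s distinctive profile, while `Vmean` carries the normative profile; both enter Equation (1) as predictors.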

$$$Y_{pti} = \gamma_{0_{pt}} + \gamma_{1_{pt}}Vc_{ti} + \gamma_{2_{pt}}Vmean_{i} + e_{pti} \tag{1}$$$

$$$\begin{split} \gamma_{0_{pt}} = \gamma_{00} + u_{0_p} + u_{0_t} + u_{0_{(pt)}}\\ \gamma_{1_{pt}} = \gamma_{10} + u_{1_p} + u_{1_t} + u_{1_{(pt)}}\\ \gamma_{2_{pt}} = \gamma_{20} + u_{2_p} + u_{2_t} + u_{2_{(pt)}} \end{split} \tag{2}$$$

Equation (1) provides the full social accuracy model, which is simply a two-predictor regression equation: the validity measures, centered within item, and the average target validity profile are used to predict perceiver impressions. The unstandardized regression coefficients from this two-predictor regression equation, denoted by gamma (e.g., $$\gamma_{0}$$, $$\gamma_{1}$$, and $$\gamma_{2}$$) as is the common notational practice for multilevel models, can vary across perceivers, targets, and dyads; this variation is captured by the $$u$$s in Equation (2).
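Equations (1) and (2) can be made concrete by simulating data from them. The numpy sketch below uses illustrative (assumed) sample sizes, fixed effects, and variance components, and, to stay brief, includes random effects only for the distinctive slope:

```python
import numpy as np

rng = np.random.default_rng(0)
P, T, I = 20, 10, 30          # perceivers, targets, items (illustrative sizes)

# Assumed fixed effects: intercept, distinctive slope, normative slope.
g00, g10, g20 = 3.0, 0.25, 0.90

# Validity measures: Vmean is the average target's profile; Vc is each
# target's deviation from it, centered within item as in Equation (1).
Vmean = rng.normal(3.5, 1.0, size=I)
Vc = rng.normal(0.0, 1.0, size=(T, I))
Vc -= Vc.mean(axis=0)                     # enforce centering within item

# Random effects on the distinctive slope (Equation 2): perceiver (good
# judge), target (good target), and dyadic components.
u1_p = rng.normal(0, 0.10, size=P)
u1_t = rng.normal(0, 0.15, size=T)
u1_pt = rng.normal(0, 0.05, size=(P, T))

# Perceiver p's rating of target t on item i; intercept and normative
# random effects are omitted here only to keep the sketch short.
Y = np.empty((P, T, I))
for p in range(P):
    for t in range(T):
        slope1 = g10 + u1_p[p] + u1_t[t] + u1_pt[p, t]
        Y[p, t] = g00 + slope1 * Vc[t] + g20 * Vmean + rng.normal(0, 0.5, size=I)
```

In practice such data would be analyzed with crossed-random-effects multilevel software (e.g., `lmer` in R with random slopes for perceivers and targets); the simulation only illustrates the data-generating structure the model assumes.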

- Fixed Effects. The fixed effects represent the average relationship across perceivers and targets. These coefficients represent the (weighted) average two-predictor regression model coefficients across perceivers and targets. Specifically, given the grand-mean centering of both validity measures:
  - $$\hat{\gamma}_{00}$$ is the predicted rating at the mean validity measure for the average personality item. This is often not of substantive import.
  - $$\hat{\gamma}_{10}$$ is the average distinctive accuracy slope across perceivers, targets, and items. This estimates the average relationship between how target $$t$$ differs from the average person on the validity measures across a series of traits and perceiver $$p$$’s impressions of the target on those same traits. Distinctive accuracy assesses the perceiver’s ability to accurately recognize the unique characteristics of the target. In other words, on average in the present sample, how accurate was this group of perceivers in recognizing the unique traits of this group of targets?
  - $$\hat{\gamma}_{20}$$ is the average normative accuracy slope across perceivers and targets. This estimates the average relationship between the average target on the validity measures across a series of traits and the perceiver’s impressions of the target on those same dimensions. In other words, on average in the present sample, how accurate was this group of perceivers in recognizing the average target’s personality? This represents a blend of positivity as well as normative knowledge, and recent work has shown that these reflect different effects and processes (Rogers & Biesanz, 2015; Wessels et al., 2020). SAM can easily be extended to separate normativity and positivity in the modeling process by simply adding the social desirability of each personality trait (item) into the model.
- Random Effects. The random effects represent variability attributable to perceivers, targets, and dyads around the fixed effects. For instance, $$\hat{\tau}_{1_t}$$ represents the standard deviation in distinctive accuracy across targets (the good-target scores $$u_{1_t}$$) after adjusting for sampling variability. These random effects are one of the core elements of SAM, as they represent estimates of the extent to which people differ from each other in their ability to form accurate impressions of others as well as to be accurately seen by others. Potential moderators (i.e., correlates) of these individual differences can be added to Equation (2) to better understand who perceives others more accurately and who is perceived more accurately.
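To build intuition for the perceiver random effects, one can compute naive per-perceiver estimates by fitting Equation (1)’s two-predictor regression separately to each perceiver’s ratings with ordinary least squares. These are not the shrunken (empirical-Bayes) estimates a multilevel model returns, but with simulated data whose good-judge differences are known, the naive estimates track the true perceiver effects. A numpy sketch under assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
P, T, I = 15, 10, 40                     # illustrative sample sizes
g00, g10, g20 = 3.0, 0.30, 0.90          # assumed fixed effects

# Validity measures, centered within item.
Vmean = rng.normal(3.5, 1.0, size=I)
Vc = rng.normal(0.0, 1.0, size=(T, I))
Vc -= Vc.mean(axis=0)

# True perceiver effects on the distinctive slope (the "good judge").
u1_p = rng.normal(0, 0.30, size=P)

# Simulate ratings with only perceiver variation in distinctive accuracy.
Y = (g00
     + (g10 + u1_p[:, None, None]) * Vc[None, :, :]
     + g20 * Vmean[None, None, :]
     + rng.normal(0, 0.3, size=(P, T, I)))

# Naive per-perceiver estimates: pool each perceiver's ratings across all
# targets and items and fit the two-predictor regression by OLS; keep the
# distinctive (Vc) slope.
X = np.column_stack([np.ones(T * I), Vc.ravel(), np.tile(Vmean, T)])
est = np.array([np.linalg.lstsq(X, Y[p].ravel(), rcond=None)[0][1]
                for p in range(P)])

print(np.corrcoef(u1_p, est)[0, 1])      # high: estimates track true effects
```

The spread of `est` around `g10` is the sample analogue of the perceiver random-effect standard deviation; a multilevel model additionally separates this true variability from sampling noise.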

### Why not simple correlations?

To begin with, why are complex models like SAM needed? Can’t correlations suffice to examine questions of accuracy? After all, the simple Pearson correlation has been instrumental to the field of personality psychology. Correlations are foundational to factor analysis and efforts to uncover the important dimensions of basic personality traits such as the Big Five (e.g., John & Srivastava, 1999). Simply put, correlational analyses on a single trait — such as the correlation between self and informant ratings on extraversion — do not provide the inferential leverage to examine individual differences in interpersonal perception. Such analyses provide a good estimate of the average level of self-other agreement for a single trait, but do not allow any insight into whether some perceivers are better than others, whether some individuals are more accurately perceived than others, or whether some dyads or pairs are more accurate than others. Although it is possible to examine moderators of accuracy — for instance, to examine whether judges higher in intelligence are more accurate than judges lower in intelligence — such analyses are indirect and presume both individual differences in the good judge and that intelligence is a good proxy for those individual differences. That is, we would only observe assessments of judges’ intelligence moderating accuracy if (a) there are individual differences across judges in their levels of accuracy and (b) the judge’s level of intelligence is associated with their level of accuracy. Importantly, the absence of moderation does not imply that the good judge does not exist and that there are no meaningful individual differences in accuracy across judges. However, to really address this question properly requires a model that allows one to directly estimate individual differences in accuracy. This is the reason why I developed the social accuracy model.
To what extent are there individual differences in the good judge, the good target, and the good pair or dyad? Once these individual or group differences are estimated it becomes easy to examine predictors of these individual differences and understand those results. If these individual differences in the good judge are essentially nil, then there is no point in examining predictors of the good judge. If there are strong individual differences associated with accuracy but proposed moderators are not associated with those individual differences, that simply means our theories and understanding of accuracy and factors associated with accuracy are incomplete and we need to look elsewhere to predict and understand those individual differences. Finally, if there are individual differences in accuracy across perceivers, targets, and/or dyads, failing to appropriately model those individual differences can lead to inflated Type I error rates and an inability to appropriately generalize results (e.g., see Judd et al., 2012).

### SAM is a profile analysis. Isn’t this different from trait-level analyses?

Distinctive accuracy is computed as profile agreement across a series of items or traits. However, SAM does not just estimate a single dyad’s accuracy but instead considers all of the data present across perceivers, targets, and domains of personality. Consequently, the fixed effect for distinctive accuracy ($$\hat{\gamma}_{10}$$) provides an estimate of the average trait-level accuracy across traits (i.e., the average relationship between the validity measure and perceiver ratings when analyzed separately for each item in the SAM analysis). Kenny & Winquist (2001) and Biesanz (2010) note that variable- and person-centered analyses provide the same overall summary of the available data. Indeed, Allik et al. (2015) provide this mathematical equivalence for correlations, which requires double standardization (both rows and columns). Biesanz (2018) provides an empirical demonstration of how the average level of (unstandardized) distinctive accuracy is exactly the same across different units of analysis including trait/item, perceiver, target, and dyad.¹ In short, a profile analysis such as SAM provides one with an estimate of the average level of accuracy observed across perceivers, targets, and traits.
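The weighting argument behind this equivalence (see the footnote) can be checked numerically: with validity scores centered within target, averaging the per-target simple-regression slopes weighted by each target’s predictor sum of squares (proportional to its variance when item counts are equal) reproduces the pooled slope exactly. A minimal numpy sketch with simulated values:

```python
import numpy as np

rng = np.random.default_rng(2)
T, I = 8, 25                               # targets and items (illustrative)

# Simulated validity scores and one perceiver's ratings.
V = rng.normal(3.5, 1.0, size=(T, I))
Vc = V - V.mean(axis=1, keepdims=True)     # center within target
Y = 3.0 + 0.4 * Vc + rng.normal(0, 0.5, size=(T, I))

# Per-target distinctive slopes: simple regression of Y on centered V.
slopes = (Vc * Y).sum(axis=1) / (Vc ** 2).sum(axis=1)

# Weight each slope by its predictor's sum of squares.
w = (Vc ** 2).sum(axis=1)
weighted_avg = (w * slopes).sum() / w.sum()

# Pooled slope computed across all targets at once.
pooled = (Vc * Y).sum() / (Vc ** 2).sum()

# The two agree to floating-point precision: an exact algebraic identity.
assert abs(weighted_avg - pooled) < 1e-8
```

The identity holds because weighting each slope by its denominator recovers the numerator and denominator sums of the pooled regression; an unweighted average of the per-target slopes would generally differ.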

Allik, J., Borkenau, P., Hřebíčková, M., Kuppens, P., & Realo, A. (2015). How are personality trait and profile agreement related? Frontiers in Psychology, 6, 1–11. https://doi.org/10.3389/fpsyg.2015.00785
Biesanz, J. C. (2010). The social accuracy model of interpersonal perception: Assessing individual differences in perceptive and expressive accuracy. Multivariate Behavioral Research, 45(5), 853–885. https://doi.org/10.1080/00273171.2010.519262
Biesanz, J. C. (2018). Interpersonal perception models. In V. Zeigler-Hill & T. K. Shackelford (Eds.), The SAGE handbook of personality and individual differences: Volume I: The science of personality and individual differences (pp. 519–534). SAGE Publications. https://doi.org/10.4135/9781526451163.n24
Biesanz, J. C. (2021). The social accuracy model. In T. D. Letzring & J. S. Spain (Eds.), The Oxford handbook of accurate personality judgment (pp. 61–82). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190912529.013.5
Biesanz, J. C., & West, S. G. (2000). Personality coherence: Moderating self–other profile agreement and profile consensus. Journal of Personality and Social Psychology, 79(3), 425–437. https://doi.org/10.1037//0022-3514.79.3.425
Biesanz, J. C., West, S. G., & Graziano, W. G. (1998). Moderators of self–other agreement: Reconsidering temporal stability in personality. Journal of Personality and Social Psychology, 75(2), 467–477. https://doi.org/10.1037/0022-3514.75.2.467
Biesanz, J. C., West, S. G., & Millevoi, A. (2007). What do you learn about someone over time? The relationship between length of acquaintance and consensus and self-other agreement in judgments of personality. Journal of Personality and Social Psychology, 92(1), 119–135. https://doi.org/10.1037/0022-3514.92.1.119
Cronbach, L. J. (1955). Processes affecting scores on understanding of others and assumed similarity. Psychological Bulletin, 52(3), 177–193. https://doi.org/10.1037/h0044919
Funder, D. C. (1995). On the accuracy of personality judgment: A realistic approach. Psychological Review, 102(4), 652–670. https://doi.org/10.1037/0033-295X.102.4.652
John, O. P., & Srivastava, S. (1999). The Big Five trait taxonomy: History, measurement, and theoretical perspectives. In L. A. Pervin & O. P. John (Eds.), Handbook of personality: Theory and research (Vol. 2, pp. 102–138). Guilford.
Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1), 54–69. https://doi.org/10.1037/a0028347
Kenny, D. A. (1994). Interpersonal perception: A social relations analysis. Guilford Press.
Kenny, D. A., & La Voie, L. (1984). The social relations model. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 18, pp. 141–182). Academic Press. https://doi.org/10.1016/S0065-2601(08)60144-6
Kenny, D. A., & Winquist, L. (2001). The measurement of interpersonal sensitivity: Consideration of design, components, and unit of analysis. In J. A. Hall & F. J. Bernieri (Eds.), Interpersonal sensitivity: Theory and measurement (pp. 265–302). Lawrence Erlbaum Associates.
Rogers, K. H., & Biesanz, J. C. (2015). Knowing versus liking: Separating normative knowledge from social desirability in first impressions of personality. Journal of Personality and Social Psychology, 109(6), 1105–1116. https://doi.org/10.1037/a0039587
Wessels, N. M., Zimmermann, J., Biesanz, J. C., & Leising, D. (2020). Differential associations of knowing and liking with accuracy and positivity bias in person perception. Journal of Personality and Social Psychology, 118(1), 149–171. https://doi.org/10.1037/pspp0000218
West, S. G., Ryu, E., Kwok, O.-M., & Cham, H. (2011). Multilevel modeling: Current and future applications in personality research. Journal of Personality, 79(1), 2–50. https://doi.org/10.1111/j.1467-6494.2010.00681.x

1. The data, materials, and code in R to reproduce that exact equivalence are archived and available at https://osf.io/5u6hw/. All that is required to see this exact relationship is to (1) center the validity scores within target and (2) weight each regression coefficient (slope) by the variance of the predictor (the variance of the validity measures for each target) so that each coefficient is weighted appropriately when computing the average.