People can accurately evaluate various types of facial information (gaze direction, facial expression, identity, and gender) across multiple faces. Given this variety, the ability to summarize facial information may depend on whether that information is changeable (e.g., gaze direction and expression) or invariant (e.g., identity and gender). The current study investigated the relationship between averaging abilities for different types of facial information, using an individual-differences approach and a dual-task paradigm, to understand how the type of facial information affects its ensemble coding. We conducted two online experiments examining the relationship between the averaging abilities for facial expression and gaze direction (Experiment 1) and for facial expression and gender (Experiment 2). Participants estimated the average of one type of facial information in each of the first and second blocks (single task) and both types sequentially in the third and fourth blocks (dual task). Most error autocorrelations were high, indicating good measurement reliability. Participants’ ability to average facial expressions correlated with their ability to average gaze directions, but not with their ability to average gender information. That is, ensemble processing of facial expressions is related to that of gaze direction, but not gender. These results suggest that ensemble representations of changeable facial properties differ from those of invariant ones.
Publication status: Published - February 2023
Bibliographical note — Funding Information:
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2019R1A2B5B01070038). The raw data for the current study are available on the Open Science Framework (https://osf.io/pcnd3/).
© 2022 Elsevier Ltd