Participants: Previous studies investigating the influence of social category cues such as race or sex on emotion categorisation, and vice versa, have typically recruited around 28–32 participants (e.g. Aguado et al., 2009; Lipp, Craig, & Dat, 2015); as such, 32 undergraduate volunteers at The University of Queensland (5 males; age: M = 19.31 years, SD = 4.86) participated in Experiment 1. Participants were compensated with partial course credit.

Stimuli: All stimuli were drawn from the FACES database (Ebner et al., 2010). The happy and angry expressions displayed by eight older and eight young adult male faces were selected (older adult posers 004, 015, 033, 042, 053, 059, 065, 076; young adult posers 008, 013, 016, 031, 037, 049, 057, 072). The images were edited to remove the neck and clothing so that only the head remained, then converted to greyscale and placed on a grey background 340 × 390 pixels in size.

Procedure: The experiment took place in a group computer laboratory with no more than six participants per testing session. Each participant was seated in front of a 17′′ CRT monitor (screen resolution: 1024 × 768 pixels; refresh rate: 85 Hz). Participants completed two categorisation tasks presented using DMDX (Forster & Forster, 2003). They were instructed that faces would appear on the screen one at a time. In the age categorisation task, participants were instructed to categorise each face as “old” or “young”. No further instructions indicated how old or young the faces in each age group would be; however, high accuracy in the age categorisation task suggests that participants distinguished the faces in the manner intended. In the emotion categorisation task, participants were instructed to categorise each face as “happy” or “angry”. They were asked to respond as quickly and accurately as possible. Instructions were presented in writing on the computer screen and also given verbally.
Responses were made using the left and right shift keys on a standard keyboard; response mapping and task order were counterbalanced across participants. On each trial, a white fixation cross was presented on a black background in the centre of the screen for 1000 ms. It was replaced by a single face, which remained on screen until a response was made or for a maximum of 3000 ms. Response times were measured from stimulus onset until a response was made. In both the emotion and the age categorisation task, participants were presented with all 16 individuals (8 older adult, 8 young adult) expressing happiness and anger. Each poser was presented expressing happiness and anger four times, resulting in 128 trials per task.

Data reduction and analysis: Responses faster than 100 ms, or more than three standard deviations above or below each participant’s mean response time, were identified as outliers and excluded from analysis along with incorrect responses. Mean response times and error rates were then submitted to separate 2 (Age: young, old) × 2 (Emotion: happy, angry) repeated measures ANOVAs for the emotion and the age categorisation tasks. As previous research in our lab (Craig & Lipp, 2017) has indicated that performance on tasks of this nature can be influenced by recently completed tasks, we analysed task sequence effects in each experiment. The results of these analyses are reported only where task sequence affected the pattern of results.
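The trial structure described above (16 posers × 2 expressions × 4 repetitions = 128 trials per task) can be sketched as follows. The poser labels and the randomisation of presentation order are illustrative assumptions, not details taken from the original stimulus files:

```python
import itertools
import random

# Hypothetical labels standing in for the 8 older and 8 young adult
# FACES posers used in the experiment (actual poser IDs differ)
posers = [f"old_{i}" for i in range(8)] + [f"young_{i}" for i in range(8)]
emotions = ["happy", "angry"]
repetitions = 4

# Every poser appears with each expression four times: 16 x 2 x 4 = 128 trials
trials = list(itertools.product(posers, emotions)) * repetitions
random.shuffle(trials)  # presentation order assumed to be randomised

print(len(trials))  # 128
```

This makes the trial count explicit: 32 unique poser–expression combinations repeated four times yields the 128 trials reported for each task.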
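The exclusion rule in the data reduction step can be sketched in code. This is a minimal sketch on hypothetical per-participant data; the original does not specify whether the participant mean and standard deviation are computed before or after removing fast and incorrect responses, so computing them on the already-filtered responses is an assumption here:

```python
def exclude_outliers(rts, correct):
    """Apply the exclusion rule described above: drop incorrect responses,
    responses faster than 100 ms, and responses more than three standard
    deviations from the participant's mean response time.

    `rts` (response times in ms) and `correct` (booleans) are parallel
    lists -- a hypothetical data format, not the original file layout.
    """
    # Drop incorrect responses and anticipations (< 100 ms) first
    valid = [rt for rt, ok in zip(rts, correct) if ok and rt >= 100]
    # Per-participant mean and SD (assumed computed on filtered responses)
    mean = sum(valid) / len(valid)
    sd = (sum((rt - mean) ** 2 for rt in valid) / len(valid)) ** 0.5
    lower, upper = mean - 3 * sd, mean + 3 * sd
    # Keep only responses within three SDs of the participant's mean
    return [rt for rt in valid if lower <= rt <= upper]
```

The filtered response times would then be averaged per condition before entering the 2 × 2 repeated measures ANOVA.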