1 Introduction

To summarise, this study examines cortical correlates of the perception of lists of speechread words under lexical target-detection conditions. We aimed to identify regions that may be activated during observation of silently spoken lexical items that were not drawn from a closed set, with a speaker at rest serving as the contrast (baseline) condition. The questions posed were: (1) To what extent do prelingually deaf people who are proficient signers and speechreaders show activation in superior temporal regions, including auditory cortical processing regions? (2) Are the patterns of activation different in deaf and hearing people? (3) In which regions is speechreading ability positively correlated with activation?

2 Method

2.1 Participants

Speechreading skill was assessed using the speechreading test reported by Mohammed et al. (2006); participant characteristics and test scores are summarised in Table 1.

All participants gave written informed consent to participate in the study according to the Declaration of Helsinki (BMJ 1991; 302: 1194), and the study was approved by the Institute of Psychiatry/South London and Maudsley NHS Trust Research Ethics Committee.

2.2 Stimuli

Stimuli were full-colour motion video of silently mouthed English words, modelled by a deaf native signer of BSL who spoke English fluently (i.e., a BSL-English bilingual). The model was viewed full-face, with the torso also visible.
The words to be speechread were piloted on adult hearing volunteers who were not scanned; the final stimulus set comprised only those words that the hearing pilot group could speechread. Stimuli consisted of content words (nouns) and descriptive terms (adjectives and adverbs).

2.3 fMRI experimental design and task

The speechreading task was one of four conditions presented to participants; the other three conditions comprised signed language (BSL) material (not reported here). The speech stimuli were presented in blocks, alternating with blocks of the other three experimental conditions (30-s blocks for each condition) and with a 15-s baseline condition. The total run duration for all four conditions and baseline was 15 min.

Both deaf and hearing participants were given the same target-detection task and instructions. During the speechreading condition, participants were instructed to watch the speech patterns produced by the model and to try to understand them, and to make a push-button response whenever the model was seen to say ‘yes’. This relatively passive task was chosen in preference to a ‘deeper’ processing task (such as semantic classification) for two reasons. First, it allowed relatively automatic processing of non-target items to occur (as confirmed in post-scan tests). Second, it ensured similar task difficulty across stimulus conditions: as hearing non-signers would not be able to perform a semantic task on the sign stimuli, a sparse target-detection task enabled all participants to perform the same task during all experimental conditions.

Over the course of the experiment, participants viewed 96 stimulus items, 24 in each of the four experimental conditions. Items were not repeated within the same block and were pseudorandomised so that repeats were not clustered at the end of the experiment. Each participant saw five blocks of the speechreading condition.
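The block structure just described can be sketched as follows. The durations (30-s experimental blocks alternating with a 15-s baseline), the 96-item/24-per-condition counts, the 15-items-per-block presentation rate, and the no-repeats-within-a-block constraint come from the text; the condition names and function names are illustrative assumptions.

```python
import random

EXPERIMENTAL_CONDITIONS = ["speechreading", "bsl_a", "bsl_b", "bsl_c"]  # names illustrative
BLOCK_S, BASELINE_S = 30, 15   # block durations from the text
ITEMS_PER_BLOCK = 15           # presentation rate: 15 items per block
ITEMS_PER_CONDITION = 24       # 96 items across 4 conditions

def sample_block(item_pool, rng):
    """Draw one block's items without replacement: no repeats within a block."""
    return rng.sample(item_pool, ITEMS_PER_BLOCK)

def build_run(n_cycles, rng=None):
    """Alternate the four 30-s experimental blocks with a 15-s baseline block."""
    rng = rng or random.Random(0)
    schedule = []
    for _ in range(n_cycles):
        for cond in EXPERIMENTAL_CONDITIONS:
            pool = [f"{cond}_{i}" for i in range(ITEMS_PER_CONDITION)]
            schedule.append((cond, BLOCK_S, sample_block(pool, rng)))
        schedule.append(("baseline", BASELINE_S, []))
    return schedule
```

Sampling each block without replacement from the condition's item pool guarantees the within-block uniqueness the text requires, while still allowing items to recur across blocks.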
The baseline condition comprised video of the model at rest, with face and torso shown as in the experimental conditions. During the baseline condition, participants were directed to press a button when a grey fixation cross, digitally superimposed on the face region of the resting model, turned red. To maintain vigilance, targets in both the experimental and baseline conditions occurred randomly at a rate of one per block.

Prior to the scan, participants practised the tasks outside the scanner and were shown examples of the ‘yes’ targets, using video of a model and words that were similar, but not identical, to those used in the experiment. Following the experiment, a sample of the hearing participants (8 of 13) and all of the deaf participants were asked to identify the items they had seen.

Stimuli in the experimental conditions appeared at a rate of 15 items per block; the rate of articulation across all experimental conditions, including the speechreading blocks, was approximately one item every 2 s. All stimuli were projected onto a screen located at the base of the scanner table via a Sanyo XU40 LCD projector and viewed in a mirror angled above the participant's head.

2.4 Imaging parameters

Functional images were T2*-weighted; coordinates are reported in the standard space of Talairach and Tournoux (1988).

2.5 Data analysis

The experimental design was fitted to the fMRI time series at each voxel, and significance was assessed nonparametrically by randomisation (Edgington, 1995), using the wavelet-based resampling method of Bullmore et al. (2001; see also Donoho & Johnstone, 1994). Individual T2*-weighted maps were transformed into the standard space of Talairach and Tournoux (1988) (Brammer et al., 1997; Bullmore et al., 1996).

2.6 Group analysis

Group activation maps were computed using permutation-based methods (Bullmore et al., 1999).

2.7 ANOVA

Differences between groups were examined by fitting the linear model Y = a + bX + e at each voxel, where Y is the observed response, X codes group membership, a is the intercept, b the group effect, and e the error term. The null hypothesis b = 0 was tested by comparing the observed b against its permutation distribution (Bullmore et al., 1999).

2.8 ANCOVA

To test whether group differences remained after adjusting for speechreading skill (Table 1), the model R = a0 + aH H + aX X + e was fitted at each voxel, where R is the response, H codes group membership, X is the speechreading covariate, and e the error term. The group effect aH was tested under the null hypothesis aH = 0.

2.9 Correlational analysis

Voxelwise correlations (r) between speechreading score and activation were computed and expressed as z statistics.

3 Results

3.1 Behavioural data

Group comparisons of behavioural performance were carried out with t tests.

3.2 fMRI data

3.2.1 Speechreading vs. baseline

Regions activated by speechreading relative to baseline are listed in Table 2 and illustrated in Fig. 1.

3.2.2 Deaf vs. hearing

Group differences (Talairach coordinates x, y, z) were localised with reference to the probabilistic maps of Heschl's gyrus (Penhune, Zatorre, MacDonald, & Evans, 1996) and of the planum temporale (Westbury, Zatorre, & Evans, 1999); see Fig. 2.

3.2.3 Cortical activation for speechreading: correlations with speechreading skill

Correlations between speechreading skill (Table 1) and activation were expressed as z statistics.

3.2.4 Deaf group

Regions in which activation correlated positively with speechreading skill in the deaf group are listed in Table 3, with coordinates in the space of Talairach and Tournoux (1988); auditory regions were identified with reference to Penhune et al. (1996).

3.2.5 Hearing group

Correlations in the hearing group were likewise expressed as z statistics.

4 Discussion

4.1 Deaf vs. hearing

4.2 Correlations of activation with individual differences in speechreading skill
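The correlational analyses above report correlation coefficients (r) re-expressed as z scores. Assuming the standard Fisher r-to-z transform is what is meant (an assumption; the text does not spell it out), a minimal sketch of the transform, and of the derived z statistic for comparing two independent correlations (e.g. deaf vs. hearing groups), is:

```python
import math

def fisher_z(r):
    """Fisher's r-to-z transform: normalises the sampling distribution of r."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """z statistic for the difference between two independent correlations,
    using the standard error sqrt(1/(n1-3) + 1/(n2-3)) of the z difference."""
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se
```

The transform maps r = 0 to z = 0 and stretches the scale near ±1, which is why correlation differences are conventionally tested on the z scale rather than on r directly.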