Dichotic listening

From Wikipedia, the free encyclopedia
Synonyms: Dichotic listening test
Purpose: used to investigate auditory laterality and selective attention

Dichotic listening is a psychological test commonly used to investigate selective attention and the lateralization of brain function within the auditory system. It is used within the fields of cognitive psychology and neuroscience.

In a standard dichotic listening test, a participant is presented with two different auditory stimuli simultaneously (usually speech), directed into different ears over headphones.[1] In one type of test, participants are asked to pay attention to one or both of the stimuli; later, they are asked about the content of either the stimulus they were instructed to attend to or the stimulus they were instructed to ignore.[1][2]

History

Donald Broadbent is credited with being the first scientist to systematically use dichotic listening tests in his work.[3][4] In the 1950s, Broadbent employed dichotic listening tests in his studies of attention, asking participants to focus attention on either a left- or right-ear sequence of digits.[5][6] He suggested that, due to limited capacity, the human information-processing system must select which channel of stimuli to attend to, leading him to propose his filter model of attention.[6]

In the early 1960s, Doreen Kimura used dichotic listening tests to draw conclusions about lateral asymmetry of auditory processing in the brain.[7][8] She demonstrated, for example, that healthy participants have a right-ear superiority for the reception of verbal stimuli and a left-ear superiority for the perception of melodies.[9] From that study, and from other studies using neurological patients with brain lesions, she concluded that the left hemisphere predominates in speech perception and the right hemisphere in melodic perception.[10][11]

In the late 1960s and early 1970s, Donald Shankweiler[12] and Michael Studdert-Kennedy[13] of Haskins Laboratories used a dichotic listening technique (presenting different nonsense syllables to each ear) to demonstrate the dissociation of phonetic (speech) and auditory (nonspeech) perception, finding that phonetic structure, even when devoid of meaning, is an integral part of language and is typically processed in the left cerebral hemisphere.[14][15][16] A dichotic listening performance advantage for one ear is interpreted as indicating a processing advantage in the contralateral hemisphere. In another example, Sidtis (1981)[17] found that healthy adults have a left-ear advantage on a dichotic pitch recognition task, and interpreted this result as indicating right-hemisphere dominance for pitch discrimination.
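The ear advantage described above is typically quantified with a laterality index. The article does not give a formula, but a widely used convention in the dichotic listening literature is (R − L) / (R + L) × 100, where R and L are the numbers of correctly reported right- and left-ear stimuli. A minimal sketch under that assumption:

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """Percent laterality index for a dichotic listening session.

    Positive values indicate a right-ear advantage (interpreted as
    left-hemisphere processing); negative values indicate a left-ear
    advantage. The (R - L) / (R + L) * 100 formula is a common
    convention assumed here, not one stated in this article.
    """
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return 100.0 * (right_correct - left_correct) / total

# A participant correctly reporting 18 right-ear and 12 left-ear
# syllables shows a positive index, i.e. a right-ear advantage:
li = laterality_index(18, 12)  # -> 20.0
```

The sign convention mirrors the interpretive rule in the text: an advantage for one ear implies a processing advantage in the contralateral hemisphere.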

An alternative explanation of the right-ear advantage in speech perception is that, since most people are right-handed, more of them hold a telephone to their right ear.[18][19] The two explanations are not necessarily incompatible, since telephoning behavior could itself be partly a consequence of hemispheric asymmetry. Some of the converse findings for nonspeech stimuli (e.g. environmental sounds[20][21]) can also be interpreted within this framework.

During the early 1970s, Tim Rand demonstrated dichotic perception at Haskins Laboratories.[22][23] In his study, the first formant (F1) was presented to one ear while the second and third formants (F2 and F3) were presented to the opposite ear, with F2 and F3 varying between low and high intensity. He found that, in comparison with the binaural condition, "peripheral masking is avoided when speech is heard dichotically."[23] This demonstration was originally known as "the Rand effect" but was later renamed "dichotic release from masking" and eventually "dichotic perception" or "dichotic listening." Around the same time, Jim Cutting (1976),[24] an investigator at Haskins Laboratories, studied how listeners could correctly identify syllables when different components of the syllable were presented to different ears. The formants of vowel sounds and the relations among them are crucial for differentiating vowels; even though the listeners heard two separate signals, with neither ear receiving a "complete" vowel sound, they could still identify the syllables.

Dichotic listening test designs

Dichotic fused words test (DFWT)

The "dichotic fused words test" (DFWT) is a modified version of the basic dichotic listening test. It was originally explored by Johnson et al. (1977),[25] but in the early 1980s Wexler and Hawles (1983)[26] modified the original test to obtain more accurate data on hemispheric specialization of language function. In the DFWT, each participant listens to pairs of monosyllabic rhyming consonant-vowel-consonant (CVC) words that differ only in the initial consonant. The key difference in this test is that "the stimuli are constructed and aligned in such a way that partial interaural fusion occurs: subjects generally experience and report only one stimulus per trial."[27] According to Zatorre (1989), major advantages of this method include "minimizing attentional factors, since the percept is unitary and localized to the midline" and that "stimulus dominance effects may be explicitly calculated, and their influence on ear asymmetries assessed and eliminated."[27] Wexler and Hawles's study obtained high test-retest reliability (r = 0.85),[26] indicating that the measure yields consistent results across repeated administrations.

Testing with emotional factors

An emotional version of the dichotic listening task has also been developed. In this version, individuals hear the same word in each ear, but spoken in either a surprised, happy, sad, angry, or neutral tone, and press a button to indicate which tone they heard. Dichotic listening tests usually show a right-ear advantage for speech sounds; a right-ear/left-hemisphere advantage is expected because Broca's area and Wernicke's area are both located in the left hemisphere. In contrast, the left ear (and therefore the right hemisphere) is often better at processing nonlinguistic material.[28] Data from the emotional dichotic listening task are consistent with these findings, because participants tend to give more correct responses for the left ear than for the right.[29] Notably, the emotional dichotic listening task appears harder for participants than the phonemic version, with individuals submitting more incorrect responses.

Manipulation of voice onset time (VOT)

The manipulation of voice onset time (VOT) during dichotic listening tests has given many insights into brain function.[30] To date, the most common design uses four VOT conditions: short-long pairs (SL), in which a consonant-vowel (CV) syllable with a short VOT is presented to the left ear and a CV syllable with a long VOT to the right ear, as well as long-short (LS), short-short (SS), and long-long (LL) pairs. In 2006, Rimol, Eichele, and Hugdahl[31] first reported that, in healthy adults, SL pairs elicit the largest right-ear advantage (REA) while LS pairs in fact elicit a significant left-ear advantage (LEA). A study of children aged 5–8 has shown a developmental trajectory whereby long VOTs gradually come to dominate over short VOTs when LS pairs are presented under dichotic conditions.[32] Converging evidence from studies of attentional modulation of the VOT effect shows that, around age 9, children lack the adult-like cognitive flexibility required to exert top-down control over stimulus-driven bottom-up processes.[33][34] Arciuli et al. (2010) further demonstrated that this kind of cognitive flexibility predicts proficiency with complex tasks such as reading.[30][35]
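The four VOT pairing conditions can be made concrete with a short sketch. Only the SL/LS/SS/LL pairing scheme comes from the text; the syllable inventory and its short/long grouping below are illustrative assumptions, not the stimuli used in the cited studies:

```python
import itertools

# Illustrative CV syllables grouped by voice onset time (VOT).
# Voiced stops tend to have short VOTs and voiceless stops long ones;
# these particular syllables are assumptions for the sketch.
SHORT_VOT = ["ba", "da", "ga"]
LONG_VOT = ["pa", "ta", "ka"]

def make_trials(left_pool, right_pool):
    """All left-ear/right-ear pairings drawn from the two pools.

    A real experiment would typically exclude identical pairs and
    counterbalance trial order; this sketch just enumerates pairings.
    """
    return [{"left": l, "right": r}
            for l, r in itertools.product(left_pool, right_pool)]

# The four conditions named in the text (left-ear syllable listed first):
conditions = {
    "SL": make_trials(SHORT_VOT, LONG_VOT),   # short VOT left, long right
    "LS": make_trials(LONG_VOT, SHORT_VOT),   # long VOT left, short right
    "SS": make_trials(SHORT_VOT, SHORT_VOT),
    "LL": make_trials(LONG_VOT, LONG_VOT),
}
```

Enumerating the conditions this way makes explicit that SL and LS differ only in which ear receives the short-VOT member of the pair, which is the contrast driving the REA/LEA findings described above.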

Neuroscience

Dichotic listening tests can also be used as a lateralized speech assessment task. Neuropsychologists have used the test to explore the role of particular neuroanatomical structures in speech perception and language asymmetry. For example, Hugdahl et al. (2003) investigated dichotic listening performance and frontal lobe function[36] in non-aphasic patients with left or right frontal lobe lesions, compared with healthy controls. All groups were exposed to 36 dichotic trials with pairs of CV syllables, and each participant was asked to state which syllable he or she heard best. As expected, the right-lesioned patients showed a right-ear advantage like the healthy control group, but the left-lesioned patients were impaired relative to both the right-lesioned patients and the controls. From this study, the researchers concluded that "dichotic listening taps into a neuronal circuitry which also involves the frontal lobes, and that this may be a critical aspect of speech perception."[36] Similarly, Westerhausen and Hugdahl (2008)[37] analyzed the role of the corpus callosum in dichotic listening and speech perception. After reviewing many studies, they concluded that "...dichotic listening should be considered a test of functional inter-hemispheric interaction and connectivity, besides being a test of lateralized temporal lobe language function" and that "the corpus callosum is critically involved in the top-down attentional control of dichotic listening performance, thus having a critical role in auditory laterality."[37]

Language processing

Dichotic listening can also be used to test the hemispheric asymmetry of language processing. In the early 1960s, Doreen Kimura reported that dichotic verbal stimuli (specifically spoken numerals) produced a right-ear advantage (REA).[38] She attributed the right-ear advantage "to the localization of speech and language processing in the so-called dominant left hemisphere of the cerebral cortex."[39]: 115  According to her study, the phenomenon was related to the structure of the auditory nerves and the left-sided dominance for language processing.[40] Notably, the REA does not apply to non-speech sounds. In "Hemispheric Specialization for Speech Perception," Studdert-Kennedy and Shankweiler (1970)[14] examined dichotic listening with CVC syllable pairs, pairing the six stop consonants (b, d, g, p, t, k) with six vowels and analyzing variation in the initial and final consonants. The REA is strongest when the initial and final consonants differ and weakest when only the vowel changes. Asbjornsen and Bryden (1996) state that "many researchers have chosen to use CV syllable pairs, usually consisting of the six stop consonants paired with the vowel /a/. Over the years, a large amount of data has been generated using such material."[41]

Selective attention

In selective attention experiments, participants may be asked to repeat aloud the content of the message they are listening to, a task known as shadowing. As Colin Cherry (1953)[42] found, people do not recall the shadowed message well, suggesting that most of the processing necessary to shadow the attended message occurs in working memory and is not preserved in the long-term store. Performance on the unattended message is worse: participants are generally able to report almost nothing about its content. In fact, a change from English to German in the unattended channel frequently goes unnoticed, although participants can report that the unattended message is speech rather than non-verbal content. However, if the unattended message contains certain information, such as the listener's name, it is more likely to be noticed and remembered.[43] Conway, Cowan, and Bunting (2001) demonstrated this by having subjects shadow words in one ear while ignoring words in the other; at some point, the subject's name was spoken in the ignored ear, and the question was whether the subject would report hearing it. Subjects with a high working memory (WM) span were more capable of blocking out the distracting information.[44] Listeners also usually notice sexual words in the unattended message immediately.[45] This suggests that unattended information also undergoes analysis and that keywords can divert attention to it.

Sex differences

Some data gathered from dichotic listening experiments suggest a possible small sex difference in perceptual and auditory asymmetries and language laterality. According to Voyer (2011),[46] "Dichotic listening tasks produced homogenous effect sizes regardless of task type (verbal, non-verbal), reflecting a significant sex difference in the magnitude of laterality effects, with men obtaining larger laterality effects than women."[46]: 245–246  However, the authors discuss numerous limiting factors, ranging from publication bias to small effect sizes. Furthermore, as discussed in "Attention, reliability, and validity of perceptual asymmetries in the fused dichotic words test,"[47] when presented with exogenous cues in the fused dichotic words task, women reported more "intrusions" (words presented to the uncued ear) than men, which suggests two possibilities: (1) women experience more difficulty attending to the cued word than men, and/or (2) regardless of the cue, women spread their attention evenly, whereas men may focus more intently on exogenous cues.[46]

Effect of schizophrenia

A study using the dichotic listening test with an emphasis on subtypes of schizophrenia (particularly paranoid and undifferentiated) demonstrated that people with paranoid schizophrenia have the largest left hemisphere advantage, whereas people with undifferentiated schizophrenia (in which psychotic symptoms are present but the criteria for the paranoid, disorganized, or catatonic types have not been met) have the smallest.[48] The test helped support the view that relatively preserved left hemisphere processing characterizes paranoid schizophrenia, whereas reduced left hemisphere activity characterizes undifferentiated schizophrenia. In 1994, M. F. Green and colleagues used a dichotic listening study to examine "the functional integration of the left hemisphere in hallucinating and nonhallucinating psychotic patients." The study showed that auditory hallucinations are connected to a malfunction in the left hemisphere of the brain.[49]

Emotions

Dichotic listening has also been used to study the emotion-related parts of the brain. Phil Bryden's dichotic listening research focused on emotionally loaded stimuli (Hugdahl, 2015).[50] Later work asked how lateralization and specific cortical regions are implicated when attention must be directed within a dichotic listening task. Jäncke et al. (2001) used functional magnetic resonance imaging (fMRI) to determine which brain regions were activated when attention was directed to phonetic versus emotional aspects of the stimuli. The results showed that the type of stimulus attended to (phonetic or emotional) significantly affected which specialized regions were activated, although no overall difference in cortical activation was found.[51]

References

  1. ^ a b Westerhausen, René; Kompus, Kristiina (2018). "How to get a left-ear advantage: A technical review of assessing brain asymmetry with dichotic listening". Scandinavian Journal of Psychology. 59 (1): 66–73. doi:10.1111/sjop.12408. PMID 29356005.
  2. ^ Daniel L. Schacter; Daniel Todd Gilbert; Daniel M. Wegner (2011). Psychology (1. publ., 3. print. ed.). Cambridge: Worth Publishers. p. 180. ISBN 978-1-429-24107-6.
  3. ^ Hugdahl, Kenneth (2015), "Dichotic Listening and Language: Overview", International Encyclopedia of the Social & Behavioral Sciences, Elsevier, pp. 357–367, doi:10.1016/b978-0-08-097086-8.54030-6, ISBN 978-0-08-097087-5
  4. ^ Kimura, Doreen (2011). "From ear to brain". Brain and Cognition. 76 (2): 214–217. doi:10.1016/j.bandc.2010.11.009. PMID 21236541. S2CID 43450851.
  5. ^ Broadbent, D. E. (1956). "Successive Responses to Simultaneous Stimuli". Quarterly Journal of Experimental Psychology. 8 (4): 145–152. doi:10.1080/17470215608416814. ISSN 0033-555X. S2CID 144045935.
  6. ^ a b Broadbent, Donald E. (Donald Eric) (1987). Perception and communication. Oxford [Oxfordshire]: Oxford University Press. ISBN 0-19-852171-5. OCLC 14067709.
  7. ^ "Canadian Society for Brain, Behaviour & Cognitive Science: Dr. Doreen Kimura". www.csbbcs.org. Retrieved 2019-12-05.
  8. ^ Kimura, Doreen (1961). "Cerebral dominance and the perception of verbal stimuli". Canadian Journal of Psychology. 15 (3): 166–171. doi:10.1037/h0083219. ISSN 0008-4255.
  9. ^ Kimura, Doreen (1964). "Left-right differences in the perception of melodies". Quarterly Journal of Experimental Psychology. 16 (4): 355–358. doi:10.1080/17470216408416391. ISSN 0033-555X. S2CID 145633913.
  10. ^ Kimura, Doreen (1961). "Some effects of temporal-lobe damage on auditory perception". Canadian Journal of Psychology. 15 (3): 156–165. doi:10.1037/h0083218. ISSN 0008-4255. PMID 13756014.
  11. ^ Kimura, Doreen (1967). "Functional Asymmetry of the Brain in Dichotic Listening". Cortex. 3 (2): 163–178. doi:10.1016/S0010-9452(67)80010-8.
  12. ^ "Donald P. Shankweiler". Archived from the original on 2006-06-26. Retrieved 2013-08-28.
  13. ^ "Michael Studdert-Kennedy". Archived from the original on 2006-03-05. Retrieved 2013-08-28.
  14. ^ a b Studdert-Kennedy, Michael; Shankweiler, Donald (19 August 1970). "Hemispheric specialization for speech perception". Journal of the Acoustical Society of America. 48 (2): 579–594. Bibcode:1970ASAJ...48..579S. doi:10.1121/1.1912174. PMID 5470503.
  15. ^ Studdert-Kennedy M.; Shankweiler D.; Schulman S. (1970). "Opposed effects of a delayed channel on perception of dichotically and monotically presented CV syllables". Journal of the Acoustical Society of America. 48 (2B): 599–602. Bibcode:1970ASAJ...48..599S. doi:10.1121/1.1912179.
  16. ^ Studdert-Kennedy M.; Shankweiler D.; Pisoni D. (1972). "Auditory and phonetic processes in speech perception: Evidence from a dichotic study". Journal of Cognitive Psychology. 2 (3): 455–466. doi:10.1016/0010-0285(72)90017-5. PMC 3523680. PMID 23255833.
  17. ^ Sidtis J. J. (1981). "The complex tone test: Implications for the assessment of auditory laterality effects". Neuropsychologia. 19 (1): 103–112. doi:10.1016/0028-3932(81)90050-6. PMID 7231655. S2CID 42655052.
  18. ^ Williams, Stephen (1982-01-01). "Dichotic lateral asymmetry: The effects of grammatical structure and telephone usage". Neuropsychologia. 20 (4): 457–464. doi:10.1016/0028-3932(82)90044-6. ISSN 0028-3932. PMID 7133383.
  19. ^ Surwillo, Walter W. (1981-12-01). "Ear Asymmetry in Telephone-Listening Behavior". Cortex. 17 (4): 625–632. doi:10.1016/S0010-9452(81)80069-X. ISSN 0010-9452. PMID 7344827.
  20. ^ González, Julio; McLennan, Conor T. (2009). "Hemispheric Differences in the Recognition of Environmental Sounds". Psychological Science. 20 (7): 887–894. doi:10.1111/j.1467-9280.2009.02379.x. hdl:10234/23841. ISSN 0956-7976. PMID 19515117.
  21. ^ Curry, Frederic K. W. (1967-09-01). "A Comparison of Left-Handed and Right-Handed Subjects on Verbal and Non-Verbal Dichotic Listening Tasks". Cortex. 3 (3): 343–352. doi:10.1016/S0010-9452(67)80022-4. ISSN 0010-9452.
  22. ^ "Rand, T. C. (1974)". Haskins Laboratories Publications-R. Archived from the original on 2006-06-26. Retrieved 2013-08-28.
  23. ^ a b Rand, Timothy C. (1974). "Dichotic release from masking for speech". Journal of the Acoustical Society of America. 55 (3): 678–680. Bibcode:1974ASAJ...55..678R. doi:10.1121/1.1914584. PMID 4819869.
  24. ^ Cutting J. E. (1976). "Auditory and linguistic processes in speech perception: inferences from six fusions in dichotic listening". Psychological Review. 83 (2): 114–140. CiteSeerX 10.1.1.587.9878. doi:10.1037/0033-295x.83.2.114. PMID 769016.
  25. ^ Johnson; et al. (1977). "Dichotic ear preference in aphasia". Journal of Speech and Hearing Research. 20 (1): 116–129. doi:10.1044/jshr.2001.116. PMID 846195.
  26. ^ a b Wexler, Bruce; Terry Hawles (1983). "Increasing the power of dichotic methods: the fused rhymed words test". Neuropsychologia. 21 (1): 59–66. doi:10.1016/0028-3932(83)90100-8. PMID 6843817. S2CID 6717817.
  27. ^ a b Zatorre, Robert (1989). "Perceptual asymmetry on the dichotic fused words test and cerebral speech lateralization determined by the caroid sodium amytal test". Neuropsychologia. 27 (10): 1207–1219. doi:10.1016/0028-3932(89)90033-x. PMID 2480551. S2CID 26052363.
  28. ^ Grimshaw; et al. (2003). "The dynamic nature of language lateralization: effects of lexical and prosodic factors". Neuropsychologia. 41 (8): 1008–1019. doi:10.1016/s0028-3932(02)00315-9. PMID 12667536. S2CID 13251643.
  29. ^ Hahn, Constanze (Jul 2011). "Smoking reduces language lateralization: A dichotic listening study with control participants and schizophrenia patients". Brain and Cognition. 76 (2): 300–309. doi:10.1016/j.bandc.2011.03.015. PMID 21524559. S2CID 16181999.
  30. ^ a b Arciuli J (July 2011). "Manipulation of voice onset time during dichotic listening". Brain and Cognition. 76 (2): 233–8. doi:10.1016/j.bandc.2011.01.007. PMID 21320740. S2CID 40737054.
  31. ^ Rimol, L.M.; Eichele, T.; Hugdahl, K. (2006). "The effect of voice-onset-time on dichotic listening with consonant-vowel syllables". Neuropsychologia. 44 (2): 191–196. doi:10.1016/j.neuropsychologia.2005.05.006. PMID 16023155. S2CID 2131160.
  32. ^ Westerhausen, R.; Helland, T.; Ofte, S.; Hugdahl, K. (2010). "A longitudinal study of the effect of voicing on the dichotic listening ear advantage in boys and girls at age 5 to 8". Developmental Neuropsychology. 35 (6): 752–761. doi:10.1080/87565641.2010.508551. PMID 21038164. S2CID 12980025.
  33. ^ Andersson, M.; Llera, J.E.; Rimol, L.M.; Hugdahl, K. (2008). "Using dichotic listening to study bottom-up and top-down processing in children and adults". Child Neuropsychol. 14 (5): 470–479. doi:10.1080/09297040701756925. PMID 18608228. S2CID 20770018.
  34. ^ Arciuli, J.; Rankine, T.; Monaghan, P. (2010). "Auditory discrimination of voice-onset time and its relationship with reading ability". Laterality. 15 (3): 343–360. doi:10.1080/13576500902799671. PMID 19343572. S2CID 23776770.
  35. ^ Arciuli J, Rankine T, Monaghan P (May 2010). "Auditory discrimination of voice-onset time and its relationship with reading ability". Laterality. 15 (3): 343–60. doi:10.1080/13576500902799671. PMID 19343572. S2CID 23776770.
  36. ^ a b Hugdahl, Kenneth (2003). "Dichotic Listening Performance and Frontal Lobe Function". Cognitive Brain Research. 16 (1): 58–65. doi:10.1016/s0926-6410(02)00210-0. PMID 12589889.
  37. ^ a b Westerhausen, Rene; Kenneth Hugdahl (2008). "The corpus callosum in dichotic listening studies of hemispheric asymmetry: A review of clinical and experimental evidence". Neuroscience and Biobehavioral Reviews. 32 (5): 1044–1054. doi:10.1016/j.neubiorev.2008.04.005. PMID 18499255. S2CID 23137612.
  38. ^ Kimura D (1961). "Cerebral dominance and the perception of verbal stimuli". Canadian Journal of Psychology. 15 (3): 166–171. doi:10.1037/h0083219.
  39. ^ Ingram, John C.L. (2007). Neurolinguistics: an introduction to spoken language processing and its disorders (1. publ., 3. print. ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-79640-8.
  40. ^ Kimura D (1967). "Functional asymmetry of the brain in dichotic listening". Cortex. 3 (2): 163–178. doi:10.1016/s0010-9452(67)80010-8.
  41. ^ Asbjornsen, Arve; M.P. Bryden (1996). "Biased attention and the fused dichotic words test". Neuropsychologia. 34 (5): 407–11. doi:10.1016/0028-3932(95)00127-1. PMID 9148197. S2CID 43071799.
  42. ^ Cherry E. C. (1953). "Some experiments on the recognition of speech, with one and two ears". Journal of the Acoustical Society of America. 25 (5): 975–979. Bibcode:1953ASAJ...25..975C. doi:10.1121/1.1907229. hdl:11858/00-001M-0000-002A-F750-3.
  43. ^ Moray N (1959). "Attention in dichotic listening: Affective cues and the influence of instructions". Quarterly Journal of Experimental Psychology. 11: 56–60. doi:10.1080/17470215908416289. S2CID 144324766.
  44. ^ Engle Randall W (2002). "Working Memory Capacity as Executive Attention". Current Directions in Psychological Science. 11: 19–23. doi:10.1111/1467-8721.00160. S2CID 116230.
  45. ^ Nielson L. L.; Sarason I. G. (1981). "Emotion, personality, and selective attention". Journal of Personality and Social Psychology. 41 (5): 945–960. doi:10.1037/0022-3514.41.5.945.
  46. ^ a b c Voyer, Daniel (2011). "Sex differences in dichotic listening". Brain and Cognition. 76 (2): 245–255. doi:10.1016/j.bandc.2011.02.001. PMID 21354684. S2CID 43323875.
  47. ^ Voyer, Daniel; Jennifer Ingram (2005). "Attention, reliability, and validity of perceptual asymmetries in the fused dichotic word test". Laterality: Asymmetries of Body, Brain and Cognition. 10 (6): 545–561. doi:10.1080/13576500442000292. PMID 16298885. S2CID 33137060.
  48. ^ Friedman, Michelle S.; Bruder, Gerard E.; Nestor, Paul G.; Stuart, Barbara K.; Amador, Xavier F.; Gorman, Jack M. (September 2001). "Perceptual Asymmetries in Schizophrenia: Subtype Differences in Left Hemisphere Dominance for Dichotic Fused Words" (PDF). American Journal of Psychiatry. 158 (9): 1437–1440. doi:10.1176/appi.ajp.158.9.1437. PMID 11532728.
  49. ^ Green, MF; Hugdahl, K; Mitchell, S (March 1994). "Dichotic listening during auditory hallucinations in patients with schizophrenia". American Journal of Psychiatry. 151 (3): 357–362. doi:10.1176/ajp.151.3.357. PMID 8109643.
  50. ^ Hugdahl, Kenneth (2016). "Dichotic Listening and attention: the legacy of Phil Bryden". Laterality: Asymmetries of Body, Brain and Cognition. 21 (4–6): 433–454. doi:10.1080/1357650X.2015.1066382. PMID 26299422. S2CID 40077399.
  51. ^ Jäncke, L.; Buchanan, T.W.; Lutz, K.; Shah, N.J. (September 2001). "Focused and Nonfocused Attention in Verbal and Emotional Dichotic Listening: An FMRI Study". Brain and Language. 78 (3): 349–363. doi:10.1006/brln.2000.2476. PMID 11703062. S2CID 42698136.
