Status | Published
Title | Visual–Auditory Integration and High-Variability Speech Can Facilitate Mandarin Chinese Tone Identification
Authors | Wei, Yanjun1; Jia, Lin2; Gao, Fei3,4; Wang, Jianqin1
Date Issued | 2022-11-01
Source Publication | JOURNAL OF SPEECH LANGUAGE AND HEARING RESEARCH |
ISSN | 1092-4388 |
Volume | 65
Issue | 11
Pages | 4096-4111
Abstract | Purpose: Previous studies have demonstrated that tone identification can be facilitated when auditory tones are integrated with visual information that depicts the pitch contours of the auditory tones (hereafter, visual effect). This study investigates this visual effect in combined visual–auditory integration with high- and low-variability speech and examines whether one's prior tonal-language learning experience shapes the strength of this visual effect. Method: Thirty Mandarin-naïve listeners, 25 Mandarin second language learners, and 30 native Mandarin listeners participated in a tone identification task in which participants judged whether an auditory tone was rising or falling in pitch. Moving arrows depicted the pitch contours of the auditory tones. A priming paradigm was used with the target auditory tones primed by four multi-modal conditions: no stimuli (A−V−), visual-only stimuli (A−V+), auditory-only stimuli (A+V−), and both auditory and visual stimuli (A+V+). Results: For Mandarin-naïve listeners, the visual effect in accuracy produced under cross-modal integration (A+V+ vs. A+V−) was superior to a unimodal approach (A−V+ vs. A−V−), as evidenced by a higher d prime for A+V+ as opposed to A+V−. However, this was not the case in response time. Additionally, the visual effect in accuracy and response time under the unimodal approach only occurred for high-variability speech, not for low-variability speech. Across the three groups of listeners, we found that the less tonal-language learning experience one had, the stronger the visual effect. Conclusion: Our study revealed the visual–auditory advantage and disadvantage of the visual effect and the joint contribution of visual–auditory integration and high-variability speech to facilitating tone perception via the process of speech symbolization and categorization. Supplemental Material: https://doi.org/10.23641/asha.21357729. |
DOI | 10.1044/2022_JSLHR-21-00691 |
Indexed By | SCIE ; SSCI |
Language | English
WOS Research Area | Audiology & Speech-language Pathology ; Linguistics ; Rehabilitation |
WOS Subject | Audiology & Speech-language Pathology ; Linguistics ; Rehabilitation |
WOS ID | WOS:000891439000006 |
Scopus ID | 2-s2.0-85142085192 |
Document Type | Journal article |
Collection | INSTITUTE OF COLLABORATIVE INNOVATION |
Corresponding Author | Wei, Yanjun |
Affiliation | 1. Center for Cognitive Science of Language, Beijing Language and Culture University, China; 2. Beijing Chinese Language and Culture College, China; 3. Faculty of Arts and Humanities, University of Macau, China; 4. Centre for Cognitive and Brain Sciences, University of Macau, China
Recommended Citation GB/T 7714 | Wei, Yanjun, Jia, Lin, Gao, Fei, et al. Visual–Auditory Integration and High-Variability Speech Can Facilitate Mandarin Chinese Tone Identification[J]. JOURNAL OF SPEECH LANGUAGE AND HEARING RESEARCH, 2022, 65(11): 4096-4111.
APA | Wei, Y., Jia, L., Gao, F., & Wang, J. (2022). Visual–Auditory Integration and High-Variability Speech Can Facilitate Mandarin Chinese Tone Identification. JOURNAL OF SPEECH LANGUAGE AND HEARING RESEARCH, 65(11), 4096-4111.
MLA | Wei, Yanjun, et al. "Visual–Auditory Integration and High-Variability Speech Can Facilitate Mandarin Chinese Tone Identification." JOURNAL OF SPEECH LANGUAGE AND HEARING RESEARCH 65.11 (2022): 4096-4111.
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.