It seems obvious to me that it is easier to follow a speech if you have audio-visual input rather than auditory input only (e.g. a live speech vs. a recording, or TV vs. radio), because language is made up of so much more than just words. You have a whole range of paralinguistic features (prosody etc.) and kinesics (“body language”). Prosody, obviously, is auditory. But I’m interested in the visual side of the input. And I need facts.
So: linguists, psychologists, etc., please recommend some reading for me! There must be a lot of research on this topic, beyond just the McGurk effect (which describes how conflicting visual input can change auditory perception). I’m not that interested in lipreading, but rather in other visual nonverbal cues, such as head and body movements, gestures, facial expressions, etc. — kinesics that can add information to the words, contradict them, or even replace them.
I’ve found a paper by Massaro, and one by Munhall et al. on head movements, but I’m sure there are studies out there that address the whole bandwidth of kinesics.
Why? I am writing my thesis on the importance of visual contact in simultaneous interpreting, and while Poyatos deals with the issue extensively, I would like to have some more material by other researchers. The papers need not be available on the internet; I’m sure I could get some of the books at the library or buy them, as long as they’re not out of print.
I hope this doesn’t count as asking for homework help, as I’m only looking for more material.
I appreciate any help you can give me. 