Research

1. How do learners use gesture and iconicity in language learning?

How do children learn language so fast? Many mechanisms have been proposed for this long-standing problem. Recently, researchers have suggested that iconicity within the language form itself can help with initial referential mapping. I propose that systematicity may be a more useful cue for language learning because of its predictability and generalization power. I study this topic with child and adult learners of American Sign Language.

Structure Mapping Engine. Image from Gentner (2010)

2. How does the type of input change the way we approach language learning?

Some children receive typical language input from birth, but others receive limited or no language input, as in the case of homesigners – deaf children born to hearing parents who are not exposed to a signed language and do not have full access to a spoken language. I’m interested in understanding how this variation in language input changes the way we perceive and analyze communicative signals.

  • Sign language experience affects how gesture is processed: How does sign language experience shape the way we perceive and glean meaning from iconic co-speech gesture? Using eye tracking, we measured how deaf and hearing adults perceive and understand gesture (in collaboration with Nicole Burke, Amanda Woodward, and Susan Goldin-Meadow).


  • Points are not treated the same when you receive different linguistic vs gestural input: evidence from hearing, deaf, and homesigning children: Everyone points at things, and points emerge early in development. However, points also have a grammatical function in ASL – they are used as pronouns. Do children treat points in a purely iconic or in a grammatical way depending on their input? I’m investigating this question with hearing, deaf, and homesigning children (in collaboration with Diane Lillo-Martin and Susan Goldin-Meadow).

3. Why does gesture help with learning new concepts and generalization?

There are two theories for why gesture helps with learning new concepts: 1) gesture helps because of its imagistic representational format, or 2) gesture helps because it is produced in a visual modality that is different from speech. We test these hypotheses by comparing children who produce language in a different modality from gesture (speaking children) with children who produce language and gesture in the same modality (signing children; in collaboration with Ryan Lepic, Breckie Church, and Susan Goldin-Meadow).

4. How does experience influence how we interpret and produce different types of meanings (arbitrary and iconic mappings)?

Language has both arbitrary and iconic mappings. For example, we can speed up our speech in order to highlight the speed of an event (compare “The horse is brown” with “Thehorseisbrown”). Are these iconic mappings intuitive, or are they hard to learn? We investigate this question by studying how native and non-native speakers perceive analog acoustic expression (in collaboration with Shannon Heald and Howard Nusbaum).

Images from Shintel & Nusbaum (2007)

5. How does experience shape our perception of language vs action?

What neural mechanisms allow people to comprehend language across sensory modalities? We find that signers show neural sensitivity that tracks visual changes within signs. We’re now investigating whether language-specific experience with sign is necessary for this sensitivity (in collaboration with Geoffrey Brookshire, Howard Nusbaum, Susan Goldin-Meadow, and Daniel Casasanto).