Learning Spoken Language

How do babies learn language?

NEW PARENTS ~ Visit this site: Lena, and buy a LENA device! Please read below to find out how CRITICAL it is to communicate with your babies so they learn language! The device can help you analyze your conversations with your child. Speech comprehension and visual recognition influence how quickly a child learns to read (p. 9).

 

Check out this neat brain development site: Vroom

Learning Language

  • By recognizing and trying out speech sounds, the brain establishes neural networks needed to: manipulate sounds, acquire and comprehend vocabulary, detect language accents, tone, and stress, and map out visual sentence structure (p. 9).
  • A few years later, the brain will call on its visual recognition system to connect the sounds to abstract optical systems (alphabet) to learn to read (p. 9).
  • To be successful readers, children need a broad vocabulary, little to no grammatical errors in speech, sophisticated sentence structure, and comprehension of variations in sentence structure (p. 9).
  • See pages 10 and 11 for Typical Development of Language and Visual Recognition Systems from Birth-3 years. (Visual ~ tracking objects, recognizing faces)
  • We are born with an innate capacity to distinguish the distinct sounds (phonemes) of all of the languages (nearly 7,000) on the planet (p. 12).
  • Speech becomes so fine-tuned that we make only one sound error per million sounds and one word error per million words (Pinker, 1994) (p. 12).
  • Broca’s area: In 1861, Paul Broca studied patients with injuries near the left temple who could understand language but had difficulty speaking. This condition is called aphasia (p. 12). In addition, Broca discovered that the left hemisphere of the brain is specialized for language (p. 13).
  • Broca’s area: region of the left frontal lobe responsible for processing vocabulary, syntax (how word order affects meaning), and grammar rules, AND it is involved in the MEANING of sentences (p. 13).
  • Wernicke’s area: part of the left temporal lobe that is thought to process the sense and meaning of language. Works closely with Broca’s area (p. 13).
  • Wernicke described a different type of aphasia in 1881 ~ patients could not understand what they heard. This is due to damage in the left temporal lobe. They could speak fluently but meaninglessly (p. 13).
  • The right hemisphere governs the emotional content of language (p. 13).
  • An infant’s ability to perceive and discriminate sounds emerges within the first few months of life and develops rapidly (p. 13).
  • When preparing to say a sentence, the brain calls on Broca’s and Wernicke’s areas and several other neural networks scattered throughout the left hemisphere. (p. 13).
  • Nouns are processed by one set of neural networks and verbs by a separate set (p. 13).
  • The more complex the sentence structure, the more brain areas are activated, including some in the right hemisphere (p. 13).
  • Brain imaging of 4-month-olds confirms that the brain possesses neural networks that specialize in responding to the auditory components of language (p. 13).
  • Dehaene-Lambertz (2000) used an EEG on sixteen 4-month-olds as they listened to syllables and tones. Syllables and tones were processed primarily in different areas of the left hemisphere (p. 13).
  • Separate neural networks encode a syllable’s voice and phonetic category into sensory memory. This shows that the brain is organized into functional networks that distinguish between language fragments and other sounds (p. 13).
  • Graham and Fisher (2013) found that the ability to acquire spoken language is encoded in our genes. Thus, people with mutated genes may have severe speech and language disorders (p. 13).
  • The genetic predisposition of the brain to the sounds of language explains why normal young children respond to and acquire spoken language so quickly (p. 13).
  • After the 1st year, the child can differentiate the sounds heard in the native language and loses the ability to perceive sounds not in it (p. 14).
  • When children learn two languages, all language activity is found in the same brain area. However, how long the brain retains the responsiveness to language sounds is still open to question (p. 14).
  • The window for acquiring language within the language-specific brain area diminishes during the middle years of adolescence (p. 14).
  • A new language in the brain will be spatially separated from the native language (p. 14).
  • Functional imaging shows males and females process language differently (p. 14).
  • Males process language in the left hemisphere, while females process it in both hemispheres (p. 14).
  • The same areas are activated during reading (p. 14).
  • The corpus callosum ~ the large bundle of neurons connecting the right and left hemispheres that allows them to communicate ~ is larger and thicker in females than in males. As a result, information travels between hemispheres more efficiently in females than in males (p. 14).
  • Girls may acquire spoken language more quickly because of dual-hemisphere language processing and more efficient between-hemisphere communication (p. 14). This is up for debate ~ some say the gender difference is minimal and declines as we age, but others say it continues and affects us as adults (p. 14).
  • There are close to 7,000 languages globally, but there are only 170 phonemes that comprise the world’s languages (p. 15).
  • Although the infant’s brain can perceive the entire range of phonemes, only those repeated get attention, as the neurons reacting to the unique sound patterns are continually stimulated and reinforced (p. 15).
  • At birth or even before, the infant responds to the prosody ~ rhythm, cadence, pitch ~ of the caregiver’s voice ~ not the words. (p. 15).
  • Six months:  babbling ~ a sign of early language acquisition. Infants’ production of phonemes results from genetically determined neural programs; however, language exposure is environmental (p. 15).
  • Baby’s brain develops phonemic awareness ~ it prunes phonemes that occur less frequently ~ and at one year of age ~ the neural networks focus on the sounds of language being spoken in the environment (p. 15).
  • The next step is for the brain to detect words within sound streams. Parents instinctively talk to children in parentese ~ slow, exaggerated, high-pitched speech (p. 16).
  • By eight months, children can detect word boundaries. They acquire 7-10 new words per day, building a mental lexicon (p. 16).
  • Toddlers ignore foreign sounds by 10-12 months (p. 16).
  • Morphemes are added to speaking vocabulary ~ s, ed, ing ~ morphemes are the smallest units of language that carry some meaning, such as prefixes and suffixes (p. 16).
  • Working memory and Wernicke’s area become more functional as the child can attach meaning to words, but putting words together to make sense is another more complex skill (p. 16).
  • Swaab used EEGs to measure the brain’s responses to concrete and abstract words in a dozen young adults. EEGs measure changes in brainwave activity, called event-related potentials or ERPs, when the brain experiences a stimulus. Concrete, image-loaded words produced more ERPs in the frontal lobe ~ the part associated with imagery ~ while abstract words produced more ERPs in the top central and rear areas (parietal and occipital lobes). When processing words, these areas had little interaction (p. 17).
  • This suggests the brain may hold two separate stores for semantics (meaning) ~ one for verbal-based information and the other for image-based information. Therefore, teachers should use concrete images when presenting abstract concepts (p. 17).
  • Adult-toddler conversations are critical. (p. 17).
  • After recording and analyzing 1,300 hours of parent-child conversations from ages nine months through 3 years across different SES backgrounds, Hart and Risley found that parents in the upper SES group spoke 2,153 words/hour, and their 3-year-olds had vocabularies of 1,116 words; parents in the low SES group spoke 616 words/hour, and their children had vocabularies of 525 words. Even in 3rd grade, the low SES children still struggled (p. 18).  ***BUY A LENA DEVICE!***
  • Early vocabulary learning strongly predicts test scores at ages 9 or 10 on vocabulary, listening, speaking, syntax, and semantics (p. 18).
  • An enormous vocabulary gap between children of different socioeconomic backgrounds continues to widen (p. 18).
  • Preschool programs can save money, with fewer children retained, fewer in special ed., and less remedial education (p. 18).
  • Lower SES groups spend more time in front of the TV. This delays language as the parent is not interacting with the child (p. 19).
  • A study by Tomopoulos et al. in 2010 showed that the more time infants spent watching TV from 9 months to 14 months, the lower their cognitive development and language scores at 14 months. The type of programming did not matter (p. 19).
  • Other studies show TV watching before age 3 was associated with adverse cognitive outcomes at age 6 (p. 19).
  • Infants need more than spoken words to attach meaning to language: facial cues, intonation, intensity, and rhythm ~ this is why TV is not good. No TV before the age of 2 (p. 19).
  • Phonemes can be combined into morphemes, and morphemes can be combined into words. These words may accept prefixes, suffixes, insertions (infixes) and may change consonants or vowels (p. 19).
  • Words can be put together according to the rules of syntax or word order to form phrases and sentences with meaning (p. 19).
  • Toddlers show evidence of their progression through syntax (word order) and semantics (meaning) by going from “candy” to “give me candy.” They recognize that shifting words in sentences can change their meaning (p. 20).
  • English follows the SVO format ~ subject, verb, object (p. 20).
  • The front of the temporal lobe establishes meaning when words are combined into sentences (p. 20).
  • Over time, the child hears patterns of word combinations and discerns tense. By age 3, 90% of the sentences they utter are grammatically correct because the child has constructed a syntactic network that stores perceived grammar rules (p. 21).
  • The toddler has to learn that over 150 of the most commonly used English verbs are irregular. For these, you don’t add -ed to form the past tense, but the “always add -ed rule” becomes part of the toddler’s syntactic network and operates without conscious thought (p. 21).
  • The child needs adult correction and other environmental exposures (repetition is essential to memory). The syntactic network is modified to block the -ed past-tense rule for certain words, and a new word is added to their lexicon (e.g., held instead of holded) (p. 21).
  • The principle of blocking is an essential component of accurate language and, eventually, reading fluency (p. 22).
  • Long-term memory is vital in helping the child remember the correct past tense of irregular verbs (p. 22).
  • No one knows how much grammar a child learns by listening or how much is prewired. Still, the more children are exposed to spoken language in the early years, the more quickly they can discriminate between phonemes, recognize word boundaries, and detect emerging rules of grammar that result in meaning ( p. 23).
  • Topic prominent languages downplay the role of passive voice and avoid “dummy subjects” such as it, as in It is raining. ELLs need to learn how English syntax differs from their native tongue (p. 23).
  • Semantics is the study of meaning. Meaning occurs at three different levels of language: morphology (word parts), vocabulary, and sentence level (p. 23).
  • Morphology helps children learn and create new words, and it can help them spell and pronounce words correctly. They should learn that words with common roots have common meanings, like nation and national (p. 23).
  • People can learn vocabulary from context, but most of the surrounding words must already be understood for context to help (p. 23).
  • The mental lexicon is organized according to the meaningful relationships between words (p. 23).
  • In a priming study, a prime word was paired with a target word (e.g., swan/goose versus tulip/goose). Subjects were faster and more accurate at deciding that target words were actual words rather than nonsense words when the target was related to the prime. Researchers believe the reduced time for identifying related pairs results from these words being physically closer to each other among the neurons that make up the semantic network, and that associated words may be stored together in specific cerebral regions (p. 24).
  • Another study showed that naming items in the same category activated the same brain area (p. 24).
  • Activating words between networks takes longer (p. 24).
  • Grammar rules govern the order of words so that speakers understand each other (p. 25).
  • The girl ate the candy. The candy was eaten by the girl. Both mean the same thing but use different syntax (word order) (p. 25).
  • Context helps. In A man ate a hot dog at the fair, we know it is a frankfurter and not a barking dog (p. 25).
  • Pinker (1999) says the young brain processes the structure of sentences by a noun phrase and a verb phrase. (A verb can be combined with its direct object to form a verb phrase “eat the hay”) (p. 26).
  • By grouping or chunking words into phrases, the processing time is decreased (p. 26).
  • The young adult brain can determine the meaning of a spoken word in 1/5 of a second, and it needs ¼ of a second to name an object and ¼ of a second to pronounce it. For readers, the meaning of the printed word is registered at 1/8 of a second (p. 27).
  • The brain’s ability to recognize different meanings in sentence structure is possible because Broca’s and Wernicke’s areas and other smaller cerebral regions establish linked networks (p. 27).
  • In an fMRI study (functional magnetic resonance imaging), Dapretto and Bookheimer (1999) found that Broca’s and Wernicke’s areas work together to determine whether changes in syntax or semantics result in changes in meaning (p. 27).
  • Broca’s area was highly activated with sentences of different syntax but the same meaning, such as The policeman arrested the thief. The thief was arrested by the policeman (p. 28).
  • Wernicke’s area was activated by semantically but not syntactically different sentences. The car is in the garage. The automobile is in the garage (p. 28).
  • Neurons in Wernicke’s area are spaced about 20% farther apart and are cabled together with longer interconnecting axons than in the corresponding area of the right hemisphere. The implication is that language practice during early human development resulted in longer and more connected neurons in the Wernicke region, allowing greater sensitivity to meaning (p. 28).
  • Wernicke’s area also can recognize predictable events. An MRI study found that Wernicke’s area was activated when subjects were shown colored symbols in various patterns. The capacity of the Wernicke area to detect predictability suggests that our ability to make sense of language is rooted in our ability to recognize syntax. Language is predictable because it is constrained by the rules of grammar and syntax (p. 28).
  • Memory systems develop to store and recall language. There are several different types of memory banks (p. 28).
  • When a spoken word such as dog enters the ear canal, the listener decodes the sound pattern. The acoustic analysis separates the word sounds from background noise, decodes the phonemes in the word, and translates them into a code recognized by the mental lexicon. The lexicon selects the best representation from memory and then activates the syntactic and semantic networks, which work together to form the mental image. This occurs in a fraction of a second, thanks to the extensive neural pathways and memory sites established during the early years (p. 29).
  • If the semantic network finds no meaning, it may signal for a repetition of the original spoken word to reprime the process (p. 29).
  • Reading words shares several steps with the model of spoken language processing (p. 29).
  • The most basic type of language comprehension is explicit ~ the meaning is stated directly, as in I need a haircut (p. 30).
  • Inferred comprehension requires the listener to draw inferences. When a parent says Vegetables are good for you, the child must infer that the parent wants them eaten (p. 30).
  • Context clues can help with inferred comprehension (p. 30).
  • Children need to develop an awareness that language comprehension exists on several levels: different speech styles that reflect the formality of conversation, the context in which it occurs, and the explicit and underlying intent of the speaker. When children understand speech, they will better comprehend what they read (p. 31).
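The Hart and Risley hourly counts above (2,153 versus 616 words/hour) are often extrapolated into the well-known “30-million-word gap” by age 4. A minimal sketch of that arithmetic, assuming 14 waking hours per day over 4 years (the waking-hours figure is an illustrative assumption, not from the text):

```python
# Back-of-the-envelope estimate of the cumulative word-exposure gap
# implied by the Hart and Risley hourly counts.
HIGH_SES_WORDS_PER_HOUR = 2153
LOW_SES_WORDS_PER_HOUR = 616
WAKING_HOURS_PER_DAY = 14   # assumed for illustration
DAYS_PER_YEAR = 365
YEARS = 4                   # roughly birth through age 3

def cumulative_words(words_per_hour: int) -> int:
    """Total words a child would hear over the whole period."""
    return words_per_hour * WAKING_HOURS_PER_DAY * DAYS_PER_YEAR * YEARS

gap = cumulative_words(HIGH_SES_WORDS_PER_HOUR) - cumulative_words(LOW_SES_WORDS_PER_HOUR)
print(f"Estimated exposure gap by age 4: about {gap:,} words")
# → Estimated exposure gap by age 4: about 31,416,280 words
```

Under these assumptions the gap comes out to roughly 31 million words, which is where the popular “30-million-word” figure comes from.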

Further Resources  

Copyright 10/12/2015

Edited on 03/12/2024

 

 

Reference

Sousa, David A. How the Brain Learns to Read. Thousand Oaks, CA: Corwin, a SAGE Company, 2014. Print.
