Learning Spoken Language



NEW PARENTS ~ Visit this site:  http://www.lenafoundation.org and buy a LENA device! Read below to find out how CRITICAL it is to communicate with your infants!  The device can help you analyze your conversations! How quickly one learns to read is influenced by speech comprehension and visual recognition (p. 9).

Read the Raising Readers site!  

  • By recognizing and trying out speech sounds, the brain establishes neural networks needed to: manipulate sounds, acquire and comprehend vocabulary, detect language accents, tone, stress, and map out visual sentence structure (p. 9).
  • A few years later the brain will call on its visual recognition system to connect the sounds to abstract visual systems (alphabet) so it can learn to read (p. 9).
  • To be successful readers, children need a broad vocabulary, few grammatical errors in speech, sophisticated sentence structure, and comprehension of variations in sentence structure (p. 9).
  • See pages 10 and 11 for Typical Development of Language and Visual Recognition Systems from Birth-3 years. (Visual ~ tracking objects, recognizing faces. . . .)
  • We are born with an innate capacity to distinguish the distinct sounds (phonemes) of all of the languages (nearly 7,000) on the planet (p. 12).
  • The voice becomes so fine-tuned that it makes only one sound error per million sounds and one word error per million words (Pinker, 1994) (p. 12).
  • Broca’s area: In 1861, Paul Broca identified, in injured brains, an area near the left temple; patients with damage there understood language but had difficulty speaking. This is called aphasia (p. 12). Broca discovered that the left hemisphere of the brain is specialized for language (p. 13).
  • Broca’s area: region of the left frontal lobe responsible for processing vocabulary, syntax (how word order affects meaning), rules of grammar, AND it is involved in the MEANING of sentences (p. 13).
  • Wernicke’s area: part of left temporal lobe that is thought to process sense and meaning of language. Works closely with Broca’s area (p. 13).
  • Wernicke described a different type of aphasia in 1881 ~ patients could not make sense of what they heard, due to damage in the left temporal lobe. They could speak fluently, but meaninglessly (p. 13).
  • Emotional content of language governed by right hemisphere (p. 13).
  • An infant’s ability to perceive and discriminate sounds begins after a few months and develops rapidly (p. 13).
  • When preparing to say a sentence, the brain calls on Broca’s and Wernicke’s areas and several other neural networks scattered throughout the left hemisphere. (p. 13).
  • Nouns are processed through one set of neural networks, and verbs through a separate set (p. 13).
  • The more complex the sentence structure, the more areas are activated, including some on the right hemisphere (p. 13).
  • Brain imaging of 4-month-olds confirms that the brain possesses neural networks that specialize in responding to the auditory components of language (p. 13).
  • Dehaene-Lambertz (2000) used EEG on sixteen 4-month-olds as they listened to syllables and tones. Syllables and tones were processed primarily in different areas of the left hemisphere (p. 13).
  • Voice and the phonetic category of a syllable were encoded by separate neural networks into sensory memory. This shows that the brain is already organized into functional networks that can distinguish between language fragments and other sounds (p. 13).
  • Graham and Fisher (2013) found that the ability to acquire spoken language is encoded in our genes. People with mutated genes may have severe speech and language disorders (p. 13).
  • The genetic predisposition of the brain to the sounds of language explains why normal young children respond to and acquire spoken language so quickly (p. 13).
  • After the 1st year, the child becomes able to differentiate sounds heard in the native language and begins to lose the ability to perceive other sounds (p. 14).
  • When children learn 2 languages, all language activity is found in the same area of the brain ~ how long the brain retains its responsiveness to the sounds of language is still open to question (p. 14).
  • The window for acquiring language within the language-specific brain area diminishes during the middle years of adolescence (p. 14).
  • A new language learned later will be spatially separated in the brain from the native language (p. 14).
  • Functional imaging shows males and females process language differently (p. 14).
  • Males process language in the left hemisphere, while females process in both hemispheres (p. 14).
  • The same areas are activated during reading (p. 14).
  • The corpus callosum ~ the large bundle of neurons connecting the right and left hemispheres, allowing them to communicate ~ is larger and thicker in females than in males. Information travels between the hemispheres more efficiently in females than in males (p. 14).
  • Girls acquire spoken language more quickly because of dual-hemisphere language processing and more efficient between-hemisphere communication (p. 14).
  • What this means is up for debate ~ some say the gender difference is minimal and it declines as we age, but others say it continues and even affects us as adults (p. 14).
  • There are close to 7,000 languages in the world, but only about 170 phonemes comprise all of them (p. 15).
  • Although the infant’s brain can perceive the entire range of phonemes, only those that are repeated get attention, as the neurons reacting to the unique sound patterns are continually stimulated and reinforced (p. 15).
  • At birth or maybe even before, the infant responds to the prosody ~ rhythm, cadence, pitch ~ of caregiver’s voice ~ not the words. (p. 15).
  • 6 months:  babbling ~ a sign of early language acquisition. The production of the phonemes by infants is the result of genetically determined neural programs; however, language exposure is environmental (p. 15).
  • The baby’s brain develops phonemic awareness ~ it prunes responses to phonemes that occur less frequently ~ and by 1 year of age the neural networks focus on the sounds of the language being spoken in the environment (p. 15).
  • The next step is for the brain to detect words within streams of sound. Parents instinctively tend to talk to children in parentese ~ slow, exaggerated, sing-song speech (p. 16).
  • By 8 months, children can detect word boundaries. They acquire 7-10 new words a day, building the mental lexicon (p. 16).
  • By 10-12 months toddlers ignore foreign sounds (p. 16).
  • Morphemes are added to the speaking vocabulary ~ -s, -ed, -ing ~ morphemes are the smallest units of language that carry meaning, such as prefixes and suffixes (p. 16).
  • Working memory and Wernicke’s area become more functional as the child can attach meaning to words, but putting words together to make sense is another more complex skill (p. 16).
  • Swaab used EEGs to measure the brain’s responses to concrete and abstract words in a dozen young adults. EEGs measure changes in brainwave activity, called event-related potentials (ERPs), when the brain experiences a stimulus. The image-loaded (concrete) words produced more ERPs in the frontal lobe ~ the part associated with imagery ~ while abstract words produced more ERPs in the top central and rear areas (parietal and occipital lobes). There was little interaction between these areas when processing words (p. 17).
  • Therefore, the brain may hold 2 separate stores for semantics (meaning) ~ one for verbal-based information and the other for image-based information. Teachers should use concrete images when presenting abstract concepts (p. 17).
  • Adult-toddler conversations are critical. (p. 17).
  • Hart and Risley recorded and analyzed 1,300 hours of parent-child conversations, from ages 9 months through 3 years, across different SES backgrounds. Upper-SES parents spoke 2,153 words/hour, and their 3-year-olds had vocabularies of 1,116 words; welfare-family parents spoke 616 words/hour, and their children had 525 words. Even in 3rd grade, the welfare families’ children still struggled (p. 18).  ***BUY A LENA DEVICE!***
  • Early vocabulary learning is a strong predictor of test scores at ages 9 and 10 in vocabulary, listening, speaking, syntax, and semantics (p. 18).
  • An enormous vocabulary gap is formed between children of different socioeconomic backgrounds. The gap continues to widen (p. 18).
  • Having preschool programs in place can save money through fewer children retained, fewer in special education, and less remedial education (p. 18).
  • Lower SES groups spend more time in front of TV. This delays language as the parent is not interacting with the child (p. 19).
  • A 2010 study by Tomopoulos et al. showed that the more time infants spent watching TV from 9 to 14 months, the lower their cognitive development and language scores at 14 months. The type of programming did not matter (p. 19).
  • Other studies show TV watching prior to the age of 3 had negative cognitive outcomes on 6 year olds (p. 19).
  • Infants’ language learning needs other clues to attach meaning to spoken words: facial cues, intonation, intensity, and rhythm ~ this is why TV is not good. No TV before the age of 2 (p. 19).
  • Phonemes can be combined into morphemes and morphemes can be combined into words. These words may accept prefixes, suffixes, insertions (infixes), and may undergo a change in consonants or vowels (p. 19).
  • Words can be put together according to the rules of syntax or word order to form phrases and sentences with meaning (p. 19).
  • Toddlers show evidence of their progression through syntax (word order) and semantics (meaning) by going from “candy” to “give me candy.” They recognize that shifting words in sentences can change their meaning (p. 20).
  • English follows SVO format ~ or subject, verb, object (p. 20).
  • The front of the temporal lobe establishes meaning when words are combined into sentences (p. 20).
  • Over time, the child hears patterns of word combinations and discerns tense. By the age of 3, 90% of the sentences they utter are grammatically correct, because the child has constructed a syntactic network that stores perceived rules of grammar (p. 21).
  • The toddler has to learn that over 150 of the most commonly used English verbs are irregular ~ you don’t add -ed to form their past tense ~ but the “always add -ed” rule becomes part of the toddler’s syntactic network and operates without conscious thought (p. 21).
  • To remedy this, the child needs adult correction and other environmental exposures (repetition is important to memory) so the syntactic network is modified to block the past-tense -ed rule on certain words (for example, held instead of holded). A new word is added to the lexicon (p. 21).
  • The principle of blocking is an important component of accurate language fluency, and eventually, reading fluency (p. 22).
  • Long term memory plays an important role in helping the child remember the correct past tense of irregular verbs (p. 22).
  • No one knows how much grammar a child learns by listening or how much is prewired, but the more children are exposed to spoken language in the early years, the more quickly they can discriminate between phonemes, recognize word boundaries, and detect emerging rules of grammar that result in meaning ( p. 23).
  • Topic-prominent languages downplay the role of passive voice and avoid “dummy subjects” such as the it in It is raining. ELLs need to learn how English syntax differs from their native tongue (p. 23).
  • Semantics is the study of meaning. Meaning occurs at 3 different levels of language: morphology (word parts), vocabulary, and the sentence level (p. 23).
  • Morphology helps children learn and create new words and it can help them spell and pronounce words correctly. They should learn that words with common roots have common meanings like in nation and national (p. 23).
  • People can learn vocabulary from context, but most of the surrounding words must already be understood (p. 23).
  • The mental lexicon is organized according to the meaningful relationships between words (p. 23).
  • In a study pairing a prime with a target word, such as swan/goose versus tulip/goose, subjects were faster and more accurate in deciding that target words were real words rather than nonsense words when the target word was related to the prime. Researchers believe the reduced time for identifying related pairs results from these words being physically closer to each other among the neurons that make up the semantic network, and that related words may be stored together in specific cerebral regions (p. 24).
  • Another study showed that naming items in the same category activated the same area of the brain (p. 24).
  • Activating words across different networks takes longer (p. 24).
  • Grammar rules govern the order of words so that speakers understand each other (p. 25).
  • “The girl ate the candy” and “The candy was eaten by the girl” mean the same thing but have different syntax (word order) (p. 25).
  • Context helps. A man ate a hot dog at the fair. We know it is a frankfurter and not a barking dog (p. 25).
  • Pinker (1999) says the young brain processes the structure of sentences by a noun phrase and a verb phrase. (A verb can be combined with its direct object to form a verb phrase “eat the hay”) (p. 26).
  • By grouping or chunking words into phrases, processing time is decreased (p. 26).
  • The young adult brain can determine the meaning of a spoken word in 1/5 of a second; it needs 1/4 of a second to name an object and 1/4 of a second to pronounce it. For readers, the meaning of a printed word registers in 1/8 of a second (p. 27).
  • The brain’s ability to recognize different meanings in sentence structure is possible because Broca’s and Wernicke’s areas and other smaller cerebral regions establish linked networks (p. 27).
  • In an fMRI (functional magnetic resonance imaging) study, Dapretto and Bookheimer (1999) found that Broca’s and Wernicke’s areas work together to determine whether changes in syntax or semantics result in changes in meaning (p. 27).
  • Broca’s area was highly activated with sentences of different syntax but the same meaning such as: The policeman arrested the thief. The thief was arrested by the policeman (p. 28).
  • Wernicke’s area was activated by sentences that were semantically but not syntactically different: The car is in the garage. The automobile is in the garage (p. 28).
  • Neurons in Wernicke’s area are spaced about 20% farther apart and are cabled together with longer interconnecting axons than in the corresponding area of the right hemisphere. The implication is that the practice of language during early human development resulted in longer and more connected neurons in the Wernicke region, allowing for greater sensitivity to meaning (p. 28).
  • Wernicke’s area also has the ability to recognize predictable events. An MRI study found Wernicke’s area was activated when subjects were shown different colored symbols in various patterns. The capacity of the Wernicke area to detect predictability suggests that our ability to make sense of language is rooted in our ability to recognize syntax.  Language is predictable because it is constrained by the rules of grammar and syntax (p. 28).
  • Memory systems develop to store and recall. There are several different types of memory banks (p. 28).
  • When the word “dog” enters the ear canal, the listener decodes the sound pattern. Acoustic analysis separates the word sounds from background noise, decodes the phonemes in the word, and translates them into a code recognized by the mental lexicon. The lexicon selects the best representation from memory, then activates the syntactic and semantic networks, which work together to form the mental image. This occurs in a fraction of a second thanks to the extensive neural pathways and memory sites established during the early years (p. 29).
  • If the semantic network finds no meaning it may signal for a repetition of the original spoken word to reprime the process (p. 29).
  • The process of reading words shares several steps with the model of spoken language processing (p. 29).
  • The most basic type of language comprehension is explicit: “I need a haircut” (p. 30).
  • Inferred comprehension requires the listener to go beyond the literal words: from “Vegetables are good for you,” the child must infer that the parent is requesting they be eaten (p. 30).
  • Context clues can help with inferred comprehension (p. 30).
  • Children need to develop an awareness that language comprehension exists on several levels: different speech styles that reflect formality of conversation, context in which it occurs, explicit and underlying intent of speaker. When children get a good understanding of speech, they will be better able to comprehend what they read (p. 31).
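The Hart and Risley word counts quoted above can be turned into a rough back-of-the-envelope estimate of the cumulative exposure gap. This is only an illustrative sketch: the words-per-hour rates come from the bullet above, but the 14 waking hours per day and the 4-year span are assumptions added here, not figures from Sousa.

```python
# Rough extrapolation of the Hart & Risley hourly word counts quoted above.
# The hourly rates are from the study; the waking hours/day and the number
# of years are illustrative assumptions, not figures from the book.

WORDS_PER_HOUR_UPPER_SES = 2153  # upper-SES parents (p. 18)
WORDS_PER_HOUR_WELFARE = 616     # welfare families (p. 18)
WAKING_HOURS_PER_DAY = 14        # assumption
YEARS = 4                        # assumption: birth through age 4

hours_of_exposure = WAKING_HOURS_PER_DAY * 365 * YEARS
gap = (WORDS_PER_HOUR_UPPER_SES - WORDS_PER_HOUR_WELFARE) * hours_of_exposure
print(f"Estimated cumulative word-exposure gap: {gap:,} words")
# Under these assumptions the gap works out to roughly 31 million words ~
# in the neighborhood of Hart and Risley's well-known "30 million word gap."
```

Change the assumed hours or years and the total shifts, but under any reasonable assumptions the gap runs into the tens of millions of words by school age ~ which is why those early adult-toddler conversations matter so much.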

Further Resources  






All page numbers above correspond to:  How The Brain Learns To Read by David Sousa, 2014.




Araujo, Judith E., M.Ed., CAGS. "Learning Spoken Language." Mrs. Judy Araujo, Reading Specialist. N.p., 12 Oct. 2015. Web.  <http://www.mrsjudyaraujo.com/learning-spoken-language/>.

Sousa, David A. How the Brain Learns to Read. Thousand Oaks, CA: Corwin, a SAGE, 2014. Print.


I am happy to share my pages, but please cite me as you would expect your students to cite their sources.  Copyscape alerts me to duplicate content.  Please respect my work.

