I’m working on a paper that focuses on language dominance, proficiency, and exposure. I’ve written about these definitions before. Here, I want to think about how we capture this information.
There are a number of really nice surveys and questionnaires that have been developed to help document this information. These typically cover L1 and L2 age of acquisition, educational history in each language, and ratings of proficiency in each language. Sometimes proficiency is broken out into speaking, listening, reading, and writing. Some questionnaires ask which language the person is more proficient in, and may ask for what purpose(s) each language is used. This information is designed to get at how language is used and how proficient an individual might be across situations.

In speech-language assessment of bilinguals, SLPs use this information to inform interpretation of test results. For children, we want to know when they started learning English and how much exposure (and what kind) they currently have to each language. If a Spanish-English speaking child uses and is exposed to a lot of English and has been using it for 3 or more years, we know that a low English score on a standardized test is not likely to be due to lack of exposure. Yes, we might observe some Spanish-influenced forms in their language, but we should not see many of the errors that are typical of DLD. If they more recently started using English regularly, we might expect more errors. In the Bilingual English Spanish Assessment (BESA) kit, we include the Bilingual Input Output Survey (BIOS), which can be used to document current use of and exposure to two languages, as well as history of language use. That information is used to provide guidelines for which language to test. Other questionnaires, such as the LEAP-Q, are also available for clinical or research use.
School districts need to make decisions about whether children need English support, primary language support, or another kind of program. Often, they use a home language survey to decide whom to test for ESL. What’s interesting is that the vast majority of these surveys ask very few questions, and there is little to no data on whether they are valid. According to Bailey & Kelly, these surveys are inconsistent within and between states, and there is very little data to show that they do what they are supposed to do: identify which children may need some sort of ESL support. Why do something when it doesn’t really do the job? Isn’t this just a waste of time and resources when there are better ways?
Here are some of the forms that California uses. The questions are good questions, but I suspect that how the results are interpreted and acted on needs scrutiny. One thing that is interesting is that the home language survey focuses on exposure to the first language, but not the second language. The informal assessment of the primary language form focuses on proficiency in the L1. Neither of these considers exposure to or proficiency in the L2. What can happen then (even if the survey is really good) is that kids who have achieved a high level of proficiency in both their L1 and English end up tested for ESL, year after year after year. Even if they are in English-only programs at school and doing well, they may be tested on the basis of the responses to these surveys.
I think the assumption is that if a child has a lot of exposure to the L1 they can’t have achieved a high level of proficiency in the L2, but this is just wrong. What do we know then? Well, for preschool children, we know that current exposure is more predictive of their performance than age of first exposure. We know that children with 80% exposure to English scored almost exactly like their peers with 90% or 100% English exposure on a screener of semantics and morphosyntax.
For older kids, though, the story is a little different. We found that age of exposure matters as much as current exposure. This makes sense: they’re older, and they have had more time to accumulate L2 experience. For first graders, those with the least English exposure scored lowest on the English measure, as you would expect. But this interacts with age of exposure: those with an early age of English exposure had higher scores than those with later ages of English exposure. What’s important here is that kids with a relatively early age of English acquisition and a high level of Spanish exposure had scores within 1 SD of kids who had been using English from birth. Similar patterns hold for third graders. Those who had their first exposure to English at age 4 or 5 (preschool) scored only a little lower by third grade than their English-only peers. By this age, they’ve had 4–5 years of English exposure. No, as a group they are not exactly like monolinguals, but they are close enough that they would not be considered at risk for academic or language difficulties.
So, I think we can do better with these surveys. We can and should ask about English exposure and about age of first exposure. We need to ask about home language proficiency and put this together with English proficiency. By asking just a few more questions, I think we can do better by our ELLs.