As you know, there is a lack of standardized instruments to appropriately determine language impairment in bilingual children. We have made strides, though, in assessing Spanish-English bilinguals, the largest bilingual group of children here in the US, especially in the area of morphosyntax. Work by Bedore, Restrepo, and Gutierrez-Clellen demonstrates the kinds of errors that Spanish speakers and bilingual Spanish-English speakers with language impairment make. But there isn’t quite as much in the area of semantics.
We do know children with language impairment have primary difficulty in the area of morphosyntax, and they often show below-average performance on vocabulary tests. At the same time, vocabulary tests are not good diagnostic tools for making a determination of language impairment. The sensitivity and specificity of these single-word tests are below even the minimum (80% correct classification) needed to have some degree of confidence that a low score means language impairment, or for that matter that a high score means no language impairment. Yet vocabulary tests are easy to give, they are easy to understand, and they lend themselves well to conceptual scoring. Don’t get me wrong, I do think vocabulary tests have a place in assessment practices for the SLP, just not for the purpose of diagnosis.
Beyond words, children with language impairment often have difficulty organizing, selecting, and making connections between words. They have trouble with word relationships like definitions, analogies, categories, descriptions, and so on. So, we figured that if we tapped these kinds of semantic skills, we might be able to use tasks like these to differentiate between children with and without language impairment. Also, for bilinguals, we wanted tasks that could be done in both languages, could be responded to using codeswitching, and where WHAT was said didn’t matter as much as making connections. For example, for a category generation task like “tell me all the fruit you can think of,” it wouldn’t matter if kids said “orange, apple, and banana” or “mango, papaya, and pineapple” or even “naranja, apple, and durazno.”
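The logic of conceptual scoring here can be sketched in a few lines of code. This is only an illustration, not how the BESA is actually scored: the concept lexicon below is a made-up example, and the function simply maps each response in either language to a shared concept and counts distinct concepts, so codeswitched answers get full credit.

```python
# Hypothetical sketch of conceptual scoring for a category generation task
# ("tell me all the fruit you can think of"). Words in either language map
# to one concept; the child gets credit once per distinct concept named,
# regardless of language or codeswitching.

# Illustrative concept lexicon (an assumption, not BESA material)
CONCEPTS = {
    "orange": "orange", "naranja": "orange",
    "apple": "apple", "manzana": "apple",
    "banana": "banana", "platano": "banana",
    "peach": "peach", "durazno": "peach",
    "mango": "mango",
    "papaya": "papaya",
    "pineapple": "pineapple", "pina": "pineapple",
}

def conceptual_score(responses):
    """Count the distinct fruit concepts named, in any language."""
    named = {CONCEPTS[w] for w in (r.lower() for r in responses)
             if w in CONCEPTS}
    return len(named)
```

Under this scheme, “naranja, apple, durazno” scores the same 3 points as “orange, apple, peach,” while repeating the same fruit in two languages (“orange” and “naranja”) earns credit only once.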
That’s what we did in this study. The item sets we used are those employed in developing the BESA. There were 48 items tested in each language and they consisted of: category generation, similarities & differences, linguistic concepts, analogies, comprehension of passages, and functions.
We found that for functional monolinguals the semantics tasks did very well in each language, with 81% correct classification. When we tested bilinguals in each language, the Spanish version had higher classification accuracy than the English version of the test (even when we used conceptual scoring). We think this may be because the kinds of items tested on the Spanish version were more culturally familiar to the bilinguals than the items tested in English (some of which were more “school-like”). So, it’s not just about allowing codeswitching but also about looking at the cultural familiarity of the tasks themselves. And if you can only use one language to test bilingual semantics, it may be best to use the home language rather than English (of course, it’s best to test in both).
In follow-up analyses for development of the BESA, we actually got better classification. That’s because, of the 48 items in this task, we dropped 24, leaving the best 24. So, that’s the strategy for improving the test!