Last month I posted saying I was working on the validity analyses for the Bilingual English Spanish Assessment (BESA), which is hopefully soon to be published and available for clinicians. Today, I’ll tell you a little about the results. As many of you know, a number of years ago we (Vera Gutierrez-Clellen, Aquiles Iglesias, and I) got an NIH contract to explore typical and atypical speech and language development in Spanish-English bilingual children. Brian Goldstein and Lisa Bedore joined our team about a year later. The goal of the 7-year project was a measure that would identify bilingual children with language impairment and phonological impairment. Through that project and a couple of follow-up grants, we have developed a measure that works well, and we’ve learned a lot about bilingual language impairment along the way. So, what about validity?
As you know, validity focuses on the idea that a measure actually tests what it’s supposed to test. One can establish this in several ways. One is to look at the literature and determine whether the test targets are consistent with it, and this is where we started. We looked at work on English language impairment and Spanish language impairment. When we started there was some work on bilingual language impairment, but not much.
Another way to think about validity is to look at the relationship between the new measure and an older, well-established measure. Well, that’s great except that there aren’t a whole lot of other measures for Spanish speakers, and when we started many of the ones that were out there were translations from English with no norms. Now there are more, which is a good thing. So, as part of our test development process we collected language samples on all the kids we tested. Jon Miller was on our advisory board and he (and Ann Nockerts) made a bunch of adaptations to the SALT software so that we could use it with our samples. In our validity analyses we looked at NDW (number of different words), TNW (total number of words), percent grammatical utterances, and number of main verbs. These all correlate significantly with the subtests of the BESA, but they correlate most strongly with the semantics and morphosyntax subtests (which is what we would expect).
We have also compared our subtests to subtests from other, published tests. For English we examined correlations between our subtests and the TOLD-P subtests, the TNL, and the EOWPVT-2000. Again, we got significant correlations. We also got the expected patterns: semantics correlated more strongly with measures of vocabulary, and morphosyntax correlated more strongly with grammatical subtests. Whew! For Spanish, we examined correlations with the EOWPVT-Bilingual Spanish, and both morphosyntax and semantics were significantly correlated, with a larger coefficient for semantics, as one would expect.
So, in terms of content, it looks like we are testing what we say we are. The subtests correlate significantly with language sample measures and with standardized measures of language, and the patterns of those correlations make sense based on the domain. Yay!