There’s a new paper out in AJSLP by Sharynne McLeod and Sarah Verdon. I think it’s a great resource for those of us who do bilingual assessment, and an excellent example of how to review and select tests for diagnostic purposes. Over the last 10 or so years, there’s been a growing emphasis on evidence-based practice in speech-language pathology. We can’t simply use the tests we’ve always used because we’re familiar and comfortable with them. We need to be able to justify our selections, and to make them based on the best available scientific evidence. And yes, we can and should use our clinical expertise and client preferences as well, but we shouldn’t use these to discount the evidence or to justify what is essentially a bad decision. Rather, we should use our expertise and client preferences to select among the very best, scientifically based options. So, for assessment, it’s important to know whether a test will do the job: accurately identify an impairment when there is one, and accurately indicate that there is no impairment when there isn’t. That’s the most important thing.
For bilinguals, assessment is especially challenging because typical second-language acquisition errors can look like impairment, and relatively few instruments have been developed for other languages. McLeod and Verdon were interested in understanding the state of the art for speech assessment instruments in languages other than English. I really liked how they talked about the conceptualization and operationalization of a test: conceptualization has to do with the purpose and scope of the test, while operationalization has to do with the test’s reliability and validity. In the article, the authors go through all the steps for evaluating conceptual and operational criteria for a total of 30 (yes, THIRTY) tests that together represent 19 different languages.
This is the first review I know of that focuses on speech tests in languages other than English. They considered 41 different items in evaluating these tests. Specifically, they evaluated the nature, content, and psychometric properties of the tests (including validity and reliability). Each item was graded on a scale from 0 to 2.
They considered the items of the test, how the items are presented, and how the items are scored (and how all of this is recorded on a protocol sheet). They also report on what kinds of analyses the tests support (phonetic inventory, phonological analysis, percent consonants correct, whole word).
Since the BESA came out pretty recently, I was surprised (and grateful, and then nervous) that the authors were able to include it in the analysis. Even though the test has been available since December and the article was just published, there’s usually a time lag between when the final version of a paper goes in and when it appears, so we got in just under the wire. I got over my nervousness when I read that the BESA was one of the two tests that scored highest in meeting the criteria! (The other high-scoring test was the CPAC-S by Goldstein & Iglesias, of course!)
Read the paper; it’s a great example of how to evaluate current (and future) tests for use. It lays out what to look for and why those things are worth examining. And it’s good to know that there are several tests out there for children who speak other languages.