Some methodological considerations on assessing bilingual and foreign language speech in linguistic experimentation


University of Louisiana at Lafayette – April 6-9, 2022 (hybrid)

Keywords: language assessment, second language acquisition, assessing speech, methodology, validity, reliability

Assessing speech is an essential element of many studies of second language learning and acquisition. For statistical analysis, participants are often grouped by proficiency; for numerous examples, see the studies in edited volumes such as Martohardjono and Flynn (2021), Ionin and Rispoli (2019), and VanPatten and Jegerski (2010). Yet those studies offer little to no elaboration on how participants' language proficiency is tested. Routinely, a variety of tests is used, and the results of the studies are then compared.

In this paper, we argue that such comparisons are unwarranted. Focusing on speech, we argue that while testing second language speech is a vital element of most papers in the field, considerations of validity (Chapelle and Voss 2021) and reliability for such testing are almost universally ignored. The objective of the paper is thus to highlight the drawbacks of using a diversity of tests to assess second language speech (the Michigan test, the Goethe-Zertifikat, the DELF, etc.) and to argue for a unified framework that would aid the interpretation and comparison of results across studies.

To argue against the use of a wide range of tests, we review the implications this diversity has for comparing studies. To advocate for a unified framework for assessing speech, we sketch a proposal for a rubric that meets the following requirements: