Unpacking Validation Approaches for Applied Linguistics: Current Challenges and Potential Solutions to Enhance Accessibility and Uptake
March 25, 2022, 12:00 p.m. to 1:15 p.m.
The Centre canadien d’études et de recherche en bilinguisme et aménagement linguistique (CCERBAL) warmly invites you to its next research forum, Unpacking Validation Approaches for Applied Linguistics: Current Challenges and Potential Solutions to Enhance Accessibility and Uptake, presented by Angel Arias, Assistant Professor in the School of Linguistics and Language Studies at Carleton University.
Abstract (in English only)
Applied linguistics relies on various constructs (e.g., motivation, language aptitude, resilience, teacher cognition) to study language learning issues and obtain valuable information about contexts, language learners, and teachers, with the ultimate goal of improving teaching practices and learning experiences. Theories underlying the constructs of motivation (Tremblay & Gardner, 1995), resilience (Wang, 2021), language aptitude (Skehan, 2016; Wen et al., 2016), teacher cognition (Borg, 2010), and language proficiency (Bachman, 2007; Bachman & Palmer, 2010) have informed the development of questionnaires, surveys, and tests that operationalize these constructs. With the exception of a handful of high-stakes language tests, it is common in our field to develop questionnaires, surveys, and other data collection instruments without subjecting them to thorough analysis and quality control (i.e., validation) prior to use. Although applied linguistics is considered a collaborative discipline that works on language-based problems within and between fields (Farsano et al., 2021), validation approaches are not readily embraced to gather the evidence required to support score-based interpretations and uses of these instruments. Current and dominant validation approaches include the Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association [APA] & National Council on Measurement in Education [NCME], 2014) and argument-based validation (Bachman & Palmer, 2010; Chapelle, 2020; Kane, 2013). However, these approaches are rather complex and exclusionary (i.e., accessible mainly to scholars in assessment-centred communities) and require considerable resources to apply. This talk outlines the challenges associated with current validation frameworks, discussing implementation difficulties and potential solutions that could enhance uptake across applied linguistics. The talk will be delivered in French and English.
Angel Arias, PhD
Assistant Professor in the School of Linguistics and Language Studies at Carleton University
Angel Arias holds a Ph.D. in Educational Measurement from the Université de Montréal. He also holds a master’s degree in Applied Linguistics and Discourse Studies and a bachelor’s degree in Education and Modern Languages from the Universidad Dominicana Organización y Métodos (O&M), Dominican Republic. His research focuses on the application of psychometric models and mixed methods approaches in language testing and assessment to evaluate validity evidence for test score meaning and the justification of test use in high-stakes and classroom contexts. He has served as an external consultant for the Quebec Ministry of Education and as Chair of the Test Validity Research and Evaluation Special Interest Group of the American Educational Research Association (AERA).