SEE ALSO: Construction Grammar; Systemic Functional Linguistics; Task‐Based Language Assessment
Note
1 Based in part on J. E. Purpura (2012). Assessment of grammar. In C. A. Chapelle (Ed.), The Encyclopedia of Applied Linguistics. John Wiley & Sons, Inc., with permission.
Assessment of Listening
GARY J. OCKEY
Listening is central to in‐person communication, with estimates that it accounts for more than 45% of the time spent communicating (Feyten, 1991). It has also become increasingly important in virtual environments as more communication takes place through technologies such as FaceTime, Second Life, and Skype. It follows that teaching and assessing the listening skills of second language learners is essential. Unfortunately, the assessment of second language listening comprehension has attracted little research attention (Buck, 2018), and, as a result, understanding of how best to assess it is limited.
Listening Processes
Current conceptions of the listening process hold that comprehension results from the interaction of numerous sources of information, including the acoustic input and other relevant contextual information. The mind processes these incoming stimuli simultaneously with information it already holds, such as linguistic and world knowledge. Listening comprehension is thus a dynamic process, one that continues for as long as new information becomes available from any of these sources (Buck, 2001; Gruba, 1999).
Listening is multidimensional, comprising related but discrete lower‐level ability components. While no comprehensive list of these components has been agreed upon (nor is there an agreed‐upon theory of how they operate together), some research indicates that listening ability may include three lower‐level abilities: the ability to understand global information, to comprehend specific details, and to draw inferences from implicit information (Min‐Young, 2008). Test developers typically draw on these components when defining a listening construct in the first stages of test development.
Factors Affecting Listening
Professionals must take into account the factors that affect listening comprehension when they design and use listening assessments. One of these factors is rate of speech (Brindley & Slatyer, 2002). When listeners comprehend authentic oral communication, they process a large amount of information very rapidly, which can result in cognitive overload by pushing working memory beyond its capacity. As a result, listeners may be unable to understand input delivered at a fast rate that they could process at a slower one. Research also indicates that background knowledge of the topic is important for a message to be comprehended: test takers with background knowledge on a topic related to the input are generally advantaged (Jensen & Hansen,