
      SARA T. CUSHING

      Why do we test writing? Green (2014) distinguishes between proficiency assessment and educational assessment. A proficiency test is not tied to a specific curriculum but is intended to give an overall assessment of language ability, often to predict how well students will function in academic or work settings. Writing tests are frequently included as part of a language proficiency test such as the Test of English as a Foreign Language Internet‐Based Test (TOEFL iBT®) or the International English Language Testing System (IELTS). Educational assessments take place in the context of a given educational program and comprise several types of tests. A placement test is used to determine where a student fits in a given curriculum. In a general language program, writing tests are often used in combination with tests of other language skills for placement; however, writing tests are also widely used in the USA and other countries to place students into first‐year composition courses at the university level (see White, 1994). Diagnostic and achievement tests tend to be (although are not always) classroom‐based assessments; diagnostic tests are used to identify specific areas of strength and weakness, while achievement tests are used to determine whether students have mastered specific skills or knowledge that has been taught.

      The remainder of the entry is organized as follows. First, the nature of writing in a second language is discussed. Next come research and practical considerations in designing writing assessments, particularly as they relate to task design and scoring procedures. Finally, some recent trends in writing assessment are outlined.

      Applied linguistics is traditionally concerned with the learning and teaching of a second language. Thus, when we discuss writing assessment in applied linguistics, we usually refer to assessing writing in second or foreign‐language contexts rather than assessing first language writing. Assessing second language writing is often a matter of evaluating second language proficiency through the medium of writing, as opposed to evaluating the degree to which first language speakers have mastered writing conventions. However, the two are not always distinguishable; many L1 writing concerns are also issues for L2 writing, particularly in academic settings. The degree to which first and second language writing assessments are distinct depends on several considerations, including age, context of learning, and proficiency level, described below.

       Age: Because younger learners are still learning to write in their first language, they develop literacy skills in their first and second languages simultaneously. In contrast, older learners, depending on their background and history, may or may not have literacy skills in their first language. If they do, they can transfer many strategies from their first language to their second; if they do not, then “writing” may simply refer to basic literacy skills.

       Context (i.e., second vs. foreign language): Students learning to write in a second language context generally need to write for school or work. These students have an immediate need to master the genres and conventions of writing for specific purposes in that language. In a foreign‐language context, writing may be used to reinforce and practice the language structures (grammar and vocabulary) learned in class rather than for authentic communication. In some situations, foreign‐language learners may never actually need to write in their second language outside the classroom, even though they may find opportunities to do so through the Internet.

       Proficiency level: At the earliest stages of language learning, writing is limited to the specific vocabulary and grammar that have been learned; again, writing at this stage may be more appropriately used for reinforcing and practicing language structures. As students gain proficiency in the language, the focus in writing can be more on content development and organization and less on the specifically linguistic aspects of writing.

      These factors, among others, will determine whether language (i.e., displaying language ability by means of writing) or writing (i.e., communicating in writing by means of a second language), or some combination of the two, will be the main focus of assessment (see also Weigle, 2013; for a relevant discussion of similar considerations from the perspective of performance assessment, see McNamara, 1996, chap. 2).

      One way to think about types of writing assessments is as a continuum from least to most authentic, in terms of the degree to which they simulate real‐world writing conditions. Toward the least authentic end of the continuum, a distinction can be made between an indirect and a direct test of writing. An indirect test attempts to measure the subskills involved in writing (particularly grammar and usage) via multiple choice or other selected response measures. Such measures were prevalent in the USA in the 1970s and 1980s (White, 1994). A direct test of writing requires the examinee to produce a continuous piece of prose in response to a set of instructions, and has the following additional characteristics (Hamp‐Lyons, 1991): responses are ordinarily at least 100 words long; the instructions, or prompt, provide direction but allow the candidate considerable leeway in responding; each response is read by at least one (preferably two) trained raters using a common scale; and the result is a number rather than (or in addition to) a verbal description of the writing. Typical writing tests are also conducted under timed conditions, and the topic is frequently, if not usually, unknown to the candidates in advance (Weigle, 2002).

      At the other end of the continuum of authenticity is portfolio assessment, which allows writing to be assessed over time and across a range of writing tasks and genres. A portfolio can be defined as “a purposeful collection of student works that exhibits to the student (and/or others) the student's effort, progress, or achievement in a given area” (Northwest Evaluation Association, 1991, p. 4, cited in Wolcott, 1998). While portfolios have been implemented successfully in large‐scale writing assessment, portfolio assessment may be more feasible as a classroom assessment tool (Hamp‐Lyons & Condon, 2000; Weigle, 2002, chap. 9).