teachers in North America have had insufficient formal training, practice, feedback, and ongoing support regarding the principles of sound assessment. As Rick Stiggins (2008) notes, education has primarily relied on textbook and testing companies to design high-quality assessments. Both undergraduate and graduate teacher-preparation programs have an obvious and alarming absence of courses on effective assessment design and use (Stiggins & Herrick, 2007). When teachers do not understand the theory and practice of valid and reliable assessment, they have no option but to use the predesigned assessments from their textbooks or to make up their own. Often, they replicate the poor assessment practices that they themselves experienced as K–12 students.

      Unless teachers use sound assessments, they have no way to ensure that their teaching has actually transferred into learning. Assessment is a core teaching process. Teaching in the absence of constant sound assessment practices is really just coverage of content. The only way teachers can guarantee learning is if they all use sound assessment practices effectively.

      If that’s the case, wouldn’t it suffice if teachers simply used the assessments already provided in the way the assessment materials advise? Absolutely not. Time and again, when teachers use existing assessments, gaps in teaching and learning emerge because teachers rarely analyze a prewritten assessment carefully before they administer it, and they then accept the resulting data at face value. Table 2.4 includes some of the inaccuracies and insufficiencies that result from this lack of analysis.

Table 2.4

Assessment Error: Item or Task Quality
Inaccuracy: The items or tasks are often set up to gather data in the quickest possible manner. When that happens, the assessment falls short of truly measuring the full intent of the standards it is designed to assess (for example, many performance-oriented standards are assessed with the more easily scored pencil-and-paper test).
Insufficiency: The items or tasks fall short of deep application or higher-order reasoning. Many assessments stop at ensuring students possess content information and in some cases can execute the algorithms that accompany the knowledge. Few assessments move to the level of requiring students to integrate knowledge or construct new solutions and insights in real-world applications.

Assessment Error: Sampling
Inaccuracy: An assessment might include standards that are not within the expectations of the teachers engaged in that curricular material.
Insufficiency: An assessment might include too many standards and not have enough samples of each standard to ensure any reliability.

Assessment Error: Results
Inaccuracy: All items have equal weighting, even items that the curricular resource itself might have deemed nonsecure goals. Teachers tally and report the full data for decision making even though the final results should not include some of the generated data. The data may result in learners unnecessarily receiving interventions.
Insufficiency: Error analysis of what went wrong in an individual student response (reading error, concept error, or reasoning error) frequently stops at the point of the resulting percentage or score. Item analysis is limited to whether the item or prompt was of high quality based on the responses it generated. The data do not offer insight into student thinking or inform next instructional steps.

      According to Helen Timperley (2009), “Knowledge of the curriculum and how to teach it effectively must accompany greater knowledge of the interpretation and use of assessment information” (p. 23). Teachers must experience assessment development and deployment in order to understand it. Designing assessments in advance of teaching creates a laser-like focus on, and a comprehensive understanding of, the instruction required to attain mastery. This does not mean that teachers should only use assessments they themselves create; instead, it means that they can no longer depend solely on the premade assessments that outside testing vendors supply with their curricular materials and software item banks.

      When teacher teams design and employ assessments and interpret their results, they build shared knowledge regarding assessment accuracy and effectiveness. Teachers who engage in the collaborative common assessment process learn both how to design assessments accurately and how to use assessment data effectively.

      Teams are better able to create more accurate assessments when they agree to design their assessments so that they align to standards; have clear, uniform targets; feature accurate prompts and measurement tools; include varied assessment methods and data points; and foster increased rigor and relevance. The adage “Many hands make light work” is as relevant here as the notion that many eyes can bring multiple perspectives into clear focus.

      Alignment to Standards

      From the late 1980s until the early 2000s, schools and districts told teachers to follow the pacing guide and implement the curriculum with fidelity. In some schools and districts, mandates and monitoring made it dangerous for teachers to deviate from the prescribed curriculum plan. As of 2019, no single curriculum has fully aligned with any state’s standards. While textbook companies can demonstrate that their curricular materials address a state or province’s standards, they cannot prove that the materials address every standard, that they do so at the depth that the state or province’s testing system requires, or that their curriculum-based assessments match the types of questions the state or province might ask. When teachers develop collaborative common assessments, they begin with the standards, not the curriculum, to make their instruction and assessment decisions. That early alignment process can better support accurate design.

      Clear, Uniform Targets

      When teachers unpack standards together, they develop a shared understanding of the target expectations that the standards require. It is imperative that teachers agree to the specific learning expectations outlined in the standards. It is equally imperative that they agree on the meaning of key verbs. For example, they might ask, “What exactly does summarize mean? Is summarize similar to or different from generalize? What type of task would best engage learners in the process of summarizing, and what quality criteria would guarantee high-quality summaries in every classroom?” If teams are clear on the individual terms and the specific demands of the standards, they can provide more consistent and accurate instruction leading into the assessments. They can also make individual decisions that allow for variances in the assessments but that remain contingent on clear, agreed-on lists of learning targets unit by unit.

      Accurate Prompts and Measurement Tools

      It is impossible to write a perfect assessment task, item, or rubric; it is sometimes hard to even write a good one. However, when teams work collaboratively, they generally develop such prompts and measurement tools in a more thoughtful way. They often seek clarifying examples, challenge each other’s personal schemas, refine their work based on the evidence it generates over time, and, most importantly, calibrate their expectations so they have consistency from classroom to classroom.

      Varied Assessment Methods and Data Points

      A deep exploration into standards and target language engages teams in exploring the proper questions, prompts, or tasks that will truly assess students’ expected attainment and mastery of the content. This exploration makes it apparent that one assessment, or even one type of assessment, will not suffice to accurately certify a learner’s degree of mastery of a standard. For example, it is important to assess the small, specific tasks (such as identifying text-specific details) of a large concept or skill (for example, drawing conclusions or making predictions) to verify that learners are ready to engage in the larger concept or skill, but it is equally important to engage learners in a comprehensive assessment that certifies that they can put all the parts together. Multiple assessment methods and multiple data points provide a more comprehensive and accurate picture