1 Introduction
During the 2020–2021 academic year, the Dynamic Learning Maps® (DLM®) Alternate Assessment System offered assessments of student achievement in English language arts, mathematics, and science for students with the most significant cognitive disabilities in grades 3–8 and high school. Because science followed a different development timeline, a separate technical manual update was prepared for that subject (see Dynamic Learning Maps Consortium, 2021a).
The purpose of the DLM system is to improve academic experiences and outcomes for students with the most significant cognitive disabilities by setting high, actionable academic expectations and providing appropriate and effective supports to educators. Results from the DLM alternate assessment are intended to support interpretations about what students know and are able to do and to support inferences about student achievement in the given subject. Results provide information that can guide instructional decisions as well as information for use with state accountability programs.
The DLM Alternate Assessment System is based on the core belief that all students should have access to challenging, grade-level content. Online DLM assessments give students with the most significant cognitive disabilities opportunities to demonstrate what they know in ways that traditional paper-and-pencil, multiple-choice assessments cannot. The DLM alternate assessment provides optional, instructionally embedded testlets that are available for use in day-to-day instruction. A year-end assessment is administered in the spring, and results from that assessment are reported for state accountability purposes and programs. This design is referred to as the Year-End model and is one of two models for the DLM Alternate Assessment System. See the Assessment section of this chapter for an overview of both models.
A complete technical manual was created after the first operational administration in 2014–2015. After each annual administration, a technical manual update is provided to summarize updated information. The current technical manual provides updates for the 2020–2021 administration. Only sections with updated information are included in this manual. For a complete description of the DLM assessment system, refer to previous technical manuals, including the 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016).
1.1 Impact of COVID-19 on the Administration of DLM Assessments
The COVID-19 pandemic had a significant impact on instruction, learning, and assessment. Beginning in March 2020, in response to the pandemic, many states and local school districts closed schools in an effort to slow the spread of the virus, as recommended by the Centers for Disease Control and Prevention (2020a, 2020b). During school closures, students across the country were unable to complete their spring assessments, including the DLM alternate assessments. As a result, on March 20, 2020, the U.S. Secretary of Education used her authority under the Elementary and Secondary Education Act of 1965 (Elementary and Secondary Education Act of 1965, 1965), as amended by the Every Student Succeeds Act (Every Student Succeeds Act, 2015), to invite states to submit 1-year waivers of the assessment and accountability requirements. All 50 states, the District of Columbia, the Commonwealth of Puerto Rico, and the Bureau of Indian Education applied for and received these waivers (Recommended Waiver Authority Under Section 3511(d)(4) of Division A of the Coronavirus Aid, Relief, and Economic Security Act [“CARES ACT”], 2020).
Following the complete school and district closures and the halting of assessment administration in the spring of 2020, the reopening of schools in fall 2020 was characterized by variations of remote, in-person, and hybrid instructional models both within and across states. In many states and districts, the degree to which these instructional models were used changed over the course of the school year and depended on multiple factors, including COVID-19 case counts, district size, ages of students within schools, local policy, student needs, and parent choice. While state and local education agencies made every effort to ensure all students had access to instruction and instructional materials regardless of learning environment, it is widely acknowledged that changes to learning inevitably occurred during the 2020–2021 academic year. Recognizing both the variability of instructional access and state and local need for data on student achievement, on February 22, 2021, the U.S. Department of Education’s Office of Elementary and Secondary Education provided states with guidance regarding assessment, accountability, and reporting requirements for the 2020–2021 school year. As it relates to assessments, the department’s guidance offered states the option to apply for a 1-year waiver from accountability requirements as well as flexibility in assessment administration. The types of flexibility described in the department’s letter included administering shorter versions of state assessments, offering remote administration where feasible, and extending testing windows. The guidance further explained that the focus of that year’s assessments was “to provide information to parents, educators, and the public about student performance and to help target resources and supports” (Rosenblum, 2021).
This manual presents evidence for the results that were provided in 2020–2021, as well as other administration, test development, and research activities that occurred in 2020–2021 and were unaffected by the COVID-19 pandemic.
1.2 Background
In 2020–2021, DLM assessments were available to students in 21 states and one Bureau of Indian Education school: Alaska, Arkansas, Colorado, Delaware, District of Columbia, Illinois, Iowa, Kansas, Maryland, Missouri, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oklahoma, Pennsylvania, Rhode Island, Utah, West Virginia, Wisconsin, and Miccosukee Indian School.
Three DLM Consortium partners (the District of Columbia, Maryland, and Miccosukee Indian School) did not administer operational assessments in 2020–2021.
In 2020–2021, the Accessible Teaching, Learning, and Assessment Systems at the University of Kansas continued to partner with the Center for Literacy and Disability Studies at the University of North Carolina at Chapel Hill and Agile Technology Solutions at the University of Kansas. The project was also supported by a Technical Advisory Committee.
1.3 Assessment
Assessment blueprints consist of the Essential Elements (EEs) prioritized for assessment by the DLM Consortium. To achieve blueprint coverage, each student is administered a series of testlets. Each testlet is delivered through an online platform, Kite® Student Portal. Student results are based on evidence of mastery of the linkage levels for every assessed EE.
There are two assessment models for the DLM alternate assessment. Each state chooses its own model.
Instructionally Embedded model. There are two instructionally embedded testing windows: fall and spring. Educators have some choice of which EEs to assess, within constraints. For each EE, the system recommends a linkage level for assessment, and the educator may accept the recommendation or choose another linkage level. At the end of the year, summative results are based on mastery estimates for linkage levels for each EE (including performance on all testlets from both the fall and spring windows) and are used for accountability purposes. The pools of operational assessments for the fall and spring windows are separate. In 2020–2021, the states adopting the Instructionally Embedded model included Arkansas, Iowa, Kansas, Missouri, and North Dakota.
Year-End model. During a single operational testing window in the spring, all students take testlets that cover the whole blueprint. Each testlet assesses one EE at a single linkage level. The linkage level of each testlet varies according to student performance on the previous testlet. Summative assessment results reflect the student’s performance and are used for accountability purposes each school year. Instructionally embedded assessments are available during the school year but are optional and do not count toward summative results. In 2020–2021, the states adopting the Year-End model included Alaska, Colorado, Delaware, Illinois, Maryland, New Hampshire, New Jersey, New Mexico, New York, Oklahoma, Pennsylvania, Rhode Island, Utah, West Virginia, Wisconsin, and Miccosukee Indian School.
Information in this manual is common to both models wherever possible and is specific to the Year-End model where appropriate. A separate version of the technical manual exists for the Instructionally Embedded model.
1.4 Technical Manual Overview
This manual provides evidence collected during the 2020–2021 administration of year-end assessments.
Chapter 1 provides a brief overview of the assessment and administration for the 2020–2021 academic year and a summary of contents of the remaining chapters. While subsequent chapters describe the individual components of the assessment system separately, key topics such as validity are addressed throughout this manual.
Chapter 2 was not updated for 2020–2021; no changes were made to the learning map models used for operational administration of DLM assessments. See the 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016) for a description of the DLM map-development process.
Chapter 3 outlines evidence related to test content collected during the 2020–2021 administration, including a description of test development activities, external review of content, and the operational and field test content available.
Chapter 4 provides an update on test administration during the 2020–2021 year. The chapter describes the DLM policy on virtual test administration and provides a summary of updated Personal Needs and Preferences Profile selections, a summary of administration time and device usage, and teacher survey results regarding user experience, remote assessment administration, and accessibility.
Chapter 5 provides a brief summary of the psychometric model used in scoring DLM assessments. This chapter includes a summary of 2020–2021 calibrated parameters. For a complete description of the modeling method, see the 2015–2016 Technical Manual Update—Year-End Model (Dynamic Learning Maps Consortium, 2017a).
Chapter 6 describes the administrative standards adjustment used to set cut points for the 2020–2021 administration. See the 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016) for a description of the methods, preparations, procedures, and results of the original standard-setting meeting and the follow-up evaluation of the impact data.
Chapter 7 reports the 2020–2021 operational results, including student participation data. The chapter details the percentage of students achieving at each performance level; subgroup performance by gender, race, ethnicity, and English-learner status; and the percentage of students who showed mastery at each linkage level. Due to the confounding factors of assessment administration changes and COVID-19, these results should be interpreted with caution and should not be directly compared to previous assessment administrations. Finally, the chapter provides descriptions of changes to score reports and data files during the 2020–2021 administration.
Chapter 8 summarizes reliability evidence for the 2020–2021 administration, including a brief overview of the methods used to evaluate assessment reliability and results by performance level, subject, claim and conceptual area, EE, linkage level, and conditional linkage level. For a complete description of the reliability background and methods, see the 2015–2016 Technical Manual Update—Year-End Model (Dynamic Learning Maps Consortium, 2017a).
Chapter 9 describes additional validity evidence collected during the 2020–2021 administration not covered in previous chapters. The chapter provides evidence collected for two of the five critical sources of evidence: test content and response processes.
Chapter 10 describes updates to the professional development offered across the DLM Consortium in 2020–2021, including participation rates and evaluation results.
Chapter 11 summarizes the contents of the previous chapters. It also provides future directions to support operations and research for DLM assessments.