9 Validity Studies

The preceding chapters and the Dynamic Learning Maps® (DLM®) Alternate Assessment System 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016) provide evidence in support of the overall validity argument for results produced by the DLM assessment. This chapter presents additional evidence collected during 2020–2021 for two of the five critical sources of evidence described in Standards for Educational and Psychological Testing (American Educational Research Association et al., 2014): evidence based on test content and evidence based on response processes. Additional evidence can be found in Chapter 9 of the 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016) and the subsequent annual technical manual updates (Dynamic Learning Maps Consortium, 2017a, 2017b, 2018, 2019, 2020).

9.1 Evidence Based on Test Content

Evidence based on test content relates to the evidence “obtained from an analysis of the relationship between the content of the test and the construct it is intended to measure” (American Educational Research Association et al., 2014, p. 14). This section presents results from data collected during spring 2021 regarding student opportunity to learn the assessed content. For additional evidence based on test content, including the alignment of test content to content standards via the DLM maps (which underlie the assessment system), see Chapter 9 of the 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016).

9.1.1 Opportunity to Learn

After administration of the spring 2021 operational assessments, teachers were invited to complete a survey about the assessment (see Chapter 4 of this manual for more information on recruitment and response rates). The survey included four blocks of items. The first, third, and fourth blocks were fixed forms assigned to all teachers. For the second block, teachers received one randomly assigned section.

The first block of the survey served several purposes; results for its other items are reported later in this chapter and in Chapter 4 of this manual. One item provided information about the relationship between students’ learning opportunities before testing and the test content (i.e., testlets) they encountered on the assessment: teachers were asked to indicate the extent to which the test content aligned with their instruction across all testlets. Table 9.1 reports the results. Approximately 67% of responses (n = 21,401) indicated that most or all reading testlets matched instruction, compared with approximately 58% (n = 18,153) for mathematics. More specific measures of instructional alignment are planned to better understand the extent to which the content measured by DLM assessments matches students’ academic instruction.

Table 9.1: Teacher Ratings of Portion of Testlets That Matched Instruction
Subject        None, n (%)    Some (< half), n (%)    Most (> half), n (%)    All, n (%)      Not applicable, n (%)
Reading        2,168 (6.8)    7,364 (23.2)            12,814 (40.3)           8,587 (27.0)      825 (2.6)
Mathematics    2,655 (8.4)    9,721 (30.8)            11,620 (36.8)           6,533 (20.7)    1,025 (3.2)
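
As an illustration of the arithmetic behind the summary above, the percentages of responses reporting that most or all testlets matched instruction can be recomputed directly from the Table 9.1 counts. The short Python sketch below does so; it simply restates the published values and is not part of the operational analysis.

    # Recompute the "most or all testlets matched instruction" summaries from Table 9.1.
    # Counts are copied from the published table; variable names are illustrative only.
    table_9_1 = {
        "Reading":     {"None": 2168, "Some": 7364, "Most": 12814, "All": 8587, "N/A": 825},
        "Mathematics": {"None": 2655, "Some": 9721, "Most": 11620, "All": 6533, "N/A": 1025},
    }

    for subject, counts in table_9_1.items():
        total = sum(counts.values())
        most_or_all = counts["Most"] + counts["All"]
        print(f"{subject}: n = {most_or_all:,} ({100 * most_or_all / total:.1f}% of {total:,} responses)")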

The second block of the survey was spiral-assigned so that each teacher received one randomly assigned section. In three of the randomly assigned sections, a subset of teachers was asked to indicate the approximate number of hours spent instructing students on each of the conceptual areas by subject (i.e., ELA, mathematics, and science). Teachers responded using a 5-point scale: 0–5 hours, 6–10 hours, 11–15 hours, 16–20 hours, or more than 20 hours. Table 9.2 and Table 9.3 report the amount of instructional time spent on conceptual areas for ELA and mathematics, respectively. Using 11 or more hours per conceptual area as a criterion, 55% of teachers provided this amount of instruction to their students in ELA, and 46% did so in mathematics.

Table 9.2: Instructional Time Spent on ELA Conceptual Areas
                                                      Number of hours of instruction (cells are n (%))
Conceptual area                               Median   0–5            6–10           11–15          16–20          >20
Determine critical elements of text           11–15    2,023 (31.9)   1,046 (16.5)     840 (13.3)     836 (13.2)   1,587 (25.1)
Construct understandings of text              11–15    1,371 (21.8)     996 (15.8)     885 (14.1)     971 (15.4)   2,073 (32.9)
Integrate ideas and information from text     11–15    1,609 (25.7)   1,116 (17.8)     985 (15.7)     993 (15.9)   1,556 (24.9)
Use writing to communicate                    11–15    1,847 (29.4)   1,093 (17.4)     888 (14.1)     939 (14.9)   1,515 (24.1)
Integrate ideas and information in writing    6–10     2,169 (34.7)   1,120 (17.9)     908 (14.5)     867 (13.9)   1,188 (19.0)
Use language to communicate with others       16–20      799 (12.7)     653 (10.4)     797 (12.7)   1,029 (16.4)   3,009 (47.9)
Clarify and contribute in discussion          11–15    1,403 (22.4)     985 (15.7)     937 (14.9)   1,098 (17.5)   1,852 (29.5)
Use sources and information                   6–10     2,453 (39.0)   1,227 (19.5)     913 (14.5)     759 (12.1)     933 (14.8)
Collaborate and present ideas                 6–10     2,386 (38.0)   1,225 (19.5)     909 (14.5)     778 (12.4)     987 (15.7)

Table 9.3: Instructional Time Spent on Mathematics Conceptual Areas
                                                                                        Number of hours of instruction (cells are n (%))
Conceptual area                                                                Median   0–5            6–10           11–15          16–20          >20
Understand number structures (counting, place value, fraction)                16–20    1,195 (17.6)     988 (14.5)     870 (12.8)   1,116 (16.4)   2,624 (38.6)
Compare, compose, and decompose numbers and sets                               11–15    2,187 (32.5)   1,180 (17.5)     975 (14.5)   1,048 (15.6)   1,347 (20.0)
Calculate accurately and efficiently using simple arithmetic operations        16–20    1,663 (24.8)     838 (12.5)     840 (12.5)   1,075 (16.0)   2,294 (34.2)
Understand and use geometric properties of two- and three-dimensional shapes   6–10     2,718 (40.4)   1,496 (22.2)   1,096 (16.3)     815 (12.1)     610 (9.1)
Solve problems involving area, perimeter, and volume                           0–5      4,132 (61.5)   1,101 (16.4)     671 (10.0)     476 (7.1)      342 (5.1)
Understand and use measurement principles and units of measure                 6–10     2,848 (42.3)   1,593 (23.7)   1,035 (15.4)     710 (10.6)     540 (8.0)
Represent and interpret data displays                                          6–10     2,806 (41.9)   1,402 (20.9)   1,072 (16.0)     793 (11.8)     620 (9.3)
Use operations and models to solve problems                                    6–10     2,214 (33.0)   1,165 (17.4)   1,014 (15.1)   1,067 (15.9)   1,243 (18.5)
Understand patterns and functional thinking                                    6–10     1,917 (28.5)   1,461 (21.7)   1,286 (19.1)   1,061 (15.8)   1,011 (15.0)
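
To make the "11 or more hours" criterion concrete, the sketch below shows one way such summaries could be produced from teacher-level responses on the ordinal scale. The data frame, column names, and values are hypothetical; the operational survey data layout may differ.

    import pandas as pd

    # Hypothetical long-format survey extract: one row per teacher and conceptual area.
    hours_order = ["0-5", "6-10", "11-15", "16-20", ">20"]
    responses = pd.DataFrame({
        "teacher_id": [1, 1, 2, 2, 3, 3],
        "conceptual_area": ["ELA.C1.1", "ELA.C1.2"] * 3,
        "hours": ["11-15", ">20", "0-5", "6-10", "16-20", "11-15"],
    })
    responses["hours"] = pd.Categorical(responses["hours"], categories=hours_order, ordered=True)

    # Flag responses at or above the 11-hour criterion and summarize each conceptual area
    # by its median response category and the proportion of teachers meeting the criterion.
    responses["at_least_11"] = responses["hours"] >= "11-15"
    summary = responses.groupby("conceptual_area").agg(
        median_category=("hours", lambda s: s.cat.categories[int(s.cat.codes.median())]),
        prop_at_least_11=("at_least_11", "mean"),
    )
    print(summary)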

Results from the teacher survey were also correlated with the total number of linkage levels mastered in each conceptual area, as reported on individual student score reports. A direct relationship between the amount of instructional time and the number of linkage levels mastered is not expected; for example, a student may receive a large amount of instruction in an area yet demonstrate mastery only at the lowest linkage level for each Essential Element (EE). In general, however, we expect students who mastered more linkage levels in an area to have received more instructional time in that area. More evidence is needed to evaluate this assumption.

Table 9.4 summarizes the Spearman rank-order correlations between instructional time and the number of linkage levels mastered within each ELA and mathematics conceptual area. Correlations ranged from 0.13 to 0.37, with the strongest correlations observed for the writing conceptual areas (ELA.C2.1 and ELA.C2.2) in ELA and for the conceptual areas involving calculation and solving problems with operations and models (M.C1.3 and M.C4.1) in mathematics.

Table 9.4: Correlation Between Instruction Time and Linkage Levels Mastered
Conceptual area Correlation with instruction time
English language arts
ELA.C1.1: Determine critical elements of text 0.22
ELA.C1.2: Construct understandings of text 0.30
ELA.C1.3: Integrate ideas and information from text 0.29
ELA.C2.1: Use writing to communicate 0.36
ELA.C2.2: Integrate ideas and information in writing 0.37
Mathematics
M.C1.1: Understand number structures (counting, place value, fraction) 0.13
M.C1.2: Compare, compose, and decompose numbers and sets 0.28
M.C1.3: Calculate accurately and efficiently using simple arithmetic operations 0.32
M.C2.1: Understand and use geometric properties of two- and three-dimensional shapes 0.16
M.C2.2: Solve problems involving area, perimeter, and volume 0.30
M.C3.1: Understand and use measurement principles and units of measure 0.26
M.C3.2: Represent and interpret data displays 0.29
M.C4.1: Use operations and models to solve problems 0.33
M.C4.2: Understand patterns and functional thinking 0.22
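
For reference, each value in Table 9.4 is a Spearman rank-order correlation computed across students, pairing the teacher-reported instructional-time category (treated as ordinal) with the number of linkage levels the student mastered in the same conceptual area. The sketch below shows the calculation with scipy using made-up values; it is illustrative only.

    from scipy.stats import spearmanr

    # Hypothetical student-level records for a single conceptual area:
    # instructional time coded 1-5 (1 = 0-5 hours, ..., 5 = more than 20 hours)
    # and linkage levels mastered taken from the student's score report.
    instruction_time_code = [1, 2, 2, 3, 4, 5, 5, 3, 1, 4]
    linkage_levels_mastered = [0, 1, 2, 2, 3, 5, 4, 3, 1, 2]

    rho, p_value = spearmanr(instruction_time_code, linkage_levels_mastered)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")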

The third block of the survey included questions about the student’s learning and assessment experiences during the 2020–2021 school year. Because of the COVID-19 pandemic, students may have received instruction in a variety of settings, which could have affected their opportunity to learn. Teachers were asked what percentage of instructional time the student spent in each setting. Table 9.5 displays the settings and responses. A majority of responses indicated that students spent more than 50% of their time in school. More than a third of responses indicated at least some time receiving direct remote instruction from the teacher (either one-on-one or in a group), receiving instruction at home from a family member, or having no formal instruction. Fewer responses indicated instruction at home with the teacher present or a setting other than those listed in the survey.

Table 9.5: Percentage of Instruction Time Spent in Each Instructional Setting
                                                    Percentage of instructional time in setting (cells are n (%))
Instructional setting                               None            1–25            26–50          51–75          76–100           Unknown
In school                                            1,848 (6.0)     3,470 (11.3)   4,560 (14.8)   6,818 (22.1)   13,631 (44.2)      505 (1.6)
Direct instruction with teacher remotely, 1:1       11,244 (38.8)   10,411 (35.9)   3,307 (11.4)   1,711 (5.9)     1,209 (4.2)      1,082 (3.7)
Direct instruction with teacher remotely, group      9,333 (31.6)   10,659 (36.1)   4,128 (14.0)   2,351 (8.0)     2,062 (7.0)        998 (3.4)
Teacher present in the home                         25,269 (89.4)      782 (2.8)      396 (1.4)      298 (1.1)       289 (1.0)      1,221 (4.3)
Family member providing instruction                 17,164 (60.1)    5,887 (20.6)   1,389 (4.9)      751 (2.6)       733 (2.6)      2,654 (9.3)
Absent (no formal instruction)                      18,130 (64.7)    6,257 (22.3)     779 (2.8)      445 (1.6)       309 (1.1)      2,085 (7.4)
Other                                               19,272 (80.2)      528 (2.2)      212 (0.9)      182 (0.8)       234 (1.0)      3,610 (15.0)

Teachers were also asked which instructional scheduling scenarios applied to their student during the school year. Table 9.6 reports the possible scenarios and teacher responses. The majority of teachers reported no delayed start to the school year, no lengthened spring semester, and no extended school year through summer, but most reported that one or more changes between remote and in-person learning occurred during the school year.

Table 9.6: Instructional Scheduling Scenarios Around Student Schedules
Instructional scheduling scenario                                           Yes, n (%)      No, n (%)        Unknown, n (%)
Delayed start of the school year                                             8,363 (27.0)   21,735 (70.1)      893 (2.9)
Lengthened spring semester                                                   1,154 (3.8)    28,316 (92.7)    1,070 (3.5)
Extended school year through summer                                         11,919 (38.7)   17,017 (55.2)    1,891 (6.1)
Change(s) between remote and in-person learning during the school year      23,883 (75.2)    7,297 (23.0)      586 (1.8)

9.2 Evidence Based on Response Processes

The study of test takers’ response processes provides evidence about the fit between the test construct and the nature of how students actually experience test content (American Educational Research Association et al., 2014). The validity studies presented in this section include teacher survey data collected in spring 2021 regarding students’ ability to respond to testlets and a description of the test administration observations and writing samples collected during 2020–2021. For additional evidence based on response processes, including studies on student and teacher behaviors during testlet administration and evidence of fidelity of administration, see Chapter 9 of the 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016).

9.2.1 Test Administration Observations

To be consistent with previous years, the DLM Consortium made a test administration observation protocol available for state and local users to gather information about how educators in the consortium states deliver testlets to students with the most significant cognitive disabilities. This protocol gave observers, regardless of their role or experience with DLM assessments, a standardized way to describe how DLM testlets were administered. The test administration observation protocol captured data about student actions (e.g., navigation, responding), educator assistance, variations from standard administration, engagement, and barriers to engagement. The observation protocol was used only for descriptive purposes; it was not used to evaluate or coach educators or to monitor student performance. Most items on the protocol were a direct report of what was observed, such as how the test administrator prepared for the assessment and what the test administrator and student said and did. One section of the protocol asked observers to make judgments about the student’s engagement during the session.

During 2020–2021, 218 test administration observations were collected in four states. Because test administration observation data are anonymous, and because students completed assessments in a variety of locations (see Table 9.5), the sample of students observed may not have represented the full population of students taking DLM assessments. We therefore do not report findings from those observations here as part of the assessment validity evidence.

9.2.2 Interrater Agreement of Writing Sample Scoring

All students are assessed on writing EEs as part of the ELA blueprint. Teachers administer writing testlets at two levels: emergent and conventional. Emergent testlets measure nodes at the Initial Precursor and Distal Precursor levels, while conventional testlets measure nodes at the Proximal Precursor, Target, and Successor levels. All writing testlets include items that require teachers to evaluate students’ writing processes; some testlets also include items that require teachers to evaluate students’ writing samples. Evaluation of students’ writing samples does not use a high-inference process common in large-scale assessment, such as applying analytic or holistic rubrics. Instead, writing samples are evaluated for text features that are easily perceptible to a fluent reader and require little or no inference on the part of the rater (e.g., correct syntax, orthography). The test administrator is presented with an onscreen selected-response item and is instructed to choose the option(s) that best matches the student’s writing sample. Only test administrators rate writing samples, and their item responses are used to determine students’ mastery of linkage levels for writing and some language EEs on the ELA blueprint. We annually collect student writing samples to evaluate how reliably teachers rate students’ writing samples. However, due to the COVID-19 pandemic, interrater reliability ratings for writing samples collected during the 2019–2020 administration were postponed until 2021. For a complete description of writing testlet design and scoring, including example items, see Chapter 3 of the 2015–2016 Technical Manual Update—Year-End Model (Dynamic Learning Maps Consortium, 2017a).

During the spring 2021 administration, seven Year-End model states opted to participate in writing sample collection. Teachers were asked to submit student writing samples within Educator Portal; requested submissions included the papers students used during testlet administration, copies of student writing samples, or printed photographs of student writing samples. Each sample was submitted with limited identifying information so that it could be matched with the test administrator’s response data from the spring 2021 administration.

Table 9.7 presents the number of student writing samples submitted in each grade and writing level. A total of 172 student writing samples were submitted from districts in five states. In several grades (e.g., Grade 3), the emergent writing testlet does not include any tasks that evaluate the writing sample, so emergent samples submitted for those grades are not eligible for the interrater reliability analysis. Writing samples that could not be matched with student data (e.g., because a student name or identifier was not provided) were also excluded. These exclusion criteria resulted in 144 writing samples available for evaluation of interrater agreement. Due to the suspension of on-site events during 2020–2021, the writing sample rating event for 2021 was postponed; these samples will instead be rated during the 2022 event.

Table 9.7: Number of Writing Samples Collected During 2020–2021 by Grade and Writing Level
Grade Emergent writing samples Conventional writing samples
  3   0   5
  4 11 10
  5   0   6
  6   0   8
  7 11 17
  8   0 22
  9 17 30
10   1   1
11   1   4
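
Although the 2021 rating event was postponed, interrater agreement for writing-sample items is typically summarized with statistics such as exact agreement and Cohen’s kappa between the test administrator’s original selections and an independent rater’s selections. The sketch below illustrates these calculations with hypothetical ratings; it does not reflect the operational scoring procedure or real data.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical item-level selections for the same writing samples by the
    # original test administrator and an independent second rater.
    administrator_ratings = ["A", "B", "A", "C", "B", "B", "A", "C"]
    second_rater_ratings = ["A", "B", "A", "B", "B", "B", "A", "C"]

    matches = sum(a == b for a, b in zip(administrator_ratings, second_rater_ratings))
    exact_agreement = matches / len(administrator_ratings)
    kappa = cohen_kappa_score(administrator_ratings, second_rater_ratings)
    print(f"Exact agreement = {exact_agreement:.2f}, Cohen's kappa = {kappa:.2f}")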

9.3 Evidence Based on Internal Structure

Analyses of an assessment’s internal structure indicate the degree to which “relationships among test items and test components conform to the construct on which the proposed test score interpretations are based” (American Educational Research Association et al., 2014, p. 16).

One source of evidence comes from the examination of whether particular items function differently for specific subgroups (e.g., male versus female). The analysis of differential item functioning (DIF) is conducted annually for DLM assessments based on the cumulative operational data for the assessment. For example, in 2019–2020, the DIF analyses were based on data from the 2015–2016 through 2018–2019 assessments. Due to the cancellation of assessments in spring 2020, additional data for DIF analyses were not collected in 2019–2020. Thus, updated DIF analyses are not provided in this update, as there are no additional data to contribute to the analysis. For a description of the DIF results from 2019–2020, see Chapter 9 of the 2019–2020 Technical Manual Update—Year-End Model (Dynamic Learning Maps Consortium, 2020).
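
For context on how DIF screening of this kind is commonly implemented (the operational procedure is documented in the earlier technical manuals), one widely used approach is logistic regression: each item’s responses are modeled as a function of a matching criterion, such as total score, with and without a subgroup term, and a large likelihood-ratio statistic for the subgroup term flags potential uniform DIF. The sketch below demonstrates this on simulated data and is illustrative only, not the DLM operational analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import chi2

    # Simulate data for one item: binary response, total-score matching criterion,
    # and a subgroup indicator (e.g., 0 = male, 1 = female). Illustrative only.
    rng = np.random.default_rng(0)
    n = 500
    total_score = rng.integers(0, 30, size=n)
    group = rng.integers(0, 2, size=n)
    logit = -3 + 0.2 * total_score + 0.3 * group  # group effect built in for the demo
    response = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    data = pd.DataFrame({"response": response, "total_score": total_score, "group": group})

    # Fit the matching-criterion-only model and the model adding the group term,
    # then test the group term with a likelihood-ratio statistic (1 degree of freedom).
    base = smf.logit("response ~ total_score", data=data).fit(disp=False)
    uniform = smf.logit("response ~ total_score + group", data=data).fit(disp=False)
    lr_stat = 2 * (uniform.llf - base.llf)
    p_value = chi2.sf(lr_stat, df=1)
    print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.3f}")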

Additional evidence based on internal structure is provided across the linkage levels that form the basis of reporting. This evidence is described in detail in Chapter 5 of this manual.

9.4 Conclusion

This chapter presents additional studies as evidence for the overall validity argument for the DLM Alternate Assessment System. The studies are organized by the sources of validity evidence defined in the Standards for Educational and Psychological Testing (American Educational Research Association et al., 2014), the professional standards used to evaluate educational assessments; new evidence for 2020–2021 is available for test content and response processes.

The final chapter of this manual, Chapter 11, references evidence presented throughout the technical manual, including Chapter 9, and expands the discussion of the overall validity argument. Chapter 11 also identifies areas for further inquiry and ongoing evaluation of the DLM Alternate Assessment System, building on the evidence presented in the 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016) and the subsequent annual technical manual updates (Dynamic Learning Maps Consortium, 2017a, 2017b, 2018, 2019, 2020), in support of the assessment’s validity argument.