Dynamic Learning Maps
Acknowledgements
1 Introduction
    1.1 Impact of COVID-19 on the Administration of DLM Assessments
    1.2 Background
    1.3 Assessment
    1.4 Technical Manual Overview
2 Map Development
3 Item and Test Development
    3.1 Items and Testlets
        3.1.1 Item Writing
    3.2 External Reviews
        3.2.1 Review Recruitment, Assignments, and Training
        3.2.2 Test Development Decisions
    3.3 English Language Arts Texts
        3.3.1 Development of Texts
        3.3.2 External Review of Texts
        3.3.3 Recruitment, Training, Panel Meetings, and Results
    3.4 Operational Assessment Items for 2020–2021
    3.5 Field Testing
        3.5.1 Description of Field Tests Administered in 2020–2021
        3.5.2 Field Test Data Review
    3.6 Conclusion
4 Test Administration
    4.1 Overview of Key Administration Features
        4.1.1 Test Windows
        4.1.2 DLM Statement on Virtual Assessment Administration
    4.2 Administration Evidence
        4.2.1 Administration Time
        4.2.2 Device Usage
        4.2.3 Adaptive Delivery
        4.2.4 Administration Incidents
    4.3 Implementation Evidence
        4.3.1 Kite System Updates
        4.3.2 User Experience With the DLM System
        4.3.3 Remote Assessment Administration
        4.3.4 Accessibility
        4.3.5 Data Forensics Monitoring
    4.4 Conclusion
5 Modeling
    5.1 Overview of the Psychometric Model
    5.2 Calibrated Parameters
        5.2.1 Probability of Masters Providing Correct Response
        5.2.2 Probability of Non-Masters Providing Correct Response
        5.2.3 Item Discrimination
        5.2.4 Base Rate Probability of Mastery
    5.3 Mastery Assignment
    5.4 Model Fit
    5.5 Conclusion
6 Standard Setting
    6.1 Standard Adjustment for Blueprint Revisions
    6.2 Administrative Adjustment
7 Assessment Results
    7.1 Impacts to Assessment Administration
    7.2 Student Participation
    7.3 Student Performance
        7.3.1 Overall Performance
        7.3.2 Subgroup Performance
        7.3.3 Linkage Level Mastery
    7.4 Data Files
    7.5 Score Reports
        7.5.1 Individual Student Score Reports
    7.6 Quality Control Procedures for Data Files and Score Reports
    7.7 Conclusion
8 Reliability
    8.1 Background Information on Reliability Methods
    8.2 Methods of Obtaining Reliability Evidence
        8.2.1 Reliability Sampling Procedure
    8.3 Reliability Evidence
        8.3.1 Performance Level Reliability Evidence
        8.3.2 Subject Reliability Evidence
        8.3.3 Conceptual Area Reliability Evidence
        8.3.4 EE Reliability Evidence
        8.3.5 Linkage Level Reliability Evidence
        8.3.6 Conditional Reliability Evidence by Linkage Level
    8.4 Conclusion
9 Validity Studies
    9.1 Evidence Based on Test Content
        9.1.1 Opportunity to Learn
    9.2 Evidence Based on Response Processes
        9.2.1 Test Administration Observations
        9.2.2 Interrater Agreement of Writing Sample Scoring
    9.3 Evidence Based on Internal Structure
    9.4 Conclusion
10 Training and Professional Development
    10.1 Instructional Professional Development
        10.1.1 Professional Development Participation and Evaluation
    10.2 Conclusion
11 Conclusion and Discussion
    11.1 Operational Assessment
        11.1.1 Future Research
12 References
2020–2021 Technical Manual Update
Year-End Model
December 2021
Copyright © 2021 Accessible Teaching, Learning, and Assessment Systems (ATLAS)