IOMW 2020 Virtual Conference

Date: TBA

BEAR Seminar IOMW Talks

(Tuesdays, 2-4 p.m., Pacific Time)


Sept 15. Stefanie A. Wind & A. Adrienne Walker. A Model-Data-Fit-Informed Approach to Score Resolution in Rater-Mediated Assessments

Abstract: Many large-scale performance assessments include score resolution procedures for resolving discrepancies in rater judgments. The goal of score resolution is conceptually similar to that of person fit analysis: to identify students whose observed scores may not accurately reflect their achievement. Previously, researchers have observed that rater agreement methods and person fit analyses lead to similar conclusions about which students’ achievement estimates warrant additional investigation, and that score resolution generally improves person fit. We consider the implications of using person fit analysis as an initial step to identify performances for score resolution, and of using fit indices to identify raters to provide additional ratings. We simulated student responses to multiple-choice items and a writing task, introduced various types of person misfit in the writing task, and used a model-data-fit index to identify the persons whose scores needed resolution. Results indicate larger improvements in person fit after resolution under the fit-informed approach than under a rater agreement approach: with the fit-informed approach, person fit improved for ≥ 98% of the misfitting students, whereas with the rater agreement approach it improved for only about one third of these students. We close by considering the implications of our findings for mixed-format assessments.
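For readers unfamiliar with person fit in the Rasch tradition, a standard index of the kind the abstract refers to (a generic textbook illustration, not necessarily the exact index the authors used) is the unweighted (outfit) mean-square statistic for person p across L items:

```latex
% Standardized residual for person p on item i,
% comparing the observed response to its model expectation:
z_{pi} = \frac{x_{pi} - E[X_{pi}]}{\sqrt{\operatorname{Var}(X_{pi})}}

% Outfit mean square for person p: values well above 1
% flag response patterns that misfit the measurement model.
U_p = \frac{1}{L} \sum_{i=1}^{L} z_{pi}^{2}
```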


Oct 13. Ye Yuan & George Engelhard. Unfolding Models and Learning Progressions: Identification of Feedback Strategies for Improving Writing 

Abstract: This study describes a conceptual framework linking learning progressions, applied linguistics, and the assessment of English language learner (ELL) students. Learning progressions have been proposed in a variety of fields, including mathematics, science, and English language arts (ELA). We critically review and evaluate the previous literature on learning progressions across these fields, and we extend and modify the lessons learned there to the acquisition of English as a second language (ESL). The study also synthesizes measurement theories and language theories with concepts from applied linguistics, which helps to explain language learning pathways and how they can contribute to ESL teaching, learning, and assessment. A major aim of this study is to provide theory-based guidance for using learning progressions and applied linguistics to address assessment issues related to English language learners. This research sheds light on learning progressions applied within the context of language education.


Nov 10. Ernesto San Martin & Jorge Gonzalez. How Fair Is It to Be Fair? Revisiting Test Equating under the NEAT Design

Abstract: The nonequivalent groups with anchor test (NEAT) design is widely used in test equating. Under this design, two groups of examinees are administered different test forms, each containing a subset of common items. Because test takers from different groups are assigned only one test form, missing score data emerge by design, rendering some of the score distributions unavailable. The partial observability of the score data formally leads to an identifiability problem that has not been recognized as such in the equating literature and has been addressed from different perspectives, each making different assumptions in order to estimate the unidentified score distributions. In this paper, we formally specify the statistical model underlying the NEAT design and unveil the lack of identifiability of the parameters of interest that compose the equating transformation. We then use the theory of partial identification to offer alternatives to the traditional practices used to point-identify the score distributions when conducting equating under the NEAT design.
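For context, the equating transformation the abstract refers to takes, in the standard equipercentile case, the following textbook form (generic notation, not the authors' specific parameterization), which makes clear why partially identified score distributions propagate directly into the transformation itself:

```latex
% Equipercentile equating of a score x on form X to the scale of form Y.
% F_X and F_Y are the cumulative score distributions on a common population;
% if these distributions are not identified, neither is \varphi.
\varphi(x) = F_Y^{-1}\big(F_X(x)\big)
```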


Dec 1. Nathan Zoanetti. Integrating Natural Language Processing Features Within Explanatory Item Response Models to Support Score Interpretation

Abstract: This paper describes the integration of Natural Language Processing (NLP) features within an explanatory item response modelling framework, in the context of a reading comprehension assessment item bank. Item properties derived through NLP algorithms were incorporated into a Rasch Latent Regression Linear Logistic Test Model with item error, extending the model described by Wilson and de Boeck (2004) on the item side with a random error term. Specifically, item difficulties were modelled as random variables that could be predicted, with uncertainty, by NLP item-property fixed effects (Janssen, Schepers, and Peres, 2004), and person covariates were included to increase the accuracy of estimation of the latent ability distributions. The focus of this study was the extent to which different kinds of NLP features explained variance in item difficulties. We also investigated how these results could be used to develop and validate proficiency level descriptors and item bank metadata.
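The item side of this model family can be sketched as follows (a generic illustration of a linear logistic test model with a random item error term, using our own symbols rather than the authors' exact specification):

```latex
% Rasch model: probability that person p answers item i correctly,
% with person ability \theta_p and item difficulty \beta_i.
P(X_{pi} = 1 \mid \theta_p, \beta_i)
  = \frac{\exp(\theta_p - \beta_i)}{1 + \exp(\theta_p - \beta_i)}

% LLTM with item error: the difficulty \beta_i is regressed on
% K item properties q_{ik} (here, NLP-derived features) with
% fixed effects \eta_k, plus a random residual \epsilon_i, so that
% difficulties are predicted with uncertainty rather than exactly.
\beta_i = \sum_{k=1}^{K} \eta_k \, q_{ik} + \epsilon_i,
\qquad \epsilon_i \sim N(0, \sigma^2_{\epsilon})
```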

The schedule for the main event will be announced soon.
Luca Mari and Neal Kingston will be our keynote speakers at IOMW 2020!

Luca Mari (M.S. in Physics, University of Milan, Italy, 1987; Ph.D. in measurement science, Politecnico di Torino, Italy, 1994) has been a Full Professor of measurement science at Università Cattaneo - LIUC, Castellanza, Italy, since 2006, where he teaches courses on measurement science, statistical data analysis, and system theory.

Neal Kingston, Ph.D., is a University Distinguished Professor at the University of Kansas in the Research, Evaluation, Measurement, and Statistics track of the Educational Psychology and Research Program and Director of the Achievement and Assessment Institute. His research focuses on large-scale assessment, with particular emphasis on how it can better support student learning through the use of learning maps and diagnostic classification models. 

What is IOMW?

IOMW, the International Objective Measurement Workshop, seeks to foster discussion and scholarship on high-quality, rigorous measurement practices in any field.

It convenes every two years and draws experts and practitioners from around the world to share work in the areas of:

  • Measurement in human sciences: education, medicine, licensure, surveys

  • Philosophy of measurement

  • Objectivity-oriented models and methodologies

General Inquiries

If you have any general inquiries about IOMW 2020, you can contact the organization committee:

Veronica Santelices, Pontificia Universidad Católica de Chile

Perman Gochyyev, University of California, Berkeley

Yukie Toyama, University of California, Berkeley

David Torres Irribarra, Pontificia Universidad Católica de Chile

Mark Wilson, University of California, Berkeley

© 2020 by IOMW Conference Organization Committee