Testlet effects on pass/fail decisions under competing Rasch models.

dc.contributor.advisor: Morgan, Grant B.
dc.creator: Hodge, Kari J., 1975-
dc.date.accessioned: 2015-09-04T14:52:07Z
dc.date.available: 2015-09-04T14:52:07Z
dc.date.created: 2015-08
dc.date.issued: 2015-07-08
dc.date.submitted: August 2015
dc.date.updated: 2015-09-04T14:52:07Z
dc.description.abstract: The item response model chosen to estimate ability can influence the proficiency classifications, or pass/fail decisions, made about people based on test scores. This poses a problem for both examinees and decision makers because examinees may be misclassified on the basis of the item response model used to estimate ability rather than their actual proficiency in the domain of interest. The purpose of this study was to examine the use of an incorrect item response model and its impact on proficiency classification. A Monte Carlo simulation design was employed to compare competing models directly when the true structure of the data is known (i.e., testlet conditions). The conditions used in the design (e.g., number of items, testlet-to-item ratio, testlet variance, proportion of items that are testlet-based, and sample size) reflect those found in the applied educational literature. An empirical example was also analyzed for pass/fail decisions under the competing models. Overall, decision consistency (DC) between the two models was very high, ranging from 91.5% to 100%. The design factor with the greatest effect on DC was the testlet effect, or testlet variance. Other design factors that affected DC were the number of testlets, an interaction between testlet variance and the percentage of total items in testlets, and an interaction between the number of testlets and the percentage of total items in testlets. The empirical example used PISA data, which are traditionally calibrated with a DRM and contained 29 items in nine testlets; classification agreement between the DRM and the TRM was 99.5%. When a testlet structure is present in applied data, the testlet variance is unknown, and as the testlet variance increases, so does the misclassification of examinees. When a measurement model does not align with the structure of the data, additional error is introduced into the parameter estimates, which directly affects the decisions made about people.
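The abstract's core procedure — generate item responses under a testlet structure, estimate ability while ignoring that structure, and compute decision consistency at a cut score — can be sketched as below. This is a minimal illustration, not the dissertation's actual design: the sample size, item counts, testlet variance, cut score, and the simple Newton-Raphson Rasch MLE (with known item difficulties, and compared against the true abilities rather than a fitted testlet model) are all assumptions chosen for brevity.

```python
# Illustrative Monte Carlo sketch: decision consistency (DC) when a plain
# Rasch model is fit to data generated under a Rasch testlet model.
# All condition values here are hypothetical, not the study's conditions.
import numpy as np

rng = np.random.default_rng(42)

n_persons, n_items, n_testlets = 500, 20, 4
theta = rng.normal(0.0, 1.0, n_persons)            # true abilities
b = rng.normal(0.0, 1.0, n_items)                  # item difficulties
testlet_of = np.repeat(np.arange(n_testlets), n_items // n_testlets)
testlet_var = 0.5                                  # testlet-effect variance
gamma = rng.normal(0.0, np.sqrt(testlet_var), (n_persons, n_testlets))

# Responses under the testlet model: P(X=1) = logistic(theta + gamma - b)
logits = theta[:, None] + gamma[:, testlet_of] - b[None, :]
x = (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-logits))).astype(int)

def rasch_mle(x_row, b, n_iter=25):
    """Newton-Raphson ability MLE under the plain Rasch model,
    ignoring testlet effects; item difficulties treated as known."""
    r = x_row.sum()
    if r == 0 or r == len(b):                      # clamp perfect scores
        return -4.0 if r == 0 else 4.0
    th = 0.0
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(th - b)))
        th += (r - p.sum()) / (p * (1 - p)).sum()  # gradient / information
    return th

theta_hat = np.array([rasch_mle(row, b) for row in x])

cut = 0.0                                          # illustrative cut score
true_pass = theta >= cut
est_pass = theta_hat >= cut
dc = (true_pass == est_pass).mean()                # decision consistency
print(f"decision consistency: {dc:.3f}")
```

Raising `testlet_var` in this sketch increases the unmodeled local dependence, which is the mechanism the abstract describes: ignored testlet variance inflates error in the ability estimates and, near the cut score, flips pass/fail decisions.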
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/2104/9503
dc.language.iso: en
dc.rights.accessrights: Worldwide access
dc.rights.accessrights: Access changed 12/4/17
dc.subject: Testlets. Rasch.
dc.title: Testlet effects on pass/fail decisions under competing Rasch models.
dc.type: Thesis
dc.type.material: text
local.embargo.lift: 2017-08-01
local.embargo.terms: 2017-08-01
thesis.degree.department: Baylor University. Dept. of Educational Psychology.
thesis.degree.grantor: Baylor University
thesis.degree.level: Doctoral
thesis.degree.name: Ph.D.

Files

Original bundle
- HODGE-DISSERTATION-2015.pdf (932.45 KB, Adobe Portable Document Format)
- Hodge_CopyRightAvailabilityForm_1.pdf (123.84 KB, Adobe Portable Document Format)

License bundle
- LICENSE.txt (1.95 KB, Plain Text)