Browsing by Author "Padgett, R. Noah, 1995-"
Now showing 1 - 2 of 2
Item
Factor analysis in educational settings : a simulation study comparing fit statistics across robust estimators. (2019-05-07)
Padgett, R. Noah, 1995-; Morgan, Grant B.

In education and social science, data often arise from nested structures, meaning that students are nested within teachers or schools. Traditional factor analytic approaches to measuring latent traits do not account for this nesting. The logic and potential issues of using multilevel confirmatory factor analysis were discussed. The ability of commonly used fit statistics to discriminate between a correctly specified model and models with one or more omitted factor loadings was investigated with receiver operating characteristic (ROC) analyses. Combining ROC analyses with traditional methods of investigating fit-statistic performance yielded converging evidence on the utility of these common fit statistics. In general, these fit statistics performed poorly and should not be relied upon heavily as evidence for the specified factor structures. Recommendations were given for which commonly reported fit statistics to use and which cut-off criteria to pair with which estimators, along with cautions about the use of the suggested cut-off criteria.
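As an illustration of the ROC logic described in this abstract, the following is a minimal sketch. The fit-statistic values here are simulated RMSEA-like placeholders, not results from the dissertation: misspecified replications are treated as the positive class, and the statistic's values are scored for how well they separate correctly specified from misspecified fits.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder simulated fit-statistic values (RMSEA-like); these numbers
# are illustrative assumptions, not results from the dissertation.
rng = np.random.default_rng(2019)
rmsea_correct = rng.normal(0.03, 0.01, 500)   # correctly specified models
rmsea_misspec = rng.normal(0.08, 0.02, 500)   # models with omitted loadings

scores = np.concatenate([rmsea_correct, rmsea_misspec])
labels = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = misspecified

# AUC summarizes how well the statistic discriminates the two model types.
auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
# One conventional cut-off choice: the threshold maximizing TPR - FPR.
cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {auc:.3f}; candidate cut-off = {cutoff:.3f}")

An AUC near 1 would indicate that the statistic cleanly separates correct from misspecified models; values near 0.5 would be consistent with the abstract's caution that these statistics perform poorly.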
Item
Misclassification errors informed by response time in item factor analysis. (2022-03-25)
Padgett, R. Noah, 1995-; Morgan, Grant B.

The measurement process necessarily leads to observations measured with some degree of error. In education, researchers often want to measure difficult-to-measure constructs such as content knowledge, motivation, affect, and personality. A scale is created from multiple items to triangulate the construct of interest using the information common across items. One source of error that is rarely accounted for is measurement error in the item response itself. In this study, I propose an approach for measuring latent traits while accounting for item-level measurement error. The proposed approach differentially weights responses by how long an individual takes to respond to each item, using response time as an absolute measure of time taken per item. Weighting responses by response time discounts the information provided by individuals who respond rapidly. As a result, individuals with longer response times more heavily inform the estimation of the model, and more highly weighted responses are theorized to reflect the construct of interest more accurately. Using more reliable information provides a foundational step toward validity evidence for inferences made with such scales. The purpose of this study was twofold. First, simulation studies were conducted to show how the proposed measurement model can be estimated and to demonstrate the effects of fitting traditional item factor models when data are prone to item-level measurement error. In these studies, I show that parameter estimates (e.g., factor loadings, residual variances) may be severely biased upward or downward. The coverage rates of interval estimates were also highly variable across the conditions studied and across parameters. The results showed that researchers' ability to make valid inferences about the underlying model is limited by how item-level measurement error is modeled. Second, the applied studies used data from the National Assessment of Educational Progress (NAEP) 2017 math assessment and an open-source dataset on extroversion. The results from these applied studies demonstrate the applicability of the proposed model and show how inferences about reliability may depend heavily on how item-level measurement error is modeled. Finally, implications and applications to educational research using the proposed methods are discussed.
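To make the weighting idea concrete, here is a minimal sketch of a response-time-weighted log-likelihood for a one-factor binary (2PL-type) item factor model. The weighting function and all names are illustrative assumptions; the dissertation's actual model specification may differ.

import numpy as np

# Hypothetical weighting function: rapid responses are down-weighted, so
# slower (more deliberate) responses contribute more to estimation.
def weights_from_response_time(rt_seconds, floor=0.1):
    w = np.log1p(rt_seconds)       # diminishing returns for very long times
    w = w / w.max()                # normalize to at most 1
    return np.clip(w, floor, 1.0)  # keep a minimum weight for every response

# Weighted log-likelihood for a 2PL-type item factor model.
# X: n_persons x n_items 0/1 responses; W: matching response-time weights;
# a, b: item discriminations and difficulties; theta: latent trait scores.
def weighted_loglik(X, W, a, b, theta):
    logits = np.outer(theta, a) - b               # n_persons x n_items
    p = 1.0 / (1.0 + np.exp(-logits))             # response probabilities
    ll = X * np.log(p) + (1 - X) * np.log(1 - p)  # per-response log-likelihood
    return (W * ll).sum()                         # weights discount fast responses

Responses whose weight sits near the floor contribute little information, mirroring the abstract's claim that rapid responses are discounted while slower responses more heavily inform estimation of the model.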