Network meta-analysis with rare events and misclassified response.
Wu, Wenqi, 1989-
Count data are subject to considerable sources of what is often referred to as non-sampling error. Errors such as misclassification, measurement error, and unmeasured confounding can lead to substantially biased estimators. It is strongly recommended that epidemiologists not only acknowledge these sorts of errors in their data but also incorporate sensitivity analyses into the overall data analysis. In this dissertation, we extend previous work on Poisson regression models that allow for misclassification by thoroughly discussing the basis for the models and by allowing for extra-Poisson variability in the form of random effects. Markov chain Monte Carlo methods are applied to perform the computations needed to draw inferences and make model assessments. Through simulation, we show the improvements in inference that result from accounting for both misclassification and overdispersion.

Network meta-analysis is increasingly popular in clinical trials and provides both direct and indirect treatment comparisons. One common issue in network meta-analysis is zero outcomes, which can lead to biased estimates and low coverage probabilities. We consider both the binomial distribution and the Poisson distribution for modeling the data. Four network patterns are considered: star, loop, ladder, and one-closed-loop geometries. The Bayesian approach is used as our method of inference. Through simulation, we evaluate two continuity correction methods across the geometry patterns; the performance of continuity correction depends on both the geometry pattern and the underlying distribution assumption.

We also consider misclassification in network meta-analysis for binary outcomes. Sensitivity and specificity are introduced to adjust the misclassified data. Through simulation, we demonstrate the importance of accounting for misclassification and assess the robustness of the results to different values of sensitivity and specificity.
We find that the posterior inferences are very sensitive to the assumed misclassification rates.
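As a minimal illustration of the kind of sensitivity/specificity adjustment described above (a sketch, not the dissertation's Bayesian model; the function name and values are hypothetical), the observed event proportion under misclassification satisfies p_obs = Se * p_true + (1 - Sp) * (1 - p_true), which can be solved for the true proportion when Se + Sp > 1:

```python
def adjust_for_misclassification(p_obs, sensitivity, specificity):
    """Back-correct an observed event proportion for misclassification.

    Solves p_obs = Se * p_true + (1 - Sp) * (1 - p_true) for p_true.
    Valid only when sensitivity + specificity > 1.
    """
    denom = sensitivity + specificity - 1
    if denom <= 0:
        raise ValueError("adjustment requires sensitivity + specificity > 1")
    return (p_obs + specificity - 1) / denom

# Hypothetical example: a true proportion of 0.20 seen through an
# imperfect classifier with Se = 0.90 and Sp = 0.95.
se, sp = 0.90, 0.95
p_true = 0.20
p_obs = se * p_true + (1 - sp) * (1 - p_true)  # observed proportion 0.22
print(adjust_for_misclassification(p_obs, se, sp))  # recovers 0.2
```

This closed-form correction also makes the abstract's final point concrete: because the denominator Se + Sp - 1 can be small, modest errors in the assumed sensitivity and specificity propagate into large changes in the corrected proportion, so posterior inferences built on such adjustments are naturally sensitive to those assumed rates.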