Department of Statistical Sciences
http://hdl.handle.net/2104/4761
Tue, 30 Jun 2015 03:40:28 GMT
http://hdl.handle.net/2104/8926
Normal approximation for Bayesian models with non-sampling bias.
Yuan, Jiang, 1984-
Bayesian sample size determination can be computationally intensive for models where Markov chain Monte Carlo (MCMC) methods are commonly used for inference. Unmeasured confounding is also common in large databases. We present a normal theory approximation as an alternative to time-consuming MCMC simulations for sample size determination in a binary regression with unmeasured confounding. Cheng et al. (2009) develop a Bayesian approach to average power calculations in binary regression models and apply it to the common medical scenario where a patient's disease status is not known. In this dissertation, we generate simulations based on their Bayesian model with both binary and normal outcomes. We also use the normal theory approximation to speed up such sample size determination and compare the power and computational time of both approaches.
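As a minimal illustration of the normal theory idea (not the dissertation's binary-regression-with-confounding model), a Laplace-style approximation replaces a posterior with a Gaussian centered at the posterior mode, with variance given by the inverse curvature of the log density there. The conjugate Beta-Binomial example below, where the exact posterior is known, is purely illustrative:

```python
# Normal approximation to a Beta(a, b) posterior for a binomial proportion.
# The log density (up to a constant) is (a-1)*log(t) + (b-1)*log(1-t), so the
# mode is (a-1)/(a+b-2) and the approximate variance is the negative inverse
# of the second derivative of the log density at the mode.
def normal_approx_beta(a, b):
    mode = (a - 1) / (a + b - 2)
    # -d^2/dt^2 log density at the mode: (a-1)/t^2 + (b-1)/(1-t)^2
    curvature = (a - 1) / mode**2 + (b - 1) / (1 - mode) ** 2
    return mode, 1.0 / curvature

# Example: 30 successes in 100 trials with a flat Beta(1, 1) prior -> Beta(31, 71).
mu, var = normal_approx_beta(31, 71)
```

The appeal for sample size work is that `mu` and `var` come from a closed form rather than an MCMC run, so the calculation can be repeated over many candidate sample sizes cheaply.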
Tue, 28 Jan 2014 00:00:00 GMT
http://hdl.handle.net/2104/8922
Sample size determination for two sample binomial and Poisson data models based on Bayesian decision theory.
Sides, Ryan A.
Sample size determination continues to be an important research area in statistical analysis due to the cost and time constraints that often exist in areas such as pharmaceuticals and public health. We begin by outlining the work of a previous article that sought the minimum sample size needed to reach a desired expected power for binomial data under the Bayesian paradigm. We improve on their efforts so that we can specify not only a desired expected Bayesian power, but also a more generic loss function and a desired expected Bayesian significance level, the latter not previously considered. We then extend these methodologies to handle Poisson data and discuss challenges in the methodology. We cover a detailed example in both cases and display various results of interest.
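The expected-Bayesian-power idea can be sketched with a simple Monte Carlo for two-sample binomial data: draw the true proportions from a design prior, simulate a trial of size n per arm, and declare success when the posterior probability of a treatment effect clears a threshold. The design prior, analysis priors, and threshold below are invented for illustration and are not the dissertation's choices:

```python
import random

# Monte Carlo estimate of expected Bayesian power for comparing two binomial
# proportions p1 and p2 with n subjects per arm. "Success" means the posterior
# probability that p1 > p2 exceeds `threshold`.
def expected_power(n, n_sims=2000, threshold=0.95, post_draws=500, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Design prior: an optimistic effect, chosen purely for illustration.
        p1, p2 = rng.betavariate(8, 2), rng.betavariate(2, 8)
        y1 = sum(rng.random() < p1 for _ in range(n))
        y2 = sum(rng.random() < p2 for _ in range(n))
        # Flat Beta(1, 1) analysis priors give conjugate Beta posteriors.
        prob = sum(
            rng.betavariate(1 + y1, 1 + n - y1) > rng.betavariate(1 + y2, 1 + n - y2)
            for _ in range(post_draws)
        ) / post_draws
        hits += prob > threshold
    return hits / n_sims

pw = expected_power(20, n_sims=200)
```

The minimum sample size is then the smallest n for which `expected_power(n)` reaches the target; replacing the success criterion with a generic loss function is the direction the abstract describes.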
We conclude by covering a mixed treatment comparisons meta-analysis problem when analyzing Poisson data. Traditional methods do not allow for the presence of underreporting. Here, we illustrate how a constant underreporting rate for all treatments has no effect on relative risk comparisons; however, when this rate changes per treatment, not accounting for it can lead to serious errors. Our method allows this to be taken into account so that correct analyses can be made.
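The cancellation argument for a constant underreporting rate can be checked directly: if both arms report only a fraction p of their Poisson counts, p cancels in the relative risk, whereas arm-specific rates bias it. The numbers below are illustrative only:

```python
# With a common reporting probability p, reported counts are thinned Poisson
# with means p*lam1*t1 and p*lam2*t2, so the usual relative risk estimate
# (y1/t1)/(y2/t2) still targets (p*lam1)/(p*lam2) = lam1/lam2. With
# arm-specific rates p1 != p2, the factor does not cancel.
def relative_risk(y1, t1, y2, t2):
    return (y1 / t1) / (y2 / t2)

lam1, lam2, t = 4.0, 2.0, 100.0
for p in (1.0, 0.7, 0.3):  # common underreporting rate: no effect on RR
    rr = relative_risk(p * lam1 * t, t, p * lam2 * t, t)
    assert abs(rr - lam1 / lam2) < 1e-9

# Differential reporting (p1=0.9, p2=0.5) inflates the estimate past the true RR of 2.
biased = relative_risk(0.9 * lam1 * t, t, 0.5 * lam2 * t, t)
```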
Tue, 28 Jan 2014 00:00:00 GMT
http://hdl.handle.net/2104/8915
Topics in interval estimation for two problems using double sampling.
Njoh, Linda.
This dissertation addresses two distinct topics. The first considers interval estimation methods of the odds ratio parameter in two by two cohort studies with misclassified data. That is, we derive two first-order likelihood-based confidence intervals and two pseudo-likelihood-based confidence intervals for the odds ratio in a two by two cohort study subject to differential misclassification and non-differential misclassification using a double-sampling paradigm for binary data. Specifically, we derive the Wald, score, profile likelihood, and approximate integrated likelihood-based confidence intervals for the odds ratio of a two by two cohort study. We then compare coverage properties and median interval widths of the newly derived confidence intervals via a Monte Carlo simulation. Our simulation results reveal the consistent superiority of the approximate integrated likelihood confidence interval, especially when the degree of misclassification is high.
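For orientation, the classical Wald interval for the odds ratio of a fully observed 2x2 table is sketched below; the dissertation's intervals generalize this to misclassified data using the double-sampling likelihood rather than raw counts, so this is only the uncorrected baseline with illustrative numbers:

```python
import math

# Wald confidence interval for the odds ratio of a 2x2 table with cell counts
# a, b (exposed: events, non-events) and c, d (unexposed: events, non-events).
# Built on the log scale, where log(OR) is approximately normal with
# standard error sqrt(1/a + 1/b + 1/c + 1/d).
def wald_or_ci(a, b, c, d, z=1.96):
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

lo, hi = wald_or_ci(40, 60, 20, 80)  # OR estimate is (40*80)/(60*20) = 8/3
```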
The second topic is concerned with interval estimation methods for a Poisson rate parameter in the presence of count misclassification. More specifically, we derive multiple first-order asymptotic confidence intervals for estimating a Poisson rate parameter using a double sample for data containing false-negative and false-positive observations in one case and for data with only false-negative observations in another case. We compare the new confidence intervals in terms of coverage probability and median interval width via a simulation experiment. We then apply our derived confidence intervals to real-data examples. Over the parameter configurations and observation-opportunity sizes considered here, our investigation demonstrates that the Wald interval is the best omnibus interval estimator for a Poisson rate parameter using data subject to over- and under-counts. Also, the profile log-likelihood-based confidence interval is the best omnibus confidence interval for a Poisson rate parameter using data subject to visibility bias.
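Similarly, the uncorrected Wald interval for a Poisson rate, the baseline that the double-sampling-corrected versions extend, can be sketched as follows (counts and opportunity size are illustrative):

```python
import math

# Wald interval for a Poisson rate: y events over an observation opportunity
# of size t. The estimate is rate_hat = y/t with approximate variance
# rate_hat/t. The dissertation's intervals instead adjust the likelihood for
# over- and under-counts estimated from the double sample.
def wald_poisson_ci(y, t, z=1.96):
    rate = y / t
    se = math.sqrt(rate / t)
    return rate - z * se, rate + z * se

lo, hi = wald_poisson_ci(50, 100.0)  # rate estimate 0.5
```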
Tue, 28 Jan 2014 00:00:00 GMT
http://hdl.handle.net/2104/8896
Topics in multivariate covariance estimation and time series analysis.
Beeson, John D. (John David)
In this dissertation we will discuss two topics relevant to statistical analysis. The first is a new test of linearity for a stationary time series that extends the bootstrap methods of Berg et al. (2010) to the goodness-of-fit (GoF) statistics specified in Harvill (1999) and Jahan and Harvill (2008). Berg's bootstrap method utilizes the statistics specified in Hinich (1982) within an autoregressive bootstrap procedure; we show that by utilizing GoF methods, we can increase the power of the test. In Chapter three we discuss an alternative way of approaching Friedman's (1989) regularized discriminant method. Regularized discriminant analysis (RDA) is a well-known method of covariance regularization for the multivariate-normal-based discriminant function. RDA generalizes the ideas of linear (LDA), quadratic (QDA), and mean-eigenvalue covariance regularization methods into one framework. The original idea and known extensions involve cross-validating in potentially high dimensions and can be highly computational. We propose using the Kullback-Leibler divergence as an optimization criterion to estimate a linear combination of class covariance structures, which increases the accuracy of the RDA method and limits the use of leave-one-out cross-validation.
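Friedman's (1989) regularized class covariance, the object the chapter proposes to tune via a Kullback-Leibler criterion rather than cross-validation over (lambda, gamma), can be sketched as follows; the matrices are illustrative:

```python
import numpy as np

# Friedman's RDA estimate for class k: lam shrinks the class covariance
# toward the pooled covariance, and gamma then shrinks the result toward a
# multiple of the identity (the mean eigenvalue), unifying QDA, LDA, and
# mean-eigenvalue regularization in one two-parameter family.
def rda_covariance(S_k, S_pooled, lam, gamma):
    p = S_k.shape[0]
    S_lam = (1 - lam) * S_k + lam * S_pooled
    return (1 - gamma) * S_lam + gamma * (np.trace(S_lam) / p) * np.eye(p)

S_k = np.array([[2.0, 0.5], [0.5, 1.0]])
S_pool = np.eye(2)
# lam=1, gamma=0 recovers LDA's pooled covariance; lam=0, gamma=0 recovers QDA's.
```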
Tue, 28 Jan 2014 00:00:00 GMT