Theses/Dissertations - Statistical Sciences
Permanent URI for this collection: https://hdl.handle.net/2104/4798
Browsing Theses/Dissertations - Statistical Sciences by Subject "Bayesian statistical decision theory."
Item: Bayesian adaptive designs for non-inferiority and dose selection trials (2006-07-31). Spann, Melissa Elizabeth; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University, Dept. of Statistical Sciences.

Conducting a pharmaceutical clinical trial produces information that can be used as the trial progresses. Bayesian methods offer a highly flexible means of using such information, yielding inferences and decisions that are consistent with the laws of probability and are therefore easy to interpret. Bayesian adaptive sampling methods offer the potential to accelerate the investigation of a drug without compromising the safety of the trial's participants. These methods select a patient's treatment based on prior information and the knowledge accrued from the trial to date, which can reduce patient exposure to unsafe or ineffective treatments and thereby improve patient care in clinical trials. Improving clinical trials in this way benefits everyone involved, particularly patients: safer and less expensive drugs can reach the market faster. In this research we present a Bayesian approach to determining whether an experimental treatment is non-inferior to an active control treatment within a clinical trial that includes a placebo arm. We incorporate this non-inferiority model in a Bayesian adaptive design that uses joint posterior predictive probabilities of safety and efficacy to determine adaptive allocation probabilities. Results from a retrospective study and a simulation illustrate the method. We also present a Bayesian adaptive approach to dose selection that uses effect sizes of doses relative to placebo to perform adaptive allocation and to select the most efficacious dose. The proposed design removes treatment arms whose performance relative to placebo or to other treatment arms is undesirable.
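The adaptive allocation idea described above can be sketched in a simplified form. The dissertation's design uses joint posterior predictive probabilities of safety and efficacy; the sketch below uses only a single binary efficacy endpoint with conjugate beta posteriors, and all counts, priors, and the allocation rule itself are hypothetical illustrations, not the actual method.

```python
import random

random.seed(1)

def prob_first_arm_better(s1, n1, s2, n2, draws=20000, a=1, b=1):
    """Monte Carlo estimate of P(p1 > p2 | data) under independent
    Beta(a, b) priors; the posteriors are Beta(a + s, b + n - s)."""
    wins = 0
    for _ in range(draws):
        p1 = random.betavariate(a + s1, b + n1 - s1)
        p2 = random.betavariate(a + s2, b + n2 - s2)
        if p1 > p2:
            wins += 1
    return wins / draws

# Hypothetical interim data: 30/40 responders on the experimental arm,
# 10/40 on control.
p_better = prob_first_arm_better(30, 40, 10, 40)

# One simple adaptive rule: allocate the next patient to the experimental
# arm with probability equal to its posterior probability of superiority.
alloc_prob_experimental = p_better
```

With strongly favorable interim data, such a rule quickly shifts allocation toward the better-performing arm, which is the efficiency and patient-care argument made in the abstract.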
Results from analyses of simulated data will be discussed.

Item: Bayesian and maximum likelihood methods for some two-segment generalized linear models (2008-10-14). Miyamoto, Kazutoshi; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University, Dept. of Statistical Sciences.

The change-point (CP) problem, in which the parameters of a model change abruptly at an unknown covariate value, is common in many fields, such as process control, epidemiology, and ecology. CP problems using two-segment regression models, such as those based on generalized linear models, are very flexible and widely used. For two-segment Poisson and logistic regression models, misclassification in the response is well known to cause attenuation of key parameters and other difficulties. How misclassification affects estimation of a CP in such models has not been studied. In this research, we consider the effect of misclassification on CP problems in Poisson and logistic regression, focusing on maximum likelihood and Bayesian methods.

Item: Bayesian and pseudo-likelihood interval estimation for comparing two Poisson rate parameters using under-reported data (2009-04-01). Greer, Brandi A.; Young, Dean M.; Statistical Sciences; Baylor University, Dept. of Statistical Sciences.

We present interval estimation methods for comparing Poisson rate parameters from two independent populations with under-reported data, for both the rate difference and the rate ratio. We apply the Bayesian paradigm to derive credible intervals for the ratio and the difference of the Poisson rates, and we construct pseudo-likelihood-based confidence intervals for the ratio of the rates. We begin by considering two cases for analyzing under-reported Poisson counts: inference when training data are available and inference when they are not. From these cases we derive two marginal posterior densities for the difference in Poisson rates and the corresponding credible sets.
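A minimal sketch of a credible interval for the difference of two Poisson rates, assuming fully reported counts; the dissertation's intervals additionally adjust for under-reporting via a double-sampling model, which is not reproduced here. Conjugate Gamma posteriors are sampled with the standard library, and the counts and hyperparameters are hypothetical.

```python
import random

random.seed(7)

def rate_difference_ci(y1, t1, y2, t2, a=0.5, b=0.001, draws=20000, level=0.95):
    """Equal-tailed credible interval for lambda1 - lambda2 under independent
    Gamma(a, b) priors with rate parameter b; each posterior is
    Gamma(a + y, b + t), sampled via gammavariate(shape, scale)."""
    diffs = []
    for _ in range(draws):
        l1 = random.gammavariate(a + y1, 1.0 / (b + t1))
        l2 = random.gammavariate(a + y2, 1.0 / (b + t2))
        diffs.append(l1 - l2)
    diffs.sort()
    lo = diffs[int(draws * (1 - level) / 2)]
    hi = diffs[int(draws * (1 + level) / 2) - 1]
    return lo, hi

# Hypothetical data: 60 events in 1000 person-years vs 30 in 1000.
lo, hi = rate_difference_ci(60, 1000, 30, 1000)
```

The same sampling approach gives an interval for the rate ratio by collecting l1 / l2 instead of l1 - l2.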
First, we perform Monte Carlo simulation analyses to examine the effects of differing model parameters on the posterior density. Then we perform additional simulations to study the robustness of the posterior density to misspecified priors. We apply the new Bayesian credible intervals for the difference of Poisson rates to an example concerning mortality rates due to acute lower respiratory infection in two age groups of children in the Upper River Division in Gambia, and to an example comparing automobile accident injury rates for male and female drivers. We also use the Bayesian paradigm to derive two closed-form posterior densities and credible intervals for the Poisson rate ratio, again with and without training data. We perform a series of Monte Carlo simulation studies to examine the properties of the new posterior densities for the Poisson rate ratio and apply the credible intervals for the rate ratio to the same two examples. Lastly, we derive three new pseudo-likelihood-based confidence intervals for the ratio of two Poisson rates using the double-sampling paradigm for under-reported data: profile likelihood-, integrated likelihood-, and approximate integrated likelihood-based intervals. We compare coverage properties and interval widths of the newly derived confidence intervals via Monte Carlo simulation and apply them to an example comparing cervical cancer rates.

Item: Bayesian approaches to problems in drug safety and adaptive clinical trial designs (2008-06-10). Mauldin, Jo A.; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University, Dept. of Statistical Sciences.

The efficacy, safety, and cost of pharmaceutical products are critical issues in society today.
Motivated both financially and ethically by these concerns, the pharmaceutical industry has continually worked to develop methods that provide more efficient and more ethical assessments of the safety and efficacy of pharmaceutical products. There is an increased emphasis on targeted treatments with a focus on better patient outcomes. In this vein, recent applications of advanced statistical methods have allowed companies to reduce the cost of getting safe and effective products to market, savings that can be passed on to consumers as lower prices or additional investment in research and development. Among the methods that have become increasingly important in drug development are adaptive experimental designs. We first investigate the impact of misclassification of response on a Bayesian adaptive design. A primary argument for adaptive designs is the efficiency gained over a traditional fixed design. We examine the design's performance under misclassified responses and compare it to the case in which the misclassification is accounted for in a Bayesian model. Next, we examine the utility of safety lab measures collected during the clinical development of a drug. These labs are used to characterize a drug's safety profile, and their scope can be limited when one is reasonably confident there is no associated safety concern, reducing costs and subject burden. We consider a Bayesian generalized linear model and investigate conditional means priors and power priors for the regression coefficients used in the analysis of safety lab measures. Finally, we address the need for transparent benefit-risk assessment methods that combine safety and efficacy data and allow straightforward comparisons of treatment options. We begin by developing interval estimates for a commonly used benefit-risk ratio.
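The benefit-risk ratio idea can be illustrated with a generic Bayesian sketch: treat the benefit probability and the risk probability as independent with beta posteriors and read off quantiles of the sampled ratio. This is not the dissertation's estimator, only a simplified stand-in; the counts and the uniform priors are hypothetical.

```python
import random

random.seed(3)

def benefit_risk_ratio_ci(eff, n_eff, harm, n_harm, draws=20000, level=0.95):
    """Equal-tailed credible interval for p_benefit / p_risk under
    independent Beta(1, 1) priors on each probability."""
    ratios = []
    for _ in range(draws):
        pb = random.betavariate(1 + eff, 1 + n_eff - eff)
        pr = random.betavariate(1 + harm, 1 + n_harm - harm)
        ratios.append(pb / pr)
    ratios.sort()
    lo = ratios[int(draws * (1 - level) / 2)]
    hi = ratios[int(draws * (1 + level) / 2) - 1]
    return lo, hi

# Hypothetical arm: 120/200 responders, 20/200 with the adverse event.
lo, hi = benefit_risk_ratio_ci(120, 200, 20, 200)
```

An interval lying entirely above 1 would indicate that benefit plausibly outweighs risk on this scale, which is the kind of transparent comparison the abstract calls for.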
We then propose a Bayesian generalized linear model to jointly assess safety and efficacy, allowing direct comparisons of competing treatment options using posterior 95% credible sets and predictive probabilities.

Item: Bayesian evaluation of surrogate endpoints (2006-07-29). Feng, Chunyao; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University, Dept. of Statistical Sciences.

To save time and reduce the size and cost of clinical trials, surrogate endpoints are frequently measured instead of true endpoints. The proportion of the treatment effect explained by the surrogate endpoint (PTE) is a widely used, albeit controversial, validation criterion. Frequentist and Bayesian methods have been developed to facilitate such validation. The former does not formally incorporate prior information, a critical issue since confidence intervals on PTE are often unacceptably wide. Both the Bayesian and frequentist approaches may yield estimates of PTE outside the unit interval. Furthermore, the existing Bayesian method offers no insight into the prior used for PTE, making prior-to-posterior sensitivity analyses problematic. We propose a fully Bayesian approach that avoids both of these problems. We also consider the effect of interaction on inference for PTE. As an alternative to PTE, we develop a Bayesian model for the relative effect and the association between surrogate and true endpoints, making use of power priors.

Item: Bayesian inference for correlated binary data with an application to diabetes complication progression (2006-10-26). Carlin, Patricia M.; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University, Dept. of Statistical Sciences.

Correlated binary measurements occur in a variety of practical contexts and pose interesting statistical modeling challenges. To model the separate probabilities for each measurement, we must somehow account for the relationship between them.
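One elementary way to account for the relationship between two binary measurements is to build the 2-by-2 joint distribution directly from the two marginal probabilities and a correlation. This construction is in the same spirit as, though much simpler than, the Sarmanov framework used in the dissertation; the marginal probabilities and correlation below are hypothetical, loosely inspired by the retinopathy/nephropathy application.

```python
import math

def joint_binary(p1, p2, rho):
    """2x2 joint distribution for correlated Bernoulli variables with
    marginals p1, p2 and correlation rho.  The correlation must be
    feasible: every cell probability has to land in [0, 1]."""
    cov = rho * math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
    p11 = p1 * p2 + cov          # both events occur
    p10 = p1 - p11               # only the first occurs
    p01 = p2 - p11               # only the second occurs
    p00 = 1 - p11 - p10 - p01    # neither occurs
    cells = {(1, 1): p11, (1, 0): p10, (0, 1): p01, (0, 0): p00}
    assert all(0 <= v <= 1 for v in cells.values()), "rho infeasible"
    return cells

# Hypothetical: P(retinopathy)=0.4, P(nephropathy)=0.25, correlation 0.3.
cells = joint_binary(0.4, 0.25, 0.3)
```

A positive correlation inflates the both-events cell above the independence value p1*p2, which is exactly the dependence a joint model must capture.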
We focus our applications on the progression of the complications diabetic retinopathy and diabetic nephropathy. We first consider probabilistic models that employ Bayes' theorem to predict the probability of onset of diabetic nephropathy given that a patient has developed diabetic retinopathy, modifying the work of Ballone, Colagrande, Di Nicola, Di Mascio, Di Mascio, and Capani (2003). We consider beta-binomial models using the Sarmanov (1966) framework, which allows us to specify the marginal distributions for a given bivariate likelihood, and we present both maximum likelihood and Bayesian methods based on this approach. Our Bayesian methods include a fully identified model based on proportional probabilities of disease incidence. Finally, we consider Bayesian models for three different prior structures using likelihoods that represent the data as a 2-by-2 table, treating the data as counts resulting from two correlated binary measurements: the onset of diabetic retinopathy and the onset of diabetic nephropathy. We compare the posterior distributions resulting from a Jeffreys' prior, independent beta priors, and conditional beta priors, based on a structural-zero likelihood model and the bivariate binomial model.

Item: Conjugate hierarchical models for spatial data: an application on an optimal selection procedure (2006-07-24). McBride, John Jacob; Bratcher, Thomas L.; Statistical Sciences; Baylor University, Dept. of Statistical Sciences.

The theory of generalized linear models provides a unifying class of statistical distributions that can be used to model both discrete and continuous events. In this dissertation we present a new conjugate hierarchical Bayesian generalized linear model that can be used to model counts of occurrences in the presence of spatial correlation. We assume that the counts are taken from geographic regions or areal units (zip codes, counties, etc.)
and that the conditional distributions of these counts for each areal unit are Poisson with unknown rates or relative risks. We incorporate the spatial association of the counts through a neighborhood structure based on the arrangement of the areal units. Having defined the neighborhood structure, we model this spatial association with a conditionally autoregressive (CAR) model as developed by Besag (1974). Once the spatial model has been created, we adapt a subset selection procedure of Bratcher and Bhalla (1974) to select the areal unit(s) having the highest relative risks.

Item: Logistic regression with covariate measurement error in an adaptive design: a Bayesian approach (2008-10-14). Crixell, JoAnna Christine, 1979-; Seaman, John Weldon, 1956-; Stamey, James D.; Statistical Sciences; Baylor University, Dept. of Statistical Sciences.

Adaptive designs are increasingly popular in clinical trials because they have the potential to decrease patient exposure to treatments that are less efficacious or unsafe. The Bayesian approach to adaptive designs is attractive because it makes systematic use of prior data and other information in a way that is consistent with the laws of probability. The goal of this dissertation is to examine the effects of measurement error on a Bayesian adaptive design. Measurement error problems are common in regression applications where the variable of interest cannot be measured perfectly. This is often unavoidable because infallible measurement tools are either too expensive or unavailable. When modeling the relationship between a response variable and other covariates, we must account for the uncertainty introduced when one or both are measured with error.
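The attenuation caused by covariate measurement error can be seen even in a toy continuous-outcome simulation; the dissertation studies the harder logistic-regression, adaptive-design case, so everything below is only a hypothetical illustration of the general phenomenon. Replacing the true covariate x with an error-prone surrogate w shrinks the observed association toward zero.

```python
import random

random.seed(11)

def corr(xs, ys):
    """Sample Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

n = 5000
x = [random.gauss(0, 1) for _ in range(n)]       # true covariate
w = [xi + random.gauss(0, 1) for xi in x]        # error-prone surrogate
y = [2 * xi + random.gauss(0, 1) for xi in x]    # outcome driven by x

r_true = corr(x, y)    # theory: 2/sqrt(5) ~ 0.894
r_naive = corr(w, y)   # theory: 2/sqrt(10) ~ 0.632
```

The naive analysis that ignores the error systematically understates the association, which is why a Bayesian model that represents the measurement process explicitly is attractive here.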
This dissertation explores the consequences of imperfect measurements for a Bayesian adaptive design.

Item: Normal approximation for Bayesian models with non-sampling bias (2014-01-28). Yuan, Jiang, 1984-; Stamey, James D.; Statistical Sciences; Baylor University, Dept. of Statistical Sciences.

Bayesian sample size determination can be computationally intensive for models where Markov chain Monte Carlo (MCMC) methods are commonly used for inference. Unmeasured confounding is also common in large databases. We present a normal-theory approximation as an alternative to time-consuming MCMC simulations in sample size determination for a binary regression with unmeasured confounding. Cheng et al. (2009) developed a Bayesian approach to average power calculations in binary regression models and applied it to the common medical scenario in which a patient's disease status is not known. In this dissertation, we generate simulations based on their Bayesian model with both binary and normal outcomes. We also use the normal-theory approximation to speed up sample size determination and compare power and computation time for both approaches.

Item: Selected topics in statistical discriminant analysis (2007-02-07). Ounpraseuth, Songthip T.; Young, Dean M.; Statistical Sciences; Baylor University, Dept. of Statistical Sciences.

This dissertation consists of three selected topics in statistical discriminant analysis: dimension reduction, regularization methods, and imputation methods. In Chapter 2 we first derive a new linear dimension-reduction method to determine a low-dimensional hyperplane that preserves or nearly preserves the separation of the individual populations and the Bayes probability of misclassification. Next, we derive a new low-dimensional representation-space approach for multiple high-dimensional multivariate normal populations.
Third, we develop a linear dimension-reduction method for quadratic discriminant analysis when the class population parameters must be estimated. Using a Monte Carlo simulation with several different parameter configurations, we compare the new methodology with two competing linear dimension-reduction procedures for statistical discrimination in terms of expected error rates. We find that under certain conditions, the new method yields superior results for a majority of the configurations considered. In addition, in several configurations classification performance is actually enhanced by the new feature-reduction method when the sample size is sufficiently small relative to the original feature-space dimension. In Chapter 3 we compare and contrast the efficacy of seven regularization methods for the quadratic discriminant function under a variety of parameter configurations, using the expected error rate to assess efficacy. A two-parameter family of regularized class covariance-matrix estimators derived by Friedman (1989) yields superior classification results relative to its six competitors for the configurations, training-sample sizes, and original feature dimensions examined here. Finally, in Chapter 4 we consider the statistical classification problem for two multivariate normal populations with equal covariance matrices when the training samples contain observations missing at random; that is, we analyze the effect of missing-at-random data on Anderson's linear discriminant function. We use a Monte Carlo simulation to examine the expected probabilities of misclassification under several single and multiple imputation methods. The seven missing-data methods are: complete observation, mean substitution, expectation maximization, regression, predictive mean matching, propensity score, and MCMC.
The regression, predictive mean, and propensity score multiple imputation approaches are, in general, superior to the other methods for the configurations and training-sample sizes we consider.
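As background to the dimension-reduction theme of the last item, here is a minimal sketch of the classical one-dimensional Fisher reduction w = S_w^(-1)(m1 - m2) for two classes in two dimensions. This is standard textbook material, not the new methods developed in the dissertation, and the data are hypothetical.

```python
def mean(vs):
    """Componentwise mean of a list of 2-D points."""
    return [sum(col) / len(vs) for col in zip(*vs)]

def pooled_cov(a, b):
    """Pooled within-class covariance of two 2-D samples."""
    ma, mb = mean(a), mean(b)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for data, m in ((a, ma), (b, mb)):
        for x in data:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    dof = len(a) + len(b) - 2
    return [[v / dof for v in row] for row in s]

def fisher_direction(a, b):
    """w = S_w^{-1} (m_a - m_b): the direction onto which projecting
    the data best separates the two class means."""
    ma, mb = mean(a), mean(b)
    s = pooled_cov(a, b)
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * d[0] + inv[0][1] * d[1],
            inv[1][0] * d[0] + inv[1][1] * d[1]]

# Two small hypothetical 2-D classes.
class_a = [(2.0, 1.0), (2.5, 1.5), (3.0, 1.2), (2.2, 0.8)]
class_b = [(0.0, 0.0), (0.5, 0.4), (-0.3, 0.2), (0.1, -0.1)]
w = fisher_direction(class_a, class_b)
```

Projecting onto w collapses the two-dimensional problem to one dimension while keeping the class means well separated, the same goal the dissertation pursues for harder, higher-dimensional settings.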