Theses/Dissertations - Statistical Sciences
Permanent URI for this collection: https://hdl.handle.net/2104/4798
Browsing Theses/Dissertations - Statistical Sciences by Author "Baylor University. Dept. of Statistical Sciences."
Now showing 1 - 20 of 37
Item Bayesian adaptive designs for non-inferiority and dose selection trials. (2006-07-31) Spann, Melissa Elizabeth; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. The process of conducting a pharmaceutical clinical trial often produces information that can be used as the trial progresses. Bayesian methods offer a highly flexible means of using such information, yielding inferences and decisions that are consistent with the laws of probability and consequently admit ease of interpretation. Bayesian adaptive sampling methods offer the potential to accelerate the investigation of a drug without compromising the safety of the trial's participants. These methods select a patient's treatment based upon prior information and the knowledge accrued from the trial to date, which can reduce patient exposure to unsafe or ineffective treatments and therefore improve patient care in clinical trials. Improving the process of clinical trials in this manner benefits all involved, especially the patients: safer and less expensive drugs can reach the market faster. In this research we present a Bayesian approach to determining whether an experimental treatment is non-inferior to an active control treatment within a clinical trial that includes a placebo arm. We incorporate this non-inferiority model in a Bayesian adaptive design that uses joint posterior predictive probabilities of safety and efficacy to determine adaptive allocation probabilities. Results from a retrospective study and a simulation illustrate the method. We also present a Bayesian adaptive approach to dose selection that uses effect sizes of doses relative to placebo to perform adaptive allocation and to select the most efficacious dose. The proposed design removes treatment arms whose performance relative to placebo or other treatment arms is undesirable. Results from analyses of simulated data are discussed.

Item Bayesian and likelihood-based interval estimation for the risk ratio using double sampling with misclassified binomial data. (2011-01-05) Rahardja, Dewi Gabriela; Young, Dean M.; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. We consider the problem of point and interval estimation for the risk ratio using double sampling with two-sample misclassified binary data. For such data, it is well known that the actual data model is unidentifiable. To achieve model identifiability, we obtain additional data via a double-sampling scheme. For the Bayesian paradigm, we devise a parametric, straightforward algorithm for sampling from the joint posterior density of the parameters given the data. We then obtain Bayesian point and interval estimators of the risk ratio of two proportion parameters. We illustrate our algorithm using a real data example and conduct two Monte Carlo simulation studies to demonstrate that both the point and interval estimators perform well. Additionally, we derive three likelihood-based confidence intervals (CIs) for the risk ratio. Specifically, we first obtain closed-form maximum likelihood estimators (MLEs) for all parameters. We then derive three CIs for the risk ratio: a naive Wald interval, a modified Wald interval, and a Fieller-type interval. For illustration, we apply the three CIs to a real data example. We also perform Monte Carlo simulation studies to assess and compare the coverage probabilities and average lengths of the three CIs. The modified Wald CI performs best of the three and has near-nominal coverage probabilities.

Item Bayesian and maximum likelihood methods for some two-segment generalized linear models. (2008-10-14) Miyamoto, Kazutoshi; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. The change-point (CP) problem, wherein parameters of a model change abruptly at an unknown covariate value, is common in many fields, such as process control, epidemiology, and ecology. CP problems using two-segment regression models, such as those based on generalized linear models, are very flexible and widely used. For two-segment Poisson and logistic regression models, misclassification in the response is well known to cause attenuation of key parameters and other difficulties. How misclassification affects estimation of a CP in such models has not been studied. In this research, we consider the effect of misclassification on CP problems in Poisson and logistic regression, focusing on maximum likelihood and Bayesian methods.

Item Bayesian and pseudo-likelihood interval estimation for comparing two Poisson rate parameters using under-reported data. (2009-04-01) Greer, Brandi A.; Young, Dean M.; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. We present interval estimation methods, for both the rate difference and the rate ratio, for comparing Poisson rate parameters from two independent populations with under-reported data. We apply the Bayesian paradigm to derive credible intervals for both the ratio and the difference of the Poisson rates, and we construct pseudo-likelihood-based confidence intervals for the ratio of the rates. We begin by considering two cases for analyzing under-reported Poisson counts: inference when training data are available and inference when they are not. From these cases we derive two marginal posterior densities for the difference in Poisson rates and corresponding credible sets. First, we perform Monte Carlo simulation analyses to examine the effects of differing model parameters on the posterior density. Then we perform additional simulations to study the robustness of the posterior density to misspecified priors.
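Several entries above derive credible intervals for differences of Poisson rates. As a minimal illustration of the underlying conjugate machinery (without the under-reporting complication these dissertations actually address), the following sketch draws from the gamma posteriors of two independent rates; the counts and prior settings are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical event counts and person-time for two independent Poisson processes.
y1, t1 = 30, 100.0
y2, t2 = 18, 100.0

# Conjugate Gamma(a, b) priors on each rate; a = b = 0.5 is weakly informative.
a, b = 0.5, 0.5
lam1 = rng.gamma(a + y1, 1.0 / (b + t1), size=100_000)  # posterior draws, group 1
lam2 = rng.gamma(a + y2, 1.0 / (b + t2), size=100_000)  # posterior draws, group 2

diff = lam1 - lam2                           # posterior draws of the rate difference
lo, hi = np.quantile(diff, [0.025, 0.975])   # 95% equal-tailed credible interval
print(f"95% credible interval for lambda1 - lambda2: ({lo:.3f}, {hi:.3f})")
```

The same draws give a credible interval for the rate ratio by forming `lam1 / lam2` instead of the difference.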
In addition, we apply the new Bayesian credible intervals for the difference of Poisson rates to an example concerning mortality rates due to acute lower respiratory infection in two age groups of children in the Upper River Division in Gambia, and to an example comparing automobile accident injury rates for male and female drivers. We also use the Bayesian paradigm to derive two closed-form posterior densities and credible intervals for the Poisson rate ratio, again with and without training data. We perform a series of Monte Carlo simulation studies to examine the properties of our new posterior densities for the Poisson rate ratio and apply our Bayesian credible intervals for the rate ratio to the same two examples. Lastly, we derive three new pseudo-likelihood-based confidence intervals for the ratio of two Poisson rates using the double-sampling paradigm for under-reported data: profile likelihood-, integrated likelihood-, and approximate integrated likelihood-based intervals. We compare coverage properties and interval widths of the newly derived confidence intervals via Monte Carlo simulation, and then apply them to an example comparing cervical cancer rates.

Item Bayesian approaches for design of psychometric studies with underreporting and misclassification. (2013-05-15) Falley, Brandi; Stamey, James D.; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. Measurement error problems in binary regression are of considerable interest among researchers, especially in epidemiological studies. Misclassification can be considered a special case of measurement error for the situation in which the measurement is the categorical classification of items. Bayesian methods offer practical advantages for the analysis of epidemiological data, including the possibility of incorporating relevant prior scientific information and the ability to make inferences that do not rely on large-sample assumptions. Because of the high cost and time constraints of clinical trials, researchers often need to determine the smallest sample size that provides accurate inferences for a parameter of interest. Although most experimenters have employed frequentist methods, the Bayesian paradigm offers a wide variety of methodologies that are becoming increasingly popular in clinical trials because of their flexibility and ease of interpretation. We simultaneously estimate efficacy and safety, where the safety variable is subject to underreporting. We propose a Bayesian sample size determination method to account for the underreporting and appropriately power the study. We allow efficacy and safety to be independent, as well as dependent through a regression model; in both models, the safety variable may be underreported.

Item Bayesian approaches to correcting bias in epidemiological data. (2011-05-12) Bennett, Monica M.; Stamey, James D.; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. Bias in parameter estimation for count data is a common concern, and the concern is even greater when not all counts are recorded. Failing to adjust for underreported data can lead to incorrect parameter estimates. A Bayesian Poisson regression model to account for underreported data has previously been developed. We expand this model by using multilevel Poisson regression, considering both the case where the probability of reporting is the same for all groups and the case where there are multiple reporting probabilities. In both situations we show the importance of accounting for underreporting in the analysis.
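The bias from underreporting that these entries adjust for is easy to see by simulation: if each event is reported independently with probability p, the naive rate estimate is attenuated by exactly that factor. The rate, reporting probability, and sample size below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true, p_report, n = 4.0, 0.7, 50_000

true_counts = rng.poisson(lam_true, size=n)
# Each event is independently reported with probability p_report
# (binomial thinning of the Poisson counts).
observed = rng.binomial(true_counts, p_report)

naive = observed.mean()        # estimates p_report * lam_true, not lam_true
corrected = naive / p_report   # unbiased once the reporting rate is known
print(naive, corrected)
```

In practice the reporting probability is unknown, which is why the dissertations above bring in training data, double sampling, or informative priors to identify it.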
Another common source of bias in parameter estimation is missing data. In particular, we consider missing data in follow-up studies aimed at estimating the rate of a particular event. If we ignore the missing data, then both the overall event rates and the uncertainty in the model parameters will be underestimated. To address this problem we extend an existing Bayesian model for missing data in follow-up studies to two multilevel models: one uses an overdispersion term to account for excess variability in the data, and the second uses random intercepts and slopes. The last topic we consider is a meta-analysis comparison. We are interested in the performance of methods for safety signal evaluation of rare events, a topic of particular interest given the recent FDA guidance for assessing cardiovascular risk in diabetes drugs. We consider three methods based on the Cox proportional hazards model, including a Bayesian approach. A formal comparison of the methods is conducted using a simulation study in which we model two treatments and consider several scenarios.

Item Bayesian approaches to problems in drug safety and adaptive clinical trial designs. (2008-06-10) Mauldin, Jo A.; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. The efficacy, safety, and cost of pharmaceutical products are critical issues in society today. Motivated both financially and ethically by these concerns, the pharmaceutical industry has continually worked to develop methods that provide more efficient and ethical assessments of the safety and efficacy of pharmaceutical products, with increased emphasis on more targeted treatments and better patient outcomes. In this vein, recent applications of advanced statistical methods have allowed companies to reduce the costs of getting safe and effective products to market, savings that can be passed on to consumers in the form of price cuts or additional investment in research and development. Among the methods that have become increasingly important in drug development are adaptive experimental designs. We first investigate the impact of misclassification of response on a Bayesian adaptive design. A primary argument for the use of adaptive designs is the efficiency gained over a traditional fixed design. We examine the design's performance under misclassified responses and compare it to the case in which we account for the misclassification in a Bayesian model. Next, we examine the utility of safety lab measures collected during the clinical development of a drug. These labs are used to characterize a drug's safety profile, and their scope can be limited when one is reasonably confident there is no associated safety concern, facilitating reduced costs and less subject burden. We consider the use of a Bayesian generalized linear model and investigate conditional means priors and power priors for the regression coefficients used in the analysis of safety lab measures. Finally, we address the need for transparent benefit-risk assessment methods that combine safety and efficacy data and allow straightforward comparisons of treatment options. We begin by developing interval estimates for a commonly used benefit-risk ratio. We then propose a Bayesian generalized linear model to jointly assess safety and efficacy, allowing direct comparisons of competing treatment options using posterior 95% credible sets and predictive probabilities.

Item Bayesian evaluation of surrogate endpoints. (2006-07-29) Feng, Chunyao; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. To save time and reduce the size and cost of clinical trials, surrogate endpoints are frequently measured instead of true endpoints. The proportion of the treatment effect explained by the surrogate endpoint (PTE) is a widely used, albeit controversial, validation criterion. Frequentist and Bayesian methods have been developed to facilitate such validation. The former does not formally incorporate prior information, a critical issue since confidence intervals for PTE are often unacceptably wide. Both the Bayesian and frequentist approaches may yield estimates of PTE outside the unit interval. Furthermore, the existing Bayesian method offers no insight into the prior used for PTE, making prior-to-posterior sensitivity analyses problematic. We propose a fully Bayesian approach that avoids both of these problems. We also consider the effect of interaction on inference for PTE. As an alternative to the use of PTE, we develop a Bayesian model for the relative effect and the association between surrogate and true endpoints, making use of power priors.

Item Bayesian inference for correlated binary data with an application to diabetes complication progression. (2006-10-26) Carlin, Patricia M.; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. Correlated binary measurements can occur in a variety of practical contexts and afford interesting statistical modeling challenges. In order to model the separate probabilities for each measurement we must somehow account for the relationship between them. We focus our applications on the progression of the complications of diabetic retinopathy and diabetic nephropathy. We first consider probabilistic models that employ Bayes' theorem for predicting the probability of onset of diabetic nephropathy given that a patient has developed diabetic retinopathy, modifying the work of Ballone, Colagrande, Di Nicola, Di Mascio, Di Mascio, and Capani (2003).
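The Bayes'-theorem calculation in the last entry reduces, in its simplest form, to reversing a conditional probability. The probabilities below are purely hypothetical, not estimates from the cited study:

```python
# Hypothetical illustrative probabilities (not from the cited study).
p_N = 0.15             # prior probability of nephropathy onset
p_R_given_N = 0.60     # probability of retinopathy among nephropathy patients
p_R_given_notN = 0.20  # probability of retinopathy otherwise

# Law of total probability, then Bayes' theorem.
p_R = p_R_given_N * p_N + p_R_given_notN * (1 - p_N)
p_N_given_R = p_R_given_N * p_N / p_R
print(round(p_N_given_R, 4))  # → 0.3462
```

Observing retinopathy more than doubles the assessed probability of nephropathy under these illustrative numbers, which is the kind of conditional updating the entry describes.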
We consider beta-binomial models using the Sarmanov (1966) framework, which allows us to specify the marginal distributions for a given bivariate likelihood, and present both maximum likelihood and Bayesian methods based on this approach. Our Bayesian methods include a fully identified model based on proportional probabilities of disease incidence. Finally, we consider Bayesian models for three different prior structures using likelihoods representing the data in the form of a 2-by-2 table. To do so, we treat the data as counts resulting from two correlated binary measurements: the onset of diabetic retinopathy and the onset of diabetic nephropathy. We compare the resulting posterior distributions from a Jeffreys' prior, independent beta priors, and conditional beta priors, based on a structural-zero likelihood model and the bivariate binomial model.

Item Bayesian modelling of mixed outcome types using random effect. (2012-11-29) Wei, Hua, 1982-; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. The problem of analyzing associated outcomes of mixed type arises frequently in practice. In this dissertation we develop several Bayesian models for analyzing associated discrete and continuous responses simultaneously using random effects. We also extend these models to overcome the bias in parameter estimation caused by ignoring skewness of the continuous response, a misclassified covariate, or a zero-inflated discrete response. Simulation studies indicate that our models provide good estimates of regression coefficients, response variability, and the correlation between responses. We also show that ignoring the random effects leads to a bias in parameter estimates that is magnified with increasing variability of the random effect.
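The random-effects construction in the mixed-outcomes entry can be sketched by simulation: a shared subject-level effect induces association between a Poisson response and a normal response, the Poisson-normal pattern the entry studies. All parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Shared subject-level random effect links the two responses.
b = rng.normal(scale=0.5, size=n)

y_count = rng.poisson(np.exp(0.5 + b))                   # discrete response
y_cont = 1.0 + 2.0 * b + rng.normal(scale=1.0, size=n)   # continuous response

r = np.corrcoef(y_count, y_cont)[0, 1]
print(round(r, 3))  # positive association induced entirely by b
```

Dropping `b` from either equation breaks the association, which is why ignoring the random effect biases estimates of the correlation between responses.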
Comparison to corresponding likelihood methods suggests that the Bayesian Poisson-normal (PN) model takes clear advantage of prior information when available and performs similarly when relatively non-informative priors are used. A Bayesian sample size determination method is also developed. We make three extensions to the PN model, accounting for three complications common to PN regression: continuous responses exhibiting skewness, potential misclassification in a binary covariate (the MisPSN model), and excess zeros in the Poisson counts (the ZipPSN model). Simulation studies for these models show that ignoring any of these complications yields estimated regression coefficients farther from the truth than those from models that account for them. For the MisPSN model, we study the attenuation of the regression parameters caused by the misclassified covariate. For the ZipPSN model, we discuss how zero-inflation influences parameter estimation. Finally, we compare the PN model to a model using two separate but correlated random effects; in simulation studies we find little advantage to the more complicated model.

Item Bayesian models for discrete censored sampling and dose finding. (2010-06-23) Pruszynski, Jessica E.; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. We first consider the problem of discrete censored sampling. Censored binomial data may lead to irregular likelihood functions and problems with statistical inference. We consider a Bayesian approach to inference for censored binomial problems and compare it to non-Bayesian methods, including examples and a simulation study in which we compare point estimation, interval coverage, and interval width for Bayesian and non-Bayesian methods. The continual reassessment method (CRM) is a Bayesian design often used in Phase I cancer clinical trials. It models the toxicity response of the patient as a function of administered dose, using a model that is updated as data accrue. The CRM does not take into consideration the relationship between the toxicity response and the proportion of the administered drug that is absorbed by targeted tissue. Not accounting for this discrepancy can yield misleading conclusions about the maximum tolerated dose to be used in subsequent Phase II trials. We examine, through simulation, the effect that disregarding the level of bioavailability has on the performance of the CRM.

Item Bayesian sample-size determination and adaptive design for clinical trials with Poisson outcomes. (Elsevier, 2010) Hand, Austin L.; Stamey, James D.; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. Because of the high cost and time constraints of clinical trials, researchers often need to determine the smallest sample size that provides accurate inferences for a parameter of interest, or need to adapt design elements during the course of the trial based on information that is initially unknown. Although most experimenters have employed frequentist methods, the Bayesian paradigm offers a wide variety of methodologies that are becoming increasingly popular in clinical trials because of their flexibility and ease of interpretation. Recently, Bayesian approaches have been used to determine the sample size for a single Poisson rate parameter in a clinical trial setting. We extend these results to the comparison of two Poisson rates and develop methods for sample-size determination for hypothesis testing in a Bayesian context. We also propose a Bayesian predictive adaptive two-stage design for Poisson data that allows for sample-size adjustments by basing the second-stage sample size on the first-stage results. Lastly, we present a new Bayesian meta-analytic non-inferiority method for binomial data that allows researchers a more direct interpretation of their results.
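The continual reassessment method described in the dose-finding entry above can be sketched with a common one-parameter power model and a grid-approximated posterior. The skeleton, prior standard deviation (1.34 is a conventional choice), target rate, and patient data below are all hypothetical, and this sketch omits the bioavailability issue the dissertation actually studies:

```python
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])  # prior toxicity guesses per dose
target = 0.25                                        # target toxicity probability

# One-parameter power model: p_tox(d) = skeleton[d] ** exp(a), a ~ N(0, 1.34^2),
# with the posterior over a approximated on a grid.
a_grid = np.linspace(-4, 4, 801)
prior = np.exp(-0.5 * (a_grid / 1.34) ** 2)

def next_dose(doses, tox):
    """Recommend a dose level from observed (dose index, toxicity 0/1) pairs."""
    like = np.ones_like(a_grid)
    for d, y in zip(doses, tox):
        p = skeleton[d] ** np.exp(a_grid)
        like *= p if y else (1 - p)
    w = prior * like
    w /= w.sum()                                     # normalized posterior weights
    post_p = np.array([(skeleton[d] ** np.exp(a_grid) * w).sum()
                       for d in range(len(skeleton))])
    return int(np.argmin(np.abs(post_p - target))), post_p

# Three patients treated at level 1 (0-indexed), one toxicity observed:
level, post_p = next_dose([1, 1, 1], [0, 0, 1])
print(level, np.round(post_p, 3))
```

Each new cohort's outcomes re-enter `next_dose`, so the recommended level adapts as data accrue, which is the defining feature of the CRM.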
Our method uses MCMC to approximate the posterior distribution of the new treatment compared to a placebo, rather than indirectly inferring a conclusion from the comparison of the new treatment to an active control.

Item Bayesian topics in biostatistics: treatment selection, sample size, power, and misclassification. (2011-12-19) Doty, Tave Parker; Tubbs, Jack Dale; Stamey, James D.; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. Bayesian methodology is implemented to investigate three problems in biostatistics. The first problem considers using biomarkers to select optimal treatments for individual patients; a Bayesian adaptation of the selection impact (SI) curve developed by Pepe and Song (2004) is investigated. The second problem considers a Bayesian approach for determining sample sizes that achieve a desired range of power for fixed-dose combination drug trials. Sidik and Jonkman (2003) developed a sample size formula using the intersection-union test for testing the efficacy of combination drugs; our results are compared to their frequentist approach. The third problem considers response misclassification in fixed-dose combination drug trials under two scenarios: when the sensitivity and specificity are known, and when they are unknown but have specified informative prior structures.

Item A bivariate regression model with correlated mixed responses. (2013-09-16) Bray, Ross A.; Seaman, John Weldon, 1956-; Stamey, James D.; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. In this dissertation we consider a bivariate model for associated binary and continuous responses, such as those in a clinical trial where both safety and efficacy are observed. We designate a marginal and a conditional model that allow for the association between the responses by including the marginal response as an additional predictor in the conditional model. We take a Bayesian approach with a hierarchical prior structure. Simulation studies indicate that the model provides good point and interval estimates of regression parameters across a variety of parameter configurations, with smaller binary event probabilities offering particular challenges: as the probability of an adverse event decreases, the marginal posterior variances increase for the binary safety response regression coefficients, but not for the conditional efficacy response coefficients. Potential problems with induced priors are briefly considered. We also implement an asymptotic higher-order approximation to obtain parameter estimates and confidence intervals via a simulation study; the frequentist intervals are slightly narrower than the Bayesian intervals (using vague priors), but the latter have far superior coverage. Finally, we implement a Bayesian sample size determination method while controlling an operating characteristic of the model, the family-wise error rate. Simulation results indicate that multiplicity adjustments improve the power of the model when simultaneously testing multiple hypotheses, compared to the overly conservative Bonferroni adjustment, and we also see an improvement in power through the effective use of prior information.

Item Conjugate hierarchical models for spatial data: an application on an optimal selection procedure. (2006-07-24) McBride, John Jacob; Bratcher, Thomas L.; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. The theory of generalized linear models provides a unifying class of statistical distributions that can be used to model both discrete and continuous events.
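The marginal-and-conditional factorization in the bivariate regression entry above can be illustrated with a small simulation: the binary safety response is generated from its marginal logistic model, then enters the conditional efficacy model as a predictor whose coefficient carries the association. All coefficient values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=n)  # a single covariate

# Marginal model: binary safety response B via logistic regression on x.
B = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.5 * x))))

# Conditional model: continuous efficacy Y given B, with the marginal binary
# response as an extra predictor; gamma carries the association.
beta0, beta1, gamma = 2.0, 1.0, -0.8
Y = beta0 + beta1 * x + gamma * B + rng.normal(scale=0.5, size=n)

# A least-squares fit of the (correctly specified) conditional model recovers gamma.
X = np.column_stack([np.ones(n), x, B])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.round(coef, 2))
```

A negative `gamma`, as here, encodes the clinically interesting case where patients experiencing the safety event tend to show lower efficacy.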
In this dissertation we present a new conjugate hierarchical Bayesian generalized linear model that can be used to model counts of occurrences in the presence of spatial correlation. We assume that the counts are taken from geographic regions or areal units (zip codes, counties, etc.) and that the conditional distributions of these counts for each area are Poisson with unknown rates or relative risks. We incorporate the spatial association of the counts through a neighborhood structure based on the arrangement of the areal units. Having defined the neighborhood structure, we model this spatial association with a conditionally autoregressive (CAR) model as developed by Besag (1974). Once the spatial model has been created, we adapt a subset selection procedure of Bratcher and Bhalla (1974) to select the areal unit(s) having the highest relative risks.

Item Count regression models with a misclassified binary covariate: a Bayesian approach. (2010-06-23) Morgan-Cox, MaryAnn; Stamey, James D.; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. Mismeasurement, and specifically misclassification, is inevitable in a variety of regression applications. Fallible measurement methods are often used when infallible methods are either expensive or unavailable. Ignoring mismeasurement results in biased estimates of the associated regression parameters. The models presented in this dissertation are designed to correct this bias and to yield variance estimates that reflect the uncertainty introduced by flawed measurements. We consider a generalized linear model for a Poisson response that accounts for the misclassification associated with a binary exposure covariate. In the first portion of the analysis, diffuse priors are placed on the regression coefficients and the effective prior sample size technique is used to construct informative priors for the misclassification parameters. In the second portion, we place informative priors on the regression parameters and diffuse priors on the misclassification parameters. We also present results of a simulation study that incorporates prior information for both the regression coefficients and the misclassification parameters. Next, we extend the Poisson model with a single binary covariate in various ways, including adding a continuous covariate and accounting for clustering through random effects models; we also consider a zero-inflated version of the model, with simulation studies summarized for each extension. Finally, we discuss an application in which frequentist and Bayesian logistic regression models are used to predict the prevalence of high BMI-for-age among preschool-aged children in Texas.

Item Interval estimation for TPRs and FPRs of two diagnostic tests with unverified negatives. (2011-05-12) Stock, Eileen Marie; Young, Dean M.; Stamey, James D.; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. In the clinical setting, the performance of a diagnostic or screening test is often summarized using the test's true positive rate (TPR) and false positive rate (FPR). However, estimating the TPR and FPR requires individuals to be identified as diseased or non-diseased using a gold standard, which is seldom available. Even when a gold standard test exists, it may be too costly or time-consuming to implement. Furthermore, the verification procedure is frequently limited to individuals with one or more positive test results, yielding unverified negatives and possibly verification bias.
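The attenuation the count-regression entry above corrects for is easy to demonstrate by simulation: replacing the true binary exposure with a fallible proxy pulls the estimated rate ratio toward one. The sensitivity, specificity, and rate-ratio values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
X = rng.binomial(1, 0.3, size=n)   # true binary exposure
lam = np.exp(0.0 + 1.0 * X)        # Poisson rates: true rate ratio = e ≈ 2.72
y = rng.poisson(lam)

# Fallible proxy W with assumed sensitivity and specificity.
se, sp = 0.8, 0.9
W = np.where(X == 1, rng.binomial(1, se, n), rng.binomial(1, 1 - sp, n))

rr_true = y[X == 1].mean() / y[X == 0].mean()   # uses the unobservable truth
rr_naive = y[W == 1].mean() / y[W == 0].mean()  # attenuated toward 1
print(rr_true, rr_naive)
```

The Bayesian models in the entry recover the true coefficient by placing informative priors on `se` and `sp` rather than pretending `W` equals `X`.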
We present two interval estimators for the differences in the TPRs and FPRs of two dichotomous diagnostic tests applied to members of a population stratified into two distinct groups with varying disease prevalences and unverified negatives. We obtain maximum likelihood estimates using the EM algorithm to devise a Wald interval for the differences in the TPRs and FPRs, and compare its performance to a Bayesian credible interval across a spectrum of TPRs, FPRs, and sample sizes. We further present a hierarchical Bayesian logit model that incorporates different locations. In particular, we compare the accuracy of two low-cost procedures for screening for high-grade cervical intra-epithelial neoplasia (CIN2+) using a cross-sectional multi-center study. We model the prevalence of CIN2+ for two age groups assumed to have varying disease prevalences with unverified negatives, and include random effects to account for heterogeneity among the locations. In the final chapter, we perform a power analysis for multivariate normality testing in the presence of a monotone missing data pattern. We assess the efficiency of several imputation procedures and multivariate normality tests through a simulation study in which we compare the resulting powers under an alternative distribution. In particular, we consider different sample sizes and proportions of missingness for various parameter values of a multivariate skewed t-distribution, calculating power using both the median and mean test statistics for each imputation-test combination.

Item Interval-censored negative binomial models: a Bayesian approach. (2012-11-29) Doherty, Stephanie Michelle; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. Count data are quite common in many research areas. Interval-censored counts, in which an interval representing a range of counts is observed rather than the precise count, may arise in many situations, including survey data. In this dissertation we develop a model for accommodating interval-censored count data through the interval-censored negative binomial model, with an extension to a regression model in which the interval count responses are regressed on covariate values. We employ both frequentist and Bayesian methods to arrive at point and interval estimates for the negative binomial parameters. We find that many factors, including the widths of the censoring intervals and the tendency of the precise counts toward either endpoint of the intervals, affect parameter estimates based on interval-censored data as compared to estimates using only precise data. We perform simulation studies in the non-regression and regression contexts that compare the interval-censored model to alternatives for accommodating interval-censored data: precise-count analyses based on the lower endpoints, upper endpoints, or means of the observed intervals. For the scenarios in our simulation experiments, we find that the interval-censored model outperforms the lower-endpoint and upper-endpoint methods, and performs at least as well as the mean method. We conclude with an extended example in which we compare the interval-censored method to the lower- and upper-endpoint methods for health-related quality-of-life survey data that are interval-censored. We find that the interval-censored method allows us to calculate parameter estimates and conduct posterior inferences without discarding any information provided in the study.

Item Logistic regression with covariate measurement error in an adaptive design: a Bayesian approach. (2008-10-14) Crixell, JoAnna Christine, 1979-; Seaman, John Weldon, 1956-; Stamey, James D.; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. Adaptive designs are increasingly popular in clinical trials because they have the potential to decrease patient exposure to treatments that are less efficacious or unsafe. The Bayesian approach to adaptive designs is attractive because it makes systematic use of prior data and other information in a way that is consistent with the laws of probability. The goal of this dissertation is to examine the effects of measurement error on a Bayesian adaptive design. Measurement error problems are common in regression applications where the variable of interest cannot be measured perfectly; this is often unavoidable because infallible measurement tools are either too expensive or unavailable. When modeling the relationship between a response variable and covariates, we must account for any uncertainty introduced when either variable is measured with error. This dissertation explores the consequences of imperfect measurements for a Bayesian adaptive design.

Item Logistic regression with misclassified response and covariate measurement error: a Bayesian approach. (2007-12-04) McGlothlin, Anna E.; Stamey, James D.; Seaman, John Weldon, 1956-; Statistical Sciences; Baylor University. Dept. of Statistical Sciences. In a variety of regression applications, measurement problems are unavoidable because infallible measurement tools may be expensive or unavailable. When modeling the relationship between a response variable and covariates, we must account for the uncertainty that is inherently introduced when one or both of these variables are measured with error. In this dissertation, we explore the consequences of, and remedies for, imperfect measurements. We consider a Bayesian analysis for modeling a binary outcome that is subject to misclassification, and investigate the use of informative conditional means priors for the regression coefficients.
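The attenuation from covariate measurement error discussed in the adaptive-design entry can be shown with a small simulation. The error variance and coefficients below are hypothetical, and the Newton-Raphson fit is a generic logistic MLE, not the dissertation's Bayesian model:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
x = rng.normal(size=n)                 # true covariate
w = x + rng.normal(scale=1.0, size=n)  # fallible surrogate (classical error)

beta0, beta1 = -0.5, 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-(beta0 + beta1 * x))))

def logistic_fit(z, resp, iters=25):
    """Newton-Raphson MLE for a two-parameter logistic regression."""
    X = np.column_stack([np.ones_like(z), z])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p)                # IRLS weights
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (resp - p))
    return b

b_true = logistic_fit(x, y)
b_naive = logistic_fit(w, y)  # slope attenuated toward zero by measurement error
print(np.round(b_true, 2), np.round(b_naive, 2))
```

The naive fit on `w` recovers roughly half the true slope here, which is the bias an adaptive design inherits if allocation decisions are based on the mismeasured covariate.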
Additionally, we incorporate random effects into the model to accommodate correlated responses. Markov chain Monte Carlo methods are used to perform the necessary computations, and the deviance information criterion aids in model selection. Next, we consider data where measurements are flawed for both the response and explanatory variables. Our interest is in the case of a misclassified dichotomous response and a continuous covariate that is unobservable but for which measurements are available on a surrogate. A logistic regression model is developed to incorporate the measurement error in the covariate as well as the misclassification in the response. The methods developed are illustrated through an example, and results from a simulation experiment illustrate advantages of the approach. Finally, we expand this model to incorporate random effects, resulting in a generalized linear mixed model for a misclassified response and covariate measurement error. We demonstrate the use of this model with a simulated data set.
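The response-misclassification mechanism underlying this last entry reduces, in the simplest no-covariate case, to a linear relation between the true and observed event probabilities, which can be inverted when the sensitivity and specificity of the outcome assessment are known. The values below are hypothetical:

```python
# Assumed sensitivity/specificity of the fallible outcome assessment,
# and a hypothetical true event probability.
se, sp = 0.90, 0.95
p_true = 0.20

p_obs = se * p_true + (1 - sp) * (1 - p_true)   # what a naive analysis sees
p_back = (p_obs - (1 - sp)) / (se + sp - 1)     # inversion when se, sp are known
print(round(p_obs, 4), round(p_back, 4))        # → 0.22 0.2
```

When `se` and `sp` are unknown, the inversion is unavailable, which is where the informative priors discussed in the entries above come in.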