Document Server@UHasselt
https://uhdspace.uhasselt.be:443/dspace
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material. 2017-02-25T00:08:37Z
Properties of Estimators in Exponential Family Settings With Observation-based Stopping Rules
http://hdl.handle.net/1942/16242
<h5>Title</h5>Properties of Estimators in Exponential Family Settings With Observation-based Stopping Rules
<h5>Authors</h5>Milanzi, Elasma; Molenberghs, Geert; ALONSO ABAD, Ariel; Kenward, Michael G; VERBEKE, Geert; Tsiatis, Anastasios A.; Davidian, Marie
<h5>Abstract</h5>Often, sample size is not fixed by design. A typical example is a sequential trial with a stopping rule, where stopping is based on what has been observed at an interim look. While such designs are used for time and cost efficiency, and hypothesis testing theory has been well developed, estimation following a sequential trial is a challenging, controversial problem. Progress has been made in the literature, predominantly for normal outcomes and/or for a deterministic stopping rule. Here, we place these settings in the broader context of outcomes following an exponential family distribution, with a stochastic stopping rule that includes a deterministic rule and a completely random sample size as special cases. We study (1) the so-called incompleteness property of the sufficient statistics, (2) a general class of linear estimators, and (3) joint and conditional likelihood estimation. Apart from the general exponential family setting, normal and binary outcomes are considered as key examples. While our results hold for a general number of looks, for ease of exposition we focus on the simple yet generic setting of two possible sample sizes, N = n or N = 2n. 2014-01-01T00:00:00Z
Estimation After a Group Sequential Trial
http://hdl.handle.net/1942/16182
<h5>Title</h5>Estimation After a Group Sequential Trial
<h5>Authors</h5>Milanzi, Elasma; Molenberghs, Geert; ALONSO ABAD, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; VERBEKE, Geert
<h5>Abstract</h5>Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent.
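The conditional-versus-marginal behaviour described above can be sketched with a short Monte Carlo experiment. Everything concrete here is an illustrative assumption, not taken from the paper: normal outcomes with true mean 0, an interim look after n observations, and a stop-if-the-interim-mean-is-positive rule deciding between N = n and N = 2n.

```python
import random
import statistics

random.seed(42)

MU, SIGMA = 0.0, 1.0   # hypothetical true mean and standard deviation
n, REPS = 50, 20000    # interim sample size; number of simulated trials

marginal, stopped_early, continued = [], [], []
for _ in range(REPS):
    first = [random.gauss(MU, SIGMA) for _ in range(n)]
    if statistics.fmean(first) > 0:        # observation-based stopping rule
        est = statistics.fmean(first)      # trial stops: N = n
        stopped_early.append(est)
    else:
        full = first + [random.gauss(MU, SIGMA) for _ in range(n)]
        est = statistics.fmean(full)       # trial continues: N = 2n
        continued.append(est)
    marginal.append(est)

# Conditional on N = n the sample average looks biased upward, conditional
# on N = 2n it looks biased downward; marginalized over the sample size the
# estimate is close to MU, and the small remaining bias shrinks as n grows.
print(f"mean | stopped at N = n    : {statistics.fmean(stopped_early):+.4f}")
print(f"mean | continued to N = 2n : {statistics.fmean(continued):+.4f}")
print(f"marginal mean of estimates : {statistics.fmean(marginal):+.4f}")
```

This is the pattern the text returns to below: conditioning on the realized sample size can make the sample average appear biased in simulations, even though it is asymptotically unbiased.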
This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, ..., nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size.
The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections. 2014-01-01T00:00:00Z
A new modeling approach for quantifying expert opinion in the drug discovery process
http://hdl.handle.net/1942/16181
<h5>Title</h5>A new modeling approach for quantifying expert opinion in the drug discovery process
<h5>Authors</h5>ALONSO ABAD, Ariel; Milanzi, Elasma; Molenberghs, Geert; Buyck, Christophe; BIJNENS, Luc
<h5>Abstract</h5>Expert opinion plays an important role when choosing clusters of chemical compounds for further investigation. Often in practice, the process by which the clusters are assigned to the experts for evaluation, the so-called selection process, and the qualitative ratings given by them (chosen/not chosen) need to be jointly modeled in order to avoid bias. This approach is referred to as the joint modeling approach. However, misspecifying the selection model may impact the estimation and inferences on parameters in the rating model, which are of most scientific interest. We propose to incorporate the selection process into the analysis by adding a new set of random effects to the rating model and, in this way, avoid the need to model it parametrically. This approach is referred to as the combined model approach. Through simulations, the performance of the combined and joint models was compared in terms of bias and confidence interval coverage. The estimates from the combined model were nearly unbiased and the derived confidence intervals had coverage probability around 95% in all the scenarios considered. In contrast, the estimates from the joint model were severely biased under some misspecifications of the selection model and fitting the model was often numerically challenging. The results show that the combined model may offer a safer alternative on which to base inferences when there are doubts about the validity of the selection model. Importantly, due to its greater numerical stability, the combined model may outperform the joint model even when the latter is correctly specified. 2014-01-01T00:00:00Z
Impact of selection bias on the evaluation of clusters of chemical compounds in the drug discovery process
http://hdl.handle.net/1942/16180
<h5>Title</h5>Impact of selection bias on the evaluation of clusters of chemical compounds in the drug discovery process
<h5>Authors</h5>ALONSO ABAD, Ariel; Milanzi, Elasma; Molenberghs, Geert; Buyck, Christophe; Bijnens, Luc
<h5>Abstract</h5>Expert opinion plays an important role when selecting promising clusters of chemical compounds in the drug discovery process. Indeed, experts can qualitatively assess the potential of each cluster and, with appropriate statistical methods, these qualitative assessments can be quantified into a success probability for each of them. However, one crucial element often overlooked is the procedure by which the clusters are assigned to, or selected by, the experts for evaluation. In the present work, we study the impact that such a procedure may have on the statistical analysis and the entire evaluation process. It is shown that some implementations of the selection procedure may seriously compromise the validity of the evaluation and, consequently, fully random allocation of the clusters to the experts is strongly advocated. 2014-01-01T00:00:00Z
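A deliberately simplified sketch of the allocation effect described in this last abstract: two hypothetical experts rate clusters of identical underlying quality but differ in leniency (all numbers are illustrative assumptions, not the paper's data or model). Under random allocation the pooled estimate balances the expert effects; under a skewed allocation it is pulled toward whichever expert dominates the workload.

```python
import random

random.seed(1)

# Hypothetical experts: same clusters, different propensities to rate a
# cluster as promising (leniency). Any difference between the two pooled
# estimates below is driven purely by the allocation procedure.
LENIENT, STRICT = 0.7, 0.3
N_CLUSTERS = 10000

def evaluate(assign_rule):
    """Pooled success-probability estimate under a given allocation rule."""
    chosen = 0
    for i in range(N_CLUSTERS):
        p = LENIENT if assign_rule(i) == "lenient" else STRICT
        chosen += random.random() < p
    return chosen / N_CLUSTERS

# Fully random allocation: each expert sees a representative half.
random_alloc = evaluate(lambda i: random.choice(["lenient", "strict"]))

# Skewed allocation: the lenient expert evaluates 90% of the clusters.
biased_alloc = evaluate(lambda i: "lenient" if random.random() < 0.9 else "strict")

print(f"estimate under random allocation : {random_alloc:.3f}")  # balances the two raters
print(f"estimate under skewed allocation : {biased_alloc:.3f}")  # pulled toward the lenient rate
```

The clusters are identical by construction, so the gap between the two printed estimates is pure allocation artifact, which is the intuition behind advocating fully random assignment of clusters to experts.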