English only
EPFL Statistics Seminar

Prof. Terry Speed
Walter and Eliza Hall Institute of Medical Research and UC Berkeley
Friday, March 7, 2014
Time 15:00 sharp  Room CE2
Title: Removing unwanted variation: from principal components to random effects
Abstract
Ordinary least squares is a venerable tool for the analysis of scientific data, originating in the work of A. M. Legendre and C. F. Gauss around 1800. Gauss used the method extensively in astronomy and geodesy. Generalized least squares is more recent, originating with A. C. Aitken in 1934, though weighted least squares was widely used long before that. At around the same time (1933), H. Hotelling introduced principal components analysis to psychology; its modern form is the singular value decomposition. In 1907, motivated by social science, G. U. Yule presented a new notation and derived some identities for linear regression and correlation. Random effects models date back to astronomical work in the mid-19th century, but it was through the work of C. R. Henderson and others in animal science in the 1950s that their connection with generalized least squares was firmly made. These are the diverse origins of our story, which concerns the removal of unwanted variation in high-dimensional genomic and other “omic” data using negative controls. We start with a linear model that Gauss would recognize, with ordinary least squares in mind, but we add unobserved terms to deal with unwanted variation. A singular value decomposition, one of Yule's identities, and negative control measurements (here genes) permit the identification of our model. In a surprising twist, our initial solution turns out to be equivalent to a form of generalized least squares. This is the starting point for much of our recent work. In this talk I will try to explain how a rather eclectic mix of familiar statistical ideas can combine with equally familiar notions from biology (negative and positive controls) to give a useful new set of tools for omic data analysis. Other statisticians have come close to the same endpoint from different perspectives, including Bayesian, sparse linear and random effects models.
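The negative-control idea can be sketched schematically. The following is an illustrative toy version, not the speaker's actual estimator (which also uses one of Yule's identities and turns out to be a form of generalized least squares): estimate the unwanted factors by an SVD of the negative-control columns, then regress them out of every column. The function and argument names are hypothetical.

```python
import numpy as np

def remove_unwanted_variation(y, control_mask, k):
    """Illustrative sketch: estimate k unwanted factors from the
    negative-control columns by SVD, then regress them out of all columns.

    y            : (samples x genes) data matrix
    control_mask : boolean mask marking the negative-control genes
    k            : number of unwanted factors to remove
    """
    # SVD of the controls: their variation is, by assumption, all unwanted
    u, s, _ = np.linalg.svd(y[:, control_mask], full_matrices=False)
    w = u[:, :k] * s[:k]                          # estimated unwanted factors
    coef, *_ = np.linalg.lstsq(w, y, rcond=None)  # regress every gene on w
    return y - w @ coef                           # adjusted data
```

The sketch relies on the controls carrying no biological signal, so their leading singular vectors estimate only the unwanted variation.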

Prof. Jonathan Tawn
Lancaster University
Thursday, March 20, 2014
Time 15:15  Room MEB331
Title: Extreme Value Theory: An Impact Case Study for International Shipping Standards
Abstract
Research on extreme value methods proved critical in determining the conclusions of the UK High Court’s investigation of the sinking of the M.V. Derbyshire (the UK’s largest ship lost at sea) and identified that design standards for hatch covers of ocean-going carriers needed to be increased by 35%. This new level was then set as a new worldwide mandatory standard. This talk describes my involvement in this work, starting with the estimation of the probability of the M.V. Derbyshire having sunk from a structural failure, resulting from large wave impacts on the ship, for each of a range of possible sea-state and vessel conditions; through experiences of presenting evidence to the High Court; and subsequent work aimed at setting new design standards for ocean-going carriers.

Dr. Heather Battey
University of Bristol
Friday, April 4, 2014
15:15  MA 12
Title: Smooth projected density estimation
Abstract
In this talk, I will introduce a new family of procedures, smooth projection estimators, for multidimensional density estimation. These estimators are defined by a projection of a nonparametric pilot estimate onto a finite mixture class. The projection step yields a succinct parametric representation, whilst the nonparametric step allows one to exploit structure (e.g. conditional independencies) that may be detected by other means. Although structural constraints are not preserved through the projection, exploitation of the structural information is shown to be worthwhile. I will discuss the sense in which the estimator is consistent, and its ability to achieve a faster rate of convergence than the pilot estimator upon which it is based.

Dr. Susan Wei
University of North Carolina, Chapel Hill
Thursday, April 10, 2014
15:15  MA 10
Title: Latent Supervised Learning
Abstract
Machine learning is a branch of artificial intelligence concerning the construction of systems that can learn from data. Algorithms in machine learning can be placed along a spectrum according to the type of input available during training. The two main machine learning algorithms, unsupervised and supervised learning, occupy either end of this spectrum. In this talk I will overview some of my recent research on machine learning tasks that fall somewhere in the middle of this spectrum. I will primarily focus on a new machine learning task called latent supervised learning, where the goal is to learn a binary classifier from continuous training labels that serve as surrogates for the unobserved class labels. A specific model is investigated where the surrogate variable arises from a two-component Gaussian mixture with unknown means and variances, and the component membership is determined by a hyperplane in the covariate space. A data-driven sieve maximum likelihood estimator for the hyperplane is proposed, which in turn can be used to estimate the parameters of the Gaussian mixture. Extensions of the framework to survival data and applications to estimating treatment effect heterogeneity will also be discussed.

Prof. Jane L. Hutton
University of Warwick
Friday, May 16, 2014
15:15  MA 12
Title: Chain Event Graphs for Informative Missingness
Abstract
Chain event graphs (CEGs) extend graphical models to address situations in which, after one variable takes a particular value, possible values of future variables differ from those following alternative values (Thwaites et al., 2010). These graphs are a useful framework for modelling discrete processes which exhibit strong asymmetric dependence structures, and are derived from probability trees by merging together those vertices whose associated conditional probabilities are the same. We exploit this framework to develop new classes of models where missingness is influential and data are unlikely to be missing at random (Barclay et al., 2014). Context-specific symmetries are captured by the CEG. As models can be scored efficiently and in closed form, standard Bayesian selection methods can be used to search over a range of models. The selected maximum a posteriori model can be easily read back to the client in a graphically transparent way. The efficacy of our methods is illustrated using survival of people with cerebral palsy, and a longitudinal study from birth to age 25 of children in New Zealand, analysing their hospital admissions at ages 18-25 years with respect to family functioning, education, and substance abuse at ages 16-18 years.
P. Thwaites, J. Q. Smith and E. Riccomagno (2010) "Causal Analysis with Chain Event Graphs", Artificial Intelligence, 174, 889-909.
L. M. Barclay, J. L. Hutton and J. Q. Smith (2014) "Chain Event Graphs for Informed Missingness", Bayesian Analysis, 9, 53-76.

Prof. Claudia Klüppelberg
Technische Universität München
Friday, 19 September, 2014
15:15  MA11
Title: Semiparametric estimation for max-stable space-time processes
Abstract
Max-stable space-time processes have been developed to study extremal dependence in space-time data. We propose a semiparametric estimation procedure based on a closed-form expression of the extremogram to estimate the parameters in a max-stable space-time process. We show asymptotic properties of the resulting parameter estimates and propose bootstrap procedures to obtain asymptotically correct confidence intervals. A simulation study shows that the proposed procedure works well for moderate sample sizes. Finally, we apply this estimation procedure to fitting a max-stable model to radar rainfall measurements in a region of Florida. This is joint work with Richard Davis and Christina Steinkohl.

Dr. Raphael Huser
KAUST
Thursday, 25 September, 2014
15:15  MA10
Title: Modelling of nonstationarity in spatial extremes
Abstract
Max-stable processes are natural models for spatial extremes, because they provide suitable asymptotic approximations to the distribution of maxima of random fields. In the recent past, several parametric families of stationary max-stable models have been developed and fitted to various types of data. However, a recurrent problem is the modelling of nonstationarity. While it is fairly straightforward to build nonstationary models for marginal distributions, it is much less obvious how to model nonstationarity in the dependence structure of extremal data, and there have been very few attempts to address this important issue so far. In my talk, I will discuss nonstationarity modelling in max-stable processes and show how inference can be performed using pairwise likelihoods. If time allows, I will also illustrate the methodology with an application to environmental data.

Prof. Richard Olshen
Stanford University
Friday, 10 October, 2014
15:15  CE 105
Title: Successive normalization/standardization of rectangular arrays
Abstract
When each subject in a study provides a vector of numbers/features for analysis, and one wants to standardize, then for each coordinate of the resulting rectangular array one may subtract the mean by subject and divide by the standard deviation by subject. Each subject's data then have mean 0 and standard deviation 1. Subsequently, one may so standardize by row, then by column, and so on. Data from expression arrays and protein arrays often come as such rectangular arrays, where typically a column denotes a "subject" and a row some measure of a "gene." When analyzing these data one may ask that subjects and genes "be on the same footing." Thus, there may be a need to standardize rows and columns of the matrix successively. I investigate the convergence, including rates of convergence, of this successive approach to standardization, which my colleague Bala Rajaratnam and I learned from Bradley Efron. Limit matrices exist on a Borel set of full measure; these limits have row and column means 0 and row and column standard deviations 1. We have studied implementation on simulated data and on data that arose in cardiology. The procedure can be shown not to work with simultaneous standardization, that is, first subtracting off means for rows and columns and then dividing the resulting numbers by the product of standard deviations. Results make contact with previous work on large deviations of Lipschitz functions of Gaussian vectors, with alternating conditional expectations, and with von Neumann's algorithm for the distance between two closed, convex subsets of a Hilbert space. New insights regarding inference are enabled.
Efforts have been joint not only with Rajaratnam, but also with many others (who will be mentioned during my presentation).
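The alternating scheme described above can be sketched in a few lines. This is a minimal illustration, assuming every row and column has nonzero spread; `successive_standardize` and its tolerances are illustrative choices, not the authors' code.

```python
import numpy as np

def successive_standardize(x, tol=1e-8, max_iter=1000):
    """Alternately standardize rows then columns of x until the matrix
    stops changing (population standard deviations, ddof=0)."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        prev = x.copy()
        # standardize each row: mean 0, standard deviation 1
        x = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
        # then standardize each column
        x = (x - x.mean(axis=0, keepdims=True)) / x.std(axis=0, keepdims=True)
        if np.max(np.abs(x - prev)) < tol:
            break
    return x

rng = np.random.default_rng(0)
y = successive_standardize(rng.normal(size=(6, 5)))
# at the limit, rows AND columns simultaneously have mean 0 and sd 1
```

The convergence of exactly this alternating iteration, and its rate, is the subject of the talk.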

Joint seminar of numerical analysis / statistics
Dr. David Ginsbourger
IMSV, Universität Bern
Friday, 17 October, 2014
15:15  MA 31
Title: Gaussian random field models for the adaptive design of costly experiments
Abstract
Gaussian random field models have become commonplace in the design and analysis of costly experiments. Thanks to convenient properties of associated conditional distributions, Gaussian field models not only allow approximating deterministic functions based on scarce evaluation results, but can also be used as a basis for evaluation strategies dedicated to optimization, inversion, uncertainty quantification, probability of failure estimation, and more. After an introduction to Gaussian random field modelling and some of its popular applications in adaptive design of deterministic experiments, we will focus on two recent contributions. First, results on covariance-driven pathwise invariances of random fields will be presented. Simulation and prediction examples will illustrate how Gaussian field models can incorporate a number of structural priors such as group invariances, harmonicity, or sparsity. Second, results on infill sampling criteria for sequential uncertainty reduction will be discussed, with application to an excursion set estimation problem from safety engineering.
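The conditional distributions mentioned above are the standard Gaussian conditioning formulas; a minimal sketch follows, with an illustrative squared-exponential kernel, exact (noise-free) observations, and a small jitter for numerical stability. All names here are illustrative.

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, kernel, jitter=1e-10):
    """Conditional (posterior) mean and variance of a zero-mean Gaussian
    random field given exact evaluations at x_train."""
    K = kernel(x_train[:, None], x_train[None, :]) + jitter * np.eye(len(x_train))
    k_star = kernel(x_test[:, None], x_train[None, :])   # cross-covariances
    mean = k_star @ np.linalg.solve(K, y_train)
    # predictive variance: k(x,x) - k_*^T K^{-1} k_*
    var = kernel(x_test, x_test) - np.einsum('ij,ji->i', k_star,
                                             np.linalg.solve(K, k_star.T))
    return mean, var

# illustrative squared-exponential covariance
se = lambda a, b: np.exp(-0.5 * (a - b) ** 2)

x = np.array([0.0, 0.5, 1.0])
f = np.sin(x)
mu, var = gp_posterior(x, f, np.array([0.25, 0.75]), se)
```

Adaptive design strategies (optimization, excursion set estimation) then pick the next evaluation point by optimizing a criterion built from this mean and variance.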

Dr. Djalel Eddine Meskaldji
EPFL
Friday, 21 November, 2014
15:15  MA11
Title: The control of the scaled false discovery rate, a flexible and comprehensive error control and a powerful theoretical tool in multiple testing
Abstract
A large variety of error control rates that limit the declaration of false effects as real has been proposed in the field of multiple hypothesis testing or multiple comparisons. Given the large number of papers written over the last ten years on error control in high-dimensional testing, it is worthwhile to consider a single comprehensive technique that allows the user flexibility in error control when dealing with big data. We describe a new and comprehensive family of error rates that contains and generalizes most existing proposals. It offers the scientist a broad choice of how to properly control for discovering false findings. We also propose a corresponding family of control procedures that guarantees control of the new error rates under different assumptions on the p-values. We show the interest of introducing this comprehensive error rate to obtain interesting new theoretical results on assumption weakening, on the relations between different error rates, and on asymptotic control. We also discuss some particular choices of error rates that bridge the gap between two well-known error control metrics: the FWER and the FDR. The comprehensive family and the corresponding control theorems open new perspectives in the field of multiple testing.
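To fix ideas, the FDR end of the FWER-FDR bridge mentioned above is the familiar Benjamini-Hochberg step-up procedure, sketched here in its standard textbook form (this is not the scaled family proposed in the talk):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean rejection vector controlling the FDR at level q (BH step-up)."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # q*i/m for i = 1..m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])          # largest i with p_(i) <= q*i/m
        reject[order[:k + 1]] = True              # reject the k smallest p-values
    return reject

pv = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9])
rej = benjamini_hochberg(pv, q=0.05)
```

Bonferroni-style FWER control would instead compare every p-value to the single threshold q/m; the talk's scaled family interpolates between such regimes.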

Prof. Alastair Young
Imperial College London
Thursday, 27 November, 2014
16:15  MA10
Title: The formal relationship between analytic and simulation approaches to parametric inference
Abstract
Two routes most commonly proposed for accurate inference on a scalar interest parameter in the presence of a (possibly high-dimensional) nuisance parameter are parametric simulation ('bootstrap') methods, and analytic procedures based on normal approximation to adjusted forms of the signed root likelihood ratio statistic. Both methods yield, under some null hypothesis of interest, p-values which are uniformly distributed to error of third order in the available sample size. But, given a specific inference problem, what is the formal relationship between p-values calculated by the two approaches? We elucidate the extent to which the two methodologies actually give the same inference.
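The simulation route can be sketched generically. This is a schematic parametric bootstrap p-value, with `statistic` and `simulate_null` as illustrative placeholders for the problem-specific test statistic and the model fitted under the null:

```python
import numpy as np

def bootstrap_pvalue(data, statistic, simulate_null, b=999, rng=None):
    """One-sided parametric bootstrap p-value: simulate the statistic
    under the fitted null model and compare with the observed value."""
    if rng is None:
        rng = np.random.default_rng()
    t_obs = statistic(data)
    t_sim = np.array([statistic(simulate_null(rng)) for _ in range(b)])
    # +1 corrections keep the p-value strictly positive
    return (1 + np.sum(t_sim >= t_obs)) / (b + 1)

# toy example: testing mean = 0 for normal data with unknown variance
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=30)
stat = lambda d: abs(d.mean()) / (d.std(ddof=1) / np.sqrt(len(d)))
sim = lambda r: r.normal(0.0, x.std(ddof=1), size=len(x))  # null: mean 0, plug-in variance
p = bootstrap_pvalue(x, stat, sim, b=499, rng=rng)
```

The analytic route replaces the simulation with a normal approximation to an adjusted signed root likelihood ratio statistic; the talk compares the two resulting p-values formally.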

Dr. Marco Oesting
INRA, Paris
Thursday, April 3, 2014
15:15  MA 10
Title: Conditional Modeling of Extreme Wind Gusts by Bivariate Brown-Resnick Processes
Abstract
In order to incorporate the dependence between the spatial random fields of observed and forecasted maximal wind gusts, we propose to model them jointly by a bivariate Brown-Resnick process. As there is a one-to-one correspondence between bivariate Brown-Resnick processes and pseudo cross-variograms, stationary Brown-Resnick processes can be characterized by properties of the underlying pseudo cross-variogram. We particularly focus on the investigation of their asymptotic behavior and introduce a flexible parametric model, both of which are of interest in classical geostatistics in their own right. The model is applied to real observation and forecast data for 110 stations in Northern Germany. The resulting post-processed forecasts are verified. This is joint work with Martin Schlather (Universität Mannheim) and Petra Friederichs (Universität Bonn).

Dr. Axel Gandy
Imperial College London
Friday, May 23, 2014
15:15  MA 12
Title: Implementing (Multiple) Monte Carlo Tests
Abstract
Consider Monte Carlo tests, e.g. bootstrap tests or permutation tests. Naive implementations can lead to decisions that depend mainly on the simulation error and not on the observed data. This talk will present algorithms that solve this problem, for individual Monte Carlo tests as well as for multiple Monte Carlo tests with multiplicity correction, such as the Benjamini & Hochberg false discovery rate (FDR) procedure. The key property of the presented algorithms is that, with arbitrarily high probability, they reach the same decisions as the original procedure with the ideal p-values.
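The problem with naive implementations is easy to see in a small sketch (illustrative only; the algorithms in the talk replace this fixed simulation budget with sequential schemes that bound the probability of a wrong decision):

```python
import numpy as np

def mc_pvalue(t_obs, sample_null, b, rng):
    """Naive Monte Carlo p-value from a fixed budget of b null simulations."""
    sims = np.array([sample_null(rng) for _ in range(b)])
    return (1 + np.sum(sims >= t_obs)) / (b + 1)

rng = np.random.default_rng(2)
t_obs = 1.6449  # roughly the 95th percentile of N(0,1): true p-value near 0.05
decisions = [mc_pvalue(t_obs, lambda r: r.standard_normal(), 99, rng) <= 0.05
             for _ in range(50)]
# with only 99 simulations the reject/accept decision will typically flip
# across repetitions, even though the data (t_obs) never changed
```

When the true p-value sits near the threshold, the decision is driven almost entirely by simulation noise, which is exactly the failure mode the talk addresses.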

Prof. Elena Kulinskaya
University of East Anglia, UK
Friday, May 23, 2014
16:15  MA 12
Title: Random means biased?
Abstract
The random effects model (REM) in meta-analysis incorporates heterogeneity of effect measures across studies. We were interested in combining odds ratios from K 2×2 contingency tables. The standard (additive) REM is the random-intercept model in one-way ANOVA for log-odds ratios. Alternatively, heterogeneity can be induced via intra-cluster correlation, say by assuming beta-binomial distributions. This (multiplicative) model is convenient for defining a REM in conjunction with the Mantel-Haenszel approach. Our method of estimating the intra-class correlation (assumed constant across studies) is based on profiling the modified Breslow-Day test. Coverage of the resulting confidence intervals is compared to standard methods through simulation. Unexpectedly, we found that the standard methods are very biased under the multiplicative REM, and our new method is very biased under the standard REM. The explanation lies in the general (but new to us) fact that any function of a random variable is biased under a REM. This is a general concern in generalised linear mixed models. The question of what exactly is random under a REM is a difficult one for a frequentist... (joint work with Ilyas Bakbergenuly)

Prof. Victor Panaretos [Public Inaugural Lecture]
Ecole Polytechnique Fédérale de Lausanne
Monday, May 26, 2014
17:15  Room CM2
Title: Sums of Squares from Pythagoras to Hilbert
Mailing List
Please email Ms. Schaffner if you would like to be added to the seminar mailing list.