Invertibility and spectral anti-concentration for random matrices with non-iid entries

Speaker: 

Nicholas Cook

Institution: 

UCLA

Time: 

Tuesday, November 7, 2017 - 11:00am to 11:50am

Location: 

RH 306

The invertibility of random matrices with iid entries has been the object of intense study over the past decade, due in part to its role in proving the circular law, as well as its importance in numerical analysis (smoothed analysis). In this talk we review recent progress in our understanding of invertibility for some non-iid models: adjacency matrices of sparse random regular digraphs, and random matrices with inhomogeneous variance profile. We will also discuss estimates for the number of singular values in short intervals. Graph regularity properties play a key role in both problems. Based in part on joint works with Walid Hachem, Jamal Najim, David Renfrew, Anirban Basak and Ofer Zeitouni.
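
As a rough numerical illustration of the iid baseline (my own toy experiment, not from the talk; the non-iid models above are the hard case), the sketch below computes the smallest singular value of an iid Gaussian matrix, which is typically of order n^{-1/2}, and counts singular values in a short interval:

```python
# Toy illustration of invertibility and singular-value counts for the iid case.
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending

print("smallest singular value:", s[-1])    # typically on the order of n**-0.5
print("n**-0.5 for comparison: ", n ** -0.5)

# Counting singular values in a short interval inside the bulk
# (interval chosen arbitrarily here):
a, b = 10.0, 10.5
print("count in [%.1f, %.1f]:" % (a, b), int(np.sum((s >= a) & (s <= b))))
```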

Simple Classification from Binary Data

Speaker: 

Deanna Needell

Institution: 

UCLA

Time: 

Tuesday, November 21, 2017 - 11:00am to 11:50am

Location: 

RH 306

Binary, or one-bit, representations of data arise naturally in many applications and are appealing in both hardware implementations and algorithm design. In this talk, we give a brief background on sparsity and one-bit measurements, and then present new results on the problem of classifying data from binary measurements, proposing a stochastic framework with low computational and resource costs. We illustrate the utility of the proposed approach through stylized and realistic numerical experiments, provide a theoretical analysis for a simple case, and discuss future directions.
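
To fix ideas, here is a minimal sketch of the one-bit measurement model (my illustration; the measurement matrix A, the noise level, and the nearest-signature classification rule are assumptions, not the framework proposed in the talk):

```python
# Minimal sketch: points are seen only through one-bit measurements y = sign(Ax),
# and a new point is classified by matching its binary signature to class averages.
import numpy as np

rng = np.random.default_rng(1)
d, m = 50, 200                      # ambient dimension, number of 1-bit measurements
A = rng.standard_normal((m, d))     # random measurement matrix

def one_bit(x):
    return np.sign(A @ x)           # keep only the sign of each measurement

# Two toy classes: Gaussian clouds around different centers.
centers = [rng.standard_normal(d), rng.standard_normal(d)]
train = [np.array([one_bit(c + 0.3 * rng.standard_normal(d)) for _ in range(100)])
         for c in centers]
signatures = [t.mean(axis=0) for t in train]   # average binary pattern per class

def classify(x):
    b = one_bit(x)
    # pick the class whose average signature correlates best with b
    return int(np.argmax([sig @ b for sig in signatures]))

test = centers[1] + 0.3 * rng.standard_normal(d)
print("predicted class:", classify(test))       # expect 1
```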

Nonconvex optimization meets supremum of stochastic processes

Speaker: 

Mahdi Soltanolkotabi

Institution: 

University of Southern California

Time: 

Tuesday, October 31, 2017 - 11:00am to 12:00pm

Location: 

RH 306

Many problems of contemporary interest in signal processing and machine learning involve highly nonconvex optimization problems. While nonconvex problems are known to be intractable in general, simple local search heuristics such as (stochastic) gradient descent are often surprisingly effective at finding global optima on real or randomly generated data. In this talk I will discuss some results explaining the success of these heuristics by connecting the convergence of nonconvex optimization algorithms to the suprema of certain stochastic processes. I will focus on two problems.

The first problem concerns the recovery of a structured signal from under-sampled random quadratic measurements. I will show that projected gradient descent on a natural nonconvex formulation finds globally optimal solutions with a near-minimal number of samples, breaking through local sample complexity barriers that have emerged in the recent literature. I will also discuss how these new mathematical developments pave the way for a new generation of data-driven phaseless imaging systems that can utilize prior information to significantly reduce acquisition time and enhance image reconstruction, enabling nano-scale imaging at unprecedented speeds and resolutions. The second problem is about learning the optimal weights of the shallowest of neural networks, consisting of a single Rectified Linear Unit (ReLU). I will discuss this problem in the high-dimensional regime where the number of observations is smaller than the number of ReLU weights. I will show that projected gradient descent on a natural least-squares objective, when initialized at 0, converges at a linear rate to globally optimal weights with a number of samples that is optimal up to numerical constants.
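
As a toy version of the second problem (my sketch, not the talk's setting: the step size, the convention relu'(0) = 1, and the over-determined regime n > d are assumptions; the structured high-dimensional case would add a projection step after each update):

```python
# Fit a single-ReLU model y_i = relu(<x_i, w*>) by gradient descent on
# least squares, initialized at w = 0.
import numpy as np

rng = np.random.default_rng(2)
d, n = 100, 400
w_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.maximum(X @ w_star, 0.0)          # noiseless ReLU measurements

w = np.zeros(d)                          # zero initialization, as in the talk
eta = 0.5 / n                            # assumed step size
for _ in range(500):
    z = X @ w
    grad = X.T @ ((np.maximum(z, 0.0) - y) * (z >= 0))   # subgradient, relu'(0)=1
    w -= eta * grad                      # (a structured case would project w here)

print("relative error:", np.linalg.norm(w - w_star) / np.linalg.norm(w_star))
```

Note that the convention relu'(0) = 1 matters here: with relu'(0) = 0 the very first gradient at w = 0 would vanish.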

Eigenvalues of multivariate variance components estimates

Speaker: 

Zhou Fan

Institution: 

Stanford

Time: 

Friday, November 3, 2017 - 10:00am

Location: 

NS 1201

Variance components (a.k.a. random/mixed effects) models are commonly used to determine genetic variance-covariance matrices of quantitative phenotypic traits in a population. The eigenvalue spectra of such matrices describe the evolutionary response to selection, but may be difficult to estimate from limited samples when the number of traits is large. In this talk, I will discuss the eigenvalues of classical MANOVA estimators of these matrices, including a characterization of the bulk empirical eigenvalue distribution, Tracy-Widom fluctuations at the spectral edges under a "sphericity" null hypothesis, the behavior of outlier eigenvalues under spiked alternatives, and a statistical procedure for estimating true population spike eigenvalues from the sample. These results are established using tools of random matrix theory and free probability.
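
For concreteness, here is a simulation sketch of the classical one-way MANOVA estimator under the null (my toy setup; the talk treats far more general designs):

```python
# One-way variance components model Y_ij = alpha_i + eps_ij, with the classical
# MANOVA estimate of the group-level covariance Sigma_A built from between- and
# within-group mean squares.
import numpy as np

rng = np.random.default_rng(3)
p, I, J = 50, 100, 4          # traits, groups, replicates per group

# Null ("sphericity") case: no group-level effect, pure iid noise.
Y = rng.standard_normal((I, J, p))

Ybar_i = Y.mean(axis=1)                        # group means, shape (I, p)
Ybar = Ybar_i.mean(axis=0)                     # grand mean, shape (p,)

B = J * (Ybar_i - Ybar).T @ (Ybar_i - Ybar) / (I - 1)          # between-group MS
W = sum((Y[i] - Ybar_i[i]).T @ (Y[i] - Ybar_i[i])
        for i in range(I)) / (I * (J - 1))                     # within-group MS
Sigma_A_hat = (B - W) / J     # unbiased MANOVA estimate of Sigma_A

eigs = np.linalg.eigvalsh(Sigma_A_hat)
print("edge eigenvalues under the null:", eigs[0], eigs[-1])
```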

On loop soups, gradient fields, and critical phenomena

Speaker: 

Pierre-François Rodriguez

Institution: 

UCLA

Time: 

Tuesday, October 24, 2017 - 11:00am

Location: 

RH 306

We will discuss recent developments relating Poissonian soups of random walks à la Lawler-Werner, Le Jan and Sznitman, with Gaussian free fields. We will show how the underlying correspondence, which can be traced back to early work of Symanzik in constructive field theory, can be used to effectively study phase transitions in such systems.
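
The correspondence in question can be stated schematically; the following is my paraphrase of Le Jan's isomorphism, not a statement taken from the talk:

```latex
% Le Jan's isomorphism (schematic paraphrase): on a transient graph, the
% occupation field of the random-walk loop soup at intensity alpha = 1/2
% agrees in law with half the square of the Gaussian free field phi:
\big( \mathcal{L}_x^{1/2} \big)_{x} \;\overset{(d)}{=}\;
\big( \tfrac{1}{2}\, \varphi_x^2 \big)_{x}.
```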

Cleaning large correlation matrices: Eigenvector overlaps, rotationally invariant estimators and financial applications

Speaker: 

Marc Potters

Institution: 

Capital Fund Management (Paris) and UCLA Applied Mathematics

Time: 

Tuesday, October 17, 2017 - 11:00am

Location: 

RH 306

Modern financial portfolio construction uses mean-variance optimisation, which requires knowledge of a very large covariance matrix. Replacing the unknown covariance matrix by the sample covariance matrix (SCM) leads to disastrous out-of-sample results, which can be explained by properties of large SCMs understood since Marčenko and Pastur. A better estimate of the true covariance can be built by studying the eigenvectors of the SCM via the average matrix resolvent. This object can be computed using a matrix generalisation of Voiculescu's addition and multiplication of free matrices. The original result of Ledoit and Péché on the SCM can be generalised to estimate any rotationally invariant matrix corrupted by additive or multiplicative noise. Note that the level of rigor of the seminar will be that of statistical physics.
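
The Ledoit-Péché-style cleaning alluded to above can be sketched numerically; the shrinkage formula below is the rotationally invariant estimator as I recall it from this literature, so treat the constants and the smoothing scale as assumptions:

```python
# Eigenvalue "cleaning" of a sample covariance matrix: sample eigenvalues are
# shrunk using the discrete Stieltjes transform of the spectrum itself, while
# the sample eigenvectors are kept (rotational invariance).
import numpy as np

rng = np.random.default_rng(4)
N, T = 200, 600                  # number of assets, number of observations
q = N / T
X = rng.standard_normal((T, N))  # toy returns with true covariance = identity
E = X.T @ X / T                  # sample covariance matrix (SCM)
lam, V = np.linalg.eigh(E)       # eigenvalues ascending, eigenvectors in columns

eta = N ** -0.5                  # assumed small smoothing scale
z = lam - 1j * eta
g = np.array([np.mean(1.0 / (lam - zi)) for zi in z])   # Stieltjes transform

xi = lam / np.abs(1.0 - q + q * z * g) ** 2             # cleaned eigenvalues
E_clean = V @ np.diag(xi) @ V.T

print("raw eigenvalue range:    ", lam.min(), lam.max())
print("cleaned eigenvalue range:", xi.min(), xi.max())  # should contract toward 1
```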

This is a joint probability/applied math seminar.

Variational principles for discrete maps

Speaker: 

Martin Tassy

Institution: 

UCLA

Time: 

Tuesday, February 28, 2017 - 11:00am to 11:50am

Location: 

RH 306

Previous works have shown that arctic circle phenomena and the limiting behavior of some integrable discrete systems can be explained by a variational principle. In this talk we will present the first results of this type for a non-integrable discrete system: graph homomorphisms from Z^d to a regular tree. We will also explain how the technique could be applied to other non-integrable models.
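
Schematically, a variational principle of this type takes the following form (my paraphrase in the spirit of Cohn-Kenyon-Propp; the tree-valued setting of the talk requires its own notions of gradient and local entropy):

```latex
% Schematic variational principle: the exponential growth rate of the number
% of configurations near a rescaled profile f is an entropy functional,
\lim_{n \to \infty} \frac{1}{n^d}
  \log \#\{\, h : h \approx n f(\cdot/n) \,\}
  \;=\; \int_{R} \mathrm{Ent}\big(\nabla f(x)\big)\, dx,
% and typical configurations concentrate near the maximizer f* of the
% right-hand side over admissible profiles f.
```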

How do we parametrize a random fractal curve?

Speaker: 

Greg Lawler

Institution: 

University of Chicago

Time: 

Friday, February 24, 2017 - 2:00pm to 3:00pm

Location: 

NS2 1201

For a smooth curve, the natural parametrization is parametrization by arc length. What is the analogue for a random curve of fractal dimension d? Typically, such curves have Hausdorff d-measure 0. It turns out that a different quantity, Minkowski content, is the right thing.

I will discuss results of this type for the Schramm-Loewner evolution: both how to prove the content is well-defined (work with M. Rezaei) and how it relates to the scaling limit of the loop-erased random walk (work with F. Viklund and C. Benes).
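
For reference, the planar Minkowski content used above can be written as follows (standard definition; the identification d = 1 + kappa/8 for SLE_kappa is well known):

```latex
% Planar Minkowski content of a curve gamma of fractal dimension d < 2
% (the quantity used in place of the vanishing Hausdorff d-measure):
\mathrm{Cont}_d(\gamma)
  = \lim_{\epsilon \to 0} \epsilon^{\,d-2}
    \,\mathrm{Area}\{\, z \in \mathbb{C} : \mathrm{dist}(z, \gamma) < \epsilon \,\},
% when the limit exists. For SLE_kappa one has d = 1 + kappa/8.
```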

Normal approximation for recovery of structured unknowns in high dimension: Steining the Steiner formula

Speaker: 

Larry Goldstein

Institution: 

USC

Time: 

Tuesday, February 7, 2017 - 11:00am to 11:50am

Location: 

RH 306

Intrinsic volumes of convex sets are natural geometric quantities that also play important roles in applications. In particular, the discrete probability distribution L(V_C) given by the sequence v_0, ..., v_d of conic intrinsic volumes of a closed convex cone C in R^d summarizes key information about the success of convex programs used to solve for sparse vectors, and other structured unknowns such as low-rank matrices, in high-dimensional regularized inverse problems. The concentration of V_C implies the existence of phase transitions for the probability of recovery of the unknown in the number of observations. Additional information about the probability of recovery success is provided by a normal approximation for V_C. Such central limit theorems can be shown by first considering the squared length G_C of the projection of a Gaussian vector on the cone C. Applying a second-order Poincaré inequality, proved using Stein's method, then produces a non-asymptotic total variation bound to the normal for L(G_C). A conic version of the classical Steiner formula in convex geometry translates finite-sample bounds and a normal limit for G_C to those for V_C. Joint work with Ivan Nourdin and Giovanni Peccati. http://arxiv.org/abs/1411.6265
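
As a toy illustration (mine, not the paper's): for the nonnegative orthant the projection of a Gaussian vector onto the cone is explicit, so G_C can be sampled directly and its near-normality for large d observed:

```python
# Monte Carlo for G_C with C = R^d_+: the projection of a Gaussian vector g
# onto the orthant is max(g, 0), so G_C = ||max(g, 0)||^2 is easy to sample.
import numpy as np

rng = np.random.default_rng(5)
d, trials = 200, 20000
G = rng.standard_normal((trials, d))
GC = (np.maximum(G, 0.0) ** 2).sum(axis=1)   # squared norm of the projection

# Per coordinate, E[g^2 1{g > 0}] = 1/2, so E[G_C] = d/2.
print("sample mean of G_C:", GC.mean(), " (theory: d/2 =", d / 2, ")")
print("sample std of G_C: ", GC.std())
```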
