Edge universality of separable covariance matrices

Speaker: 

Fan Yang

Institution: 

UCLA

Time: 

Tuesday, November 20, 2018 - 11:00am to 12:00pm

 

 In this talk, we consider the largest singular value of the so-called separable covariance matrix Y=A^{1/2}XB^{1/2}, where X is a random matrix with i.i.d. entries and A, B are deterministic covariance matrices (non-negative definite and symmetric). Separable covariance matrices are commonly used in, e.g., environmental studies, wireless communications, and finance to model sample data with spatio-temporal correlations. However, their spectral properties are far less understood than those of sample covariance matrices.
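As a minimal numerical sketch of the object in the abstract, one can build Y = A^{1/2} X B^{1/2} and compute its largest singular value; the dimensions and the particular choices of A and B below are illustrative, not from the talk.

```python
import numpy as np

# Sketch: a separable covariance matrix Y = A^{1/2} X B^{1/2} and its
# largest singular value. A, B and the sizes are illustrative choices.
rng = np.random.default_rng(0)
p, n = 200, 300

# i.i.d. sign entries with variance 1/n (minimal-moment-style entries).
X = rng.choice([-1.0, 1.0], size=(p, n)) / np.sqrt(n)

# Deterministic non-negative definite covariances: A with slowly
# decaying spectrum, B = I (the sample-covariance corollary case).
A = np.diag(1.0 / np.arange(1, p + 1) ** 0.1)
B = np.eye(n)

# For diagonal/identity PSD matrices the entrywise sqrt is the matrix sqrt.
Y = np.sqrt(A) @ X @ np.sqrt(B)
sigma_max = np.linalg.svd(Y, compute_uv=False)[0]
print(f"largest singular value: {sigma_max:.3f}")
```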

 

Recently, we proved that the distribution of the largest singular value of Y converges to the Tracy-Widom law under the minimal moment assumption on the entries of X. This is the first edge universality result for separable covariance matrices. As a corollary, if B=I, we obtain edge universality for sample covariance matrices with correlated data and heavy tails. This improves previous results for sample covariance matrices, which typically assume a diagonal A or high moments of the entries of X. The core of the proof consists of two comparison arguments: the Lindeberg replacement method and a continuous self-consistent comparison argument.

Universality and Delocalization of Random Band Matrices

Speaker: 

Jun Yin

Institution: 

UCLA

Time: 

Tuesday, November 6, 2018 - 11:00am to 12:00pm

Location: 

RH 306

We consider N × N symmetric one-dimensional random band matrices with a general distribution of the entries and band width $W$. The localization-delocalization conjecture predicts a phase transition in the behavior of the eigenvectors and eigenvalues of band matrices, occurring at $W=N^{1/2}$. Above this threshold, the eigenvalues are predicted to satisfy the so-called sine-kernel distribution and the eigenvectors to be delocalized. With Bourgade, Yau and Fan, we proved that this holds when $W\gg N^{3/4}$. The previous best result required $W=\Omega(N)$.
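A quick numerical sketch of the setting: a symmetric Gaussian band matrix with band width W ~ N^{3/4}, together with a crude delocalization proxy (the inverse participation ratio of a mid-spectrum eigenvector). All parameter choices here are illustrative.

```python
import numpy as np

# Sketch: symmetric 1D random band matrix -- entries H[i, j] vanish
# when |i - j| > W. Sizes are illustrative.
rng = np.random.default_rng(0)
N = 400
W = int(N ** 0.75)  # W ~ N^{3/4}, the delocalized regime of the talk

H = rng.standard_normal((N, N))
H = (H + H.T) / np.sqrt(2)                   # symmetrize
mask = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) <= W
H = np.where(mask, H, 0.0) / np.sqrt(2 * W)  # normalize by band width

evals, evecs = np.linalg.eigh(H)
# Inverse participation ratio: ~1/N for delocalized eigenvectors,
# order 1 for localized ones.
v = evecs[:, N // 2]
ipr = np.sum(v ** 4)
print(f"IPR of a mid-spectrum eigenvector: {ipr:.4f}")
```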

 

On 1-factorizations of graphs

Speaker: 

Asaf Ferber

Institution: 

MIT

Time: 

Tuesday, October 30, 2018 - 11:00am to 12:00pm

Location: 

RH 306

A 1-factorization of a graph G is a partition of its edges into perfect matchings. Clearly, if a graph G admits a 1-factorization then it must be regular; the converse is easily verified to be false. In the special case where G is a regular bipartite graph, it is an easy exercise to show that G has a 1-factorization, and to observe that a 1-factorization corresponds to a partial Latin square.

In this talk we survey known results and conjectures regarding the existence and number of 1-factorizations in graphs, and the related problem of the existence of a proper edge coloring of a graph with exactly \Delta(G) colors. Moreover, we prove that every `nice' d-regular pseudorandom graph has a 1-factorization. In particular, as a corollary, we obtain that for every d=\omega(1), a random d-regular graph typically has a 1-factorization. This extends and completely solves a problem of Molloy, Robalewska, Robinson, and Wormald, who showed it for every constant d \geq 3.

 

Joint with: Vishesh Jain (PhD student in MIT).
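A toy check of the definition above (not the pseudorandom-graph argument of the talk): the complete graph K_4 is 3-regular and admits a 1-factorization into three perfect matchings, which the short script below verifies.

```python
import itertools

# K_4: vertices 0..3, all 6 edges, partitioned into 3 perfect matchings.
vertices = range(4)
edges = set(itertools.combinations(vertices, 2))
factorization = [
    {(0, 1), (2, 3)},
    {(0, 2), (1, 3)},
    {(0, 3), (1, 2)},
]

# Check: the matchings are disjoint, cover every edge, and each one
# touches every vertex exactly once (i.e. is perfect).
assert set().union(*factorization) == edges
assert sum(len(m) for m in factorization) == len(edges)
for m in factorization:
    touched = [v for e in m for v in e]
    assert sorted(touched) == list(vertices)
print("K_4 has a 1-factorization into 3 perfect matchings")
```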

Large deviations of subgraph counts for sparse random graphs

Speaker: 

Nicholas Cook

Institution: 

UCLA

Time: 

Tuesday, November 27, 2018 - 11:00am to 12:00pm

Location: 

RH 306

In their breakthrough 2011 paper, Chatterjee and Varadhan established a large deviations principle (LDP) for the Erdős-Rényi graph G(N,p), viewed as a measure on the space of graphons with the cut metric topology. This yields LDPs for subgraph counts, such as the number of triangles in G(N,p), as these are continuous functions of graphons. However, as with other applications of graphon theory, the LDP is only useful for dense graphs, with p \in (0,1) fixed independently of N.

Since then, the effort to extend the LDP to sparse graphs with p ~ N^{-c} for some fixed c>0 has spurred rapid developments in the theory of "nonlinear large deviations". We will report on recent results increasing the sparsity range for the LDP, in particular allowing c as large as 1/2 for cycle counts, improving on previous results of Chatterjee-Dembo and Eldan. These come as applications of new quantitative versions of the classic regularity and counting lemmas from extremal graph theory, optimized for sparse random graphs. (Joint work with Amir Dembo.)
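To get a feel for the objects in question, the sketch below samples a sparse G(N, p) with p ~ N^{-1/2} (the c = 1/2 boundary mentioned above) and compares the empirical triangle count to its mean binom(N,3) p^3; the parameters are illustrative.

```python
import numpy as np

# Sketch: triangle counts in a sparse Erdos-Renyi graph G(N, p),
# p ~ N^{-1/2}. Parameters are illustrative.
rng = np.random.default_rng(1)
N = 200
p = N ** -0.5

# Symmetric 0/1 adjacency matrix with no self-loops.
A = (rng.random((N, N)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

# tr(A^3) counts each triangle 6 times (3 starting vertices x 2 directions).
triangles = np.trace(A @ A @ A) / 6
mean = (N * (N - 1) * (N - 2) / 6) * p ** 3
print(f"triangles: {triangles:.0f}, mean: {mean:.1f}")
```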

Lower-tail large deviations of the KPZ equation

Speaker: 

Li-Cheng Tsai

Institution: 

Columbia University

Time: 

Tuesday, October 23, 2018 - 11:00am to 12:00pm

Location: 

RH 306

Regarding time as a scaling parameter, we prove the one-point, lower-tail Large Deviation Principle (LDP) of the KPZ equation, with an explicit rate function. This result confirms existing physics predictions. We utilize a formula from [Borodin-Gorin 16] to convert the LDP of the KPZ equation into the calculation of an exponential moment of the Airy point process, and analyze the latter via the stochastic Airy operator and the Riccati transform.

A moment method for invariant ensembles

Speaker: 

Jonathan Novak

Institution: 

UCSD

Time: 

Tuesday, January 8, 2019 - 11:00am

Location: 

RH 306

Conjugation invariant ensembles of random matrices have long formed one of the basic paradigms in Random Matrix Theory. Apart from the Gaussian case, the matrix elements of a conjugation invariant random matrix are highly correlated, and this fact has traditionally been viewed as prohibiting the use of moment methods in the spectral analysis of invariant ensembles. However, it turns out that there is a very natural and appealing version of the moment method available for these ensembles which seems to have been overlooked. I will describe the rudiments of this method, and some of its applications. Based on joint work with Sho Matsumoto.

From number theory to machine learning: hunting for smooth Boolean functions

Speaker: 

Roman Vershynin

Institution: 

UCI

Time: 

Tuesday, May 1, 2018 - 11:00am to 12:00pm

Location: 

RH 306

The most fundamental kind of functions studied in computer science are Boolean functions. They take n bits as an input and return one bit as an output. Most Boolean functions oscillate a lot, which is analogous to the fact that "most" continuous functions on R are nowhere differentiable. If we want to generate a "smooth" Boolean function, we can take the sign of some polynomial of low degree in n variables. Such functions are called polynomial threshold functions, and they are widely used in machine learning as classification devices. Surprisingly, we do not know how many polynomial threshold functions there are with a given degree! Even an approximate answer to this question has been known only for polynomials of degree 1, i.e. for linear functions. In a very recent joint work with Pierre Baldi, we found a way to approximately count polynomial threshold functions of any fixed degree. This solves a problem of M. Saks that goes back to 1993 and earlier. Our argument draws ideas from analytic number theory, additive combinatorics, enumerative combinatorics, probability and discrete geometry. I will describe some of these connections, focusing particularly on a beautiful interplay of zeta and Möbius functions in number theory, hyperplane arrangements in enumerative combinatorics, and random tensors in probability theory.
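As a concrete illustration of the definition, the sketch below builds one degree-2 polynomial threshold function f(x) = sign(p(x)) on n = 4 bits and tabulates the Boolean function it computes; the random coefficients are an illustrative choice, not part of the counting argument.

```python
import itertools
import numpy as np

# Sketch: a degree-2 polynomial threshold function on n bits,
# f(x) = sign(x^T Q x + b.x + c). Coefficients are illustrative.
rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = 0.0

def ptf(x):
    """Sign of the degree-2 polynomial at a point x in {-1,+1}^n."""
    val = x @ Q @ x + b @ x + c
    return 1 if val >= 0 else -1

# Tabulate the resulting Boolean function on all 2^n inputs.
table = [ptf(np.array(x)) for x in itertools.product([-1, 1], repeat=n)]
print(f"{sum(v == 1 for v in table)} of {2 ** n} inputs map to +1")
```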

Deterministic Random Matrices

Speaker: 

Ilya Soloveychik

Institution: 

Harvard University

Time: 

Tuesday, April 24, 2018 - 12:00pm to 1:00pm

Location: 

RH 440

Random matrices have become a very active area of research in recent years and have found enormous applications in modern mathematics, physics, engineering, biological modeling, and other fields. In this work, we focus on symmetric sign (+/-1) matrices (SSMs) that were originally utilized by Wigner to model the nuclei of heavy atoms in the mid-1950s. Assuming the entries of the upper-triangular part are independent +/-1 with equal probability, Wigner showed in his pioneering works that as the matrix size grows, the empirical spectra converge to a non-random measure with a semicircular shape. Later, this fundamental result was improved and substantially extended to more general families of matrices and finer spectral properties. In many physical phenomena, however, the entries of matrices exhibit significant correlations. At the same time, almost all available analytical tools rely heavily on the independence condition, making the study of matrices with structure (dependencies) very challenging. The few existing works in this direction consider very specific setups and are limited by particular techniques. They lack a unified framework and tight information-theoretic bounds that would quantify the exact amount of structure matrices may possess without affecting the limiting semicircular form of their spectra.
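A short empirical sketch of Wigner's original setting: sample a symmetric sign matrix with independent upper-triangular +/-1 entries and check that its scaled spectrum lies (essentially) in the semicircle support [-2, 2]. The size and tolerance below are illustrative.

```python
import numpy as np

# Sketch: a Wigner symmetric sign (+/-1) matrix; after scaling by
# sqrt(n), its empirical spectrum approaches the semicircle law on [-2, 2].
rng = np.random.default_rng(0)
n = 500

S = rng.choice([-1.0, 1.0], size=(n, n))
S = np.triu(S) + np.triu(S, 1).T            # symmetric sign matrix
evals = np.linalg.eigvalsh(S) / np.sqrt(n)  # scaled spectrum

# Support check: essentially all scaled eigenvalues fall in [-2, 2]
# (2.1 allows for edge fluctuations at finite n).
frac_inside = np.mean(np.abs(evals) <= 2.1)
print(f"fraction of scaled eigenvalues in [-2.1, 2.1]: {frac_inside:.3f}")
```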

 

From a different perspective, in many applications one needs to simulate random objects. Due to the independence condition, generating large random matrices requires very powerful sources of randomness; the experiments are impossible to reproduce, and atypical or non-random-looking outcomes may appear with positive probability. Reliable deterministic construction of SSMs with random-looking spectra and low algorithmic and computational complexity is of particular interest due to the natural correspondence between SSMs and undirected graphs, the latter being extensively used in combinatorial and CS applications, e.g. for the purposes of derandomization. Most existing constructions of pseudo-random graphs focus on the extreme eigenvalues and do not provide guarantees on the whole spectrum. In this work, using binary Golomb sequences, we propose a simple, completely deterministic construction of circulant SSMs with spectra converging to the semicircular law at the same rate as in the original Wigner ensemble. We show that this construction has close to the lowest possible algorithmic complexity and is very explicit. Essentially, the algorithm requires at most 2log(n) bits, implying that the real amount of randomness conveyed by the semicircular property is quite small.

Mathematical problems in the physics of proteins

Speaker: 

Stanislav Molchanov

Institution: 

UNCC

Time: 

Tuesday, March 6, 2018 - 3:00pm to 3:50pm

Location: 

RH 340P

The talk will present a review of mathematical models of phase transitions and diffusion of proteins.

Content.

  1. Kindergarten biophysics
  2. Physical Brownian motion
  3. Folding – unfolding phase transition
  4. Diffusion coefficient as the function of the temperature
  5. Intermediate asymptotics
  6. Diffusion with aggregation

Random Matrices with Structured or Unstructured Correlations

Speaker: 

Todd Kemp

Institution: 

UCSD

Time: 

Tuesday, April 17, 2018 - 11:00am to 11:50am

Location: 

RH 306

Random matrix theory began with the study, by Wigner in the 1950s, of high-dimensional matrices with i.i.d. entries (up to symmetry).  The empirical law of eigenvalues demonstrates two key phenomena: bulk universality (the limit empirical law of eigenvalues doesn't depend on the laws of the entries) and concentration (the convergence is robust and fast).

 

Several papers over the last decade (initiated by Bryc, Dembo, and Jiang in 2006) have studied certain special random matrix ensembles with structured correlations between some entries. The limit laws are different from the Wigner i.i.d. case, but each of these models still demonstrates bulk universality and concentration.

 

In this lecture, I will talk about very recent results of mine and my students on these general phenomena:

 

Bulk universality holds true whenever there are constant-width independent bands, regardless of the correlations within each band.  (Interestingly, the same is not true for independent rows or columns, where universality fails.)  I will show several examples of such correlated band matrices generalizing earlier known special cases, demonstrating how the empirical law of eigenvalues depends on the structure of the correlations.

 

At the same time, I will show that concentration is a more general phenomenon, depending not on the structure of the correlations but only on the sizes of the correlated partition blocks. Under some regularity assumptions, we find that Gaussian concentration occurs in NxN ensembles so long as the correlated blocks have size smaller than N^2/log(N).
