Integrating Observation Errors in Optimal Recovery

Speaker: 

Simon Foucart

Institution: 

Texas A&M University

Time: 

Wednesday, March 9, 2022 - 2:00pm to 3:00pm

Location: 

340 N Rowland Hall

For a function observed through point evaluations, is there an optimal way to recover it, or merely to estimate a dependent quantity? I will give an affirmative answer to this data-focused question, especially under the assumption that the function belongs to a model set defined by approximation capabilities. In fact, I will uncover computationally implementable linear recovery maps that are optimal in the worst-case setting. I will present some recent and ongoing work extending the theory in several directions, with particular emphasis on observations that are inexact, whether adversarially or randomly.
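
To make the setup concrete, here is a minimal sketch (my toy illustration, not the talk's optimal recovery maps): when the model set consists of functions well approximated by low-degree polynomials, a least-squares fit in that space gives a recovery map that is linear in the observed data. The target function, degree, and sample points are hypothetical choices.

```python
# A toy sketch of linear recovery from point evaluations, under a
# polynomial-approximability model assumption (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def recover(x_obs, y_obs, degree, x_eval):
    V = np.vander(x_obs, degree + 1)        # design matrix of the model set
    coeffs, *_ = np.linalg.lstsq(V, y_obs, rcond=None)
    return np.polyval(coeffs, x_eval)       # estimate is linear in y_obs

f = lambda x: np.exp(-x) * np.sin(3 * x)    # hypothetical target function
x_obs = np.sort(rng.uniform(0, 1, 20))      # point evaluations
y_obs = f(x_obs) + 1e-3 * rng.standard_normal(20)   # inexact observations
x_eval = np.linspace(0, 1, 5)
print(np.abs(recover(x_obs, y_obs, 5, x_eval) - f(x_eval)))
```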

Dimension-Free Noninteractive Simulation from Gaussian Sources

Speaker: 

Steven M. Heilman

Institution: 

University of Southern California

Time: 

Wednesday, March 2, 2022 - 2:00pm

Location: 

340 N Rowland Hall

Let $X$ and $Y$ be two real-valued random variables. Let $(X_{1},Y_{1}),(X_{2},Y_{2}),\ldots$ be independent identically distributed copies of $(X,Y)$. Suppose there are two players A and B. Player A has access to $X_{1},X_{2},\ldots$ and player B has access to $Y_{1},Y_{2},\ldots$. Without communication, what joint probability distributions can players A and B simulate? That is, if $k,m$ are fixed positive integers, what probability distributions on $\{1,\ldots,m\}^{2}$ are equal to the distribution of $(f(X_{1},\ldots,X_{k}),\,g(Y_{1},\ldots,Y_{k}))$ for some $f,g\colon\mathbb{R}^{k}\to\{1,\ldots,m\}$?

When $X$ and $Y$ are standard Gaussians with fixed correlation $\rho\in(-1,1)$, we show that the set of probability distributions that can be noninteractively simulated from $k$ Gaussian samples is the same for any $k\geq m^{2}$. Previously, it was not even known whether any finite number of samples would suffice, except when $m\leq 2$.

Joint work with Alex Tarter: https://arxiv.org/abs/2202.09309
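
As a toy instance of the simulation question (my example, not from the paper): take $k=1$ and $m=2$, and let both players threshold their Gaussian sample at zero. The joint law they simulate on $\{1,2\}^2$ is then given exactly by Sheppard's formula, which the sketch below checks empirically.

```python
# A toy instance (k = 1, m = 2): both players threshold at zero, i.e.
# f(x) = g(y) = 1 if the sample is positive, else 2.
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.5, 200_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # corr = rho

a = np.where(x > 0, 1, 2)   # player A's output f(X_1)
b = np.where(y > 0, 1, 2)   # player B's output g(Y_1)

for i in (1, 2):            # empirical joint distribution on {1,2}^2
    for j in (1, 2):
        print((i, j), np.mean((a == i) & (b == j)))
# Sheppard's formula gives the exact agreement probability:
print("P(a == b) exact:", 0.5 + np.arcsin(rho) / np.pi)
```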

The spectrum of non-linear random matrices from neural networks

Speaker: 

Yizhe Zhu

Institution: 

UCI

Time: 

Wednesday, February 2, 2022 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

Recent theoretical understanding of neural networks has connected their training and generalization to associated kernel matrices. Due to the nonlinearity of the activation function, at random initialization these kernel matrices are non-linear random matrices. We consider the limiting spectral distributions of conjugate kernel and neural tangent kernel matrices for two-layer neural networks with deterministic data and random weights. When the width of the network grows faster than the size of the dataset, a deformed semicircle law appears. In this regime, we can also calculate the asymptotic testing and training errors for random feature regression. Joint work with Zhichao Wang: https://arxiv.org/abs/2109.09304.
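
Here is a hedged numerical sketch of the object under study: the spectrum of a conjugate kernel matrix for a two-layer network at random initialization, in the regime where the width exceeds the sample size. The sizes, activation, and normalization are illustrative choices rather than the paper's exact setup.

```python
# Spectrum of a conjugate kernel matrix, width >> number of samples
# (illustrative sizes and normalization).
import numpy as np

rng = np.random.default_rng(2)
n, d, width = 400, 400, 8000                  # width >> n, the talk's regime
X = rng.standard_normal((d, n)) / np.sqrt(d)  # data (deterministic in the
                                              # paper; random here for ease)
W = rng.standard_normal((width, d))           # random first-layer weights
Y = np.tanh(W @ X)                            # post-activation features
K = Y.T @ Y / width                           # conjugate kernel, n x n
eigs = np.linalg.eigvalsh(K)
# After centering and rescaling, the bulk is expected to follow a deformed
# semicircle law as width/n -> infinity; here we just inspect the spectrum.
print(eigs.min(), np.median(eigs), eigs.max())
```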

Geometric constructions for sparse integer signal recovery

Speaker: 

Lenny Fukshansky

Institution: 

Claremont McKenna College

Time: 

Wednesday, February 16, 2022 - 2:00pm to 3:00pm

Location: 

RH 340N


We investigate the problem of constructing m x d integer matrices with small entries and d large compared to m, so that for all vectors x in Z^d with at most s ≤ m nonzero coordinates, the image vector Ax is not 0. Such constructions allow for robust recovery of the original vector x from its image Ax. This problem is motivated by the compressed sensing paradigm and has numerous potential applications, ranging from wireless communications to medical imaging. We discuss the existence of such matrices for appropriate choices of d as a function of m and demonstrate a deterministic construction of a family of such matrices stemming from a certain geometric covering problem. We also discuss a connection of our constructions to a simple variant of the Tarski plank problem. This talk is based on joint works with B. Sudakov and D. Needell, as well as with A. Hsu.
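
As a concrete (and purely illustrative) check of the defining property, the brute-force sketch below verifies that a small hypothetical integer matrix A satisfies Ax ≠ 0 for all nonzero x in Z^d with at most s nonzero coordinates and entries bounded by H; the matrix shown is my example, not one of the talk's constructions.

```python
# Brute-force verification of the sparse-kernel-free property
# (illustrative only; feasible just for tiny m, d, s, H).
import itertools
import numpy as np

def ok_for_sparse_recovery(A, s, H):
    m, d = A.shape
    for support in itertools.combinations(range(d), s):
        for vals in itertools.product(range(-H, H + 1), repeat=s):
            if all(v == 0 for v in vals):
                continue                    # skip x = 0
            x = np.zeros(d, dtype=int)
            x[list(support)] = vals
            if not np.any(A @ x):           # found Ax = 0: property fails
                return False
    return True

A = np.array([[1, 2, 3, 5],
              [1, 4, 9, 25]])               # m = 2, d = 4
print(ok_for_sparse_recovery(A, s=2, H=3))  # True: no 2-sparse kernel vector
```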

The multispecies zero range process and modified Macdonald polynomials

Speaker: 

Olya Mandelshtam

Institution: 

University of Waterloo

Time: 

Monday, November 29, 2021 - 2:00pm to 3:00pm

Location: 

510R

Over the last couple of decades, the theory of interacting particle systems has found some unexpected connections to orthogonal polynomials, symmetric functions, and various combinatorial structures. The asymmetric simple exclusion process (ASEP) has played a central role in this connection. Recently, Cantini, de Gier, and Wheeler found that the partition function of the multispecies ASEP on a circle is a specialization of a Macdonald polynomial $P_{\lambda}(X;q,t)$. Macdonald polynomials are a family of symmetric functions that are ubiquitous in algebraic combinatorics and specialize to or generalize many other important special functions. Around the same time, Martin gave a recursive formulation expressing the stationary probabilities of the ASEP on a circle as sums over combinatorial objects known as multiline queues, which are a type of queueing system. Shortly after, with Corteel and Williams, we generalized Martin's result to give a new formula for $P_{\lambda}$ via multiline queues.


The modified Macdonald polynomials $\widetilde{H}_{\lambda}(X;q,t)$ are a version of $P_{\lambda}$ with positive integer coefficients. A natural question was whether there exists a related statistical mechanics model for which some specialization of $\widetilde{H}_{\lambda}$ is equal to its partition function. With Ayyer and Martin, we answer this question in the affirmative with the multispecies totally asymmetric zero-range process (TAZRP), a specialization of a more general class of zero-range particle processes. We introduce a new combinatorial object in the flavor of multiline queues, which, on the one hand, expresses stationary probabilities of the multispecies TAZRP, and, on the other hand, gives a new formula for $\widetilde{H}_{\lambda}$. We define an enhanced Markov chain on these objects that lumps to the multispecies TAZRP, and then use this to prove several results about particle densities and correlations in the TAZRP.
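
As a minimal computational companion (a single-species, constant-rate toy, far simpler than the multispecies TAZRP of the talk), the sketch below builds the generator of a totally asymmetric zero-range process on a small ring and solves for its stationary distribution; for constant rates this distribution is uniform over particle configurations, which the output confirms.

```python
# Toy single-species zero-range process on a ring: exact stationary law
# from the generator (illustrative sizes; not the multispecies model).
import itertools
import numpy as np

L, N = 4, 3                 # ring sites and particles (illustrative sizes)
states = [s for s in itertools.product(range(N + 1), repeat=L) if sum(s) == N]
index = {s: i for i, s in enumerate(states)}

Q = np.zeros((len(states), len(states)))
for s in states:
    for site in range(L):
        if s[site] > 0:     # one particle hops clockwise at rate 1
            t = list(s)
            t[site] -= 1
            t[(site + 1) % L] += 1
            Q[index[s], index[tuple(t)]] += 1.0
            Q[index[s], index[s]] -= 1.0

w, v = np.linalg.eig(Q.T)   # stationary law = left null vector of Q
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()
print(len(states), pi.min(), pi.max())   # all 20 probabilities equal 1/20
```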

Mathematics of synthetic data and privacy

Speaker: 

Roman Vershynin

Institution: 

UCI

Time: 

Wednesday, December 1, 2021 - 2:00pm to 3:00pm

Location: 

Rowland Hall 510R

An emerging way to protect privacy is to replace true data by synthetic data. Medical records of artificial patients, for example, could retain meaningful statistical information while preserving the privacy of the true patients. But what is synthetic data, and what is privacy? How do we define these concepts mathematically? Is it possible to make synthetic data that is both useful and private? I will tie these questions to a simple-looking problem in probability theory: how much information about a random vector X is lost when we take the conditional expectation of X with respect to some sigma-algebra? This talk is based on a series of joint papers with March Boedihardjo and Thomas Strohmer, mainly this one: https://arxiv.org/abs/2107.05824
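
The sketch below is a toy numerical illustration of that probability question (my example, not the paper's): condition X on the cell of a partition and measure the loss through the law of total variance, Var(X) = Var(E[X|F]) + E[Var(X|F)].

```python
# Conditioning a Gaussian X on the sigma-algebra F generated by a
# partition of the line into bins; coarser partitions lose more variance.
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(100_000)

for n_bins in (2, 8, 32):
    edges = np.linspace(-4, 4, n_bins + 1)
    cell = np.digitize(x, edges)
    cond_mean = np.zeros_like(x)
    for c in np.unique(cell):
        mask = cell == c
        cond_mean[mask] = x[mask].mean()   # E[X|F] is constant on each cell
    print(n_bins, "retained:", round(cond_mean.var(), 4),
          "lost:", round(x.var() - cond_mean.var(), 4))
```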

An invariance principle for Markov cookie random walks

Speaker: 

Thomas Mountford

Institution: 

EPFL

Time: 

Wednesday, November 17, 2021 - 2:00pm to 3:00pm

Location: 

510R

In joint work with E. Kosygina and J. Peterson, the "natural" diffusive scaling is considered for the recurrent case, and convergence to Brownian motion perturbed at extrema is shown. The key ideas are coarse graining and the Ray-Knight approach.
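
As a toy illustration (a plain excited/cookie random walk, much simpler than the Markov cookie stacks of the talk): each site starts with M cookies, each eaten cookie biases the next step to the right with probability p, and afterwards steps are symmetric. The parameters below satisfy |M(2p-1)| <= 1, which is the recurrent regime where diffusive scaling is the natural one.

```python
# Toy cookie random walk, recurrent regime: M = 2, p = 0.6 gives
# |M(2p - 1)| = 0.4 <= 1.
import numpy as np

rng = np.random.default_rng(4)

def cookie_walk(n_steps, M=2, p=0.6):
    pos, eaten = 0, {}
    for _ in range(n_steps):
        k = eaten.get(pos, 0)
        bias = p if k < M else 0.5     # cookie present -> drift to the right
        eaten[pos] = k + 1
        pos += 1 if rng.random() < bias else -1
    return pos

n = 100_000
print(cookie_walk(n) / np.sqrt(n))     # diffusive normalization X_n/sqrt(n)
```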

Properties of the Riemann zeta distribution

Speaker: 

Michael Cranston

Institution: 

UCI

Time: 

Wednesday, November 10, 2021 - 2:00pm to 3:00pm

Location: 

510R

In this talk we will discuss properties of integers selected according to the Riemann zeta distribution. We will emphasize two aspects of this distribution. The first is its close resemblance to the properties of an integer chosen according to the uniform distribution on a finite interval. The second is the appearance of Poisson behavior under this distribution. The Riemann zeta function is given for $\mbox{Re}\, z>1$ by

$$\zeta(z)=\sum_{n=1}^\infty \frac{1}{n^z}.$$

An alternative description is given by

$$\zeta(z)=\prod_{p\in\mathcal{P}}\left(1-\frac{1}{p^z}\right)^{-1},$$

where $\mathcal{P}$ denotes the set of primes.

In our discussions we will replace the complex $z$ by a real number $s>1$. We will denote by $X_s$ a random variable with the distribution

$$P(X_s=n)=\frac{1}{\zeta(s)\,n^s},\qquad n=1,2,3,\ldots$$

The statistical properties of $X_s$ are the focus of the talk.

The talk is based on joint work with Adrien Peltzer.
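
The sketch below is a quick empirical check of two standard features of this distribution (sampling from a truncated version for convenience): $P(p \mid X_s) = p^{-s}$ for each prime $p$, and divisibility events for distinct primes are independent.

```python
# Empirical check of divisibility probabilities and their independence
# under a truncated zeta(s) distribution.
import numpy as np

rng = np.random.default_rng(5)
s, N_max, n_samples = 2.0, 10**6, 200_000

support = np.arange(1, N_max + 1)
pmf = support.astype(float) ** (-s)
pmf /= pmf.sum()                          # renormalize the truncation
X = rng.choice(support, size=n_samples, p=pmf)

for p in (2, 3, 5):
    print(p, np.mean(X % p == 0), p ** (-s))
# Joint divisibility factorizes, reflecting independence:
print(np.mean((X % 2 == 0) & (X % 3 == 0)), 6.0 ** (-s))
```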

Clustering a mixture of Gaussians with unknown covariance

Speaker: 

Mateo Diaz

Institution: 

Caltech

Time: 

Wednesday, November 3, 2021 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

Clustering is a fundamental data-scientific task with broad applications. This talk investigates a simple clustering problem with data drawn from a mixture of Gaussians that share a common but unknown, and potentially ill-conditioned, covariance matrix. We start by considering Gaussian mixtures with two equally sized components and derive a Max-Cut integer program based on maximum likelihood estimation. We show that its solutions achieve the optimal misclassification rate when the number of samples grows linearly in the dimension, up to a logarithmic factor. However, solving the Max-Cut problem appears to be computationally intractable. To overcome this, we develop an efficient spectral algorithm that attains the optimal rate but requires a quadratic sample size. Although this sample complexity is worse than that of the Max-Cut problem, we conjecture that no polynomial-time method can perform better. Furthermore, we present numerical and theoretical evidence supporting the existence of a statistical-computational gap. Finally, we generalize the Max-Cut program to a k-means program that handles multi-component mixtures with possibly unequal weights and has similar guarantees.
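
For contrast with the hard setting of the talk, here is a toy demonstration restricted to the easy isotropic case (identity covariance, and not the speaker's algorithm): for a mixture of N(+mu, I) and N(-mu, I), the top eigenvector of the sample second-moment matrix aligns with mu, since E[xx^T] = I + mu mu^T, and thresholding the projection clusters the data.

```python
# Spectral clustering of a two-component Gaussian mixture, isotropic case
# (illustrative only; the talk treats unknown ill-conditioned covariance).
import numpy as np

rng = np.random.default_rng(6)
d, n = 50, 2000
mu = np.zeros(d); mu[0] = 2.0              # hypothetical mean direction
labels = rng.integers(0, 2, n) * 2 - 1     # +/-1 with equal weights
X = labels[:, None] * mu + rng.standard_normal((n, d))

M = X.T @ X / n                            # sample second-moment matrix
_, V = np.linalg.eigh(M)
v = V[:, -1]                               # top eigenvector ~ +/- mu/|mu|
pred = np.sign(X @ v).astype(int)          # threshold the 1-d projection
acc = max(np.mean(pred == labels), np.mean(pred == -labels))
print("clustering accuracy:", acc)         # sign ambiguity is inherent
```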

Learning low degree functions in logarithmic number of random queries

Speaker: 

Paata Ivanishvili

Institution: 

UCI

Time: 

Wednesday, October 27, 2021 - 2:00pm to 3:00pm

Location: 

Rowland Hall 510R

Perhaps a very basic question one asks in learning theory is the following: we are given a function f on the hypercube {-1,1}^n, and we are allowed to query samples (X, f(X)), where X is uniformly distributed on {-1,1}^n. After getting these samples (X_1, f(X_1)), ..., (X_N, f(X_N)), we would like to construct a function h which approximates f up to an error epsilon (say in L^2). Of course, h is a random function, as it involves the i.i.d. random variables X_1, ..., X_N in its construction; therefore, we want h to fail to approximate f with probability at most delta. So, given parameters epsilon, delta in (0,1), the goal is to minimize the number of random queries N. I will show that around log(n) random queries are sufficient to learn bounded "low-complexity" functions. Based on joint work with Alexandros Eskenazis.
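
Here is a hedged sketch of the setup (the target f, dimension, and sample size are illustrative; the talk's point is that roughly log(n) queries suffice for bounded low-complexity functions, far fewer than used below): estimate the Fourier coefficients of f on monomials of degree at most d from uniform random queries and return the truncated expansion h.

```python
# Learning a low-degree function on {-1,1}^n from random queries by
# estimating its low-degree Fourier coefficients (illustrative sizes).
import itertools
import numpy as np

rng = np.random.default_rng(7)
n, d, N = 10, 2, 5000

f = lambda X: 0.5 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2]   # degree-2 target

X = rng.choice([-1, 1], size=(N, n))      # uniform random queries X_i
y = f(X)                                  # observed values f(X_i)

# Empirical coefficients hat f(S) = (1/N) sum_i f(X_i) prod_{j in S} X_ij:
coeffs = {}
for k in range(d + 1):
    for S in itertools.combinations(range(n), k):
        chi = np.prod(X[:, list(S)], axis=1) if S else np.ones(N)
        coeffs[S] = np.mean(y * chi)

def h(x):                                 # the learned approximant of f
    return sum(c * np.prod(x[list(S)]) for S, c in coeffs.items())

x_test = rng.choice([-1, 1], size=n)
print(h(x_test), f(x_test[None, :])[0])   # h should match f closely
```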
