Learning low degree functions in logarithmic number of random queries

Speaker: 

Paata Ivanishvili

Institution: 

UCI

Time: 

Wednesday, October 27, 2021 - 2:00pm to 3:00pm

Location: 

Rowland Hall 510R

Perhaps the most basic question one asks in learning theory is the following: we are given a function f on the hypercube {-1,1}^n, and we are allowed to query samples (X, f(X)) where X is uniformly distributed on {-1,1}^n. After obtaining the samples (X_1, f(X_1)), ..., (X_N, f(X_N)), we would like to construct a function h which approximates f up to an error epsilon (say in L^2). Of course h is a random function, since its construction involves the i.i.d. random variables X_1, ..., X_N, so we require that h fail to approximate f with probability at most delta. Given parameters epsilon, delta in (0,1), the goal is to minimize the number of random queries N. I will show that around log(n) random queries are sufficient to learn bounded "low-complexity" functions. Based on joint work with Alexandros Eskenazis.
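As a concrete (but far more query-hungry) baseline, here is a minimal Python sketch of the classical low-degree learning approach: estimate every Fourier coefficient of degree at most d from uniform random queries and assemble the approximation h. The function names, toy target, and sample size are illustrative choices and not part of the talk; the point of the result above is precisely that far fewer queries, on the order of log(n), suffice for bounded low-complexity functions.

```python
import itertools
import numpy as np

def chi(S, X):
    """Fourier character chi_S(x) = prod_{i in S} x_i, evaluated on each row of X."""
    return np.ones(X.shape[0]) if len(S) == 0 else np.prod(X[:, list(S)], axis=1)

def learn_low_degree(f, n, d, N, rng):
    """Estimate all Fourier coefficients of degree <= d from N uniform random queries
    and return the empirical approximation h (the classical low-degree algorithm)."""
    X = rng.choice([-1.0, 1.0], size=(N, n))          # random queries X_1, ..., X_N
    y = np.array([f(x) for x in X])                   # observed values f(X_i)
    coeffs = {S: np.mean(y * chi(S, X))               # empirical E[f(X) chi_S(X)]
              for k in range(d + 1)
              for S in itertools.combinations(range(n), k)}

    def h(x):
        x = np.asarray(x, dtype=float).reshape(1, -1)
        return sum(c * chi(S, x)[0] for S, c in coeffs.items())

    return h

# toy degree-2 target on {-1,1}^10 (purely illustrative)
rng = np.random.default_rng(0)
n, d = 10, 2
f = lambda x: 0.5 * x[0] * x[3] + 0.3 * x[7] - 0.2
h = learn_low_degree(f, n, d, N=2000, rng=rng)

# empirical squared L^2 error on fresh uniform points
X_test = rng.choice([-1.0, 1.0], size=(1000, n))
print("squared L^2 error:", np.mean([(f(x) - h(x)) ** 2 for x in X_test]))
```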

Odd subgraphs are odd

Speaker: 

Asaf Ferber

Institution: 

UCI

Time: 

Wednesday, October 20, 2021 - 2:00pm to 3:00pm

Location: 

510R

In this talk we discuss the problem of finding large induced subgraphs of a given graph G satisfying certain degree constraints. We survey some classical results, present some interesting and challenging open problems, and sketch solutions to some of them.

This is based on joint works with Liam Hardiman and Michael Krivelevich. 
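To make the degree constraint concrete, the following brute-force sketch (purely illustrative, exponential in the number of vertices, and unrelated to the techniques of the talk) searches a small random graph for a largest induced subgraph in which every vertex has odd degree.

```python
import itertools
import networkx as nx

def largest_all_odd_induced_subgraph(G):
    """Brute force (exponential in |V|): a maximum-size vertex set S such that every
    vertex of the induced subgraph G[S] has odd degree within G[S]."""
    nodes = list(G.nodes)
    for size in range(len(nodes), 0, -1):
        for S in itertools.combinations(nodes, size):
            H = G.subgraph(S)
            if all(deg % 2 == 1 for _, deg in H.degree):
                return set(S)
    return set()

G = nx.gnp_random_graph(10, 0.4, seed=1)
S = largest_all_odd_induced_subgraph(G)
print(f"{len(S)} of {G.number_of_nodes()} vertices induce an all-odd-degree subgraph: {sorted(S)}")
```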

Sharp matrix concentration

Speaker: 

March Boedihardjo

Institution: 

UCI

Time: 

Wednesday, October 13, 2021 - 2:00pm to 3:00pm

Location: 

510R

Classical matrix concentration inequalities are sharp up to a logarithmic factor. This logarithmic factor is necessary in the commutative case but unnecessary in many classical noncommutative cases. We will present some matrix concentration results that are sharp in many cases, where we overcome this logarithmic factor by using an easily computable quantity that captures noncommutativity. Joint work with Afonso Bandeira and Ramon van Handel. Paper: https://arxiv.org/abs/2108.06312
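The gap the talk addresses can already be seen numerically in the simplest noncommutative example, a Gaussian Wigner matrix: its norm is about 2*sigma, while the classical bound carries an extra square-root-of-log factor. The sketch below is a quick simulation under these assumptions; it only uses the classical parameter sigma = ||E X^2||^{1/2} and does not compute the noncommutativity-capturing quantity from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def wigner(n, rng):
    """Symmetric Gaussian (Wigner-type) matrix; a standard noncommutative example in which
    the logarithmic factor in the classical bounds is superfluous."""
    A = rng.standard_normal((n, n))
    return (A + A.T) / np.sqrt(2)

for n in [100, 400, 1600]:
    X = wigner(n, rng)
    # classical matrix-concentration parameter sigma = ||E X^2||^{1/2};
    # here E X^2 = (n + 1) * I, so sigma is essentially sqrt(n)
    sigma = np.sqrt(n + 1)
    classical = sigma * np.sqrt(2 * np.log(2 * n))   # sigma * sqrt(log dimension) scale
    norm = np.abs(np.linalg.eigvalsh(X)).max()
    print(f"n={n:5d}  ||X|| = {norm:7.1f}   2*sigma = {2*sigma:7.1f}   "
          f"sigma*sqrt(2 log 2n) = {classical:7.1f}")
```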

Gaussian Spherical Tessellations and Learning Adaptively

Speaker: 

Anna Ma

Institution: 

UCI

Time: 

Wednesday, October 6, 2021 - 2:00pm to 3:00pm

Location: 

510R

Signed measurements of the form $y_i = sign(\langle a_i, x \rangle)$ for $i \in [M]$ are ubiquitous in large-scale machine learning problems where the overarching task is to recover the unknown, unit-norm signal $x \in \mathbb{R}^d$. Oftentimes, measurements can be queried adaptively, for example based on a current approximation of $x$, so that only a subset of the $M$ measurements is needed. Geometrically, these measurements induce a spherical hyperplane tessellation of $\mathbb{R}^{d}$ in which one of the cells contains the unknown vector $x$. Motivated by this problem, in this talk we will present a geometric property of spherical hyperplane tessellations in $\mathbb{R}^{d}$. Under the assumption that the $a_i$ are Gaussian random vectors, we will show that with high probability there exists a subset of the hyperplanes, of cardinality on the order of $d\log(d)\log(M)$, such that the radius of the cell containing $x$ induced by these hyperplanes is bounded above by, up to constants, $d\log(d)\log(M)/M$. The work presented is joint work with Rayan Saab and Eric Lybrand.
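Here is a minimal simulation of the (non-adaptive) measurement model, assuming Gaussian $a_i$: it draws signed measurements and recovers the direction of $x$ by averaging $y_i a_i$, a standard simple estimator. The adaptive hyperplane-selection scheme from the talk is not implemented here, and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 50, 20000                      # illustrative dimension and number of measurements

# unknown unit-norm signal and Gaussian measurement vectors a_i
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
A = rng.standard_normal((M, d))

# signed (one-bit) measurements y_i = sign(<a_i, x>)
y = np.sign(A @ x)

# simple non-adaptive estimator: for Gaussian a_i, E[y_i a_i] is proportional to x,
# so averaging the signed measurement vectors and normalizing recovers the direction of x
x_hat = (y[:, None] * A).mean(axis=0)
x_hat /= np.linalg.norm(x_hat)

print("||x - x_hat|| =", np.linalg.norm(x - x_hat))
```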

Nodal domains of eigenvectors of Erdos-Renyi random graphs

Speaker: 

Mark Rudelson

Institution: 

University of Michigan

Time: 

Friday, March 6, 2020 - 2:00pm to 2:50pm

Location: 

RH 340N

A nodal domain of a function is a connected component of the set where the function has a constant sign. Nodal domains of eigenfunctions of the Laplacian are classical objects in the geometry of manifolds. The number of nodal domains corresponding to the k-th smallest eigenvalue is bounded by k, and it typically increases as k grows. About 10 years ago, Dekel, Lee, and Linial proved that with high probability, the number of nodal domains of eigenvectors of Erdos-Renyi graphs remains bounded as the size of the graph and the eigenvalues increase.  This runs contrary to the intuition we draw from the world of manifolds. We will survey some recent results on the structure of nodal domains of such graphs. Based in part on joint work with Han Huang.
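Nodal domains of graph eigenvectors are easy to experiment with. The sketch below (an illustration, not the talk's argument) samples a dense Erdos-Renyi graph, diagonalizes its adjacency matrix, and counts the nodal domains of several eigenvectors, i.e. the connected components of the positive-sign and negative-sign vertex sets.

```python
import networkx as nx
import numpy as np

def nodal_domain_count(G, v):
    """Nodal domains of the vector v on G: connected components of the subgraphs
    induced by the vertices where v > 0 and by the vertices where v < 0."""
    nodes = list(G.nodes)
    pos = [u for u, val in zip(nodes, v) if val > 0]
    neg = [u for u, val in zip(nodes, v) if val < 0]
    return sum(nx.number_connected_components(G.subgraph(S)) for S in (pos, neg) if S)

n, p = 500, 0.5
G = nx.gnp_random_graph(n, p, seed=0)
A = nx.to_numpy_array(G, nodelist=list(G.nodes))
eigvals, eigvecs = np.linalg.eigh(A)              # eigenvalues in increasing order

# count nodal domains for eigenvectors across the spectrum
for k in [1, n // 4, n // 2, 3 * n // 4, n - 2]:
    print(f"eigenvector #{k}: {nodal_domain_count(G, eigvecs[:, k])} nodal domains")
```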

Independent sets in random graphs and random trees

Speaker: 

Steven Heilman

Institution: 

USC

Time: 

Tuesday, March 3, 2020 - 11:00am

Location: 

Rowland Hall, 340P

An independent set of size k in a finite undirected graph is a set of k vertices of the graph, no two of which are connected by an edge. The structure of independent sets of size k as k varies is of interest in probability, statistical physics, combinatorics, and computer science. In 1987, Alavi, Malde, Schwenk and Erdos conjectured that the numbers of independent sets of size k in a tree form a unimodal sequence (the sequence first increases and then decreases as k grows), and this problem is still open. A variation on this question is: do the numbers of independent sets of size k form a unimodal sequence for Erdos-Renyi random graphs, or random trees? By adapting an argument of Coja-Oghlan and Efthymiou, we show unimodality for Erdos-Renyi random graphs, random bipartite graphs and random regular graphs (with high probability as the number of vertices in the graph goes to infinity, when the expected degree of a single vertex is large). The case of random trees remains open, as we can only show weak partial results there.
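On a single tree the sequence in question can be computed exactly by the standard two-state dynamic program over subtrees. The sketch below does this for one random recursive tree; this is not the uniform random tree model, a single sample proves nothing, and all names and sizes are illustrative — it is only meant to make the unimodality statement concrete.

```python
import random

def random_recursive_tree(n, rng):
    """Random tree on {0,...,n-1}: each vertex v > 0 attaches to a uniform earlier vertex."""
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[rng.randrange(v)].append(v)
    return children

def padd(a, b):
    """Add two coefficient lists (increasing degree), padding the shorter one."""
    out = [0] * max(len(a), len(b))
    for i, x in enumerate(a):
        out[i] += x
    for i, y in enumerate(b):
        out[i] += y
    return out

def pmul(a, b):
    """Multiply two coefficient lists with exact integer arithmetic."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def independent_set_counts(children, root=0):
    """Exact counts i_k of independent sets of size k via the standard two-state tree DP."""
    def solve(v):
        f, g = [1], [0, 1]          # subtree sets avoiding v / containing v
        for c in children[v]:
            fc, gc = solve(c)
            f = pmul(f, padd(fc, gc))
            g = pmul(g, fc)
        return f, g
    return padd(*solve(root))

def is_unimodal(seq):
    """True if the sequence weakly increases and then weakly decreases."""
    seen_drop = False
    for a, b in zip(seq, seq[1:]):
        if b < a:
            seen_drop = True
        elif b > a and seen_drop:
            return False
    return True

rng = random.Random(0)
counts = independent_set_counts(random_recursive_tree(200, rng))
print("unimodal:", is_unimodal(counts), "; mode at k =", counts.index(max(counts)))
```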

 

Integrable Probability

Speaker: 

Ivan Corwin

Institution: 

Columbia University

Time: 

Friday, January 31, 2020 - 11:00am to 11:50am

Location: 

340 P

 

I will describe an example of an integrable probabilistic system and then indicate some of the mathematical structures which unify and underlie the field. The model I will mainly focus on is the Beta random walk in random environment. Time permitting, I will also discuss some elements of stochastic vertex models.
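Here is a minimal quenched simulation of the Beta random walk in random environment: each space-time site gets an independent Beta(alpha, beta) probability of stepping right, and many walkers are run in the same environment. The parameter values are illustrative, and the sketch does not reproduce any of the exact (integrable) formulas discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_walkers = 200, 5000
alpha, beta = 1.0, 1.0                 # illustrative Beta parameters

# space-time environment: at time t and site x the walker steps right with probability
# B[t, x + T], where the B's are i.i.d. Beta(alpha, beta) random variables
B = rng.beta(alpha, beta, size=(T, 2 * T + 1))

# run all walkers in the same (quenched) environment
positions = np.zeros(n_walkers, dtype=int)
for t in range(T):
    p_right = B[t, positions + T]
    positions += np.where(rng.random(n_walkers) < p_right, 1, -1)

print("mean endpoint:", positions.mean(), " std:", positions.std())
print("99th percentile of X_T:", np.percentile(positions, 99))
```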

Embedding large minors in weak expanders and in sparse random graphs

Speaker: 

Michael Krivelevich

Institution: 

Tel Aviv University

Time: 

Tuesday, February 18, 2020 - 11:00am

Location: 

340P

A graph G on n vertices is called an alpha-expander if the external neighborhood of every vertex subset U of size |U|<=n/2 in G has size at least alpha*|U|.

Extending and improving the results of Plotkin, Rao and Smith, and of Kleinberg and Rubinfeld from the 90s, we prove that for every alpha>0, an alpha-expander G on n vertices contains every graph H with at most cn/log n vertices and edges as a minor, for c=c(alpha)>0.

Alternatively, every n-vertex graph G without sublinear separators contains all graphs with cn/log n vertices and edges as minors.

Consequently, a supercritical random graph G(n,(1+epsilon)/n) is typically minor-universal for the class of graphs with cn/log n vertices and edges.

The order of magnitude n/log n in the above results is optimal.

 

This is joint work with Rajko Nenadov.
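The expansion condition is easy to state in code. The following brute-force checker (exponential in n, so only for tiny graphs, and purely illustrative) computes the largest alpha for which a given graph is an alpha-expander in the sense defined above.

```python
import itertools
import networkx as nx

def vertex_expansion(G):
    """Brute-force expansion constant: the minimum over nonempty U with |U| <= n/2 of
    |external neighborhood of U| / |U|.  Exponential in n, so only for tiny graphs."""
    nodes = list(G.nodes)
    best = float("inf")
    for size in range(1, len(nodes) // 2 + 1):
        for U in itertools.combinations(nodes, size):
            U = set(U)
            external = set().union(*(set(G.neighbors(u)) for u in U)) - U
            best = min(best, len(external) / len(U))
    return best

G = nx.random_regular_graph(3, 14, seed=0)        # a small 3-regular graph
print("largest alpha for which G is an alpha-expander:", vertex_expansion(G))
```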

Super-logarithmic cliques in dense inhomogeneous random graphs

Speaker: 

Gwen McKinley

Institution: 

MIT

Time: 

Tuesday, January 14, 2020 - 11:00am to 12:00pm

Location: 

340R

In the theory of dense graph limits, a graphon is a symmetric measurable function W from [0,1]^2 to [0,1]. Each graphon gives rise naturally to a random graph distribution, denoted G(n,W), that can be viewed as a generalization of the Erdos-Renyi random graph. Recently, Dolezal, Hladky, and Mathe gave an asymptotic formula of order log(n) for the size of the largest clique in G(n,W) when W is bounded away from 0 and 1. We show that if W is allowed to approach 1 at a finite number of points, and displays a moderate rate of growth near these points, then the clique number of G(n,W) will be of order √n almost surely. We also give a family of examples with clique number of order n^c for any c in (0,1), and some conditions under which the clique number of G(n,W) will be o(√n) or omega(√n). This talk assumes no previous knowledge of graphons.
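Sampling from G(n, W) is straightforward: draw latent uniforms U_i and connect i and j independently with probability W(U_i, U_j). The sketch below compares a greedy clique lower bound for a constant graphon with one for a graphon that equals 1 at the single point (0,0); the particular graphon, the sample size, and the greedy heuristic are illustrative choices (the greedy bound can be far from the true clique number), and none of this reproduces the analysis of the talk.

```python
import numpy as np
import networkx as nx

def sample_graphon_graph(n, W, rng):
    """Sample G(n, W): latent U_i ~ Uniform[0,1], edge {i,j} present with prob. W(U_i, U_j)."""
    U = rng.random(n)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < W(U[i], U[j]):
                G.add_edge(i, j)
    return G

def greedy_clique(G):
    """Greedy lower bound on the clique number: scan vertices by decreasing degree."""
    clique = []
    for v in sorted(G.nodes, key=G.degree, reverse=True):
        if all(G.has_edge(v, u) for u in clique):
            clique.append(v)
    return clique

rng = np.random.default_rng(0)
n = 1000
W_peak = lambda x, y: 0.5 + 0.5 * np.exp(-(x + y))        # equals 1 only at (0,0), bounded away from 0
G_flat = sample_graphon_graph(n, lambda x, y: 0.5, rng)   # Erdos-Renyi reference
G_peak = sample_graphon_graph(n, W_peak, rng)
print("greedy clique, constant W = 1/2  :", len(greedy_clique(G_flat)))
print("greedy clique, W peaked at (0,0) :", len(greedy_clique(G_peak)))
```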

On the geometry of polytopes generated by heavy-tailed random vectors

Speaker: 

Felix Krahmer

Institution: 

TU Munich

Time: 

Tuesday, December 3, 2019 - 11:00am to 11:50am

Location: 

RH 340N

 

We present recent results on the geometry of centrally-symmetric random polytopes generated by N independent copies of a random vector X. We show that under minimal assumptions on X, for N>Cn, and with high probability, the polytope contains a deterministic set that is naturally associated with the random vector - namely, the polar of a certain floating body. This solves the long-standing question of whether such a random polytope contains a canonical body. Moreover, by identifying the floating bodies associated with various random vectors we recover the estimates that have been obtained previously, and thanks to the minimal assumptions on X we derive estimates in cases that had previously been out of reach, involving random polytopes generated by heavy-tailed random vectors (e.g., when X is q-stable or when X has an unconditional structure). Finally, the structural results are used for the study of a fundamental question in compressive sensing - noise blind sparse recovery. This is joint work with the speaker's PhD student Christian Kümmerle (now at Johns Hopkins University) as well as Olivier Guédon (University of Paris-Est Marne La Vallée), Shahar Mendelson (Sorbonne University Paris), and Holger Rauhut (RWTH Aachen).
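Membership of a given point in the symmetric random polytope conv{±X_1, ..., ±X_N} reduces to a small linear program, which makes the geometry easy to probe numerically. The sketch below, with illustrative parameters and heavy-tailed Student-t rows, computes the Minkowski gauge of a few test directions; it does not compute the floating body from the paper, and the comment about the Gaussian case is only a classical heuristic up to constants.

```python
import numpy as np
from scipy.optimize import linprog

def symmetric_gauge(X, z):
    """Minkowski gauge of z with respect to the polytope conv{+-X_1, ..., +-X_N}:
    minimize sum(c_plus + c_minus) subject to X^T (c_plus - c_minus) = z, c_plus, c_minus >= 0.
    The point z lies in the polytope exactly when the optimal value is <= 1."""
    N = X.shape[0]
    A_eq = np.hstack([X.T, -X.T])                       # d x 2N equality constraints
    res = linprog(c=np.ones(2 * N), A_eq=A_eq, b_eq=z, bounds=(0, None), method="highs")
    return res.fun if res.success else np.inf

rng = np.random.default_rng(0)
n, N = 30, 300                                          # illustrative dimension and sample size

# heavy-tailed rows: independent Student-t entries with 3 degrees of freedom
X = rng.standard_t(df=3, size=(N, n))

# probe a few unit directions at radius r = 1; for standard Gaussian X the polytope
# typically contains a Euclidean ball of radius on the order of sqrt(log(N/n))
r = 1.0
for label, z in [("basis direction e_1", np.eye(n)[0]), ("random direction", rng.standard_normal(n))]:
    z = r * z / np.linalg.norm(z)
    print(f"{label}: gauge = {symmetric_gauge(X, z):.2f}  (<= 1 means z is inside)")
```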
