Asymmetry helps: Non-backtracking spectral methods for sparse matrices and tensors

Speaker: 

Yizhe Zhu

Institution: 

UCI

Time: 

Wednesday, October 18, 2023 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

The non-backtracking operator is a non-Hermitian matrix associated with an undirected graph. It has become a powerful tool in the spectral analysis of random graphs. Recently, many new results for sparse Hermitian random matrices have been proved via the corresponding non-backtracking operator and translated into statements about the original matrices through the Ihara-Bass formula. In another direction, efficient algorithms based on the non-backtracking matrix have achieved optimal sample complexity in many sparse low-rank estimation problems. I will talk about my recent work that relies on the non-backtracking operator, including the Bai-Yin law for sparse random rectangular matrices, hypergraph community detection, and tensor completion.
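As a concrete illustration (a minimal sketch of mine, not code from the talk; the function name is an invention), the non-backtracking matrix B of an undirected graph is indexed by directed edges, with B[(u,v),(x,w)] = 1 exactly when x = v and w ≠ u, i.e., the walk may continue along (v,w) but may not immediately return to u:

```python
import numpy as np

def non_backtracking_matrix(edges):
    """Non-backtracking matrix of an undirected graph.

    Rows/columns are indexed by directed edges ("darts");
    B[(u,v),(x,w)] = 1 iff x == v and w != u.
    """
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    idx = {d: i for i, d in enumerate(darts)}
    B = np.zeros((len(darts), len(darts)))
    for (u, v) in darts:
        for (x, w) in darts:
            if x == v and w != u:
                B[idx[(u, v)], idx[(x, w)]] = 1.0
    return B, darts

# Triangle: every directed edge has exactly one non-backtracking
# successor, so B is a permutation matrix with spectrum on the unit circle.
B, darts = non_backtracking_matrix([(0, 1), (1, 2), (2, 0)])
```

Note that B is genuinely non-Hermitian: following (u,v) by (v,w) is allowed even when the reverse composition is not.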

The Ramsey number r(4,t)

Speaker: 

Jacques Verstraete

Institution: 

UCSD

Time: 

Wednesday, October 11, 2023 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

In this talk, I will outline the proof that r(4,t) = t^{3-o(1)}, which solves a long-standing open problem of Erdős. The proof combines a variety of techniques and lends itself to improved bounds on various other Ramsey numbers r(F,t), where F is a fixed graph, as well as to applications in a variety of extremal and Ramsey problems.
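For very small cases, Ramsey numbers can be checked exhaustively; the following sketch (my own illustration, unrelated to the proof discussed in the talk) verifies the classical fact r(3,3) = 6 by enumerating all 2-colorings of the edges of K_5 and K_6:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring maps each edge {a,b} of K_n (stored as (a,b), a<b) to 0 or 1
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, bits)))
               for bits in product((0, 1), repeat=len(edges)))

# r(3,3) = 6: every 2-coloring of K_6 contains a monochromatic
# triangle, while K_5 admits a coloring with none (the pentagon).
```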

Set representation of sparse graphs and hypergraphs

Speaker: 

Marcelo Sales

Institution: 

UCI

Time: 

Wednesday, November 8, 2023 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

Let G be a graph on n vertices. Given an integer t, we say that a family of sets {A_x}_{x ∈ V} ⊆ 2^{[t]} is a set representation of G if xy ∈ E(G) if and only if A_x ∩ A_y = ∅. Let θ̄₁(G) be the smallest integer t for which such a representation exists. In this talk I will discuss some of the bounds on θ̄₁ for sparse graphs and related problems. Joint work with Basu, Molnar, Rödl and Schacht.
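Under the disjointness convention reconstructed above (xy ∈ E(G) iff A_x ∩ A_y = ∅; this reading of the garbled abstract, and the function name, are my assumptions), a candidate representation can be verified directly. For the 4-cycle, two labels suffice, since the two pairs of opposite vertices can each share a label:

```python
from itertools import combinations

def is_set_representation(n, edges, sets):
    # xy is an edge iff the assigned sets A_x, A_y are disjoint
    E = {frozenset(e) for e in edges}
    return all((frozenset((x, y)) in E) == sets[x].isdisjoint(sets[y])
               for x, y in combinations(range(n), 2))

# 4-cycle 0-1-2-3-0: non-adjacent (opposite) vertices share a label,
# adjacent vertices get disjoint sets, so t = 2 labels suffice.
c4_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
c4_sets = {0: {1}, 1: {2}, 2: {1}, 3: {2}}
```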

Probabilistic laws on infinite groups

Speaker: 

Gil Goffer

Institution: 

UCSD

Time: 

Wednesday, November 15, 2023 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

In various cases, a law that holds in a group with high probability must actually hold for all elements. For instance, a finite group in which the commutator law [x,y]=1 holds with probability at least 5/8 must be abelian. For infinite groups, one needs to work a bit harder to define the probability that a given law holds. One natural way is to sample a random element uniformly from the r-ball in the Cayley graph and let r tend to infinity; another is to sample elements using random walks. Amir, Blachar, Gerasimova, and Kozma asked whether a law that holds with probability 1 must actually hold globally, for all elements. In a recent joint work with Be’eri Greenfeld, we give a negative answer to their question.

In this talk I will give an introduction to probabilistic group laws and present a finitely generated group that satisfies the law x^p = 1 with probability 1, yet admits no group law that holds for all elements.
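The 5/8 threshold mentioned above can be checked by direct enumeration in small finite groups; for example, S_3 is non-abelian and its commuting probability is exactly 1/2 < 5/8 (a small sketch of mine, not from the talk):

```python
from itertools import permutations

def compose(p, q):
    # composition of permutations given as tuples: (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))  # the symmetric group S_3, |G| = 6
commuting = sum(compose(x, y) == compose(y, x) for x in G for y in G)
prob = commuting / len(G) ** 2    # fraction of commuting pairs
```

The count equals the sum of centralizer sizes (6 + 3·2 + 2·3 = 18 here), giving 18/36 = 1/2.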

Deep Generative Models in Infinite-Dimensional Spaces

Speaker: 

Gavin Kerrigan

Institution: 

UCI

Time: 

Wednesday, November 29, 2023 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

Deep generative models have seen a meteoric rise in capabilities across a wide array of domains, ranging from natural language and vision to scientific applications such as precipitation forecasting and molecular generation. However, a number of important applications focus on data which is inherently infinite-dimensional, such as time-series, solutions to partial differential equations, and audio signals. This relatively under-explored class of problems poses unique theoretical and practical challenges for generative modeling. In this talk, we will explore recent developments for infinite-dimensional generative models, with a focus on diffusion-based methodologies. 
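As a toy illustration of the diffusion paradigm in the finite-dimensional setting (my own sketch; the infinite-dimensional versions discussed in the talk require function-space noise and are beyond this snippet), the forward process gradually replaces data with Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, T=1000, beta=1e-2):
    """Variance-preserving forward noising:
    x_t = sqrt(1 - beta) * x_{t-1} + sqrt(beta) * eps."""
    x = x0.copy()
    for _ in range(T):
        x = np.sqrt(1 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x

x0 = np.full(10000, 5.0)   # a point mass far from the prior
xT = forward_diffuse(x0)
# the signal is damped by (1 - beta)^(T/2) ~ e^-5, so xT is close to N(0,1)
```

A generative model learns to reverse this noising process; sampling then runs the learned reverse dynamics starting from pure noise.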

Chemical distance for 2d critical percolation

Speaker: 

Lily Reeves

Institution: 

Caltech

Time: 

Wednesday, November 1, 2023 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

Percolation clusters induce an intrinsic graph distance called the chemical distance. Besides its mathematical appeal, the chemical distance is connected to the study of random walks on critical percolation clusters. In this talk, I will begin with a brief survey on the chemical distance. Then, I will zoom in to the progress and challenges in the 2d critical regime. A portion of this talk is based on joint work with Philippe Sosoe.
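Concretely, the chemical distance is just the graph distance computed inside the percolation configuration; here is a minimal bond-percolation sketch on the n×n grid (the setup and names are my own illustration, not from the talk):

```python
import random
from collections import deque

def chemical_distance(n, p, src, dst, seed=0):
    """BFS graph distance from src to dst using only the open bonds of a
    bond-percolation configuration on the n x n grid (None if disconnected)."""
    rng = random.Random(seed)
    status = {}
    def is_open(a, b):
        e = (min(a, b), max(a, b))
        if e not in status:                 # each bond is open with prob. p
            status[e] = rng.random() < p
        return status[e]
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        x, y = v
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nb[0] < n and 0 <= nb[1] < n and nb not in dist and is_open(v, nb):
                dist[nb] = dist[v] + 1
                q.append(nb)
    return dist.get(dst)
```

At p = 1 this reduces to the usual L1 grid distance; the interesting regime in the talk is critical p, where the chemical distance is conjectured to grow super-linearly in the Euclidean distance.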

Stochastic algorithms for quantizing neural networks

Speaker: 

Rayan Saab

Institution: 

UCSD

Time: 

Wednesday, December 6, 2023 - 2:00pm

Location: 

510R Rowland Hall

Neural networks are highly non-linear functions often parametrized by a staggering number of weights. Miniaturizing these networks and implementing them in hardware is a direction of research that is fueled by a practical need, and at the same time connects to interesting mathematical problems. For example, by quantizing, or replacing the weights of a neural network with quantized (e.g., binary) counterparts, massive savings in cost, computation time, memory, and power consumption can be attained. Of course, one wishes to attain these savings while preserving the action of the function on domains of interest. 
We discuss connections to problems in discrepancy theory, present data-driven and computationally efficient stochastic methods for quantizing the weights of already trained neural networks, and prove that our methods have favorable error guarantees under a variety of assumptions.
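A generic stochastic-rounding quantizer illustrates the flavor of such methods (this is a textbook sketch, not the speaker's algorithm; the error guarantees in the talk concern more refined, data-driven schemes):

```python
import random

def stochastic_round(w, levels, rng):
    """Round w to one of its two neighboring quantization levels, with
    probabilities chosen so the result is unbiased: E[q] = w."""
    lo = max(l for l in levels if l <= w)
    hi = min(l for l in levels if l >= w)
    if lo == hi:
        return lo
    p_hi = (w - lo) / (hi - lo)
    return hi if rng.random() < p_hi else lo

rng = random.Random(0)
levels = (-1.0, 0.0, 1.0)   # ternary weights
samples = [stochastic_round(0.3, levels, rng) for _ in range(20000)]
```

Unbiasedness is the key design choice: quantization errors across many weights tend to cancel rather than accumulate, which is where the discrepancy-theoretic viewpoint enters.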

Spectrahedral Geometry of Graph Sparsifiers

Speaker: 

Catherine Babecki

Institution: 

Caltech

Time: 

Wednesday, October 25, 2023 - 2:00pm

Location: 

510R Rowland Hall

We propose an approach to graph sparsification based on the idea of preserving the smallest k eigenvalues and eigenvectors of the Graph Laplacian. This is motivated by the fact that small eigenvalues and their associated eigenvectors tend to be more informative of the global structure and geometry of the graph than larger eigenvalues and their eigenvectors.  The set of all weighted subgraphs of a graph G that have the same first k eigenvalues (and eigenvectors) as G is the intersection of a polyhedron with a cone of positive semidefinite matrices. We discuss the geometry of these sets and deduce the natural scale of k. Various families of graphs illustrate our construction.
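For concreteness, the graph Laplacian and its spectrum are easy to compute; K_4, for instance, has Laplacian eigenvalues 0, 4, 4, 4 (a small sketch of mine, independent of the spectrahedral construction in the talk):

```python
import numpy as np
from itertools import combinations

def laplacian(n, weighted_edges):
    """Weighted graph Laplacian L = D - W."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

# complete graph K_4 with unit weights
k4 = [(u, v, 1.0) for u, v in combinations(range(4), 2)]
eigs = np.sort(np.linalg.eigvalsh(laplacian(4, k4)))
```

A sparsifier in the sense of the talk would be a reweighted subgraph whose Laplacian matches the smallest k of these eigenvalues (and eigenvectors).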


Structure preservation via the Wasserstein distance

Speaker: 

Shahar Mendelson

Institution: 

Australian National University

Time: 

Wednesday, October 4, 2023 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

Consider an isotropic measure μ on R^d (i.e., centered and with identity covariance) and let X_1, ..., X_m be independent, selected according to μ. If Γ is the random operator whose rows are X_i/√m, what does the image of the unit sphere under Γ typically look like? For example, if the extremal singular values of Γ are close to 1, then this random set is "well approximated" by a d-dimensional sphere, and vice versa. But is it possible to give a more accurate description of that set? I will show that, under minimal assumptions on μ, with high probability and uniformly in a unit vector t, each vector Γt inherits the structure of the one-dimensional marginal ⟨X, t⟩ in a strong sense. If time permits I will also present a few generalisations of this fact for an arbitrary indexing set. (Joint work with D. Bartl.)
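In the Gaussian case this is easy to simulate: for m ≫ d the singular values of Γ cluster near 1, so Γ maps the unit sphere to an approximate sphere (a quick numerical sketch of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 20, 20000
X = rng.standard_normal((m, d))   # i.i.d. isotropic rows
Gamma = X / np.sqrt(m)            # operator with rows X_i / sqrt(m)
s = np.linalg.svd(Gamma, compute_uv=False)
# extreme singular values land roughly at 1 +/- sqrt(d/m) ~ 1 +/- 0.032
```

The talk's finer question is what Γt looks like coordinate-wise for each fixed unit vector t, beyond this two-sided singular-value bound.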

Refine Dimension Reduction using Probabilistic Surrogate Models

Speaker: 

Hengrui Luo

Institution: 

Lawrence Berkeley National Laboratory and UCB

Time: 

Wednesday, May 31, 2023 - 1:00pm

Location: 

440R Rowland Hall

We introduce an efficient and robust auto-tuning framework for hyperparameter selection in dimension reduction (DR) algorithms, focusing on large-scale datasets and arbitrary performance metrics. By leveraging Bayesian optimization with a surrogate model, our approach enables efficient hyperparameter selection with multi-objective trade-offs and allows us to perform data-driven sensitivity analysis. By incorporating normalization and subsampling, the proposed framework demonstrates versatility and efficiency, as shown in applications to visualization techniques such as t-SNE and UMAP. We evaluate our results on various synthetic and real-world datasets using multiple quality metrics, providing a robust and efficient solution for hyperparameter selection in DR algorithms.
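A bare-bones version of surrogate-based hyperparameter search looks as follows (a toy 1-D Gaussian-process sketch with a UCB acquisition rule; the framework in the talk is far more elaborate, and all names here are my own):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ls ** 2))

def gp_posterior(xs, ys, grid, noise=1e-6):
    # GP regression: posterior mean and variance on a grid of candidates
    K_inv = np.linalg.inv(rbf(xs, xs) + noise * np.eye(len(xs)))
    Ks = rbf(grid, xs)
    mu = Ks @ K_inv @ ys
    var = 1.0 - np.sum((Ks @ K_inv) * Ks, axis=1)
    return mu, np.maximum(var, 1e-12)

def bayes_opt(f, iters=15, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 201)
    xs = list(rng.uniform(0.0, 1.0, 3))   # a few random initial evaluations
    ys = [f(x) for x in xs]
    for _ in range(iters):
        mu, var = gp_posterior(np.array(xs), np.array(ys), grid)
        x = grid[int(np.argmax(mu + 2.0 * np.sqrt(var)))]  # UCB acquisition
        xs.append(x)
        ys.append(f(x))
    return xs[int(np.argmax(ys))]

best = bayes_opt(lambda x: -(x - 0.7) ** 2)   # toy "quality metric" peaked at 0.7
```

The surrogate lets each expensive metric evaluation (e.g., a full t-SNE run scored by a quality measure) inform where to try the next hyperparameter value.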
