Hashing

Speaker: 

Roman Vershynin

Institution: 

UCI

Time: 

Tuesday, February 19, 2019 - 11:00am to 11:50am

Location: 

306 RH

Hashing is a technique widely used in coding theory (an area of computer science) and in cryptography. Although hashing is an interesting mathematical object, it is surprisingly little known to "mainstream" mathematicians. I will focus on one specific result on hashing, namely the Leftover Hash Lemma. We will state it as a result in extremal combinatorics, give a probabilistic proof of it, and relate it to another fundamental result in extremal combinatorics, the Sauer-Shelah Lemma.
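As a quick illustration of the object in question (not part of the abstract, and only a toy sketch under my own choice of hash family), here is a 2-universal hash family $h_{a,b}(x) = ((ax+b) \bmod p) \bmod m$ together with an empirical check that two fixed keys collide with probability about $1/m$; this collision property is the ingredient behind statements such as the Leftover Hash Lemma.

```python
# Toy sketch: a 2-universal (affine-modular) hash family and an empirical
# estimate of the collision probability of two fixed, distinct keys.
import random

p = 2_147_483_647          # a prime larger than the key universe
m = 64                     # number of hash buckets

def sample_hash():
    """Draw a random member of the family h_{a,b}(x) = ((a*x + b) % p) % m."""
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

x, y = 12345, 67890        # two fixed, distinct keys
trials = 100_000
collisions = 0
for _ in range(trials):
    h = sample_hash()
    if h(x) == h(y):
        collisions += 1

print(f"empirical collision probability: {collisions / trials:.4f}")
print(f"1/m = {1 / m:.4f}")
```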

The conjugate gradient algorithm on random matrices

Speaker: 

Tom Trogdon

Institution: 

UC Irvine

Time: 

Tuesday, April 30, 2019 - 11:00am

Location: 

RH 510M

Numerical analysis and random matrix theory have long been coupled, going back (at least) to the seminal work of Goldstine and von Neumann (1951) on the condition number of random matrices. The works of Trotter (1984) and Silverstein (1985) incorporate "numerical techniques" to assist in the analysis of random matrices. One can also consider the problem of computing distributions (e.g. Tracy-Widom) from random matrix theory. In this talk, I will discuss a different numerical analysis problem: using random matrices to analyze the runtime (or halting time) of numerical algorithms. I will focus on recent results for the conjugate gradient algorithm, including a proof that the halting time is almost deterministic.
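A minimal numerical sketch of the phenomenon (my own toy experiment, not the speaker's code): run the conjugate gradient algorithm on independently drawn sample covariance (Wishart-type) matrices and record the halting time, i.e. the number of iterations needed to reach a fixed residual tolerance. Across draws the halting time fluctuates very little.

```python
# Conjugate gradient on random positive definite matrices W = X^T X / m,
# with the halting time defined as the first iteration at which the
# residual norm drops below tol * ||b||.
import numpy as np

def cg_halting_time(A, b, tol=1e-10, max_iter=10_000):
    """Standard conjugate gradient; returns the iteration count at stopping."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    target = tol * np.linalg.norm(b)
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < target:
            return k
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return max_iter

rng = np.random.default_rng(0)
n, m = 500, 1000                      # matrix size and number of samples
times = []
for _ in range(20):
    X = rng.standard_normal((m, n))
    W = X.T @ X / m                   # positive definite, well conditioned
    b = rng.standard_normal(n)
    times.append(cg_halting_time(W, b))
print("halting times over 20 draws:", times)
```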

Phase transitions and conic geometry

Speaker: 

Joel Tropp

Institution: 

Caltech

Time: 

Thursday, January 31, 2019 - 12:00pm to 12:50pm

Location: 

RH 340N

A phase transition is a sharp change in the behavior of a mathematical model as one of its parameters varies. This talk describes a striking phase transition that takes place in conic geometry. First, we will explain how to assign a notion of "dimension" to a convex cone. Then we will use this notion of "dimension" to see that two randomly oriented convex cones share a ray with probability close to zero or close to one. This fact has implications for many questions in signal processing. In particular, it yields a complete solution of the "compressed sensing" problem about when we can recover a sparse signal from random measurements. Based on joint works with Dennis Amelunxen, Martin Lotz, Mike McCoy, and Samet Oymak.
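To make the compressed-sensing consequence concrete, here is a small Monte Carlo sketch (my own toy experiment, not the speaker's): recover an $s$-sparse vector from $m$ Gaussian measurements by $\ell_1$ minimization (basis pursuit) and watch the empirical success probability jump from near 0 to near 1 as $m$ crosses a threshold, which the conic "statistical dimension" in the talk locates exactly.

```python
# Basis pursuit: min ||x||_1 subject to A x = y, solved as a linear program
# via the split x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(1)
n, s, trials = 100, 10, 20
for m in (20, 40, 60, 80):
    successes = 0
    for _ in range(trials):
        x0 = np.zeros(n)
        x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        A = rng.standard_normal((m, n))
        x_hat = basis_pursuit(A, A @ x0)
        successes += np.linalg.norm(x_hat - x0) < 1e-4
    print(f"m = {m:3d}: success rate {successes / trials:.2f}")
```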

Dynamic embedding of motifs into networks

Speaker: 

Hanbaek Lyu

Institution: 

UCLA

Time: 

Tuesday, December 11, 2018 - 11:30am to 12:20pm

Location: 

RH 306

We study various structural properties of a large network $G$ by randomly embedding a small motif $F$ of our choice. We propose two randomized algorithms to effectively sample such a random embedding by a Markov chain Monte Carlo method. Time averages of various functionals of these chains give structural information on $G$ via conditional homomorphism densities and density profiles of its filtration. We show that such observables are stable with respect to various notions of network distance. Our efficient sampling algorithms and stability inequalities allow us to use our techniques for hypothesis testing on, and hierarchical clustering of, large networks. We demonstrate this by analyzing both synthetic and real-world network data. Joint work with Facundo Memoli and David Sivakoff.
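A minimal sketch of the general idea (a simplified Glauber-type chain of my own, not the authors' exact algorithms): resample one coordinate of an embedding of a path motif $F$ into $G$ at a time, and use time averages of the chain to estimate a conditional homomorphism density.

```python
# Glauber-type MCMC over homomorphisms of a k-path into a random graph G;
# the time average of "endpoints are adjacent" estimates a conditional
# homomorphism density.
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 4                          # |V(G)| and path-motif length
adj = rng.random((n, n)) < 0.05        # Erdos-Renyi G(n, 0.05)
adj = np.triu(adj, 1)
adj = adj | adj.T
nbrs = [np.flatnonzero(adj[v]) for v in range(n)]

def find_initial_path():
    """Greedy search for one homomorphism of the k-path into G."""
    while True:
        x = [rng.integers(n)]
        for _ in range(k - 1):
            cand = nbrs[x[-1]]
            if len(cand) == 0:
                break
            x.append(rng.choice(cand))
        if len(x) == k:
            return x

x = find_initial_path()
closed, steps = 0, 50_000
for _ in range(steps):
    i = rng.integers(k)                # coordinate to resample
    allowed = np.arange(n)
    if i > 0:
        allowed = allowed[adj[x[i - 1], allowed]]
    if i < k - 1:
        allowed = allowed[adj[x[i + 1], allowed]]
    x[i] = rng.choice(allowed)         # nonempty: the current value is allowed
    closed += adj[x[0], x[-1]]
print("estimated conditional density of closing edge:", closed / steps)
```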

Random matrix perturbations

Speaker: 

Sean O'Rourke

Institution: 

University of Colorado, Boulder

Time: 

Tuesday, May 14, 2019 - 11:00am to 11:50am

Location: 

RH 510M

Computing the eigenvalues and eigenvectors of a large matrix is a basic task in high dimensional data analysis with many applications in computer science and statistics. In practice, however, data is often perturbed by noise. A natural question is the following: How much does a small perturbation to the matrix change the eigenvalues and eigenvectors? In this talk, I will consider the case where the perturbation is random. I will discuss perturbation results for the eigenvalues and eigenvectors as well as for the singular values and singular vectors.  This talk is based on joint work with Van Vu, Ke Wang, and Philip Matchett Wood.
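A quick numerical illustration of the basic question (my own toy experiment): perturb a symmetric matrix by small random symmetric noise and compare spectra. Weyl's inequality bounds every eigenvalue shift by the operator norm of the noise, while eigenvector stability also depends on spectral gaps.

```python
# Compare eigenvalues and leading eigenvectors of A and A + E for small
# random symmetric noise E.
import numpy as np

rng = np.random.default_rng(3)
n = 300
A = rng.standard_normal((n, n))
A = (A + A.T) / np.sqrt(2 * n)                                  # signal
E = 1e-3 * rng.standard_normal((n, n))
E = (E + E.T) / 2                                               # noise

eig_A, vec_A = np.linalg.eigh(A)
eig_AE, vec_AE = np.linalg.eigh(A + E)

print("operator norm of E:        ", np.linalg.norm(E, 2))
print("largest eigenvalue shift:  ", np.max(np.abs(eig_A - eig_AE)))
# Alignment of the top eigenvectors (1 means the direction is unchanged).
print("top eigenvector alignment: ", abs(vec_A[:, -1] @ vec_AE[:, -1]))
```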

Several open problems on the Hamming cube II.

Speaker: 

Paata Ivanisvili

Institution: 

UCI

Time: 

Tuesday, February 5, 2019 - 11:00am to 12:00pm

Location: 

306 RH

The Hamming cube of dimension n consists of vectors of length n with coordinates +1 or -1. Real-valued functions on the Hamming cube, equipped with the uniform counting measure, can be expressed as Fourier--Walsh series, i.e., as multivariate polynomials in n variables taking values +1 or -1. The degree of the function is the degree of its multivariate polynomial representation. Pick a function whose Lp norm is 1. How large can the Lp norm of the discrete (graph) Laplacian of the function be if its degree is at most d, i.e., it lives on ``low frequencies''? And how small can it be if the function lives on high frequencies, i.e., all terms of degree lower than d vanish? I will sketch some proofs based on joint works (some in progress) with Alexandros Eskenazis.
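A small computational sketch of the objects involved (my own toy check; the talk concerns general $L^p$ norms, while below only the easy $p=2$ case is verified via Parseval): expand a function on $\{-1,1\}^n$ in the Fourier--Walsh basis, apply the discrete Laplacian, which multiplies the coefficient of the character $w_S$ by $|S|$, and compare norms. For a function of degree at most $d$, $\|Lf\|_2 \le d \|f\|_2$.

```python
# Fourier--Walsh expansion and discrete Laplacian on the Hamming cube.
import itertools
import numpy as np

n, d = 6, 2
cube = np.array(list(itertools.product([-1, 1], repeat=n)))     # 2^n points
subsets = [S for r in range(n + 1)
           for S in itertools.combinations(range(n), r)]
walsh = np.array([cube[:, list(S)].prod(axis=1) for S in subsets])  # w_S

rng = np.random.default_rng(4)
# Random function of degree at most d: coefficients only on small subsets.
coeffs = np.array([rng.standard_normal() if len(S) <= d else 0.0
                   for S in subsets])
f = coeffs @ walsh                                               # values on the cube
Lf = (coeffs * np.array([len(S) for S in subsets])) @ walsh      # Laplacian

norm = lambda g: np.sqrt(np.mean(g ** 2))                        # L^2, uniform measure
print("||Lf||_2 / ||f||_2 =", norm(Lf) / norm(f), "<= d =", d)
```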

Longest increasing and decreasing subsequences

Speaker: 

Richard Stanley

Institution: 

University of Miami

Time: 

Tuesday, January 22, 2019 - 11:00am to 12:00pm

Location: 

RH 306

An increasing subsequence of a permutation $a_1, a_2,\dots, a_n$ of $1,2,\dots, n$ is a subsequence $b_1,b_2,\dots,b_k$ satisfying $b_1<b_2<\cdots<b_k$, and similarly for decreasing subsequences. The earliest result in this area is due to Erd\H{o}s and Szekeres in 1935: any permutation of $1,2,\dots,pq+1$ has an increasing subsequence of length $p+1$ or a decreasing subsequence of length $q+1$. This result turns out to be closely connected to the RSK algorithm from the representation theory of the symmetric group. A lot of work has been devoted to the length $k$ of the longest increasing subsequence of a permutation of $1,2,\dots,n$, beginning with Ulam's question of determining the average value of this number over all such permutations, and culminating with the result of Baik-Deift-Johansson on the limiting distribution of this length. There are many interesting analogues of longest increasing subsequences, such as longest alternating subsequences, i.e., subsequences $b_1,b_2,\dots, b_k$ of a permutation $a_1, a_2,\dots, a_n$ satisfying $b_1>b_2<b_3>b_4<\cdots$. The limiting distribution of the length of the longest alternating subsequence of a random permutation behaves very differently from the length of the longest increasing subsequence. We will survey these highlights from the theory of increasing and decreasing subsequences.
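A brief computational aside (a standard algorithm, not part of the abstract): the length of the longest increasing subsequence can be computed in $O(n \log n)$ time by patience sorting, which is closely tied to RSK (it returns the length of the first row of the RSK tableau). Averaging over random permutations, the mean length grows like $2\sqrt{n}$, with Tracy--Widom fluctuations by Baik--Deift--Johansson.

```python
# Patience sorting for the longest increasing subsequence, plus a Monte Carlo
# check that the average length over random permutations is about 2*sqrt(n).
import bisect
import numpy as np

def lis_length(perm):
    """'piles' holds the smallest possible tail of an increasing
    subsequence of each length seen so far."""
    piles = []
    for value in perm:
        i = bisect.bisect_left(piles, value)
        if i == len(piles):
            piles.append(value)
        else:
            piles[i] = value
    return len(piles)

rng = np.random.default_rng(5)
n, trials = 10_000, 200
lengths = [lis_length(rng.permutation(n)) for _ in range(trials)]
print("average LIS length:", np.mean(lengths))
print("2 * sqrt(n)       :", 2 * np.sqrt(n))
```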

Random matrix point processes via stochastic processes

Speaker: 

Elliot Paquette

Institution: 

The Ohio State University

Time: 

Thursday, January 10, 2019 - 12:00pm to 1:00pm

Location: 

RH 340P

In 2007, Virág and Valkó introduced the Brownian carousel, a dynamical system that describes the eigenvalues of a canonical class of random matrices. This dynamical system can be reduced to a diffusion, the stochastic sine equation, a beautiful probabilistic object requiring no random matrix theory to understand. Many features of the limiting eigenvalue point process, the Sine--beta process, can then be studied via this stochastic process. We will sketch how this stochastic process is connected to the eigenvalues of a random matrix and sketch an approach to two questions about the stochastic sine equation: deviations for the Sine--beta counting function and a functional central limit theorem.

Based on joint works with Diane Holcomb, Gaultier Lambert, Bálint Vető, and Bálint Virág.

Stochastically modeled reaction networks: positive recurrence and mixing times

Speaker: 

Jinsu Kim

Institution: 

UCI

Time: 

Tuesday, December 4, 2018 - 11:00am to 12:00pm

Location: 

306 RH

A reaction network is a graphical configuration that describes interactions between species (molecules). If the abundances of the species in the network are small, then the randomness inherent in the molecular interactions is important to the system dynamics, and the abundances are modeled stochastically, in a jump-by-jump fashion, by a continuous-time Markov chain. One of the challenging issues facing researchers who study biological systems is the often extraordinarily complicated structure of their interaction networks. Thus, characterizing the network structures that induce characteristic behaviors of the system dynamics is one of the major open questions in this literature.

In this talk, I will provide an analytic approach for finding a class of reaction networks whose associated Markov process has a stationary distribution. Moreover, I will discuss the rate of convergence of the process to its stationary distribution in terms of the mixing time.
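A minimal simulation sketch of the modeling framework (my own toy example, not from the talk): Gillespie's stochastic simulation algorithm for the birth-death network $\emptyset \to S$ (rate $\kappa_1$) and $S \to \emptyset$ (rate $\kappa_2 \cdot$ count), whose continuous-time Markov chain is positive recurrent with a Poisson stationary distribution of mean $\kappa_1/\kappa_2$.

```python
# Gillespie simulation of a birth-death reaction network; the time-averaged
# copy number approaches the stationary Poisson mean kappa1 / kappa2.
import numpy as np

rng = np.random.default_rng(6)
kappa1, kappa2 = 10.0, 1.0
count, t, t_end = 0, 0.0, 1_000.0
occupation = {}                                   # time spent in each state

while t < t_end:
    rates = np.array([kappa1, kappa2 * count])    # birth rate, death rate
    total = rates.sum()
    dt = rng.exponential(1.0 / total)             # time to next reaction
    occupation[count] = occupation.get(count, 0.0) + dt
    t += dt
    if rng.random() < rates[0] / total:           # choose which reaction fires
        count += 1
    else:
        count -= 1

mean_count = sum(s * w for s, w in occupation.items()) / t_end
print("time-averaged copy number:", mean_count,
      "(stationary mean =", kappa1 / kappa2, ")")
```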

On the smallest singular value of unstructured heavy-tailed matrices

Speaker: 

Galyna Livshyts

Institution: 

Georgia Tech

Time: 

Tuesday, March 5, 2019 - 11:00am to 12:00pm

Location: 

RH 306

In this talk we discuss questions related to the invertibility of random matrices and estimates for the smallest singular value. We outline the main results: an optimal small-ball behavior for the smallest singular value of square matrices under mild assumptions, and an essentially optimal small-ball behavior for heavy-tailed rectangular random matrices. We make several remarks and outline some examples. We then explain the relation between such estimates and net constructions, and outline our main result regarding the existence of a net around the sphere with good properties. If time permits, we outline two more implications of this result.
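A small Monte Carlo sketch of the quantity being estimated (my own experiment, with a heavy-tailed distribution chosen for illustration): sample the smallest singular value of an $n \times n$ matrix with i.i.d. Student-t entries and look at its small-ball behavior on the natural scale $n^{-1/2}$.

```python
# Empirical small-ball probabilities for the smallest singular value of
# square random matrices with heavy-tailed (Student-t, 3 d.o.f.) entries.
import numpy as np

rng = np.random.default_rng(7)
n, trials = 200, 200
smin = np.array([np.linalg.svd(rng.standard_t(3, size=(n, n)),
                               compute_uv=False)[-1]
                 for _ in range(trials)])

for eps in (0.5, 0.1, 0.02):
    prob = np.mean(smin * np.sqrt(n) <= eps)
    print(f"P(s_min <= {eps} / sqrt(n)) ~ {prob:.3f}")
```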
