Speaker: Eric Liu

Institution: SDSU and UCI

Time: Tuesday, May 28, 2024 - 3:00pm to 4:00pm

Location: RH 440R

The application of Tensor Train (TT) decomposition to machine learning models offers a promising approach to the challenges of model size and computational complexity. By factorizing high-dimensional weight tensors into chains of smaller, more manageable tensor cores, TT decomposition allows significant reductions in model size while maintaining performance. This presentation will explore how TT decomposition can be used effectively in different types of models.
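To make the idea concrete, the following is a minimal sketch (not taken from the talk) of the standard TT-SVD procedure applied to a fully connected layer's weight matrix: the matrix is reshaped into a higher-order tensor and factorized into a chain of small cores by sequential truncated SVDs. The function names (tt_svd, tt_to_full), the reshaping, and the maximum rank are illustrative assumptions; the achievable compression and accuracy depend on the chosen TT ranks and on how compressible the weights actually are.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Factor a d-way tensor into Tensor Train cores via sequential truncated SVDs."""
    shape = tensor.shape
    cores, rank_prev = [], 1
    unfolding = tensor.reshape(rank_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        rank_next = min(max_rank, len(S))
        # the truncated left factor becomes the k-th TT core
        cores.append(U[:, :rank_next].reshape(rank_prev, shape[k], rank_next))
        # carry the remaining factor forward and expose the next mode
        unfolding = (S[:rank_next, None] * Vt[:rank_next]).reshape(
            rank_next * shape[k + 1], -1)
        rank_prev = rank_next
    cores.append(unfolding.reshape(rank_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor (only used to check the approximation)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape([c.shape[1] for c in cores])

# Example: compress a dense 256 x 1024 weight matrix by reshaping it to an 8^6 tensor.
# A random matrix is not very compressible; trained weight matrices typically allow
# much lower error at the same rank.
W = np.random.randn(256, 1024)
W_tensor = W.reshape(8, 8, 8, 8, 8, 8)
cores = tt_svd(W_tensor, max_rank=16)
tt_params = sum(c.size for c in cores)
rel_err = np.linalg.norm(tt_to_full(cores) - W_tensor) / np.linalg.norm(W_tensor)
print(f"parameters: {W.size} -> {tt_params}, relative error {rel_err:.3f}")
```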

TT decomposition is applied differently in recurrent models, Convolutional Neural Networks (CNNs), and Binary Neural Networks (BNNs). In recurrent models such as Long Short-Term Memory (LSTM) networks, the large weight matrices are factorized into smaller, manageable tensor cores, reducing both the number of parameters and the computational load (a sketch of such a TT-factorized layer follows below). For CNNs, TT decomposition targets the convolutional layers, representing the convolutional filters as tensor cores so that spatial structure is preserved while the parameter count is significantly reduced. In BNNs, TT decomposition is combined with weight binarization, yielding extremely compact models that retain the information needed for accurate predictions even on hardware with minimal compute and memory.
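As an illustration of how a factorized weight matrix can be used at inference time, below is a minimal sketch, in the spirit of TT fully connected and TT-LSTM layers, of a matrix-vector product computed directly against the TT cores, without ever reconstructing the dense weight matrix. The core layout (r_{k-1}, m_k, n_k, r_k), the helper name tt_linear, and the dimensions and ranks in the usage example are assumptions chosen for illustration, not details from the talk.

```python
import numpy as np

def tt_linear(x, cores):
    """
    Compute y = x @ W for a weight matrix W stored in TT-matrix format.

    x        : (batch, m_1 * m_2 * ... * m_d) input batch
    cores[k] : (r_{k-1}, m_k, n_k, r_k) TT core, with r_0 = r_d = 1
    returns  : (batch, n_1 * n_2 * ... * n_d)
    """
    batch = x.shape[0]
    m_dims = [c.shape[1] for c in cores]
    # carry a rank axis between steps: res has shape (batch', r, m_k, ..., m_d)
    res = x.reshape(batch, 1, *m_dims)
    for core in cores:
        n_k, r_next = core.shape[2], core.shape[3]
        rest = res.shape[3:]
        # contract the current rank axis and input mode with the core
        res = np.einsum('brm...,rmns->bns...', res, core)
        # fold the produced output mode into the leading (batch-like) axis
        res = res.reshape(res.shape[0] * n_k, r_next, *rest)
    return res.reshape(batch, -1)

# Example: an LSTM-style 1024 -> 1024 input-to-gates projection with TT-rank 8.
m_dims, n_dims = [4, 4, 8, 8], [4, 4, 8, 8]      # 4*4*8*8 = 1024 on both sides
ranks = [1, 8, 8, 8, 1]
rng = np.random.default_rng(0)
cores = [rng.standard_normal((ranks[k], m_dims[k], n_dims[k], ranks[k + 1])) * 0.1
         for k in range(4)]
x = rng.standard_normal((32, int(np.prod(m_dims))))
y = tt_linear(x, cores)                          # shape (32, 1024)
tt_params = sum(c.size for c in cores)
dense_params = int(np.prod(m_dims)) * int(np.prod(n_dims))
print(f"TT parameters: {tt_params} vs dense: {dense_params}")
```

Because the dense matrix is never materialized, the memory footprint of the layer, and for modest ranks its arithmetic cost as well, scales with the sizes of the cores rather than with the full m x n weight matrix.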

The primary aim of this presentation is to explore the theoretical foundations and practical applications of TT decomposition, demonstrating how the technique reduces the size and computational cost of various machine learning models. The findings suggest that TT decomposition can greatly enhance model efficiency and scalability, making it a valuable tool for a wide range of applications.