

Colloquia

Linear and Nonlinear Low-Rank Approximations in Theory and in Practice

Speaker: Dr. Alex Gittens
International Computer Science Institute (ICSI)
Department of Statistics and AMPLab
University of California, Berkeley

May 5, 2016 - 4:00 p.m. to 5:00 p.m.
Location: Fischbach Room, Folsom Library
Hosted By: Dr. Bulent Yener (x6907)

Abstract:

Low-rank approximation is a ubiquitous instrument in the toolkit of machine learning researchers and data analysts. This talk surveys recent results on the theory and practice of low-rank approximation for several problems. First, we provide guarantees on the performance of the Nyström method and related low-rank approximation techniques used in large-scale kernel learning. Next, we provide similar guarantees for randomized feature map approximations used in large-scale polynomial kernel learning. An empirical investigation of two previous approaches to this problem suggests a simple algorithmic tweak that leads to state-of-the-art performance on multi-class classification problems. Recently popularized methods for feature learning in natural language processing can also be fruitfully interpreted through the lens of nonlinear low-rank factorizations: we explain how at least one such method cleanly maps a multitude of semantic relationships onto simple algebraic operations, and we apply this method successfully to the apparently unrelated problem of multi-label learning. Finally, we characterize the difference in running times between low-rank factorizations implemented in the popular Apache Spark cluster computing framework and the same factorizations implemented in C+MPI on HPC platforms: we perform scaling experiments on up to 1600 Cray XC40 nodes, identify the sources of overhead in Spark, and describe the tuning used to obtain high performance.
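As a concrete illustration of the first topic, the following is a minimal sketch of a Nyström-style kernel approximation: sample m landmark columns of the kernel matrix K and approximate K ≈ C pinv(W) C^T, where C is the n x m block of sampled columns and W is the m x m landmark block. The function names, the RBF kernel choice, and the uniform column sampling are illustrative assumptions for this sketch, not the speaker's implementation.

    # Minimal sketch of a Nystrom low-rank kernel approximation.
    # Assumptions (not from the talk): RBF kernel, uniform landmark
    # sampling, and these function names.
    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        # Gaussian (RBF) kernel matrix between rows of X and rows of Y.
        sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq_dists)

    def nystrom_approximation(X, m, gamma=1.0, seed=0):
        # Approximate the n x n kernel matrix K(X, X) as C @ pinv(W) @ C.T
        # using m uniformly sampled landmark points.
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=m, replace=False)
        C = rbf_kernel(X, X[idx], gamma)    # n x m block of K
        W = C[idx]                          # m x m landmark block
        return C @ np.linalg.pinv(W) @ C.T  # rank-at-most-m approximation

    # Usage: compare the approximation against the exact kernel matrix.
    X = np.random.default_rng(1).standard_normal((500, 10))
    K = rbf_kernel(X, X)
    K_hat = nystrom_approximation(X, m=50)
    print(np.linalg.norm(K - K_hat) / np.linalg.norm(K))

The appeal of this family of methods, and the subject of the guarantees mentioned above, is that the landmark block costs O(nm) kernel evaluations rather than the O(n^2) needed for the full matrix, while the quality of the approximation depends on how the m columns are sampled.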

Bio:

Alex Gittens obtained his PhD in Applied Mathematics from Caltech in 2013 for his work on applications of random matrix theory to numerical linear algebra. He then joined the machine learning group at eBay Research Labs to work on the semantic search problem and randomized approaches to efficient large-scale kernel learning. Alex is currently a postdoctoral fellow at the International Computer Science Institute, a visiting postdoctoral fellow in the Department of Statistics at UC Berkeley, and a member of the AMPLab. His research interests lie at the intersection of large-scale machine learning, high-dimensional probability and statistics, and numerical linear algebra.

Last updated: May 2, 2016


