

Colloquia

Why (Some) Nonlinear Embeddings Capture Compositionality Linearly

Speaker: Dr. Alex Gittens
International Computer Science Institute, Department of Statistics and AMPLab, University of California, Berkeley

January 26, 2016 - 4:00 p.m. to 5:00 p.m.
Location: Troy 2018
Hosted By: Dr. Bulent Yener (x6907)

Abstract:

Dimensionality reduction methods have been used to represent words with vectors in NLP applications since at least the introduction of latent semantic indexing in the late 1980s, but the word embeddings developed in the past several years have exhibited a robust ability to map semantics, in a surprisingly straightforward manner, onto simple linear algebraic operations. These embeddings are trained on co-occurrence statistics and intuitively justified by appeal to the distributional hypothesis of Harris and Firth, but they are typically presented in an ad hoc, algorithmic manner. We consider the canonical skip-gram Word2vec embedding, arguably the best known of these recent word embeddings, and establish a corresponding generative model that maps the composition of words onto the addition of their embeddings. Because words can be replaced more generally by arbitrary symbols, and because of natural connections between Word2vec and the classical RC(M) association model for contingency tables, Bayesian PCA decompositions, Poisson matrix completion, and the Sufficient Dimensionality Reduction model of Globerson and Tishby, our results are meaningful in a broad context.
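
To make the "simple linear algebraic operations" mentioned above concrete, the following is a minimal sketch (not the speaker's code or model) of training a skip-gram Word2vec embedding and querying it with the standard analogy arithmetic. It assumes the gensim library (version 4 or later); the toy corpus and hyperparameter values are illustrative only, and meaningful analogy results would require a large real corpus.

    # Minimal sketch: skip-gram Word2vec plus the usual vector-addition analogy test.
    # Assumes gensim >= 4.0; corpus and hyperparameters are placeholders for illustration.
    from gensim.models import Word2Vec

    # Tiny stand-in corpus; a real experiment would use a large text collection.
    corpus = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["the", "man", "walks", "in", "the", "city"],
        ["the", "woman", "walks", "in", "the", "city"],
    ]

    model = Word2Vec(
        sentences=corpus,
        vector_size=50,   # embedding dimension
        window=2,         # context window for co-occurrence statistics
        min_count=1,
        sg=1,             # sg=1 selects the skip-gram training objective
        epochs=200,
    )

    # Compositionality as addition of embeddings: king - man + woman ~ queen
    # (with this toy corpus the output only demonstrates the operation, not the semantics).
    print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
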

Bio:

Alex Gittens obtained his PhD in Applied Mathematics from Caltech in 2013 for his work on the applications of random matrix theory to numerical linear algebra. He then joined the machine learning group at eBay Research Labs to work on the semantic search problem and randomized approaches to efficient large-scale kernel learning. Alex is currently a postdoctoral fellow at the International Computer Science Institute, a visiting postdoctoral fellow at the Department of Statistics, UCB, and a member of the AMPLab. His research interests lie at and around the intersection of large-scale machine learning, high-dimensional probability and statistics, and numerical linear algebra.

Last updated: January 14, 2016
