# CSCI 6971/4971 Large Scale Matrix Computations and Machine Learning, Spring 2019

## Overview

Modern machine learning routinely deals with millions of points in high-dimensional spaces. Classical linear algebra and optimization algorithms can be prohibitively costly in such applications, as they aim at machine precision and/or scale super-linearly in the size of the input data. Randomization can be used to bring the costs of machine learning algorithms closer to linear in the size of the input data; this is done by sacrificing, in a principled manner, computational accuracy for increased speed. This course surveys modern randomized algorithms and their applications to machine learning, with the goal of providing a solid foundation for the use of randomization in large-scale machine learning.
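To make the accuracy-for-speed tradeoff concrete, here is a minimal sketch-and-solve least squares example (an illustrative sketch in NumPy, not course code; all problem sizes and variable names are my own assumptions). The full problem is compressed with a random Gaussian sketch, and the small compressed problem is solved exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tall least squares problem: n >> d.
n, d = 20_000, 50
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Exact solution on the full data.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketch-and-solve: compress to s << n rows with a random Gaussian
# sketch, then solve the small problem exactly. (A dense Gaussian
# sketch is used here only for clarity; in practice fast sketches,
# e.g. subsampled randomized Hadamard or sparse sketches, are what
# make this cheaper than the exact solve.)
s = 500
S = rng.standard_normal((s, n)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# The sketched solution trades a principled amount of accuracy for speed.
rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(f"relative error of sketched solution: {rel_err:.3f}")
```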

Topics covered will include time-accuracy tradeoffs, stochastic first-order and second-order methods, applications of low-rank approximation, approximate kernel learning, distributed optimization, and hyperparameter optimization.

## Course Information

The syllabus is available as an archival PDF and is more authoritative than this website.

**Course Text**: Lecture notes (the ones you scribe for yourself).

**Grading Criteria**:

- Homeworks, 50%
- In-class Pop Quizzes, 20%
- Project, 30%

Students are expected to have writing supplies on hand in each class to complete the in-class pop quizzes. If you are an athlete, or are otherwise unable to attend every class, make alternative arrangements with the instructor within the first two weeks of the course.

Letter grades will be computed from the semester average. The cutoffs for A, B, C, and D grades will be at most 90%, 80%, 70%, and 60%, respectively; they may be lowered at the instructor's discretion.

## Topics and Schedule

- Lecture 1, Thursday 1/10: course logistics, large-scale considerations in ML, supervised and unsupervised learning, approximate nearest neighbors, hinge-loss linear classification
- Lecture 2, Monday 1/14: risk minimization, empirical risk minimization, least squares loss and ordinary least squares regression. Probability recap: joint distributions, expectations, conditional and marginal distributions.
- Lecture 3, Thursday 1/17: regularized empirical risk minimization, l2- and l1-regularization, the lasso, logistic regression (a small ridge regression sketch follows this list).
- TBA
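
As a minimal illustration of the regularized empirical risk minimization topic from Lecture 3 (my own sketch, not course material), ridge regression adds an l2 penalty to the least squares empirical risk and admits the closed form implemented below.

```python
import numpy as np

def ridge_regression(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||x||^2 over x.

    Closed form: x = (A^T A + lam * I)^{-1} A^T b.
    """
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)
x = ridge_regression(A, b, lam=1.0)
print(f"residual norm: {np.linalg.norm(A @ x - b):.3f}")
```

The penalty weight `lam` trades fit against the size of the solution; the lasso replaces the l2 penalty with an l1 penalty, which has no closed form and is solved iteratively.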

## Homeworks

All assignments must be typed (preferably in LaTeX) and submitted via email by the start of class (defined as the first 15 minutes).
*Late assignments will be penalized and accepted at the instructor's discretion.*

- Self-assessment, due Monday 1/14 (in class; you may handwrite the answers)

## Supplementary Materials

- TBA