Location/Time: F 12:10pm - 2:00pm, 829 Seeley W. Mudd Building
Instructor: Vasileios Kalantzis
Contact: vk2599(at)columbia.edu
TA: Rushin Bhatt [rsb2213(at)columbia.edu]
This course investigates advanced methods in large-scale matrix computations and their applications on modern and emerging computing hardware (e.g., post-von Neumann computers). Topics include randomized and asynchronous algorithms, straggler- and fault-tolerant methods, high-performance GPU/MPI implementations, and in-memory computing paradigms such as analog and quantum computing.
The minimum requirements for the course are basic concepts of linear algebra and programming. Knowledge of and experience with matrix computations and machine learning algorithms will be helpful. We will rely most heavily on linear algebra kernels/algorithms, but we will also learn concepts related to high-performance computing and post-von Neumann computer architectures. The course will involve rigorous theoretical analyses and some programming (practical implementation and applications).
Assignments are to be submitted through Canvas. You may discuss the problems with your classmates and work through them collaboratively, but the write-up you submit must be your own individual work. The preferred format is a single PDF, preferably typewritten (using LaTeX, Markdown, or some other mathematical typesetting tool). In general, late assignments will not receive credit.
| Week | Title | Topics |
|---|---|---|
| 1 | Introduction & Motivation | High-Performance Computing, von Neumann computer model, accelerators and in-memory computing, performance metrics and the memory bottleneck |
| 2-3 | Randomized Matrix Algorithms | Randomized SVD and PCA (see the sketch below), randomized butterfly transformation, probabilistic matrix inversion, random variables and Monte Carlo estimation |
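
As a preview of the Week 2-3 material, here is a minimal sketch of a randomized SVD in the style of Halko, Martinsson, and Tropp. This is illustrative only, not course-provided code: the function name, the oversampling parameter `p`, and the test matrix are all assumptions made for the example.

```python
# Minimal randomized SVD sketch (Gaussian range finder with oversampling).
# Illustrative only; names and parameters are chosen for this example.
import numpy as np

def randomized_svd(A, k, p=10, seed=None):
    """Approximate rank-k SVD of A using a random Gaussian sketch.

    p is an oversampling parameter: sampling k + p directions makes the
    computed basis capture the dominant subspace with high probability.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sketch the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega                       # (m, k+p) sample of range(A)
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sample
    # Project A onto the sampled subspace and solve the small SVD exactly.
    B = Q.T @ A                         # (k+p, n), much smaller than A
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat                       # lift left singular vectors back up
    return U[:, :k], s[:k], Vt[:k, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Test on a matrix that is exactly rank 5.
    A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 300))
    U, s, Vt = randomized_svd(A, k=5)
    print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```

The key idea, which recurs throughout the randomized-algorithms weeks, is that a small random sketch of a large matrix preserves its dominant subspace with high probability, so the expensive decomposition only needs to be computed on a much smaller projected matrix.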
Any opinions, statements, or conclusions expressed in this course material do not necessarily reflect the views of the acknowledged funding sponsors or Columbia University.