Course Description
Generative models are widely used across many subfields of AI and machine learning. Recent advances in parameterizing these models with deep neural networks, combined with progress in stochastic optimization methods, have enabled scalable modeling of complex, high-dimensional data, including images, text, and speech. In this course, we will study the probabilistic foundations and learning algorithms for deep generative models, including variational autoencoders, generative adversarial networks, autoregressive models, and normalizing flow models. The course will also discuss application areas that have benefited from deep generative models, including computer vision, speech and natural language processing, graph mining, and reinforcement learning.
FAQ
What are the prerequisites?
Basic knowledge of machine learning from at least one of CS 221, 228, 229, or 230.
Basic knowledge of probability and calculus: students will work with computational and mathematical models.
Proficiency in a programming language, preferably Python.
Can I audit or sit in?
In general, we are very open to guests sitting in if you are a member of the Stanford community (registered student, staff, or faculty). Out of courtesy, we would appreciate it if you first email us or talk to the instructor after the first class you attend. If the class is too full and we run out of space, we ask that you allow registered students to attend.
Is there a textbook for this course?
We offer our own self-contained notes for this course. While there is no required textbook, we recommend "Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville; the online version is available for free here.