Schedule

Updated lecture slides will be posted here shortly before each lecture. For ease of reading, we have color-coded the lecture category titles in blue, discussion sections (and final project poster session) in yellow, and the midterm exam in red. Note that the schedule is subject to change as the quarter progresses.

Date | Description | Course Materials | Events | Deadlines
04/04 Lecture 1: Introduction
Computer vision overview
Course overview
Course logistics
[slides 1] [slides 2]
——— Deep Learning Basics
04/06 Lecture 2: Image Classification with Linear Classifiers
The data-driven approach
K-nearest neighbor
Linear Classifiers
Algebraic / Visual / Geometric viewpoints
SVM and Softmax loss
[slides]
Image Classification Problem
Linear Classification
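A minimal NumPy sketch of the two losses covered in Lecture 2 (multiclass SVM hinge loss and softmax cross-entropy) for a linear classifier; the names W, X, y and the function signatures here are illustrative assumptions, not the assignment's API.

import numpy as np

def svm_loss(W, X, y, delta=1.0):
    """Multiclass SVM (hinge) loss. W: (D, C) weights, X: (N, D) examples, y: (N,) labels."""
    scores = X @ W                                   # (N, C) class scores
    correct = scores[np.arange(len(y)), y][:, None]  # score of each true class
    margins = np.maximum(0, scores - correct + delta)
    margins[np.arange(len(y)), y] = 0                # the true class contributes no margin
    return margins.sum() / len(y)

def softmax_loss(W, X, y):
    """Softmax cross-entropy loss for the same linear classifier."""
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)      # shift for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(y)), y]).mean()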
04/07 Python / Numpy Review Session
[Colab] [Tutorial]
1:30-2:20pm PT
Assignment 1 out
[handout] [colab]
04/11 Lecture 3: Regularization and Optimization
Regularization
Stochastic Gradient Descent
Momentum, AdaGrad, Adam
Learning rate schedules
[slides]
Optimization
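A short NumPy sketch of the parameter-update rules discussed in Lecture 3; the hyperparameter defaults and function signatures are illustrative assumptions, not the course's reference implementation.

import numpy as np

def sgd_momentum(w, dw, v, lr=1e-3, rho=0.9):
    """One SGD+momentum step: v keeps a decaying running sum of past gradients."""
    v = rho * v - lr * dw
    return w + v, v

def adam(w, dw, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step with bias correction (t is the 1-based iteration count)."""
    m = beta1 * m + (1 - beta1) * dw        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * dw ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v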
04/13 Lecture 4: Neural Networks and Backpropagation
Multi-layer Perceptron
Backpropagation
[slides]
Backprop
Linear backprop example
Suggested Readings:
  1. Why Momentum Really Works
  2. Derivatives notes
  3. Efficient backprop
  4. More backprop references: [1], [2], [3]
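To make the chain-rule mechanics of Lecture 4 concrete, here is a small forward/backward pass through one linear layer and a ReLU in NumPy, in the spirit of the linear backprop example linked above; the dummy sum-loss is an assumption chosen only to keep the gradients simple.

import numpy as np

# Forward pass: a single linear layer followed by a ReLU.
X = np.random.randn(4, 3)          # minibatch of 4 examples, 3 features
W = np.random.randn(3, 5)          # weights for 5 hidden units
h = X @ W                          # linear scores
a = np.maximum(0, h)               # ReLU
loss = a.sum()                     # dummy scalar loss, for illustration only

# Backward pass: apply the chain rule node by node, in reverse order.
da = np.ones_like(a)               # d(loss)/da
dh = da * (h > 0)                  # ReLU gate: gradient flows only where h > 0
dW = X.T @ dh                      # d(loss)/dW, same shape as W
dX = dh @ W.T                      # d(loss)/dX, same shape as X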
04/14 Backprop Review Session
[slides] [Colab]
1:30-2:20pm PT
——— Perceiving and Understanding the Visual World
04/18 Lecture 5: Image Classification with CNNs
History
Higher-level representations, image features
Convolution and pooling
[slides]
Convolutional Networks
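A naive NumPy sketch of the convolution and pooling operations introduced in Lecture 5; real CNN layers are batched, multi-channel, and vectorized, so treat this single-channel version as illustrative only.

import numpy as np

def conv2d(x, k):
    """Naive 'valid' 2D convolution (really cross-correlation, as used in CNNs)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling with a square window."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]   # crop so the window tiles evenly
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))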
04/20 Lecture 6: CNN Architectures
Batch Normalization
Transfer learning
AlexNet, VGG, GoogLeNet, ResNet
[slides]
AlexNet, VGGNet, GoogLeNet, ResNet
04/21 Final Project Overview and Guidelines
[slides]
1:30-2:20pm PT
Assignment 2 out
[handout] [colab]
Assignment 1 due
04/24 Project proposal due
04/25 Lecture 7: Training Neural Networks
Activation functions
Data processing
Weight initialization
Hyperparameter tuning
Data augmentation
[slides]
Neural Networks, Parts 1, 2, 3
Suggested Readings:
  1. Stochastic Gradient Descent Tricks
  2. Efficient Backprop
  3. Practical Recommendations for Gradient-based Training
  4. Deep Learning, Nature 2015
  5. An Overview of Gradient Descent Optimization Algorithms
  6. A Disciplined Approach to Neural Network Hyper-Parameters
04/27 Lecture 8: Recurrent Neural Networks
RNN, LSTM, GRU
Language modeling
Image captioning
Sequence-to-sequence
[slides]
Suggested Readings:
  1. DL book RNN chapter
  2. Understanding LSTM Networks
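A short NumPy sketch of the vanilla RNN recurrence from Lecture 8 (LSTM and GRU add gating on top of this step); the shapes and names are assumptions for illustration.

import numpy as np

def rnn_forward(xs, h0, Wxh, Whh, b):
    """Unroll a vanilla RNN. xs: (T, D) inputs, h0: (H,) initial state,
    Wxh: (D, H), Whh: (H, H), b: (H,)."""
    h, hs = h0, []
    for x in xs:                               # one step per timestep
        h = np.tanh(x @ Wxh + h @ Whh + b)     # same recurrence shared across time
        hs.append(h)
    return np.stack(hs)                        # (T, H) hidden states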
04/28 PyTorch Review Session
[Colab]
1:30-2:20pm PT
05/02 Lecture 9: Attention and Transformers
Self-Attention
Transformers
[slides]
Suggested Readings:
  1. Attention is All You Need [Original Transformers Paper]
  2. Attention? Attention! [Blog by Lilian Weng]
  3. The Illustrated Transformer [Blog by Jay Alammar]
  4. ViT: Transformers for Image Recognition [Paper] [Blog] [Video]
  5. DETR: End-to-End Object Detection with Transformers [Paper] [Blog] [Video]
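A compact NumPy sketch of single-head scaled dot-product self-attention, the core operation of Lecture 9; multi-head attention, masking, and the rest of a full Transformer block are omitted, and all names here are illustrative.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (T, D) token embeddings; Wq/Wk/Wv: (D, d) learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # (T, T) attention logits
    scores -= scores.max(axis=-1, keepdims=True)   # shift for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over the keys
    return attn @ V                                # weighted sum of values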
05/04 Lecture 10: Video Understanding
Video classification
3D CNNs
Two-stream networks
Multimodal video understanding
[slides]
05/05 RNNs & Transformers
[Colab] [slides]
1:30-2:20pm PT
05/08 Assignment 2 due
05/09 Lecture 11: Object Detection and Image Segmentation
Single-stage detectors
Two-stage detectors
Semantic/Instance/Panoptic segmentation
[slides]
FCN, R-CNN, Fast R-CNN, Faster R-CNN, YOLO
05/11 Lecture 12: Visualizing and Understanding
Feature visualization and inversion
Adversarial examples
DeepDream and style transfer
[slides]
05/12 Midterm Review Session
[slides]
1:30-2:20pm PT
05/13 Project milestone due
——— Generative and Interactive Visual Intelligence
05/16 In-Class Midterm
12:00-1:20pm
Assignment 3 out
05/18 Lecture 13: Self-supervised Learning
Pretext tasks
Contrastive learning
Multisensory supervision
[slides]
Suggested Readings:
  1. Lilian Weng Blog Post
  2. DINO: Emerging Properties in Self-Supervised Vision Transformers [Paper] [Blog] [Video]
05/23 Lecture 14: Robot Learning
Deep Reinforcement Learning
Model Learning
Robotic Manipulation
[slides]
05/25 Lecture 15: Generative Models (Guest Lecture by Dr. Ruiqi Gao from Google DeepMind)
Generative Adversarial Network
Diffusion models
Autoregressive models
[slides]
05/30 Lecture 16: 3D Vision
3D shape representations
Shape reconstruction
Neural implicit representations
[slides]
Assignment 3 due
——— Human-Centered Applications and Implications
06/01 Lecture 17: Human-Centered Artificial Intelligence
06/06 Lecture 18: Guest Lecture by Prof. Sara Beery from MIT
06/08 Project final report due
06/14 Final Project Poster Session
Time: 1:00 PM - 4:30 PM
Location: Burnham Pavilion
Session A Check-in: 12:30 PM - 1:00 PM (30 minutes)
Session A: 1:00 PM - 2:30 PM (90 minutes)
Session B Check-in: 2:30 PM - 3:00 PM (30 minutes)
Session B: 3:00 PM - 4:30 PM (90 minutes)