Theory of Markov Processes: Selected Topics with Applications in ML and GenAI

Course ID 15853

Description Markov processes are a fundamental mathematical concept with broad applications, including in emerging fields such as reinforcement learning and diffusion models. The course is structured in two parts. Part I covers the core theory of Markov processes, including discrete-time and continuous-time Markov chains as well as Markov processes with continuous state spaces, such as diffusion processes. Part II builds on this core theory and covers selected topics in the theoretical foundations of reinforcement learning and of diffusion models in generative AI.
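
As a small illustration of the kind of object studied in Part I (a sketch for flavor, not part of the course materials), the following Python snippet simulates a two-state discrete-time Markov chain with an arbitrary illustrative transition matrix P and compares the empirical state frequencies over a long run with the stationary distribution computed from the left eigenvector of P.

```python
import numpy as np

# Illustrative two-state transition matrix (an arbitrary example, not taken
# from the course materials). Row i gives the next-state distribution from i.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

rng = np.random.default_rng(0)
n_steps = 100_000
state = 0
visits = np.zeros(2)
for _ in range(n_steps):
    visits[state] += 1
    state = rng.choice(2, p=P[state])  # one step of the chain

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

print("empirical frequencies: ", visits / n_steps)
print("stationary distribution:", pi)  # (0.8, 0.2) for this P
```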

Key Topics
random processes, Markov processes, algorithm analysis, convergence analysis

Required Background Knowledge
A strong background in probability and solid general mathematical skills

Course Relevance
This course is intended mainly for master's and PhD students. Undergraduate students need my permission to enroll.

Course Goals
- Ability to understand and develop rigorous problem formulations using Markov processes
- Command of theoretical tools for analyzing Markov processes
- Understanding of the theoretical foundations and algorithm designs in RL and diffusion models
- Understanding of important theoretical problems in RL and diffusion models

Learning Resources
Canvas, selected books and papers, lecture notes

Assessment Structure
Homework: 40%
Participation: 10%
Midterm: 20%
Project: 30%

Extra Time Commitment
n/a