
Since their introduction in 2014, generative adversarial networks (GANs) have achieved state-of-the-art performance on a wide array of machine learning tasks, often outperforming standard maximum likelihood-based methods. In this seminar, we compare and contrast the GAN and maximum likelihood approaches, and provide theoretical evidence that the game-based design of GANs could lead to better generalization from training samples to unseen data than deep maximum likelihood methods such as autoregressive and flow-based models. Furthermore, we demonstrate that the divergence and distance measures commonly targeted by GANs are more suitable for learning multi-modal distributions than the KL-divergence optimized by maximum likelihood learners. We discuss several numerical results supporting our theoretical comparison of the GAN and maximum likelihood frameworks.
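The contrast between the KL-divergence and GAN-style distances can be illustrated numerically. The sketch below (not taken from the talk; the bimodal target, mode locations, and grid resolution are illustrative choices) compares the forward KL-divergence, which maximum likelihood minimizes, against the Wasserstein-1 distance, a metric targeted by Wasserstein GANs, when a unimodal model captures only one mode of a bimodal data distribution: the KL value explodes because the model assigns near-zero mass to the missed mode, while the Wasserstein distance stays moderate.

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Bimodal data distribution with modes at -3 and +3 (illustrative choice).
p = 0.5 * gaussian(x, -3.0, 0.5) + 0.5 * gaussian(x, 3.0, 0.5)
# Unimodal model that captures only the right mode.
q = gaussian(x, 3.0, 0.5)

# Discretize both densities into probability masses on the grid.
p_mass = p * dx / np.sum(p * dx)
q_mass = q * dx / np.sum(q * dx)

# Forward KL(p || q): huge wherever q puts negligible mass under a data mode.
kl = np.sum(p_mass * np.log(p_mass / np.maximum(q_mass, 1e-300)))

# Wasserstein-1 in one dimension: the integral of |CDF_p - CDF_q|.
w1 = np.sum(np.abs(np.cumsum(p_mass) - np.cumsum(q_mass))) * dx

print(f"KL(p||q) = {kl:.1f} nats, W1 = {w1:.2f}")
```

In this setup the KL-divergence is on the order of tens of nats, while the Wasserstein-1 distance is roughly 3 (half the mass displaced by a distance of 6), suggesting why a metric of the latter kind can provide a smoother training signal on multi-modal data.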

Assistant Professor at The Chinese University of Hong Kong