This is a brief tutorial on learning Energy-based Generative Models using denoising density estimators (DDEs):
Article: Learning Generative Models using Denoising Density Estimators (pdf) by S. A. Bigdeli, G. Lin, T. Portenier, L. A. Dunbar, M. Zwicker
| Learned real density during training | Learned generated density during training | Real and generated samples |
|---|---|---|
| ![]() | ![]() | ![]() |
Energy-based Generative Models use density representations of both real and generated data samples to compute the KL-divergence between them. Since the optimization only requires the gradients of the KL-divergence loss, the densities do not need to be normalized. This makes it possible to use unnormalized energy models such as DDEs, whose main benefit over other techniques is that they remain very efficient in both training and inference.
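To see concretely why the normalizers never matter, write the KL loss over generated samples and take its per-sample gradient (a sketch in our own notation, following the standard argument rather than the article's exact derivation):

```latex
D_{\mathrm{KL}}(p_g \,\|\, p_d)
  = \mathbb{E}_{x \sim p_g}\!\left[\log p_g(x) - \log p_d(x)\right]

% With unnormalized energy models f_g, f_d and p(x) = f(x)/Z:
\log p_g(x) - \log p_d(x)
  = \log f_g(x) - \log f_d(x) + \log Z_d - \log Z_g

% The partition functions are constants in x, so the per-sample
% gradient that drives the generator update never touches them:
\nabla_x\!\left[\log p_g(x) - \log p_d(x)\right]
  = \nabla_x \log f_g(x) - \nabla_x \log f_d(x)
```

Backpropagating this per-sample gradient through the generator's output is all the optimization needs, so both densities can be represented by unnormalized DDEs.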
The code example uses TensorFlow 2, NumPy, and Matplotlib (for visualization):
- egm_dde.ipynb: a Jupyter notebook with TensorFlow code to train DDEs
- energy_models.py: a Python file with the DDE class implementation (a minimal sketch of the same idea appears after this list)
- utils.py: a Python file with helper functions to generate samples from toy 2D distributions
- Geng's repository includes code for the density-estimation benchmark and generative-model training examples for MNIST and CelebA.
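For orientation, here is a minimal, self-contained sketch of training a DDE with a denoising score-matching objective in TensorFlow 2. The class name, network architecture, noise level `sigma`, and the inline toy data are illustrative assumptions of this sketch, not the API of energy_models.py or utils.py; see those files for the actual implementation.

```python
import tensorflow as tf

class DDE(tf.keras.Model):
    """Hypothetical minimal denoising density estimator: a small MLP
    f(x) whose input-gradient is trained to match the denoising
    direction, so f approximates the log of the sigma-smoothed data
    density up to an additive constant (no normalization needed)."""

    def __init__(self, hidden=64):
        super().__init__()
        self.net = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden, activation="softplus"),
            tf.keras.layers.Dense(hidden, activation="softplus"),
            tf.keras.layers.Dense(1),
        ])

    def call(self, x):
        return self.net(x)  # unnormalized log-density, shape (N, 1)


def denoising_loss(model, x, sigma=0.1):
    """Denoising score matching: push grad_x f(x + eps) toward the
    score of the Gaussian noise, -eps / sigma**2."""
    eps = tf.random.normal(tf.shape(x), stddev=sigma)
    x_noisy = x + eps
    with tf.GradientTape() as tape:
        tape.watch(x_noisy)
        energy = model(x_noisy)
    score = tape.gradient(energy, x_noisy)  # grad_x f(x_noisy)
    target = -eps / sigma**2
    return tf.reduce_mean(tf.reduce_sum((score - target) ** 2, axis=-1))


# Toy usage on a 2D Gaussian mixture (stand-in for utils.py samples).
dde = DDE()
opt = tf.keras.optimizers.Adam(1e-3)
for step in range(1000):
    centers = tf.constant([[-2.0, 0.0], [2.0, 0.0]])
    idx = tf.random.uniform([256], maxval=2, dtype=tf.int32)
    x = tf.gather(centers, idx) + 0.3 * tf.random.normal([256, 2])
    with tf.GradientTape() as tape:
        loss = denoising_loss(dde, x)
    grads = tape.gradient(loss, dde.trainable_variables)
    opt.apply_gradients(zip(grads, dde.trainable_variables))
```

With one such DDE fit to the real data and another to the current generated samples, the generator can then be trained by alternating DDE refits on its samples with gradient steps on the unnormalized KL estimate, as outlined above.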
Generated results on 32 × 32 images from the CelebA dataset using DDEs (see Geng's repository for the implementation).
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.