Deep generative models (DGMs) are neural networks with many hidden layers trained to approximate complicated, high-dimensional probability distributions.
When trained successfully, a DGM can be used to estimate the likelihood of a given sample and to generate new samples that resemble draws from the unknown data distribution.
Popular DGMs include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and auto-regressive models.
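As a concrete illustration, the following is a minimal sketch of a VAE in PyTorch showing the two uses mentioned above: estimating a likelihood for a sample (via a tractable lower bound, the ELBO) and generating new samples. The network sizes, the data dimension of 784, and the Bernoulli likelihood are illustrative assumptions, not details taken from the text.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, data_dim=784, latent_dim=16):
        super().__init__()
        # Encoder outputs mean and log-variance of q(z|x); decoder maps z back to data space.
        self.encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, data_dim))
        self.latent_dim = latent_dim

    def elbo(self, x):
        # Evidence lower bound: a tractable lower bound on log p(x).
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization trick
        recon_logits = self.decoder(z)
        # Bernoulli reconstruction log-likelihood (assumes data scaled to [0, 1]).
        log_px_z = -nn.functional.binary_cross_entropy_with_logits(
            recon_logits, x, reduction="none").sum(-1)
        # KL divergence between q(z|x) and the standard normal prior.
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1 - log_var).sum(-1)
        return log_px_z - kl

    def sample(self, n):
        # Generate new samples by decoding latent vectors drawn from the prior.
        z = torch.randn(n, self.latent_dim)
        return torch.sigmoid(self.decoder(z))

# Usage: maximize the ELBO on training data, then sample.
model = VAE()
x = torch.rand(8, 784)                 # stand-in batch of data
loss = -model.elbo(x).mean()           # training objective (negative ELBO)
new_samples = model.sample(4)          # novel samples after training
```

In this sketch the ELBO serves as a surrogate for the (intractable) exact likelihood; GANs and auto-regressive models approach the two tasks differently, with auto-regressive models giving exact likelihoods and GANs offering sampling without an explicit likelihood.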