Generative networks address the problem of estimating a data distribution, and sampling from it, by observing the data. In this talk, I will trace the evolutionary journey [and theoretical foundations] of generative networks through the Gaussian Mixture Model (GMM), the Auto-Encoder and Variational Auto-Encoder (VAE), the Generative Adversarial Network (GAN), and the score-based Latent Diffusion Model.
I will also discuss some of the problem statements we are working on at NeuroPixel.AI Labs: conditional generation and controlled manipulation of the latent space of score-based diffusion models, used in conjunction with optimization methodologies [e.g. NVIDIA TensorRT libraries, knowledge distillation] to improve inference performance. Finally, I will take the audience through a couple of demos of the solution we have been building at NeuroPixel.