Today, bestselling author David Foster provides a fascinating technical introduction to cutting-edge Generative A.I. concepts including variational autoencoders, diffusion models, contrastive learning, GANs and (my favorite!) "world models".
David:
• Wrote the O'Reilly book “Generative Deep Learning”; the 2019 first edition was a bestseller, and the second edition was released just last week.
• Is a Founding Partner of Applied Data Science Partners, a London-based consultancy specializing in end-to-end data science solutions.
• Holds a Master’s in Mathematics from the University of Cambridge and a Master’s in Management Science and Operational Research from the University of Warwick.
Today’s episode gets deep into the weeds of generative deep learning pretty much from beginning to end, so it will appeal most to technical practitioners such as data scientists and ML engineers.
In the episode, David details:
• How generative modeling differs from the discriminative modeling that dominated machine learning until just the past few months.
• The range of application areas of generative A.I.
• How autoencoders work and why variational autoencoders are particularly effective for generating content (see the minimal sketch after this list).
• What diffusion models are and how latent diffusion in particular results in photorealistic images and video (also sketched below).
• What contrastive learning is (sketched below as well).
• Why “world models” might be the most transformative concept in A.I. today.
• What transformers are, how variants of them power different classes of generative models such as BERT and GPT architectures, and how blending generative adversarial networks with transformers supercharges multi-modal models (an attention sketch rounds out the examples below).
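To give a flavor of a few of these topics, here are some minimal illustrative sketches. They are my own, written in PyTorch, not code from David’s book, and all layer sizes and hyperparameters are assumptions for illustration. First, a tiny variational autoencoder: the encoder maps an input to the mean and log-variance of a latent Gaussian, the reparameterization trick keeps sampling differentiable, and the decoder reconstructs the input from the sampled latent vector.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * epsilon
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

x = torch.rand(16, 784)  # dummy batch of flattened 28x28 images
recon, mu, logvar = TinyVAE()(x)
# Training loss = reconstruction error + KL divergence to the N(0, I) prior
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.binary_cross_entropy(recon, x, reduction="sum") + kl
```

The KL term is what keeps the latent space smooth enough to sample new points from, which is the core reason VAEs generate well where plain autoencoders do not.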
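Next, the forward (noising) half of a diffusion model, with an assumed linear beta schedule. A denoising network is trained to predict the injected noise; latent diffusion runs this same process in a compressed latent space (e.g., from a VAE encoder) rather than in pixel space, which is what makes photorealistic image and video generation tractable.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # assumed linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal retention

def noise_sample(x0, t):
    """Jump straight to timestep t of the forward process in closed form."""
    eps = torch.randn_like(x0)
    xt = alpha_bars[t].sqrt() * x0 + (1 - alpha_bars[t]).sqrt() * eps
    return xt, eps  # the denoiser's training target is to recover eps from xt

x0 = torch.randn(4, 3, 64, 64)     # dummy batch of images (or VAE latents)
xt, eps = noise_sample(x0, t=500)  # heavily noised, halfway through the schedule
```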
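Contrastive learning in one function: an InfoNCE-style loss (again my own minimal sketch; the temperature and embedding dimensions are assumptions) that pulls together the embeddings of two augmented views of the same item while every other item in the batch acts as a negative.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature     # pairwise cosine similarities
    targets = torch.arange(z1.size(0))   # matching pairs sit on the diagonal
    return F.cross_entropy(logits, targets)

z1 = torch.randn(32, 128)  # view-1 embeddings for a batch of 32 items
z2 = torch.randn(32, 128)  # view-2 embeddings of the same 32 items
loss = info_nce(z1, z2)
```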
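Finally, the scaled dot-product attention at the core of every transformer, in a minimal sketch: BERT-style architectures attend bidirectionally, while GPT-style architectures apply the causal mask shown here so each token only sees its predecessors.

```python
import torch
import torch.nn.functional as F

def attention(q, k, v, causal=False):
    # Scale dot products by sqrt(d) to keep softmax gradients well-behaved
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    if causal:  # GPT-style: mask out attention to future tokens
        mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
        scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

x = torch.randn(1, 10, 64)             # batch of 1, sequence of 10 tokens
out = attention(x, x, x, causal=True)  # causally masked self-attention
```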
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.