OpenAI has released many of the most revolutionary A.I. models of recent years, including DALL-E 2, GPT-3, and Codex. Dr. Miles Brundage has been behind the A.I. policy considerations associated with each of these transformative releases.
Miles:
• Is Head of Policy Research at OpenAI.
• Has been integral to the rollout of OpenAI’s game-changing models, including the GPT series, the DALL-E series, Codex, and CLIP.
• Previously worked as an A.I. Policy Research Fellow at the University of Oxford’s Future of Humanity Institute.
• Holds a PhD in the Human and Social Dimensions of Science and Technology from Arizona State University.
Today’s episode should be deeply interesting to technical experts and non-technical folks alike.
In this episode, Miles details:
• Considerations you should take into account when rolling out any A.I. model into production.
• The specific considerations OpenAI weighed when rolling out:
  • The GPT-3 natural-language-generation model,
  • The mind-blowing DALL-E artistic-creativity models,
  • Their software-writing Codex model, and
  • Their bewilderingly label-light image-classification model CLIP.
• The differences between the related fields of A.I. Policy, A.I. Safety, and A.I. Alignment.
• His thoughts on the risks of A.I. displacing versus augmenting humans in the coming decades.
The SuperDataScience show is available on all major podcasting platforms, on YouTube, and at SuperDataScience.com.