With its human-level performance on tasks as diverse as question-answering, translation, and arithmetic, GPT-3 is a game-changer for A.I. This week's brilliant guest, Melanie Subbiah, was a lead author of the GPT-3 paper.
GPT-3 is a natural language processing (NLP) model with 175 billion parameters that has demonstrated unprecedented "few-shot learning": given just a handful of worked examples in its prompt, it can perform the diverse tasks mentioned above (translation between languages, question-answering, three-digit arithmetic) as well as many more (discussed in the episode).
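To make "few-shot learning" concrete, here is a minimal sketch of what a few-shot prompt for three-digit addition might look like. The model infers the task purely from the examples in the prompt, with no weight updates; note that the exact prompt formats and evaluation setup in the GPT-3 paper differ, and the send_to_model helper below is hypothetical, standing in for whichever text-completion API you have access to.

```python
# A minimal sketch of a few-shot prompt for three-digit addition. The model
# sees a handful of worked examples ("shots") followed by a new problem;
# no gradient updates occur -- the examples live only in the prompt.
few_shot_prompt = """\
Q: What is 248 + 391?
A: 639

Q: What is 517 + 264?
A: 781

Q: What is 602 + 189?
A:"""

# send_to_model() is a hypothetical helper standing in for a GPT-3-style
# text-completion API; the model would be expected to continue the pattern
# with " 791".
# completion = send_to_model(few_shot_prompt, max_tokens=5)
print(few_shot_prompt)
```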
Melanie's paper sent shockwaves through the mainstream media and was recognized with an Outstanding Paper Award from NeurIPS (the most prestigious machine learning conference) in 2020.
Melanie:
• Developed GPT-3 while she worked as an A.I. engineer at OpenAI, one of the world’s leading A.I. research outfits.
• Previously worked as an A.I. engineer at Apple.
• Is now pursuing a PhD at Columbia University in the City of New York specializing in NLP.
• Holds a bachelor's in computer science from Williams College.
In this episode, Melanie details:
• What GPT-3 is.
• Why applications of GPT-3 have transformed not only the field of data science but also the broader world.
• The strengths and weaknesses of GPT-3, and how these weaknesses might be addressed with future research.
• Whether transformer-based deep learning models spell doom for creative writers.
• How to address the climate change and bias issues that cloud discussions of large natural language models.
• The machine learning tools she’s most excited about.
This episode does have technical elements that will appeal primarily to practicing data scientists, but Melanie and I put effort into explaining concepts and providing context wherever we could, so hopefully much of this fun, laugh-filled episode will be engaging and informative to anyone who’s keen to learn about the state of the art in natural language processing and A.I.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Jon’s Answers to Questions on Machine Learning
The wonderful folks at the Open Data Science Conference (ODSC) recently asked me five great questions on machine learning. I thought you might like to hear the answers too, so here you are!
Their questions were:
1. Why does your educational content focus on deep learning and on the foundational subjects underlying machine learning?
2. Would you consider deep learning to be an “advanced” data science skill, or is it approachable to newcomers/novice data scientists?
3. What open-source deep learning software is most dominant today?
4. What open-source deep learning software are you looking forward to using more?
5. Do you have a case study where you've used deep learning in practice?
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
ODSC's blog post of our Q&A is here.
SuperDataScience Podcast LIVE at MLconf NYC and ScaleUp:AI!
It's finally happening: the first-ever SuperDataScience episodes filmed with a live audience! On March 31 and April 7 in New York, you'll be able to react to guests and ask them questions in real-time. I'm excited 🕺
The first live, in-person episode will be filmed at MLconf NYC on March 31st. The guest will be Alexander Holden Miller, an engineering manager at Facebook A.I. Research who leads bleeding-edge work at mind-blowing intersections of deep reinforcement learning, natural language processing, and creative A.I.
A week later on April 7th, another live, in-person episode will be filmed at ScaleUp:AI. I'll be hosting a panel on open-source machine learning that features Hugging Face CEO Clem Delangue.
I hope to see you at one of these conferences, the first I'll be attending in over two years! Can't wait. There are more live SuperDataScience episodes planned for New York this year and hopefully it won't be long before we're recording episodes live around the world.
My Favorite Calculus Resources
It's my birthday today! In celebration, I'm delighted to be releasing the final video of my "Calculus for Machine Learning" YouTube course. The first video came out in May and now, ten months later, we're done! 🎂
From May 6th, 2021 onward, we published a new video from my "Calculus for Machine Learning" course to YouTube every Wednesday. I'm so happy that the course is now complete for you to enjoy; the playlist is here.
More detail about my broader "ML Foundations" curriculum (which also covers subject areas like Linear Algebra, Probability, Statistics, and Computer Science), along with all of the associated open-source code, is available on GitHub here.
Starting next Wednesday, we'll begin releasing videos for a new YouTube course of mine: "Probability for Machine Learning". Hope you're excited to get going on it :)
Effective Pandas
Seven-time bestselling author Matt Harrison reveals his top tips and tricks to enable you to get the most out of Pandas, the leading Python data analysis library. Enjoy!
Matt's books, all of which have been Amazon best-sellers, are:
1. Effective Pandas
2. Illustrated Guide to Learning Python 3
3. Intermediate Python
4. Learning the Pandas Library
5. Effective PyCharm
6. Machine Learning Pocket Reference
7. Pandas Cookbook (now in its second edition)
Beyond being a prolific author, Matt:
• Teaches "Exploratory Data Analysis with Python" at Stanford
• Has taught Python at big organizations like Netflix and NASA
• Has worked as a CTO and Senior Software Engineer
• Holds a degree in Computer Science from Stanford University
On top of Matt's tips for effective Pandas programming, we cover:
• How to squeeze more data into Pandas on a given machine (a minimal sketch of one common approach follows this list).
• His recommended software libraries for working with tabular data once you have too much data to fit on a single machine.
• How having a computer science education and having worked as a software engineer have been helpful in his data science career.
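For a flavor of the kind of memory-squeezing Matt discusses, here is a minimal Pandas sketch, assuming a generic CSV: downcasting numeric columns and converting repetitive string columns to the category dtype often shrinks a DataFrame's footprint considerably. The file name and the 50% cardinality threshold are illustrative, and these are common tricks rather than necessarily Matt's exact recommendations.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("events.csv")  # hypothetical file; any wide CSV works

# Downcast 64-bit numeric columns to the smallest dtype that can hold the values.
for col in df.select_dtypes(include=np.number).columns:
    kind = "integer" if pd.api.types.is_integer_dtype(df[col]) else "float"
    df[col] = pd.to_numeric(df[col], downcast=kind)

# Convert low-cardinality string columns to the memory-efficient category dtype.
for col in df.select_dtypes(include="object").columns:
    if df[col].nunique() < 0.5 * len(df):  # illustrative threshold
        df[col] = df[col].astype("category")

df.info(memory_usage="deep")  # report per-column memory after the conversions
```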
This episode will appeal primarily to practicing data scientists who are keen to learn Pandas or to deepen their existing Pandas expertise by learning from a world-leading educator on the library.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Sports Analytics and 66 Days of Data with Ken Jee
Ken Jee — sports analytics leader, originator of the ubiquitous #66daysofdata hashtag, and data-science YouTube superstar (190k subscribers) — is the guest for this week's fun and candid episode ⛳️🏌️
In addition to his YouTube content creation, Ken:
• Is Head of Data Science at Scouts Consulting Group LLC.
• Hosts the "Ken's Nearest Neighbors" podcast.
• Is Adjunct Professor at DePaul University.
• Holds a Master's in Computer Science with an AI/ML concentration.
• Is renowned for starting #66daysofdata, which has helped countless people create the habit of learning and working on data science projects every day.
Today’s episode should be broadly appealing, whether you’re already an expert data scientist or just getting started.
In this episode, Ken details:
• What sports analytics is and specific examples of how he’s made an impact on the performance of athletes and teams with it.
• Where the big opportunities lie in sports analytics in the coming years.
• His four-step process for how someone should get started in data science today.
• His favorite tools for software scripting as well as for production code development.
• How the #66daysofdata challenge can supercharge your capacity as a data scientist, whether you’re just getting started or are already an established practitioner.
Thanks to Christina, 🦾 Ben, Serg, Arafath, and Luke for great questions for Ken!
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Jon’s Deep Learning Courses
This article was originally adapted from a podcast, which you can check out here.
Sometimes, during guest interviews, I mention the existence of my deep learning book or my mathematical foundations of machine learning course.
It recently occurred to me, however, that I’ve never taken a step back to detail exactly what content I’ve published over the years and where it’s available if you’re interested in it. So, today I’m dedicating a Five-Minute Friday specifically to detailing what all of my deep learning content is and where you can get it. In next week’s episode, I’ll dig into my math for machine learning content. But, yes, for today, it’s all about deep learning.
The Statistics and Machine Learning Quests of Dr. Josh Starmer
Holy crap, it's here! Joshua Starmer, the creative genius behind the StatQuest YouTube channel (over 675k subscribers!) joins me for an epic episode on stats, ML, and his learning and communication secrets.
Dr. Starmer:
• Provides uniquely clear statistics and ML education via his StatQuest YouTube channel.
• Is Lead A.I. Educator at Grid.ai, a company founded by the creators of PyTorch Lightning that enables you to take an ML model you have on your laptop and train it seamlessly on the cloud.
• Was a researcher at the University of North Carolina at Chapel Hill for 13 years, first as a postdoc and then as an assistant professor, applying statistics to genetic data.
• Holds a PhD in Biomathematics and Computational Biology.
• Holds two bachelor's degrees, one in Computer Science and another in Music.
In this episode filled with silliness and laughs from start to finish, Josh fills us in on:
• His learning and communication secrets.
• The single tool he uses to create YouTube videos with over a million views.
• The software languages he uses daily as a data scientist.
• His forthcoming book, "The StatQuest Illustrated Guide to Machine Learning".
• Why he left his academic career.
• A question you might want to ask yourself to check in on whether you’re following the right life path.
Today’s epic episode is largely high level and so will appeal to anyone who likes to giggle while hearing from one of the most intelligent and creative minds in education on data science, machine learning, music, genetics, and the intersection of all of the above.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Thanks to Serg, Nikolay, Phil, Jonas, and Suddhasatwa for great audience questions!
The Most Popular SuperDataScience Episodes of 2021
This article was originally adapted from a podcast, which you can check out here.
2021 was my first year hosting the SuperDataScience podcast and, boy, did I ever have a blast. Filming and producing episodes for you has become the highlight of my week. So, thanks for listening — this show wouldn’t exist without you and I hope I can continue to deliver episodes you love for years and years to come.
Speaking of episodes you love, it’s now been more than 30 days since the final episode of 2021 aired. Internally at the SuperDataScience podcast, we use the 30-day mark after an episode’s release as our quantitative Key Performance Indicator for how an episode has been received by you. Episodes accrue many more listens after the 30-day mark, but using the same time point for every episode lets us compare relative episode popularity fairly.
So, you might have your own personal favorites from 2021 but let’s examine the data and see which — quantitatively speaking — were the top-performing episodes of the year.
Deep Reinforcement Learning — with Wah Loon Keng
For an intro to Deep Reinforcement Learning or to hear about the latest research and applications in the field (which is responsible for the most cutting-edge "A.I."), today's episode with Wah Loon Keng is for you.
Keng:
• Co-authored the exceptional book "Foundations of Deep Reinforcement Learning" alongside Laura Graesser.
• Co-created SLM-Lab, an open-source deep reinforcement learning framework written in Python with the PyTorch library.
• Is a Senior A.I. Engineer at AppLovin, a marketing solutions provider.
In this episode, Keng details:
• What reinforcement learning is (a toy sketch of the basic RL loop follows this list).
• A timeline of major breakthroughs in the history of Reinforcement Learning, including when and how Deep RL evolved.
• Modern industrial applications of Deep RL across robotics, logistics, and climate change.
• Limitations of Deep RL and how future research may overcome these limitations.
• The industrial robotics and A.I. applications in which Deep RL could play a leading role in the coming decades.
• What it means to be an A.I. engineer and the software tools he uses daily in that role.
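Keng gives a proper introduction to reinforcement learning in the episode and in his book, but as a toy illustration of the basic loop, an epsilon-greedy agent on a two-armed bandit might look like the sketch below. The arms, payout probabilities, and learning rate are invented for the example, and Deep RL replaces this simple value table with a neural network.

```python
import random

payout_probs = [0.3, 0.7]      # true reward rates, unknown to the agent
value_estimates = [0.0, 0.0]   # the agent's learned estimates of each arm
epsilon, learning_rate = 0.1, 0.05

for step in range(10_000):
    # Explore occasionally; otherwise exploit the best-looking arm.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: value_estimates[a])
    # Receive a reward and nudge the chosen arm's estimate toward it.
    reward = 1.0 if random.random() < payout_probs[action] else 0.0
    value_estimates[action] += learning_rate * (reward - value_estimates[action])

print(value_estimates)  # should approach [0.3, 0.7]
```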
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Daily Habit #6: Write Morning Pages
This article was originally adapted from a podcast, which you can check out here.
At the beginning of the new year, in Episode #538, I introduced the practice of habit tracking and provided you with a template habit-tracking spreadsheet. Since then, Five-Minute Fridays have largely revolved around daily habits and that theme continues today. Indeed, having covered most of my morning habits already, namely:
• Starting the day with a glass of water
• Making my bed
• Carrying out alternate-nostril breathing
• Meditating
We’ve now reached my final morning habit, which is to compose something called morning pages.
I learned about the concept of morning pages from Julia Cameron’s book The Artist’s Way. It may seem hard to believe now that I’m releasing two podcast episodes and a YouTube tutorial every single week, but five years ago I had staggeringly little creative capacity. I excelled at evaluating other people’s ideas and I could execute on ideas very well once they were passed to me, but I self-diagnosed that if I was going to flourish as a data scientist and entrepreneur, I’d need to hone my creativity.
Engineering Natural Language Models — with Lauren Zhu
Zero-shot multilingual neural machine translation, how to engineer natural language models, and why you should use PCA to choose your job are topics covered this week by the fun and brilliant Lauren Zhu.
Lauren:
• Is an ML Engineer at Glean, a Silicon Valley-based natural language understanding company that has raised $55m in venture capital.
• Prior to Glean, she worked as an ML Intern at both Apple and the autonomous vehicle subsidiary of Ford Motor Company; as a software engineering intern at Qualcomm; and as an A.I. Researcher at The University of Edinburgh.
• Holds BS and MS degrees in Computer Science from Stanford University.
• Served as a teaching assistant for some of Stanford University’s most renowned ML courses such as "Decision Making Under Uncertainty" and "Natural Language Processing with Deep Learning".
In this episode, Lauren details:
• Where to access free lectures from Stanford courses online.
• Her research on Zero-Shot Multilingual Neural Machine Translation.
• Why you should use Principal Component Analysis to choose your job (a rough sketch of the mechanics follows this list).
• The software tools she uses day-to-day at Glean to engineer natural language processing ML models into massive-scale production systems.
• Her surprisingly pleasant secret to both productivity and success.
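Lauren explains her PCA-for-choosing-a-job idea properly in the episode; as a rough sketch of the mechanics only (the offers, criteria, and ratings below are invented for the example and are not Lauren's), you could rate each job offer on a handful of criteria and project the standardized ratings onto their principal components to see which combinations of criteria actually differentiate your options.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Invented ratings (1-10) of four job offers on five criteria.
criteria = ["compensation", "learning", "mission", "team", "flexibility"]
offers = np.array([
    [9, 4, 5, 6, 3],   # Offer A
    [6, 9, 8, 7, 6],   # Offer B
    [7, 7, 6, 8, 8],   # Offer C
    [5, 8, 9, 6, 9],   # Offer D
])

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(offers))

print(scores)  # each offer's position on the top two components
print(dict(zip(criteria, pca.components_[0])))  # criteria driving the first component
```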
There are parts of this episode that will appeal especially to practicing data scientists but much of the conversation will be of interest to anyone who enjoys a laugh-filled conversation on A.I., especially if you’re keen to understand the state-of-the-art in applying ML to natural language problems.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Daily Habit #5: Meditate
This article was originally adapted from a podcast, which you can check out here.
At the beginning of the new year, in Episode #538, I introduced the practice of habit tracking and provided you with a template habit-tracking spreadsheet. Since then, Five-Minute Fridays have largely revolved around daily habits and that theme continues today with my daily habit of meditation.
If you’ve been listening to SuperDataScience episodes for more than a year, you’ll be familiar with my meditation practice already, as I detailed it back in Episodes #434 and 436 — episodes on what I called “attention-sharpening tools”. You can refer back to those episodes to hear all the specifics, but the main idea is that every single day — for thousands of consecutive days now — I go through a guided meditation session using the popular Headspace application.
How Genes Influence Behavior — with Prof. Jonathan Flint
How do genes influence behavior? This week's guest, Prof. Jonathan Flint, fills us in, with a particular focus on how machine learning is uncovering connections between genetics and psychiatric disorders like depression.
In this episode, Prof. Flint details:
• How we know that genetics plays a role in complex human behaviors, including psychiatric disorders like anxiety, depression, and schizophrenia.
• How data science and ML play a prominent role in modern genetics research and how that role will only increase in years to come.
• The open-source software libraries that he uses for data modeling.
• What it's like day-to-day for a world-class medical sciences researcher.
• A single question you can ask to help prevent someone from committing suicide.
• How the future of psychiatric treatments is likely to be shaped by massive-scale genetic sequencing and everyday consumer technologies.
Jonathan:
• Is Professor-in-Residence at the University of California, Los Angeles, specializing in Neuroscience and Genetics.
• Leads a gigantic half-billion-dollar project to sequence the genomes of hundreds of thousands of people around the world in order to better understand the genetics of depression.
• Originally trained as a psychiatrist, he established himself as a pioneer in the genetics of behavior during a thirty-year stint as a medical sciences researcher at the University of Oxford.
• Has authored over 500 peer-reviewed journal articles and his papers have been cited an absurd 50,000 times.
• Wrote a university-level textbook called "How Genes Influence Behavior", which is now in its second edition.
Today’s episode mentions a few technical data science details here and there but the episode will largely be of interest to anyone who’s keen to understand how your genes influence your behavior, whether you happen to have a data science background or not.
Thanks to Mohamad, Hank, and Serg for excellent audience questions!
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Daily Habit #4: Alternate-Nostril Breathing
This article was originally adapted from a podcast, which you can check out here.
Back in Episode #538, I kicked off the new year of Five-Minute Fridays by introducing the practice of habit-tracking, including providing you with a template habit-tracking spreadsheet. I followed that up in Episodes #540 and 544 by detailing for you my habits of starting the day with a glass of water and making my bed, respectively.
Continuing on with my morning habits, today’s episode is about alternate-nostril breathing (ANB).
ANB is often associated with yoga classes, so if you do a lot of yoga, you may have encountered this technique before. However, there’s no reason why you can’t duck into a quick ANB session for a couple of minutes at any time. I like having it as one of my morning rituals because it makes me feel centered, focused, and present; as a result, I find myself both enjoying being alive and feeling ready to tackle whatever’s going to come at me through the day. That said, if I’m feeling particularly stressed out or out of touch with the present moment, I might quickly squeeze in a few rounds of ANB at any time of day.
Scaling Data-Intensive Real-Time Applications — with Matthew Russell
This week's guest is the indefatigable Matthew Russell. An Air Force veteran and author of four data science books, Matthew is now Founder/CEO of Strongest AI, a leading tech platform for fitness.
In this episode, Matthew covers:
• The tech stack he uses to deliver data from fitness competitions to millions of spectators all over the world in real time.
• How he rapidly tests machine learning models for deployment into portable devices like the iPhone and the Apple Watch.
• Multi-objective ML functions and why they’re so widely useful in real-world applications.
• The three critical traits he looks for in anyone he hires.
• The values instilled in him by pursuing a military education.
• The key skills he wishes he’d learned earlier in his career.
A bit more on Matthew... he's:
• Founder and CEO of Strongest, the leading technology platform for global fitness events, which is growing into an application that uses ML models to make you stronger, faster, and fitter than ever before.
• Author of four books published by O'Reilly Media, including the classic "Mining the Social Web", which is now in its third edition.
• Prior to founding Strongest, served as CTO at several firms.
• Holds a BS in Computer Science from the US Air Force Academy as well as an MS in Computer Science and Machine Learning from the US Air Force Institute of Technology.
Parts of today’s episode, particularly in the first half, do get fairly technical as we dig into the open-source software stack that enables the scalable deployment of data-intensive real-time applications. That said, much of the episode will appeal to anyone who’s excited about physical fitness or commercializing A.I.
Shout out to Austin Ogilvie for introducing me to Matthew 😀
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Daily Habit #3: Make Your Bed
Back in Episode #538, I kicked off the new year of Five-Minute Fridays by introducing the practice of habit-tracking, including providing you with a template habit-tracking spreadsheet. I followed that up in Episode #540 by detailing for you my habit of starting the day with a glass of water.
So Daily Habit #1 was the meta-habit of tracking your habits. Daily Habit #2 is the morning water glass. That brings us today to my Daily Habit #3, which is another morning habit: making your bed.
Sparking A.I. Innovation — with Nicole Büttner
Looking for ideas on how to spark A.I. innovation in your organization? Nicole Büttner, the eloquent and effervescent Founder/CEO of Merantix Labs, has concrete A.I. innovation frameworks for you in this week's guest episode.
Merantix Labs is a renowned Berlin-based consultancy that enables companies to unlock the value of A.I. across all industries.
In addition to being Founder and CEO of Merantix Labs, Nicole:
• Is a member of the Management Board of Merantix Labs’ parent company Merantix, an A.I. Venture Studio that has raised $30m in funding from the likes of SoftBank Group Corp. to serially originate successful ML start-ups.
• Holds a Master's in Quantitative Economics and Finance from the University of St. Gallen, the world’s leading German-language business school.
• Was a visiting researcher in Economics at Stanford University.
In this episode, Nicole details:
• What an A.I. Venture Studio is and how she founded a thriving A.I. consultancy within it
• How to spark A.I. innovation in a company of any size
• How to effectively use the unlabelled, unbalanced data sets that abound in business
• How to engineer reusable data and software components to tackle related projects efficiently
• The three distinct types of founders she looks for when she puts together the founding team of an A.I. start-up
Today’s episode touches on a few technical details here and there but the episode will largely be of interest to anyone who’s keen to make the most of A.I. innovation in a commercial organization, whether you happen to have a deep technical background today or not.
Special shout-out to the St. Gallen Symposium (Svenja, Rolf). Starting at the 34-minute mark, Nicole and I discuss our love for the symposium, as well as how you can get free flights, accommodation, and access (the deadline to apply is Feb 1).
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Continuous Calendar for 2022
This article was originally adapted from a podcast, which you can check out here.
All right, so the past two Fridays, I had episodes for you on daily habits. We’ll continue on with that habit series next week, but I’m interrupting the series today to bring you a time-sensitive message.
Back in Episode #482, which aired in June, I provided you with an introduction to continuous calendars — a rarely used but, from my perspective, vastly superior way of viewing your upcoming deadlines relative to the much more common monthly or weekly calendars.
Data Observability — with Dr. Kevin Hu
This week's guest is the fun and wildly intelligent entrepreneur Kevin Hu, PhD. Inspired by his doctoral research at MIT, he co-founded Metaplane, a Y-Combinator-backed data observability platform.
In a bit more detail, Kevin:
• Is Co-Founder/CEO of Metaplane, a platform that observes the quality of data flows, looks for abnormalities in the data, and reports issues
• Completed a PhD in machine learning and data science at the Massachusetts Institute of Technology
In this episode, Kevin covers:
• What data observability is and how it can help teams catch data quality issues as soon as they arise and resolve their root causes more quickly (a toy sketch of one such check follows this list)
• His PhD research on automating data science systems using ML
• How he identified the problem his start-up Metaplane would solve
• His experience taking Metaplane through the Y Combinator accelerator
• Pros and cons of an academic career relative to the start-up hustle
• The surprising complexity of the software tools he uses daily as a CEO
• What he looks for in the data engineers that he hires
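Metaplane's actual approach is of course far more sophisticated than any single rule, but as a toy illustration of the kind of check a data observability tool automates (the row counts and the z-score threshold below are invented for the example), flagging a sudden deviation in a table's daily row count might look like this:

```python
import numpy as np

# Trailing daily row counts for a (hypothetical) table, plus today's count.
recent_row_counts = [10_240, 10_515, 9_980, 10_410, 10_130, 10_350, 10_290]
todays_row_count = 6_120

# Flag today's count if it sits far outside the recent distribution.
mean, std = np.mean(recent_row_counts), np.std(recent_row_counts)
z_score = (todays_row_count - mean) / std

if abs(z_score) > 3:  # illustrative threshold
    print(f"Alert: today's row count ({todays_row_count}) is {z_score:.1f} standard "
          "deviations from the trailing mean -- possible upstream data issue.")
```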
This episode gets a little technical here and there but I think Kevin and I were pretty careful to define technical concepts when they came up, so today’s episode should largely be appealing to anyone who’s keen to learn a lot from a brilliant entrepreneur, especially if you’d like to found or grow a data science start-up yourself. Enjoy!
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.