Sensational A.I. entrepreneur Husayn Kassai co-founded Onfido while an undergrad and served as its CEO for ten years, raising $200m in venture capital. Hear his tips for scaling your own A.I. firm in this week's episode.
Husayn:
• Co-founded the ML company Onfido in 2010, while he was an undergraduate student at the University of Oxford.
• Served as Onfido’s CEO for ten years, overseeing $200m in venture capital raised, the team growing to over 400 employees, and the client base growing to over 1500 firms.
• Holds a degree in economics and management from Oxford.
• Served as the full-time President of the Oxford Entrepreneurs student society, which is how I got to know him more than a decade ago.
Today’s episode is non-technical and will appeal to anyone who’s interested in hearing tips and tricks for building a billion-dollar A.I. start-up from scratch.
In the episode, Husayn details:
• Tips for deciding whether you need co-founders.
• How to choose your co-founders if you need them.
• Finding product-market fit.
• How to scale up a company.
• How to identify start-up opportunities.
• Why there’s never been a better time than now to found an A.I. startup.
• A look at his next startup, which is currently in stealth.
The SuperDataScience show is available on all major podcasting platforms, on YouTube, and at SuperDataScience.com.
Tech Startup Dramas
Recently I was hooked on three series: The Dropout (on Theranos), WeCrashed (WeWork), and Super Pumped (Uber). The latter two even feature machine learning, but all three are educational and entertaining.
Optimizing Computer Hardware with Deep Learning
The polymath Dr. Magnus Ekman joins me from NVIDIA today to explain how machine learning is used to guide *hardware* architecture design and to provide an overview of his brilliant book "Learning Deep Learning".
Magnus:
• Is a Director of Architecture at NVIDIA (he's been there 12 years!)
• Previously worked at Samsung and Sun Microsystems.
• Was co-founder/CTO of the start-up SKOUT (acquired for $55m).
• Authored the epic, 700-page "Learning Deep Learning".
• Holds a Ph.D. in computer engineering from Chalmers University of Technology and a master's in economics from Göteborg University.
Today’s episode has technical elements here and there but should largely appeal to anyone keen to hear about the latest trends in A.I., particularly deep learning, across both software and hardware.
In the episode, Magnus details:
• What hardware architects do.
• How ML can be used to optimize the design of computer hardware.
• The pedagogical approach of his exceptional deep learning book.
• Which ML users need to understand how ML models work.
• Algorithms inspired by biological evolution.
• Why Artificial General Intelligence won’t be obtained by increasing model parameters alone.
• Whether transformer models will entirely displace other deep learning architectures such as CNNs and RNNs.
Music for Deep Work
Five-Minute Friday this week is a fun one! My top music/audio recommendations for you while you "deep work" 🎶
Automating ML Model Deployment
Relative to training a machine learning model, getting it into production typically takes several times the time and effort. Dr. Doris Xin, the brilliant co-founder/CEO of Linea, has a near-magical, two-line solution.
In the episode, Doris details:
• How Linea reduces ML model deployment to two lines of Python code.
• The surprising extent of wasted computation she discovered when she analyzed over 3000 production pipelines at Google.
• Her experimental evidence that the total automation of ML model development is neither realistic nor desirable.
• What it’s like being the CEO of an exciting, early-stage tech start-up.
• Where she sees the field of data science going in the coming years and how you can prepare for it.
Today’s episode is more on the technical side, so it will likely appeal primarily to practicing data scientists, especially those who need to deploy ML models into production or are interested in doing so.
Doris:
• Is co-founder and CEO of Linea, an early start-up that dramatically simplifies the deployment of machine learning models into production.
• Her alpha users include the likes of Twitter, Lyft, and Pinterest.
• Her start-up’s mission was inspired by research she conducted as a PhD student in computer science at the University of California, Berkeley.
• Previously she worked in research and software engineering roles at Google, Microsoft, Databricks, and LinkedIn.
Collaborative, No-Code Machine Learning
Emerging tools allow real-time, highly visual collaboration on data science projects — even in ways that allow those who code and those who don't to work together. Tim Kraska fills us in on how ML models enable this.
Tim:
• Is an Associate Professor in the revered CSAIL lab at the Massachusetts Institute of Technology.
• Co-founded Einblick, a visual data computing platform that has received $6m in seed funding.
• Was previously a professor at Brown University, a visiting researcher at Google, and a postdoctoral researcher at Berkeley.
• Holds a PhD in computer science from ETH Zürich in Switzerland.
Today’s episode gets into technical aspects here and there, but will largely appeal to anyone who’s interested in hearing about the visual, collaborative future of machine learning.
In this episode, Tim details:
• How a tool like Einblick can simultaneously support folks who code as well as folks who’d like to leverage data and ML without code.
• How this dual no-code/Python environment supports visual, real-time, point-and-click collaboration on data science projects.
• The clever database and ML tricks under the hood of Einblick that enable the tool to run effectively in real time.
• How to make data models more widely available in organizations.
• How university environments like MIT’s CSAIL support long-term innovations that can be spun out to make game-changing impacts.
A.I. For Crushing Humans at Poker and Board Games
The first SuperDataScience episode filmed with a live audience! Award-winning researcher Dr. Noam Brown from Meta AI was the guest, filling us in on A.I. systems that beat the world's best at poker and other games.
We shot this episode on stage at MLconf in New York. This means that you’ll hear audience reactions in real-time and, near the end of the episode, many great questions from audience members once I opened the floor up to them.
This episode has some moments here and there that get deep into the weeds of machine learning theory, but for the most part today’s episode will appeal to anyone who’s interested in understanding the absolute cutting-edge of A.I. capabilities today.
In this episode, Noam details:
• What Meta AI (formerly Facebook AI Research) is and how it fits into Meta.
• His award-winning no-limit poker-playing algorithms.
• What game theory is and how he integrates it into his models.
• The algorithm he recently developed that can beat the world’s best players at “no-press” Diplomacy, a complex strategy board game.
• The real-world implications of his game-playing A.I. breakthroughs.
• Why he became a researcher at a big tech firm instead of academia.
Noam:
• Develops A.I. systems that can defeat the best humans at complex games that computers have hitherto been unable to succeed at.
• During his Ph.D. in computer science at Carnegie Mellon University, developed A.I. systems that defeated the top human players of no-limit poker — earning him a Science Magazine cover story.
• Also holds a master’s in robotics from Carnegie Mellon and a bachelor’s degree in math and computer science from Rutgers.
• Previously worked for DeepMind and the U.S. Federal Reserve Board.
Thanks to Alexander Holden Miller for introducing me to Noam and to Hannah Gräfin von Waldersee for introducing me to Alex!
PaLM: Google's Breakthrough Natural Language Model
This month, Google announced a large natural language model called PaLM that provides staggering results on tasks like common-sense reasoning and solving Python-coding questions. Hear all about it in today's episode!
Open-Access Publishing
This week Dr. Amy Brand, the pioneering Director of The MIT Press and executive producer of documentary films, leads discussion of the benefits of — and innovations in — open-access publishing.
In the episode, Amy details:
• What open-access means.
• Why open-access papers, books, data, and code are invaluable for data scientists and anyone else doing research and development.
• The new metadata standard she developed to resolve issues around accurate attribution of who did what for a given academic publication.
• How we can change the STEM fields to be welcoming to everyone, including historically underrepresented groups.
• What it’s like to devise and create an award-winning documentary film.
Amy:
• Leads one of the world’s most influential university presses as the Director and Publisher of the MIT Press.
• Created a new open-access business model called Direct to Open.
• Is Co-Founder of Knowledge Futures Group, a non-profit that provides technology to empower organizations to build the digital infrastructure required for open-access publishing.
• Launched MIT Press Kids, the first collaboration between a university press and a children's publisher.
• Was the executive producer of "Picture A Scientist", a documentary that was selected to premiere at the prestigious Tribeca Film Festival and was recognized with the 2021 Kavli Science Journalism Award.
• She holds a PhD in Cognitive Science from MIT.
Today’s episode is well-suited to a broad audience, not just data scientists.
AGI: The Apocalypse Machine
Jeremie Harris's work on A.I. could dramatically alter your perspective on the field of data science and the bewildering — perhaps downright frightening — impact you and A.I. could make together on the world.
Jeremie:
• Recently co-founded Mercurius, an A.I. safety company.
• Has briefed senior political and policy leaders around the world on long-term risks from A.I., including senior members of the U.K. Cabinet Office, the Canadian Cabinet, as well as the U.S. Departments of State, Homeland Security and Defense.
• Is Host of the excellent Towards Data Science podcast.
• Previously co-founded SharpestMinds, a Y Combinator-backed mentorship marketplace for data scientists.
• Proudly dropped out of his quantum mechanics PhD to found SharpestMinds.
• Holds a master's in biological physics from the University of Toronto.
In this episode, Jeremie details:
• What Artificial General Intelligence (AGI) is.
• How the development of AGI could happen in our lifetime and could present an existential risk to humans, perhaps even to all life on the planet as we know it.
• How, alternatively, if engineered properly, AGI could herald a moment called the singularity that brings with it a level of prosperity that is not even imaginable today.
• What it takes to become an A.I. safety expert yourself in order to help align AGI with benevolent human goals.
• His forthcoming book on quantum mechanics.
• Why almost nobody should do a PhD.
Today’s episode is deep and intense, but as usual it does still have a lot of laughs, and it should appeal broadly, no matter whether you’re a technical data science expert already or not.
Clem Delangue on Hugging Face and Transformers
In today's SuperDataScience episode, Hugging Face CEO Clem Delangue fills us in on how open-source transformer architectures are accelerating ML capabilities. Recorded for yesterday's ScaleUp:AI conference in NY.
How to Rock at Data Science — with Tina Huang
Can you tell I had fun filming this episode with Tina Huang, YouTube data science superstar (293k subscribers)? In it, we laugh while discussing how to get started in data science and her learning/productivity tricks.
Tina:
• Creates YouTube videos with millions of views on data science careers, learning to code, SQL, productivity, and study techniques.
• Is a data scientist at one of the world's largest tech companies (she keeps the firm anonymous so she can publish more freely).
• Previously worked at Goldman Sachs and the Ontario Institute for Cancer Research.
• Holds a Master's in Computer and Information Technology from the University of Pennsylvania and a bachelor's in Pharmacology from the University of Toronto.
In this episode, Tina details:
• Her guidance for preparing for a career in data science from scratch.
• Her five steps for consistently doing anything.
• Her strategies for learning effectively and efficiently.
• What the day-to-day is like for a data scientist at one of the world’s largest tech companies.
• The software languages she uses regularly.
• Her SQL course.
• How her science and computer science backgrounds help her as a data scientist today.
Today’s episode should be appealing to a broad audience, whether you’re thinking of getting started in data science, are already an experienced data scientist, or you’re more generally keen to pick up career and productivity tips from a light-hearted conversation.
Thanks to Serg Masís, Brindha Ganesan and Ken Jee for providing questions for Tina... in Ken's case, a very silly question indeed.
Daily Habit #8: Math or Computer Science Exercise
This article was originally adapted from a podcast, which you can check out here.
At the beginning of the new year, in Episode #538, I introduced the practice of habit tracking and provided you with a template habit-tracking spreadsheet. Then, we had a series of Five-Minute Fridays that revolved around daily habits I espouse, and that theme continues today. The habits we covered in January and February were related to my morning routine.
Starting last week, we began coverage of habits on intellectual stimulation and productivity. Specifically, last week’s habit was “reading two pages”. This week, we’re moving onward with doing a daily technical exercise; in my case, this is either a mathematics, computer science, or programming exercise.
The reason why I have this daily-technical-exercise habit is that data science is both a limitlessly broad field as well as an ever-evolving field. If we keep learning on a regular basis, we can expand our capabilities and open doors to new professional opportunities. This is one of the driving ideas behind the #66daysofdata hashtag, which — if you haven’t heard of it before — is detailed in episode #555 with Ken Jee, who originated the now-ubiquitous hashtag.
Engineering Data APIs
How you design a data API from scratch and how a data API can leverage machine learning to improve the quality of healthcare delivery are topics covered by Ribbon Health CTO Nate Fox in this week's episode.
Ribbon Health is a New York-based API platform for healthcare data that has raised $55m, including from some of the biggest names in venture capital like Andreessen Horowitz and General Catalyst.
Prior to Ribbon, Nate:
• Worked as an Analytics Engineer at the marketing start-up Unified.
• Was a Product Marketing Manager at Microsoft.
• Obtained a mechanical engineering degree from the Massachusetts Institute of Technology and an MBA from Harvard Business School.
In this episode, Nate details:
• What APIs ("application programming interfaces") are.
• How you design a data API from scratch.
• How Ribbon Health’s data API leverages machine learning models to improve the quality of healthcare delivery.
• How to ensure the uptime and reliability of APIs.
• How scientists and engineers can make a big social impact in health technology.
• His favorite tool for easily scaling up the impact of a data science model to any number of users.
• What he looks for in the data scientists he hires.
Today’s episode has some technical data science and software engineering elements here and there, but much of the conversation should be interesting to anyone who’s keen to understand how data science can play a big part in improving healthcare.
Daily Habit #7: Read Two Pages
At the beginning of the new year, in Episode #538, I introduced the practice of habit tracking and provided you with a template habit-tracking spreadsheet. Then, we had a series of Five-Minute Fridays that revolved around daily habits I espouse and that theme continues today. The habits we covered in January and February were my morning habits, specifically:
• Starting the day with a glass of water
• Making my bed
• Carrying out alternate-nostril breathing
• Meditating
• Writing morning pages
Now, we’ll continue on with habits that extend beyond just my morning with a block of habits on intellectual stimulation and productivity. Specifically, today’s habit is “reading two pages”.
GPT-3 for Natural Language Processing
With its human-level capacity on tasks as diverse as question-answering, translation, and arithmetic, GPT-3 is a game-changer for A.I. This week's brilliant guest, Melanie Subbiah, was a lead author of the GPT-3 paper.
GPT-3 is a natural language processing (NLP) model with 175 billion parameters that has demonstrated unprecedented and remarkable "few-shot learning" on the diverse tasks mentioned above (translation between languages, question-answering, performing three-digit arithmetic) as well as on many more (discussed in the episode).
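"Few-shot learning" here means the model is shown a handful of worked examples plus a new query in a single text prompt, with no gradient updates. The toy prompt-builder below illustrates the format; the arithmetic examples are my own, not taken from the GPT-3 paper.

```python
def build_few_shot_prompt(examples, query):
    """Format (input, output) example pairs and a final query as one prompt."""
    lines = [f"Q: {x}\nA: {y}" for x, y in examples]
    lines.append(f"Q: {query}\nA:")  # model is expected to continue after "A:"
    return "\n\n".join(lines)

# Two worked three-digit-addition examples, then a fresh query.
examples = [("243 + 118", "361"), ("507 + 256", "763")]
prompt = build_few_shot_prompt(examples, "312 + 489")
print(prompt)
```

The remarkable finding was that a sufficiently large model completes prompts like this correctly for many tasks it was never explicitly trained on.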
Melanie's paper sent shockwaves through the mainstream media and was recognized with an Outstanding Paper Award from NeurIPS (the most prestigious machine learning conference) in 2020.
Melanie:
• Developed GPT-3 while she worked as an A.I. engineer at OpenAI, one of the world’s leading A.I. research outfits.
• Previously worked as an A.I. engineer at Apple.
• Is now pursuing a PhD at Columbia University in the City of New York specializing in NLP.
• Holds a bachelor's in computer science from Williams College.
In this episode, Melanie details:
• What GPT-3 is.
• Why applications of GPT-3 have transformed not only the field of data science but also the broader world.
• The strengths and weaknesses of GPT-3, and how these weaknesses might be addressed with future research.
• Whether transformer-based deep learning models spell doom for creative writers.
• How to address the climate change and bias issues that cloud discussions of large natural language models.
• The machine learning tools she’s most excited about.
This episode does have technical elements that will appeal primarily to practicing data scientists, but Melanie and I put effort into explaining concepts and providing context wherever we could, so hopefully much of this fun, laugh-filled episode will be engaging and informative to anyone keen to learn about the state of the art in natural language processing and A.I.
Jon’s Answers to Questions on Machine Learning
The wonderful folks at the Open Data Science Conference (ODSC) recently asked me five great questions on machine learning. I thought you might like to hear the answers too, so here you are!
Their questions were:
1. Why does your educational content focus on deep learning and on the foundational subjects underlying machine learning?
2. Would you consider deep learning to be an “advanced” data science skill, or is it approachable to newcomers/novice data scientists?
3. What open-source deep learning software is most dominant today?
4. What open-source deep learning software are you looking forward to using more?
5. Do you have a case study where you've used deep learning in practice?
ODSC's blog post of our Q&A is here.
SuperDataScience Podcast LIVE at MLconf NYC and ScaleUp:AI!
It's finally happening: the first-ever SuperDataScience episodes filmed with a live audience! On March 31 and April 7 in New York, you'll be able to react to guests and ask them questions in real-time. I'm excited 🕺
The first live, in-person episode will be filmed at MLconf NYC on March 31st. The guest will be Alexander Holden Miller, an engineering manager at Facebook A.I. Research who leads bleeding-edge work at mind-blowing intersections of deep reinforcement learning, natural language processing, and creative A.I.
A week later on April 7th, another live, in-person episode will be filmed at ScaleUp:AI. I'll be hosting a panel on open-source machine learning that features Hugging Face CEO Clem Delangue.
I hope to see you at one of these conferences, the first I'll be attending in over two years! Can't wait. There are more live SuperDataScience episodes planned for New York this year and hopefully it won't be long before we're recording episodes live around the world.
My Favorite Calculus Resources
It's my birthday today! In celebration, I'm delighted to be releasing the final video of my "Calculus for Machine Learning" YouTube course. The first video came out in May and now, ten months later, we're done! 🎂
We published a new video from my "Calculus for Machine Learning" course to YouTube every Wednesday since May 6th, 2021. So happy that it's now complete for you to enjoy. Playlist is here.
More detail on my broader "ML Foundations" curriculum (which also covers subject areas like Linear Algebra, Probability, Statistics, and Computer Science) and all of the associated open-source code is available on GitHub here.
Starting next Wednesday, we'll begin releasing videos for a new YouTube course of mine: "Probability for Machine Learning". Hope you're excited to get going on it :)
Effective Pandas
Seven-time bestselling author Matt Harrison reveals his top tips and tricks to enable you to get the most out of Pandas, the leading Python data analysis library. Enjoy!
Matt's books, all of which have been Amazon best-sellers, are:
1. Effective Pandas
2. Illustrated Guide to Learning Python 3
3. Intermediate Python
4. Learning the Pandas Library
5. Effective PyCharm
6. Machine Learning Pocket Reference
7. Pandas Cookbook (now in its second edition)
Beyond being a prolific author, Matt:
• Teaches "Exploratory Data Analysis with Python" at Stanford
• Has taught Python at big organizations like Netflix and NASA
• Has worked as a CTO and Senior Software Engineer
• Holds a degree in Computer Science from Stanford University
On top of Matt's tips for effective Pandas programming, we cover:
• How to squeeze more data into Pandas on a given machine.
• His recommended software libraries for working with tabular data once you have more data than fits on a single machine.
• How having a computer science education and having worked as a software engineer has been helpful in his data science career.
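One common memory-saving pattern in this spirit (my sketch, not necessarily Matt's exact recipe) is to downcast numeric columns and convert low-cardinality string columns to Pandas' categorical dtype:

```python
import pandas as pd

def shrink(df):
    """Return a copy of df with smaller dtypes where it is safe to do so."""
    out = df.copy()
    for col in out.select_dtypes("integer"):
        out[col] = pd.to_numeric(out[col], downcast="integer")
    for col in out.select_dtypes("float"):
        out[col] = pd.to_numeric(out[col], downcast="float")
    for col in out.select_dtypes("object"):
        if out[col].nunique() / len(out) < 0.5:  # low cardinality
            out[col] = out[col].astype("category")
    return out

df = pd.DataFrame({
    "count": range(100_000),
    "score": [0.5] * 100_000,
    "state": ["NY", "CA"] * 50_000,
})
small = shrink(df)
print(df.memory_usage(deep=True).sum(), small.memory_usage(deep=True).sum())
```

On frames like this, the categorical conversion alone typically cuts the string column's footprint by an order of magnitude.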
This episode will appeal primarily to practicing data scientists who are keen to learn about Pandas or keen to become an even deeper expert on Pandas by learning from a world-leading educator on the library.