A popular perception, propagated by film and television, is that machines are nearly as intelligent as humans. They are not, and they will not be anytime soon. Today's episode throws cold water on "Artificial General Intelligence".
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Data Engineering for Data Scientists
Prolific data science content creator 🎯 Mark Freeman details what Data Engineering is and why it's a critically useful subject area for data scientists to be proficient in. Hear all about it in this week's episode.
Mark:
• Is a Senior Data Scientist, with a Data Engineering specialization, at Humu (a startup that has raised $100m in venture capital).
• Posts data science and software engineering tips daily on LinkedIn.
• Was previously a data scientist at Verana Health and a data analyst at the Stanford University School of Medicine.
• Also holds a Master’s in Community Health and Prevention Research from the Stanford medical school.
Today’s episode is geared toward listeners who are already in a technical role such as data scientists, data engineers, ML engineers, or software engineers — as well as to folks who’d like to grow into these kinds of roles.
In today’s episode, Mark details:
• The differences between junior, senior, and staff data scientists.
• What it takes to get promoted into more senior data science roles.
• How data engineering differs from data science.
• His top tools for data extraction, modeling, and pipeline engineering.
• His top tip for getting hired at a fast-growing VC-backed startup.
• How behavioral nudges can drastically improve workplace experiences.
• Why all data scientists should be interested in web3.
Daily Habit #10: Limit Social Media Use
This article was originally adapted from a podcast, which you can check out here.
At the beginning of the new year, in Episode #538, I introduced the practice of habit tracking and provided you with a template habit-tracking spreadsheet. Then, we had a series of Five-Minute Fridays that revolved around daily habits and we’ve been returning to this daily-habit theme periodically since.
The habits we covered in January and February were related to my morning routine. In the spring, these habit episodes have focused on productivity, and I’ve got another such productivity habit for you today.
To provide some context on the impetus behind this week’s habit, I’ve got a quote for you from the author Robert Greene, specifically from his book, Mastery: "The human that depended on focused attention for its survival now becomes the distracted scanning animal, unable to think in depth, yet unable to depend on instincts."
This suboptimal state of affairs — where our minds are endlessly flitting between stimuli — is exemplified by countless digital distractions we encounter every day, but none is quite as pernicious as the distraction brought to us by social media platforms. When using free social media platforms, you are typically the product — a product being sold to in-platform advertisers. Thus, to maximize ad revenue, these platforms are engineered to keep you seeking cheap, typically unsatisfying dopamine hits within them for as long as they can.
PyMC for Bayesian Statistics in Python
Learn how Bayesian Statistics can be more powerful and interpretable than any other data modeling approach from Dr. Thomas Wiecki, a Core Developer of PyMC — the leading Bayesian software library for Python.
Thomas:
• Has been a Core Developer of PyMC for over eight years.
• Is Co-Founder and CEO of PyMC Labs, which solves commercial problems with Bayesian data models.
• Previously, he worked as VP Data Science at Quantopian Inc.
• Holds a PhD in Computational Neuroscience from Brown University.
Today’s episode is more on the technical side so will appeal primarily to practicing data scientists.
In this episode, Thomas details:
• What Bayesian statistics is.
• Why Bayesian statistics can be more powerful and interpretable than any other data modeling approach.
• How PyMC was developed and how it trains models so efficiently.
• Case studies of large-scale Bayesian stats applied commercially.
• The extra flexibility of *hierarchical* Bayesian models.
• His top resources for learning Bayesian stats yourself.
• How to build a successful company culture.
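To give a flavor of the Bayesian updating that PyMC automates at scale, here's a minimal sketch of my own (it does not use PyMC's API): the conjugate Beta-Binomial model, where a Beta prior over a coin's probability of heads combines with observed flips to give a closed-form Beta posterior.

```python
# Conjugate Beta-Binomial updating: a Beta(a, b) prior over a coin's
# probability of heads, combined with k heads observed in n flips,
# yields a Beta(a + k, b + n - k) posterior in closed form.
def beta_binomial_update(a, b, k, n):
    """Posterior Beta parameters after observing k heads in n flips."""
    return a + k, b + (n - k)

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution: a / (a + b)."""
    return a / (a + b)

# Start from a uniform Beta(1, 1) prior, then observe 7 heads in 10 flips.
post_a, post_b = beta_binomial_update(1, 1, 7, 10)
print(post_a, post_b)                 # posterior is Beta(8, 4)
print(beta_mean(post_a, post_b))      # posterior mean ~0.667
```

Real-world models rarely have such tidy closed-form posteriors, which is exactly why sampling libraries like PyMC exist.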
OpenAI Codex
OpenAI's Codex model is derived from the famed GPT-3 and allows humans to generate working code with natural language alone. Its flexibility and capability are quite remarkable! Hear all about it in today's episode.
The State of Natural Language Processing
As the LaMDA "sentience" hubbub highlights, Natural Language Processing is perhaps the most exciting and rapidly accelerating area of Machine Learning. Hear all about NLP from the deep expert Rongyao HUANG.
(LaMDA is definitely not sentient, by the way... but it is an impressive display of state-of-the-art conversational machine capabilities.)
Rongyao:
• Is Lead Data Scientist at CB Insights, a marketing intelligence platform.
• Previously she worked as a data scientist at a number of other New York start-ups and as a quantitative research assistant at Columbia University.
• She holds a master's in research methodology and quantitative methods from Columbia University in the City of New York.
Today’s episode is more on the technical side so will appeal primarily to practicing data scientists; however, the second half of the episode does contain general sage guidance for anyone seeking to navigate career options as well as to balance personal and professional obligations.
In today’s episode, Rongyao details:
• The evolution of NLP techniques over the past decade through to the large transformer models of today.
• The practical implications of this dramatic NLP evolution.
• How the “scaling law” will impact NLP model capabilities over the coming decade.
• The major limitations of today’s NLP approaches and how we might overcome them.
• Her Bauhaus-inspired model for effective data science.
• Her pathfinding model for making effective career choices.
• Her top tips for staying sane while juggling career and family.
Live podcast recording with Hilary Mason at New York R Conference
Thanks to data science legend Hilary Mason and the engaging audience at the New York R Conference for making Friday's live-filmed episode of the SuperDataScience podcast an exhilarating and illuminating success ⚡️
Look out for Hilary's episode as #589, which will be released on July 5th.
Model Speed vs Model Accuracy
In the vast majority of real-world, commercial cases, the speed of a machine learning algorithm is more important than its accuracy. Hear why in today's Five-Minute Friday episode!
Bayesian, Frequentist, and Fiducial Statistics in Data Science
Harvard stats prof Xiao-Li MENG founded the trailblazing Harvard Data Science Review. We cover that and why BFFs (Bayesians, frequentists and fiducial statisticians) should be BFFs (best friends forever).
Xiao-Li:
• Is the Founding Editor-in-Chief of the Harvard Data Science Review, a new publication in the vein of the renowned Harvard Business Review.
• Has been a full professor in Harvard’s Dept of Statistics for 20+ years.
• Chaired the Harvard Stats Dept for 7 years.
• Was Dean of Harvard’s Grad School of Arts and Sciences for 5 years.
• Has published 200+ journal articles on statistics, machine learning, and data science, and been cited over 25,000 times.
• Holds a PhD in Statistics from — yep! — Harvard.
Today’s episode will be of interest to anyone who’s keen to better understand the biggest challenges and most fascinating applications of data science today.
In the episode, Xiao-Li details:
• What the Harvard Data Science Review is, why he founded it, and the most popular topics covered by the Review so far.
• The concept of “data minding”.
• Why there’s no “free lunch” with data — tricky trade-offs abound no matter what.
• The surprising paradoxical downside of having lots of data.
• What the Bayesian, Frequentist, and Fiducial schools of statistics are and when each of them is most useful in data science.
Collecting Valuable Data
Recently, I've been covering strategies for getting business value from machine learning. In today's episode, we dig into the most effective ways to obtain and label *commercially valuable* data.
Transforming Dentistry with A.I.
Engineer and computer scientist Dr. Wardah Inam has raised $79m in venture capital to transform dentistry with machine learning. Hear about it, as well as her tips for scaling an A.I. company, in this week's episode.
Wardah:
• Is Co-Founder/CEO of Overjet, which is transforming dentistry with ML.
• Co-founded uLink Technologies, a start-up behind A.I.-driven power grids.
• Served as Lead Product Manager at Q Bio, a healthcare A.I. start-up.
• Was a Postdoc in MIT’s renowned CSAIL (Computer Science and A.I. Lab).
• Holds an MIT PhD in electrical engineering and computer science.
Today’s episode focuses more on practical applications of ML and growing an A.I. company than getting into the nitty-gritty of ML models themselves, so it should be broadly appealing to both technically-oriented and business-oriented folks.
In the episode, Wardah details:
• How Overjet not only classifies images but quantifies dental diagnoses with computer vision, enabling models to answer questions like “how large is this cavity?”
• How natural language processing can be essential for determining the correct dental diagnosis.
• The data-labeling challenges firms like Overjet need to overcome to enable ML models to learn from noisy, real-world data.
• Her tips for building a successful A.I. business.
• What she looks for in the data scientists and software engineers she hires.
Identifying Commercial ML Problems
The importance of effectively identifying a commercial problem *before* starting data collection or machine learning model development is the focus of this week's Five-Minute Friday.
Scaling A.I. Startups Globally
Sensational A.I. entrepreneur Husayn Kassai co-founded Onfido while an undergrad and served as its CEO for ten years, raising $200m in venture capital. Hear his tips for scaling your own A.I. firm in this week's episode.
Husayn:
• Co-founded the ML company Onfido in 2010, while he was an undergraduate student at the University of Oxford.
• Served as Onfido’s CEO for ten years, overseeing $200m in venture capital raised, the team growing to over 400 employees, and the client base growing to over 1500 firms.
• Holds a degree in economics and management from Oxford.
• Served as the full-time President of the Oxford Entrepreneurs student society, which is how I got to know him more than a decade ago.
Today’s episode is non-technical and will appeal to anyone who’s interested in hearing tips and tricks for building a billion-dollar A.I. start-up from scratch.
In the episode, Husayn details:
• Tips for deciding on whether you need co-founders.
• How to choose your co-founders if you need them.
• Finding product-market fit.
• How to scale up a company.
• How to identify start-up opportunities.
• Why there’s never been a better time than now to found an A.I. startup.
• A look at his next startup, which is currently in stealth.
Optimizing Computer Hardware with Deep Learning
The polymath Dr. Magnus Ekman joins me from NVIDIA today to explain how machine learning is used to guide *hardware* architecture design and to provide an overview of his brilliant book "Learning Deep Learning".
Magnus:
• Is a Director of Architecture at NVIDIA (he's been there 12 years!)
• Previously worked at Samsung and Sun Microsystems.
• Was co-founder/CTO of the start-up SKOUT (acquired for $55m).
• Authored the epic, 700-page "Learning Deep Learning".
• Holds a Ph.D. in computer engineering from the Chalmers University of Technology and a master's in economics from Göteborg University.
Today’s episode has technical elements here and there but should largely appeal to anyone interested in the latest trends in A.I., particularly in deep learning software and hardware.
In the episode, Magnus details:
• What hardware architects do.
• How ML can be used to optimize the design of computer hardware.
• The pedagogical approach of his exceptional deep learning book.
• Which ML users need to understand how ML models work.
• Algorithms inspired by biological evolution.
• Why Artificial General Intelligence won’t be obtained by increasing model parameters alone.
• Whether transformer models will entirely displace other deep learning architectures such as CNNs and RNNs.
TEDxDrexelU on Deep Learning
I'm giving my first TED-format talk at TEDxDrexelU in Philadelphia on May 21. I'll provide a visual intro to Deep Learning and to the momentous opportunity we have to shape a bewilderingly prosperous world with A.I.
There are only 100 tickets available for sale (for $15!), but my understanding is that my talk will eventually be made available on the TED YouTube channel if you can't make it in person.
The other compelling speakers are:
• Ebony White, PhD
• Adit Gupta
• Dale Moss
• Nadia Christina Jagessar, MBA
• Dr. Nyree Dardarian
• Raja Schaar, IDSA
Event and ticket details available here.
Automating ML Model Deployment
Relative to training a machine learning model, getting it into production typically takes several times as much time and effort. Dr. Doris Xin, the brilliant co-founder/CEO of Linea, has a near-magical, two-line solution.
In the episode, Doris details:
• How Linea reduces ML model deployment to two lines of Python code.
• The surprising extent of wasted computation she discovered when she analyzed over 3000 production pipelines at Google.
• Her experimental evidence that the total automation of ML model development is neither realistic nor desirable.
• What it’s like being the CEO of an exciting, early-stage tech start-up.
• Where she sees the field of data science going in the coming years and how you can prepare for it.
Today’s episode is more on the technical side so will likely appeal primarily to practicing data scientists, especially those who deploy, or are interested in deploying, ML models into production.
Doris:
• Is co-founder and CEO of Linea, an early-stage start-up that dramatically simplifies the deployment of machine learning models into production.
• Her alpha users include the likes of Twitter, Lyft, and Pinterest.
• Her start-up’s mission was inspired by research she conducted as a PhD student in computer science at the University of California, Berkeley.
• Previously she worked in research and software engineering roles at Google, Microsoft, Databricks, and LinkedIn.
Exercises on Event Probabilities
In recent weeks, my YouTube videos have covered Probability concepts like Events, Sample Spaces, and Combinatorics. Today's video features exercises to test and cement your understanding of those concepts.
We will publish a new video from my "Probability for Machine Learning" course to YouTube every Wednesday. Playlist is here.
More detail about my broader "ML Foundations" curriculum (which also covers subject areas like Linear Algebra, Calculus, Statistics, and Computer Science) and all of the associated open-source code is available on GitHub here.
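As a small worked example of the Events and Sample Spaces concepts (my own illustration, not an exercise from the video): enumerating the sample space for two fair dice makes the probability of an event like "the dice sum to seven" a simple ratio of counts.

```python
from itertools import product
from fractions import Fraction

# Sample space for two fair six-sided dice: all 36 equally likely outcomes.
sample_space = list(product(range(1, 7), repeat=2))

# Event: the two dice sum to seven, e.g. (1, 6), (2, 5), ..., (6, 1).
event = [roll for roll in sample_space if sum(roll) == 7]

# With equally likely outcomes, P(event) = |event| / |sample space|.
p_seven = Fraction(len(event), len(sample_space))
print(p_seven)  # 1/6
```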
Collaborative, No-Code Machine Learning
Emerging tools allow real-time, highly visual collaboration on data science projects — even in ways that allow those who code and those who don't to work together. Tim Kraska fills us in on how ML models enable this.
Tim:
• Is Associate Professor in the revered CSAIL lab at the Massachusetts Institute of Technology.
• Co-founded Einblick, a visual data computing platform that has received $6m in seed funding.
• Was previously a professor at Brown University, a visiting researcher at Google, and a postdoctoral researcher at Berkeley.
• Holds a PhD in computer science from ETH Zürich in Switzerland.
Today’s episode gets into technical aspects here and there, but will largely appeal to anyone who’s interested in hearing about the visual, collaborative future of machine learning.
In this episode, Tim details:
• How a tool like Einblick can simultaneously support folks who code as well as folks who’d like to leverage data and ML without code.
• How this dual no-code/Python environment supports visual, real-time, point-and-click collaboration on data science projects.
• The clever database and ML tricks under the hood of Einblick that enable the tool to run effectively in real time.
• How to make data models more widely available in organizations.
• How university environments like MIT’s CSAIL support long-term innovations that can be spun out to make game-changing impacts.
DALL-E 2: Stunning Photorealism from Any Text Prompt
OpenAI just released their "DALL-E 2" multimodal model that defines "state of the art" A.I.: Provide it with a natural-language request for an image (even an extremely bizarre one) and it generates that image! Hear about it in today's episode, and check out this interactive post from OpenAI that demonstrates DALL-E 2's mind-boggling capabilities.
Combinatorics
Combinatorics is a field of math devoted to counting. In this week's YouTube video, we use examples with real numbers to bring Combinatorics to life and relate it to Probability Theory.
We will publish a new video from my "Probability for Machine Learning" course to YouTube every Wednesday. Playlist is here.
More detail about my broader "ML Foundations" curriculum (which also covers subject areas like Linear Algebra, Calculus, Statistics, and Computer Science) and all of the associated open-source code is available on GitHub here.
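In the spirit of the video's examples with real numbers (this one is my own, not taken from the video), here's how a count from Combinatorics feeds directly into a probability: "4 choose 2" counts the sequences of four coin flips with exactly two heads.

```python
from math import comb

# "4 choose 2": the number of distinct ways to get exactly 2 heads in 4 flips.
favorable = comb(4, 2)   # 6
total = 2 ** 4           # 16 equally likely head/tail sequences

# Counting outcomes connects Combinatorics to Probability Theory:
p_two_heads = favorable / total
print(p_two_heads)  # 0.375
```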