Microsoft’s Majorana 1 is a newly unveiled quantum computing chip that marks a major breakthrough in the quest for practical quantum computers. It’s the world’s first quantum processor built on a so-called Topological Core architecture – meaning it uses topological qubits (based on exotic Majorana particles that I’ll dig into more shortly) instead of the fragile qubits found in today’s machines. Microsoft believes this innovation could accelerate the timeline for solving real-world, industrial-scale problems with quantum computing from “decades” to just a few years.
OpenAI’s “Deep Research”: Get Days of Human Work Done in Minutes
What does Deep Research do?
In Case You Missed It in February 2025
February was another insane month on my podcast. In addition to having stunning smiles, all four guests I hosted are fascinating, highly knowledgeable experts. Today's episode features highlights of my convos with them.
The specific conversation highlights included in today's episode are:
Professional-athlete-turned-data-engineer Colleen Fotsch on how dbt simplifies data modeling and documentation.
Engineer-turned-entrepreneur Vaibhav Gupta on BAML, the new programming language he created for AI applications. He details how BAML can save you time and a considerable amount of money when calling LLM APIs.
Professor Frank Hutter on TabPFN, the first deep learning approach to become the state of the art for modeling tabular data (i.e., the structured rows and columns of data that deep learning had, until now, been feeble at modeling); a short usage sketch follows this list.
The ebullient Cal Al-Dhubaib on the keys to scaling (and selling!) a thriving data science consultancy.
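TabPFN is easy to try for yourself. Below is a minimal sketch of its scikit-learn-style interface; the package name (tabpfn) and the default constructor arguments are my assumptions based on the open-source library and may differ between releases, so treat this as an illustrative sketch rather than code from the episode.

```python
# A minimal sketch of using TabPFN as a drop-in, scikit-learn-style classifier.
# Assumes the open-source tabpfn package is installed (pip install tabpfn);
# exact constructor arguments may vary between TabPFN releases.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = TabPFNClassifier()   # a pretrained transformer: no task-specific training loop
clf.fit(X_train, y_train)  # "fit" conditions the model on the training rows
y_pred = clf.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, y_pred):.3f}")
```

The striking part of TabPFN's approach is that there is no gradient-based training on your own dataset at all: the model was pre-trained on large numbers of synthetic tabular tasks and adapts to your rows in-context at inference time.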
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Bringing Back Extinct Animals like the Woolly Mammoth and Dodo Bird
For this week’s Five-Minute Friday-style episode, I’m diving into a biotechnology story I found mind-blowing: bringing back extinct animals like the woolly mammoth and the dodo bird.
In Case You Missed It in January 2025
Happy Valentine's Day 💘! My high-calorie gift to you is today's episode, which features the best highlights from conversations I had with the (absolutely epic!) guests I hosted on my podcast in January.
The specific conversation highlights included in today's episode are:
Famed futurist Azeem Azhar on how to break your linear mindset to prepare for the exponential technological change that we are experiencing (and will experience even more rapidly in years to come).
Global quantum-computing expert Dr. Florian Neukart on practical, real-world applications of quantum computing today.
Kirill Eremenko and Hadelin de Ponteves — who have together taught over 5 million people data science — with their 12-step checklist for selecting an appropriate foundation model (e.g., large language model) for a given application.
Brooke Hopkins (former engineer at Waymo, now founder and CEO of Y Combinator-backed startup Coval) on why you should evaluate A.I. agents with reference-free metrics.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Are You The Account Executive We’re Looking For?
We’ve never done an episode like today’s… instead of covering a specific data science-related topic, I’m letting you know about a critical role that we’re hiring for on the SuperDataScience Podcast. Perhaps you are the person we’re looking for, or perhaps you know them!
The Fastest-Growing Jobs Are AI Jobs
Assessing the fastest-growing job is tricky. Job-posting data alone isn’t great, for example, because postings are often duplicated and many go unfilled. Another big issue is defining exactly what a job is: the exact same responsibilities could sit under the title “data scientist”, “data engineer” or “ML engineer”, depending on what a given company decides to call the role. So whoever’s evaluating job growth ends up bucketing groups of related roles and responsibilities into standardized job-title buckets, these days probably in a largely automated, data-driven way. If you dug into individual examples, I’m sure you’d find plenty of standardizations you disagreed with, but some kind of standardization is essential to ensure that identical roles with slightly different titles get counted as the same thing.
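To make the standardization idea concrete, here is a toy sketch (my own illustrative example, not any survey's actual methodology) of bucketing messy raw job titles into canonical titles before counting them:

```python
# A toy illustration (not any real survey's methodology) of standardizing
# messy raw job titles into canonical buckets before counting job growth.
import re
from collections import Counter

# Hypothetical mapping rules: each canonical title has a regex that raw
# titles are matched against. Order matters: the first match wins.
BUCKETS = [
    ("Machine Learning Engineer", r"\b(ml|machine learning)\b.*engineer"),
    ("Data Engineer",             r"\bdata\b.*engineer"),
    ("Data Scientist",            r"\bdata scientist\b|\bdecision scientist\b"),
]

def standardize(raw_title: str) -> str:
    title = raw_title.lower()
    for bucket, pattern in BUCKETS:
        if re.search(pattern, title):
            return bucket
    return "Other"

raw_postings = [
    "Senior Data Scientist", "Decision Scientist II", "ML Engineer",
    "Machine Learning Engineer, Recommendations", "Data Engineer (Spark)",
]
print(Counter(standardize(title) for title in raw_postings))
# e.g. Counter({'Data Scientist': 2, 'Machine Learning Engineer': 2, 'Data Engineer': 1})
```

Real pipelines typically lean on much richer signals than keyword rules (full job descriptions, embeddings, occupational taxonomies like O*NET), but the principle is the same: count standardized roles, not raw title strings.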
The Six Epochs of Intelligence Evolution
I came across these six epochs of intelligence evolution in the futurist Ray Kurzweil’s latest book, The Singularity Is Nearer. Per Kurzweil, each epoch builds on the complexity of the information processing in the one before it: the third epoch depends on the second having happened, the second depends on the first, and so on.
In Case You Missed It in December 2024
Today's "In Case You Missed It" episode... is one not to miss! Several of the most fascinating conversations I've ever had on the SuperDataScience Podcast happened in December.
The specific conversation highlights included in today's episode are:
1. The legendary Dr. Andrew Ng on why LLM cost doesn't matter for your A.I. proof of concept.
2. Building directly on Andrew's segment, CTO (and my fellow Nebula.io co-founder) Ed Donner on how to choose the right LLM for a given application.
3. Extremely intelligent and clear-spoken Dr. Eiman Ebrahimi (CEO of Protopia AI) on the future of autonomous systems and data security in our Agentic A.I. future.
4. From our 2024 recap episode, Sadie St. Lawrence's three biggest A.I. "wow" moments of the year... as well as the biggest flop of the year. (One company was behind both!)
5. Harvard/MIT humanist chaplain Greg Epstein (and bestselling author on tech in society) on the ethics of accelerating A.I. advancements. Should we, for example, consider slowing A.I. progress down?
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Continuous Calendar for 2025
Back in Episode #482, I provided a detailed introduction to continuous calendars — a calendar format that I personally find vastly superior to the standard weekly or monthly calendars. With today’s episode, we’re updating the calendar for the new year — for 2025.
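If you'd like to tinker with the format yourself, here's a minimal sketch of generating a continuous calendar in Python: one row per week, Monday through Sunday, running straight through the year with no reset at month boundaries. The layout is my own simplification and won't exactly match the downloadable calendar from the episode.

```python
# A minimal sketch of a continuous calendar: every week is one row and the
# rows flow straight through the year, rather than restarting each month.
import datetime as dt

def continuous_calendar(year: int) -> str:
    day = dt.date(year, 1, 1)
    day -= dt.timedelta(days=day.weekday())  # back up to the Monday of week one
    lines = ["Mon    Tue    Wed    Thu    Fri    Sat    Sun"]
    while day.year <= year:
        cells = []
        for _ in range(7):
            # Label the 1st of each month with its name so months stay easy to spot
            label = day.strftime("%b") if day.day == 1 else f"{day.day:2d}"
            cells.append(f"{label:<7}")
            day += dt.timedelta(days=1)
        lines.append("".join(cells).rstrip())
    return "\n".join(lines)

print(continuous_calendar(2025))
```

The appeal of the format is that the next several weeks are always visible at a glance, without the artificial break at the end of each month.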
Happy Holidays from the SuperDataScience Podcast
2024 was unquestionably the fastest-moving year yet for A.I. innovation. In particular, we witnessed the meteoric rise of generative AI from its largely proof-of-concept phase to being commercially indispensable. According to survey results, nearly two-thirds of organizations are now regularly using generative A.I. – a number that has almost doubled since a year earlier. From enhancing product development to facilitating medical breakthroughs, generative AI has become a cornerstone of innovation across industries. For those of us who practice data science hands-on, GenAI has proved itself to be near-magical at composing functional code and debugging our errors.
Indeed, as we’ll discuss in detail in next Tuesday’s episode with Sadie St. Lawrence, this year GenAI models crossed reliability and accuracy thresholds, enabling them to power independently acting AI agents, even multi-agent systems that can tackle complex tasks without human supervision. 2025 looks set to be the year agentic AI takes center stage: the next phase of A.I. transforming every industry and overhauling our way of life, and, if we get the tricky parts right, for the better for all of us on this planet.
I hope you’ve enjoyed our exploration of these developments (and much more!) in depth over the course of the year through our podcast episodes, allowing you to hear directly from leading experts and practitioners like Andrew Ng, Bernard Marr and Sol Rashidi. Our discussions have covered a wide range of topics, from the industrialization of data science processes to the ethical considerations surrounding AI implementation.
Through exploring the tricky bits like ethics and equity alongside the breathtaking technological breakthroughs, I hope that overall we’ve left you feeling optimistic about our capacity as a species to get this tech revolution right and have it benefit all of us. This holiday season, I hope you’ll also be able to sit with these positive vibes, get some time away from your screened devices and enjoy the wonder of life — including how lucky we are to be alive at this extraordinary time in history — with your loved ones.
From all of us here at the SuperDataScience Podcast, happy holidays!
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Delicate Viticultural Robotics
I’ve been excited all year about the potential for AI to revolutionize agricultural robotics and help us feed the planet with high-quality nutrition. So, I’m jazzed today to be digging into an innovative application of computer vision and robotics in agriculture, specifically in viticulture — the delicate cultivation of super-expensive grapes for making wine. And, yeah, wine may not provide the world with high-quality nutrition, but the same technologies developed for delicate wine grapes will be transferable to other plants as well.
How to Become Happier, with Dr. Nat Ware
On most metrics, it's never been a better time to be alive. And yet, many of us are unhappy. In today's episode, Dr. Nat Ware explains why we're unhappy... and, mercifully, what we can do about it!
Nat:
• Is a renowned keynote speaker; he has one TEDx talk alone that has over 2 million views on YouTube (it forms the basis of the content in today’s episode).
• Is the social-impact entrepreneur behind 180 Degrees Consulting (the world's largest consultancy for non-profits) as well as Forté (a startup that facilitates cost-free reskilling of workforces).
• Holds both a doctorate in economics and an MBA from the University of Oxford.
Today’s episode should be fascinating to anyone. In it, Nat details:
• Why, despite life on this planet being better than ever before, humans are so unhappy.
• Concrete guidance on what you can do to become happier.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
The “A.I.” Nobel Prizes (in Physics and Chemistry??)
A.I. was center stage at the 2024 Nobel Prizes, with Demis Hassabis sharing the Chemistry prize and Geoff Hinton sharing the Physics prize. Chem and Physics seem weird for A.I. though, no? Today's episode explains.
In Case You Missed It in September 2024
Another month, another set of invaluable conversations on the SuperDataScience Podcast I host. ICYMI, today's episode highlights the most fascinating moments from September.
The specific conversation highlights included in today's episode are:
Posit PBC engineering manager Dr. Julia Silge explains why Positron, the next-generation IDE she's leading development of, is better-suited to data scientists than any existing IDE.
PyTorch expert Luka Anicin provides his top tips for training more accurate and compute-efficient ML models.
Exceptional open-source developer Marco Gorelli on why Polars is anywhere from 10 to 100x faster than Pandas, the incumbent Python library for working with DataFrames; a quick comparison sketch follows this list.
Microsoft's Marck Vaisman on what companies hiring data scientists should be looking for... as opposed to what they typically (and mistakenly!) look for today.
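To get a feel for the kind of gap Marco is talking about, here's a toy benchmark sketch that runs the same group-by aggregation in Pandas and in Polars. The exact speedup depends heavily on data size, dtypes, and the query, so treat the 10 to 100x range as Marco's claim about realistic workloads rather than something this snippet proves.

```python
# A toy comparison of the same group-by aggregation in Pandas vs. Polars.
# Actual speedups vary widely with data size, dtypes, and query complexity.
import time

import numpy as np
import pandas as pd
import polars as pl

n = 10_000_000
rng = np.random.default_rng(0)
keys = rng.integers(0, 1_000, n)
values = rng.random(n)

pandas_df = pd.DataFrame({"key": keys, "value": values})
polars_df = pl.DataFrame({"key": keys, "value": values})

start = time.perf_counter()
pandas_df.groupby("key")["value"].mean()
print(f"Pandas: {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
# Note: older Polars releases spelled this method .groupby rather than .group_by
polars_df.group_by("key").agg(pl.col("value").mean())
print(f"Polars: {time.perf_counter() - start:.3f}s")
```

Much of Polars' advantage comes from its multi-threaded Rust core and Apache Arrow memory layout, plus query optimization when you use its lazy API.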
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
NotebookLM: Jaw-Dropping Podcast Episodes Generated About Your Documents
Today’s episode is on Google’s newly released (and frankly sensational) product NotebookLM. All you need to use it is a Google login, which is as easy as having a Gmail account, and NotebookLM itself is totally free.
OpenAI's o1 "Strawberry" Models
Given the gravity of the event, today’s episode could of course be on none other than OpenAI’s new o1 series of models, which represent a tremendous leap forward in AI capabilities.
Summer Reflections
This week, I’m enjoying the tail end of the northern-hemisphere summer by spending time with my family.
The Five Levels of Self-Driving Cars
Back in Episode #748 earlier this year, I covered the five levels of Artificial General Intelligence. Well, today, inspired by my first-ever experience in an autonomous vehicle (a Waymo ride while in San Francisco recently), we’ve got an episode on the five levels of motor-vehicle automation.
Llama 3.1 405B: The First Open-Source Frontier LLM
Meta’s release of its giant (405-billion-parameter) Llama 3.1 model is a game-changer: for the first time, an "open-source" LLM competes at the frontier (against proprietary models like GPT-4o and Claude).