Modern, cutting-edge A.I. depends almost entirely on the Transformer. But now the first serious contender to the Transformer has emerged, and it's called Mamba; we've got the full paper, "Mamba: Linear-Time Sequence Modeling with Selective State Spaces", written by researchers at Carnegie Mellon and Princeton.
How to Speak so You Blow Listeners’ Minds, with Cole Nussbaumer Knaflic
Cole Nussbaumer Knaflic's book, "storytelling with data", has sold over 500k copies... wild! In today's episode, Cole details the best tricks from her latest book, "storytelling with you" — a goldmine on how to inform and profoundly engage people.
Cole:
• Is the author of “storytelling with data”, which has sold half a million copies, been translated into over 20 languages and is used by more than 100 universities. Nearly a decade old, the book remains the #1 bestseller in several Amazon categories.
• Also wrote the follow-on, hands-on “storytelling with data: let’s practice!” a bestseller in its own right.
• Serves as the Founder and CEO of the storytelling with data company, which provides data-storytelling workshops and other resources.
• Previously she was a People Analytics Manager at Google.
• Holds a degree in math as well as an MBA from the University of Washington.
Today’s episode will be of interest to anyone who’d like to communicate so effectively and compellingly that people are blown away.
In this episode, Cole details:
• Her top tips for planning, creating and delivering an incredible presentation.
• A few special tips for communicating data effectively for all of you data nerds like me.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
AlphaGeometry: AI is Suddenly as Capable as the Brightest Math Minds
Google DeepMind's open-sourced AlphaGeometry blends "fast thinking" (like intuition) with "slow thinking" (like careful, conscious reasoning) to enable a big leap forward in A.I. capability and match human Math Olympiad gold medalists on geometry problems.
KEY CONTEXT
• A couple of weeks ago, DeepMind published on AlphaGeometry in the prestigious peer-reviewed journal Nature.
• DeepMind focused on geometry due to its demand for high-level reasoning and logical deduction, posing a unique challenge that traditional ML models struggle with.
MASSIVE RESULTS
• AlphaGeometry tackled 30 International Mathematical Olympiad problems, solving 25. This outperforms human Olympiad bronze and silver medalists' averages (who solved 19.3 and 22.9, respectively) and closely rivals gold medalists (who solved 25.9).
• This new system crushes the previous state-of-the-art A.I., which solved only 10 out of 30 problems.
• Beyond solving problems, AlphaGeometry also generates understandable proofs, making A.I.-generated solutions more accessible to humans.
HOW?
• AlphaGeometry uses a new method of generating synthetic theorems and proofs, simulating 100 million unique examples to overcome the limitations of (expensive, laborious) human-generated proofs.
• It combines a neural (deep learning) language model for intuitive guesswork with a symbolic deduction engine for logical problem-solving, mirroring "fast" and "slow thinking" processes akin to human cognition (per Daniel Kahneman's "Thinking, Fast and Slow" book).
IMPACT
• A.I. that can "think fast and slow" like AlphaGeometry could generalize across mathematical fields and potentially other scientific disciplines, pushing the boundaries of human knowledge and problem-solving capabilities.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Brewing Beer with A.I., with Beau Warren
In today's episode, Beau Warren of the innovative "Species X" brewery details how we collaborated on an A.I. model to craft the perfect beer. The result is dubbed "Krohn&Borg" lager, and you can join us in Columbus, Ohio on Thursday night to try it yourself! 🍻
A Code-Specialized LLM Will Realize AGI, with Jason Warner
Don't miss this mind-blowing episode with Jason Warner, who compellingly argues that code-specialized LLMs will bring about AGI. His firm, poolside, was launched to achieve this and facilitate an "AI-led, developer-assisted" coding paradigm en route.
Jason:
• Is Co-Founder and CEO of poolside, a hot venture capital-backed startup that will shortly be launching its code-specialized Large Language Model and accompanying interface, designed specifically for people who code, such as software developers and data scientists.
• Previously was Managing Director at the renowned Bay-Area VC Redpoint Ventures.
• Before that, held a series of senior software-leadership roles at major tech companies including being CTO of GitHub and overseeing the Product Engineering of Ubuntu.
• Holds a degree in computer science from Penn State University and a Master's in CS from Rensselaer Polytechnic Institute.
Today’s episode should be fascinating to anyone keen to stay abreast of the state of the art in A.I. today and what could happen in the coming years.
In today’s episode, Jason details:
• Why a code-generation-specialized LLM like poolside’s will be far more valuable to humans who code than generalized LLMs like GPT-4 or Gemini.
• Why he thinks AGI itself will be brought about by a code-specialized ML model like poolside’s.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
AI is Disadvantaging Job Applicants, But You Can Fight Back
In today's important episode, the author, professor and journalist Hilke Schellmann details how specific HR-tech firms misuse A.I. to facilitate biased hiring, promotion, and firing decisions. She also covers how you can fight back and how A.I. can be done right!
Hilke’s book, "The Algorithm: How A.I. Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now", was published earlier this month. In the exceptionally clear and well-written book, Hilke draws on exclusive information from whistleblowers, internal documents and real‑world tests to detail how many of the algorithms making high‑stakes decisions are biased and racist, and do more harm than good.
In addition to her book, Hilke:
• Is Assistant Professor of Journalism and A.I. at New York University.
• Previously worked in journalism roles at The Wall Street Journal, The New York Times and VICE Media.
• Holds a Master’s in investigative reporting from Columbia University.
Today’s episode will be accessible and interesting to anyone. In it, Hilke details:
• Examples of specific HR-technology firms that employ misleading Theranos-like tactics.
• How A.I. *can* be used ethically for hiring and throughout the employment lifecycle.
• What you can do to fight back if you suspect you’ve been disadvantaged by an automated process.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
The Five Levels of AGI
Artificial General Intelligence (AGI) is a term thrown around a lot, but it's been poorly defined. Until now!
2024 Data Science Trend Predictions
What are the big A.I. trends going to be in 2024? In today's episode, the magnificent data-science leader and futurist Sadie St. Lawrence fills us in by methodically making her way from the hardware layer (e.g., GPUs) up to the application layer (e.g., GenAI apps).
How to Integrate Generative A.I. Into Your Business, with Piotr Grudzień
Want to integrate Conversational A.I. ("chatbots") into your business and ensure it's a (profitable!) success? Then today's episode with Quickchat AI co-founder Piotr Grudzień, covering both customer-facing and internal use cases, will be perfect for you.
Piotr:
• Is Co-Founder and CTO of Quickchat AI, a Y Combinator-backed conversation-design platform that lets you quickly deploy and debug A.I. assistants for your business.
• Previously worked as an applied scientist at Microsoft.
• Holds a Master’s in computer engineering from the University of Cambridge.
Today's episode should be accessible to technical and non-technical folks alike.
In this episode, Piotr details:
• What it takes to make a conversational A.I. system successful, whether that A.I. system is externally facing (such as a customer-support agent) or internally facing (such as a subject-matter expert).
• What it’s been like working in the fast-developing Large Language Model space over the past several years.
• What his favorite Generative A.I. (foundation model) vendors are.
• What the future of LLMs and Generative A.I. will entail.
• What it takes to succeed as an A.I. entrepreneur.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
How to Visualize Data Effectively, with Prof. Alberto Cairo
The renowned data-visualization professor and many-time bestselling author Dr. Alberto Cairo is today's guest! Want a copy of his fantastic new book, "The Art of Insight"? I'm giving away ten physical copies; see below for how to get one.
Alberto:
• Is the Knight Chair in Infographics and Data Visualization at the University of Miami.
• Leads visualization efforts at the University of Miami’s Institute for Data Science and Computing.
• Is a consultant for Google, the US government and many more prominent institutions.
• Has written three bestselling books on data visualization, all in the past decade.
• His fourth book, "The Art of Insight", was just published.
Today’s episode will be of interest to anyone who’d like to understand how to communicate with data more effectively.
In this episode, which tracks the themes covered in his "The Art of Insight" book, Alberto details:
• How data visualization relates to the very meaning of life.
• What it takes to enter a meditation-like flow state when creating visualizations.
• When the “rules” of data communication should be broken.
• His data visualization tips and tricks.
• How infographics can drive social change.
• How extended reality, A.I. and other emerging technologies will change data viz in the coming years.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Q*: OpenAI’s Rumored AGI Breakthrough
Today’s episode is all about a rumored new model out of OpenAI called Q* (pronounced “Q star”) that has been causing quite a stir, both for its purported role in Altmangate and its implications for Artificial General Intelligence (AGI).
Key context:
• Q* is reported to have advanced capabilities in solving complex math problems expressed in natural language, indicating a significant leap in A.I.
• The rumors about Q* emerged during OpenAI's corporate drama involving the firing and re-hiring of CEO Sam Altman.
• Reports suggested a connection between Q*'s development and the OpenAI upheaval, with staff expressing concerns about its potential dangers to humanity (no definitive evidence links Q* to the OpenAI CEO controversy, however, leaving its role in the incident ambiguous).
Research overview:
• OpenAI's recent published research on solving grade-school word-based math problems (e.g., “The cafeteria had 23 apples. They used 20 for lunch and bought 6 more. How many apples do they have?”) hints at broader implications of step-by-step reasoning in A.I.
• While today's Large Language Models (LLMs) show better results on logical problems when we use chain-of-thought prompting ("work through the problem step by step"), the contemporary LLMs do so linearly (they don't go back to correct themselves or explore alternative intermediate steps), which limits their capability.
• To develop a model that can be trained and evaluated at each intermediate step, OpenAI gathered tons of human feedback on math-word problems, amassing a dataset of 800,000 individual intermediate steps across 75,000 problems.
• Their approach involves an LLM generating solutions at each step and a second model acting as a verifier.
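The generate-and-verify idea above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the "generator" and "verifier" functions and their scores are fabricated toy versions of the LLM generator and learned step-level verifier described in the research, just to show how a best-of-n reranking loop fits together.

```python
import math

# Hypothetical stand-in for an LLM generator: returns n candidate
# solutions to a word problem, each as a list of reasoning steps.
def generate_candidates(problem, n=3):
    return [
        ["23 - 20 = 3", "3 + 6 = 9", "Answer: 9"],
        ["23 + 6 = 29", "Answer: 29"],                # skips the subtraction
        ["23 - 20 = 3", "3 - 6 = -3", "Answer: -3"],  # wrong final operation
    ][:n]

# Hypothetical stand-in for a step-level verifier: returns the
# probability that a single intermediate step is sound.
def verify_step(step):
    good_steps = {"23 - 20 = 3": 0.95, "3 + 6 = 9": 0.9, "Answer: 9": 0.9}
    return good_steps.get(step, 0.2)

def best_of_n(problem, n=3):
    """Rerank candidate solutions by the product of per-step verifier scores."""
    candidates = generate_candidates(problem, n)
    return max(candidates, key=lambda steps: math.prod(verify_step(s) for s in steps))

solution = best_of_n("The cafeteria had 23 apples. They used 20 for lunch and bought 6 more.")
print(solution[-1])  # the highest-scoring chain ends in "Answer: 9"
```

The point of scoring every intermediate step, rather than only the final answer, is that a chain with one unsound step is penalized even if it happens to end at a plausible-looking number.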
The Q* connection:
• The above research merges LLM reasoning abilities with search-tree methods, inspired by Google DeepMind's AlphaGo algorithm and its ilk.
• Q* itself is a decades-old concept from reinforcement learning, used when training models to simulate and evaluate prospective moves.
• Q*'s potential for automated self-play could lead to significant advancements in AGI, particularly by reducing reliance on (expensive) human-generated training data.
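For readers unfamiliar with the reinforcement-learning term: Q* conventionally denotes the optimal action-value function, which tabular Q-learning approximates via the Bellman update. Here is a minimal, self-contained sketch on a toy four-state chain; this is the textbook algorithm only, not anything from OpenAI.

```python
import random

# Tabular Q-learning on a tiny deterministic chain MDP: states 0..3,
# actions 0 = left, 1 = right; reaching state 3 pays reward 1 and ends
# the episode.
ALPHA, GAMMA, EPISODES = 0.5, 0.9, 500
Q = {(s, a): 0.0 for s in range(4) for a in (0, 1)}

def step(s, a):
    s_next = max(0, s - 1) if a == 0 else min(3, s + 1)
    reward = 1.0 if s_next == 3 else 0.0
    return s_next, reward, s_next == 3

random.seed(0)
for _ in range(EPISODES):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection: explore 20% of the time.
        if random.random() < 0.2:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda act: Q[(s, act)])
        s_next, r, done = step(s, a)
        # Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = 0.0 if done else max(Q[(s_next, 0)], Q[(s_next, 1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy should always move right, toward the goal.
policy = [max((0, 1), key=lambda act: Q[(s, act)]) for s in range(3)]
print(policy)  # [1, 1, 1]
```

The learned table approximates Q*: values decay by the discount factor with distance from the goal (roughly 1.0, 0.9, 0.81 for the rightward actions), which is exactly the "simulate and evaluate prospective moves" machinery the bullet above refers to.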
Implications:
• Q* could yield significant societal benefits (e.g., by solving mathematical proofs humans can't or discovering new physics), albeit with potentially high inference costs.
• Q* raises concerns about security and the unresolved challenges in achieving AGI.
• While Q* wouldn't be the final leap towards AGI, it would represent a major milestone in general reasoning abilities.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
AI is Eating Biology and Chemistry, with Dr. Ingmar Schuster
For today's exceptional episode, I traveled to Berlin to find out how the visionary Dr. Ingmar Schuster is using A.I. to transform biology and chemistry research, thereby helping solve the world's most pressing problems, from cancer to climate change.
Ingmar:
• Is CEO and co-founder of Exazyme, a German biotech startup that aims to make chemical design as easy as using an app.
• Previously he worked as a research scientist and senior applied scientist at Zalando, the gigantic European e-retailer.
• Completed his PhD in Computer Science at Leipzig University and postdocs at the Université Paris Dauphine and the Freie Universität Berlin, throughout which he focused on using Bayesian and Monte Carlo approaches to model natural language and time series.
Today’s episode is on the technical side so may appeal primarily to hands-on practitioners such as data scientists and machine learning engineers.
In this episode, Ingmar details:
• What kernel methods are and how he uses them at Exazyme to dramatically speed the design of synthetic biological catalysts and antibodies for pharmaceutical firms and chemical producers, with applications including fixing carbon dioxide more effectively than plants and allowing our own immune system to detect and destroy cancer.
• When “shallow” machine learning approaches are more valuable than deep learning approaches.
• Why the benefits of A.I. research far outweigh the risks.
• What it takes to become a deep-tech entrepreneur like him.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Engineering Biomaterials with Generative AI, with Dr. Pierre Salvy
Today, the brilliant Dr. Pierre Salvy details the "double deep-tech sandwich" that blends cutting-edge A.I. (generative LLMs) with cutting-edge bioengineering (creating new materials). This is a fascinating one, shot live at the Merantix AI Campus in Berlin.
Pierre:
• Has been at Cambrium for three years, initially as Head of Computational Biology and, for the past two years, as Head of Engineering, growing the team from 2 to 7 to bridge the gap between wet-lab biology, data science, and scientific computing.
• Holds a PhD in Biotechnology from EPFL in Switzerland and a Master’s in Math, Physics and Engineering Science from Mines in Paris.
Today’s episode touches on technical machine learning concepts here and there, but should largely be accessible to anyone.
In it, Pierre details:
• How data-driven R&D allowed Cambrium to go from nothing to tons of physical product sales inside two years.
• How his team leverages Large Language Models (LLMs) to be the biological-protein analogue of a ChatGPT-style essay generator.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Scikit-learn’s Past, Present and Future, with scikit-learn co-founder Dr. Gaël Varoquaux
For today's massive episode, I traveled to Paris to interview Dr. Gaël Varoquaux, co-founder of scikit-learn, the standard library for machine learning worldwide (downloaded over 1.4 million times PER DAY 🤯). In it, Gaël fills us in on sklearn's history and future.
More on Gaël:
• Actively leads the development of the ubiquitous scikit-learn Python library today, which has several thousand people contributing open-source code to it.
• Is Research Director at the famed Inria (the French National Institute for Research in Digital Science and Technology), where he leads the Soda ("social data") team that is focused on making a major positive social impact with data science.
• Has been recognized with the Innovation Prize from the French Academy of Sciences and many other awards for his invaluable work.
Today’s episode will likely be of primary interest to hands-on practitioners like data scientists and ML engineers, but anyone who’d like to understand the cutting edge of open-source machine learning should listen in.
In this episode, Gaël details:
• The genesis, present capabilities and fast-moving future direction of scikit-learn.
• How to best apply scikit-learn to your particular ML problem.
• How ever-larger datasets and GPU-based accelerations impact the scikit-learn project.
• How (whether you write code or not!) you can get started on contributing to a mega-impactful open-source project like scikit-learn yourself.
• Hugely successful social-impact data projects his Soda lab has had recently.
• Why statistical rigor is more important than ever and how software tools could nudge us in the direction of making more statistically sound decisions.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
A.I. Product Management, with Google DeepMind's Head of Product, Mehdi Ghissassi
The elite team at Google DeepMind cranks out one world-changing A.I. innovation after another. In today's episode, their affable Head of Product Mehdi Ghissassi shares his wisdom on how to design and release successful A.I. products.
Mehdi:
• Has been Head of Product at Google DeepMind — the world’s most prestigious A.I. research group — for over four years.
• Spent an additional three years at DeepMind before that as their Head of A.I. Product Incubation and a further four years before that in product roles at Google, meaning he has more than a decade of product leadership experience at Alphabet.
• Is a Member of the Board of Advisors at CapitalG, Alphabet’s renowned venture capital and private equity fund.
• Holds five (!!!) Master’s degrees, including Master’s degrees in computer science and engineering from the École Polytechnique, a Master’s in International Relations from Sciences Po, and an MBA from Columbia Business School.
Today’s episode will be of interest to anyone who’s keen to create incredible A.I. products.
In this episode, Mehdi details:
• Google DeepMind’s bold mission to achieve Artificial General Intelligence (AGI).
• Game-changing DeepMind A.I. products such as AlphaGo and AlphaFold.
• How he stays on top of fast-moving A.I. innovations.
• The key ethical issues surrounding A.I.
• A.I.’s big social-impact opportunities.
• His guidance for investing in A.I. startups.
• Where the big opportunities lie for A.I. products in the coming years.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Data Science for Astronomy, with Dr. Daniela Huppenkothen
Our planet is a tiny little blip in a vast universe. In today's episode, the astronomical data scientist and talented simplifier of the complex, Dr. Daniela Huppenkothen, explains how we collect data from space and use ML to understand the universe.
Daniela:
• Is a Scientist at both the University of Amsterdam and the SRON Netherlands Institute for Space Research.
• Was previously an Associate Director of the Institute for Data-Intensive Research in Astronomy and Cosmology at the University of Washington, and was also a Data Science Fellow at New York University.
• Holds a PhD in Astronomy from the University of Amsterdam.
Most of today’s episode should be accessible to anyone but there is some technical content in the second half that may be of greatest interest to hands-on data science practitioners.
In today’s episode, Daniela details:
• The data earthlings collect in order to observe the universe around us.
• The three categories of ways machine learning is applied to astronomy.
• How you can become an astronomer yourself.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
A.I. Agents Will Develop Their Own Distinct Culture, with Nell Watson
Nell Watson is the most insightful person I've spoken to on where A.I. is going in the coming decades and how it will overhaul our lives. In today's mind-bending episode, she conveys these insights with amusing analogies and clever literary references.
This sensational guest, Nell:
• Is the A.I. Ethics Certification Maestro at the IEEE (the Institute of Electrical and Electronics Engineers), a role in which she engineers mechanisms into A.I. systems in order to safeguard trust and safety in algorithms.
• Also works for Apple as an Executive Consultant on philosophical matters related to machine ethics and machine intelligence.
• Is President of EURAIO, the European Responsible Artificial Intelligence Office.
• Is renowned and sought-after as a public speaker, including at venerable venues like The World Bank and the United Nations General Assembly.
• On top of all that, she’s currently wrapping up a PhD in Engineering from the University of Gloucestershire in the UK.
Today’s episode covers rich philosophical issues that will be of great interest to hands-on data science practitioners but the content should be accessible to anyone. And I do highly recommend that everyone give this extraordinary episode a listen.
In this episode, Nell details:
• The distinct, and potentially dangerous, new phase of A.I. capabilities that our society is stumbling forward into.
• How you yourself can contribute to IEEE A.I. standards that can offset A.I. risks.
• How we together can craft regulations and policies to make the most of A.I.’s potential, thereby unleashing a fast-moving second renaissance and potentially bringing about a utopia in our lifetimes.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Universal Principles of Intelligence (Across Humans and Machines), with Prof. Blake Richards
Today's episode is wild! The exceptionally lucid Prof. Blake Richards will blow your mind on what intelligence is, why the "AGI" concept isn't real, why AI doesn't pose an existential risk to humans, and how AI could soon directly update our thoughts.
Blake:
• Is Associate Professor in the School of Computer Science and Department of Neurology and Neurosurgery at the revered McGill University in Montreal.
• Is a Core Faculty Member at Mila, one of the world’s most prestigious A.I. research labs, which is also in Montreal.
• His lab investigates universal principles of intelligence that apply to both natural and artificial agents and he has received a number of major awards for his research.
• He obtained his PhD in neuroscience from the University of Oxford and his Bachelor’s in cognitive science and AI from the University of Toronto.
Today’s episode contains tons of content that will be fascinating for anyone. A few topics near the end, however, will probably appeal primarily to folks who have a grasp of fundamental machine learning concepts like cost functions and gradient descent.
In this episode, Blake details:
• What intelligence is.
• Why he doesn’t believe in Artificial General Intelligence (AGI).
• Why he’s skeptical about existential risks from A.I.
• The many ways that A.I. research informs our understanding of how the human brain works.
• How, in the future, A.I. could practically and directly influence your thoughts and behaviors through brain-computer interfaces (BCIs).
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Use Contrastive Search to get Human-Quality LLM Outputs
Historically, when we deployed a machine learning model into production, the parameters that the model learned during its training on data were the sole driver of the model’s outputs. With the Generative LLMs that have taken the world by storm in the past few years, however, the model parameters alone are not enough to get reliably high-quality outputs. For that, the so-called decoding method that we choose when we deploy our LLM into production is also critical.
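As a concrete illustration of one such decoding method, contrastive search (Su et al., 2022) scores each of the top-k candidate tokens by balancing model confidence against a "degeneration penalty": the candidate's maximum cosine similarity to the tokens already generated. Below is a minimal NumPy sketch of a single decoding step; the inputs are toy values, whereas a real deployment would take the probabilities and hidden states from the LLM itself.

```python
import numpy as np

def contrastive_search_step(probs, cand_hidden, context_hidden, alpha=0.6, k=4):
    """One decoding step of contrastive search.

    probs:          model probabilities over the vocabulary, shape (V,)
    cand_hidden:    hidden representation of each candidate token, shape (V, d)
    context_hidden: hidden states of the tokens generated so far, shape (T, d)

    Among the top-k most probable tokens, picks the one maximizing
    (1 - alpha) * p(v) - alpha * max cosine-similarity to the context;
    the second term is the degeneration penalty that discourages the
    repetitive loops common with pure greedy decoding.
    """
    top_k = np.argsort(probs)[::-1][:k]  # model-confidence shortlist
    ctx = context_hidden / np.linalg.norm(context_hidden, axis=1, keepdims=True)
    best_token, best_score = None, -np.inf
    for v in top_k:
        h = cand_hidden[v] / np.linalg.norm(cand_hidden[v])
        penalty = np.max(ctx @ h)  # max cosine similarity to prior tokens
        score = (1 - alpha) * probs[v] - alpha * penalty
        if score > best_score:
            best_token, best_score = int(v), score
    return best_token

# Toy demo: token 0 is most probable but duplicates the context exactly,
# so the degeneration penalty pushes the choice to token 1.
probs = np.array([0.6, 0.3, 0.1])
cand_hidden = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
context_hidden = np.array([[1.0, 0.0]])
print(contrastive_search_step(probs, cand_hidden, context_hidden, alpha=0.6, k=2))  # 1
```

In practice you rarely implement this by hand: Hugging Face transformers enables contrastive search via `model.generate(..., penalty_alpha=0.6, top_k=4)`.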
Seven Factors for Successful Data Leadership
Today's episode is a fun one with the jovial EIGHT-time book author, Ben Jones. In it, Ben covers the seven factors of successful data leadership — factors he's gleaned from administering his data literacy assessment to 1000s of professionals.
Ben:
• Is the CEO of Data Literacy, a firm that specializes in training and coaching professionals on data-related topics like visualization and statistics.
• Has published eight books, including bestsellers "Communicating Data with Tableau" (O'Reilly, 2014) and "Avoiding Data Pitfalls" (Wiley, 2019).
• Has been teaching data visualization at the University of Washington for nine years.
• Previously worked for six years as a director at Tableau.
Today’s episode should be broadly accessible to any interested professional.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.