Assessing the fastest-growing job is tricky. Job-posting data, for example, isn’t great because there could be lots of duplicate postings out there, or many of the postings could be going unfilled. Another big issue is defining exactly what a job is: the exact same responsibilities could be associated with the job title “data scientist”, “data engineer”, or “ML engineer”, depending on the titles a particular company decides to go with. So whoever’s evaluating job growth ends up bucketing groups of related jobs and responsibilities into one standardized job-title bucket, these days probably in a largely automated, data-driven way. If you dug into individual examples, I’m sure you’d find lots of job-title standardizations you disagreed with, but some kind of standardization approach is essential to ensuring that identical roles with slightly different titles get counted as the same thing.
In Case You Missed It in August 2024
We had a slew of eye-opening conversations in August on the SuperDataScience Podcast I host. ICYMI, today's episode highlights the most fascinating moments from those convos.
Specifically, conversation highlights include:
1. ChainML's Head of A.I. Education Shingai Manjengwa on how multiple, individual A.I. agents can come together to perform complex actions.
2. Renowned futurist and entrepreneur Dr. Daniel Hulme on how A.I. can help us become better and faster at our jobs by circumventing the traditional corporate hierarchies that today seem only to slow us down.
3. Mathematical-optimization guru Jerome Yurchisin (of Gurobi Optimization) on how continuing education will be vital in our increasingly automated work environment... and how this education will be streamlined by A.I.
4. Nick Elprin, Co-Founder and CEO of the wildly successful Domino Data Lab, on why it's essential for enterprises to clearly define their A.I. infrastructure in order for their A.I. deployments to prosper.
Check out today's episode (#818) to hear all these eye-opening conversations. The "Super Data Science Podcast with Jon Krohn" is available on all major podcasting platforms and a video version is on YouTube.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
A Code-Specialized LLM Will Realize AGI, with Jason Warner
Don't miss this mind-blowing episode with Jason Warner, who compellingly argues that code-specialized LLMs will bring about AGI. His firm, poolside, was launched to achieve this and facilitate an "AI-led, developer-assisted" coding paradigm en route.
Jason:
• Is Co-Founder and CEO of poolside, a hot venture-capital-backed startup that will shortly be launching its code-specialized Large Language Model and accompanying interface, designed specifically for people who code, such as software developers and data scientists.
• Previously was Managing Director at the renowned Bay-Area VC Redpoint Ventures.
• Before that, held a series of senior software-leadership roles at major tech companies including being CTO of GitHub and overseeing the Product Engineering of Ubuntu.
• Holds a degree in computer science from Penn State University and a Master's in CS from Rensselaer Polytechnic Institute.
Today’s episode should be fascinating to anyone keen to stay abreast of the state of the art in A.I. today and what could happen in the coming years.
In today’s episode, Jason details:
• Why a code-generation-specialized LLM like poolside’s will be far more valuable to humans who code than generalized LLMs like GPT-4 or Gemini.
• Why he thinks AGI itself will be brought about by a code-specialized ML model like poolside’s.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Generative A.I. without the Privacy Risks (with Prof. Raluca Ada Popa)
Consumers and enterprises dread that Generative A.I. tools like ChatGPT breach privacy by using convos as training data, storing PII and potentially surfacing confidential data as responses. Prof. Raluca Ada Popa has all the solutions.
Today's guest, Raluca:
• Is Associate Professor of Computer Science at University of California, Berkeley.
• Specializes in computer security and applied cryptography.
• Her papers have been cited over 10,000 times.
• Is Co-Founder and President of Opaque Systems, a confidential computing platform that has raised over $31m in venture capital to enable collaborative analytics and A.I., including allowing you to securely interact with Generative A.I.
• Previously co-founded PreVeil, a now-well-established company that provides end-to-end document and message encryption to over 500 clients.
• Holds a PhD in Computer Science from MIT.
Despite being such a deep expert, Raluca does such a stellar job of communicating complex concepts simply that today’s episode should appeal to anyone who wants to dig into the thorny issues around data privacy and security associated with Large Language Models (LLMs) and how to resolve them.
In the episode, Raluca details:
• What confidential computing is and how to do it without sacrificing performance.
• How you can perform inference with an LLM (or even train an LLM!) without anyone — including the LLM developer! — being able to access your data.
• How you can use commercial generative models like OpenAI’s GPT-4 without OpenAI being able to see sensitive or personally identifiable information you include in your API query.
• The pros and cons of open-source versus closed-source A.I. development.
• How and why you might want to seamlessly run your compute pipelines across multiple cloud providers.
• Why you should consider a career that blends academia and entrepreneurship.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
CatBoost: Powerful, efficient ML for large tabular datasets
CatBoost is making waves in open-source ML: it's often the top approach for tasks as diverse as classification, regression, ranking, and recommendation, especially when working with tabular data that include categorical variables.
With this justifiable excitement in mind, today's "Five-Minute Friday" episode of SuperDataScience is dedicated to CatBoost (short for “category” and “boosting”).
CatBoost has been around since 2017, when it was released by Yandex, a tech giant based in Moscow. In a nutshell, CatBoost — like the more established (and regularly Kaggle-leaderboard-topping) XGBoost and LightGBM — is at its heart a decision-tree algorithm that leverages gradient boosting. So that explains the “boost” part of CatBoost.
The “cat” (“category”) part comes from CatBoost’s superior handling of categorical features. If you’ve trained models on categorical data before, you’ve likely experienced the tedium of the preprocessing and feature engineering it requires. CatBoost comes to the rescue here: it handles categorical variables automatically, applying techniques such as target statistics (a form of target encoding) and one-hot encoding internally, which eliminates the need for extensive preprocessing or manual feature engineering.
In addition to CatBoost’s superior handling of categorical features, the algorithm also makes use of:
• A specialized gradient-boosting scheme known as Ordered Boosting, which computes each training example’s residuals using only models fit on the examples that precede it in a random permutation, reducing the target leakage and prediction shift that can bias standard gradient boosting.
• Symmetric decision trees, which have a fixed tree depth that enables a faster training time relative to XGBoost and a comparable training time to LightGBM, which is famous for its speed.
• Regularization techniques, such as the well-known L2 regularization, which, together with the ordered boosting and symmetric trees already discussed, make CatBoost less prone to overfitting the training data than many other boosted-tree algorithms. (The sketch below shows where these knobs surface in CatBoost’s training API.)
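To make these ideas concrete, here is a minimal, hypothetical sketch (mine, not from the episode) of training CatBoost on a toy tabular dataset; the column names and parameter values are purely illustrative. The key points are that raw categorical columns are passed straight in via cat_features, and that ordered boosting, fixed tree depth, and L2 regularization are all exposed as ordinary training parameters:

from catboost import CatBoostClassifier
import numpy as np
import pandas as pd

# Toy tabular data with raw (string) categorical columns; no manual
# one-hot or target encoding is applied before training.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "city": rng.choice(["NYC", "SF", "Austin"], size=n),
    "plan": rng.choice(["free", "pro"], size=n),
    "tenure_mo": rng.integers(1, 36, size=n),
})
df["churned"] = ((df["plan"] == "free") & (df["tenure_mo"] < 12)).astype(int)
X, y = df.drop(columns=["churned"]), df["churned"]

model = CatBoostClassifier(
    iterations=200,
    depth=6,                        # symmetric (oblivious) trees of fixed depth
    boosting_type="Ordered",        # ordered boosting to curb target leakage
    l2_leaf_reg=3.0,                # L2 regularization on leaf weights
    cat_features=["city", "plan"],  # categorical columns handled internally
    verbose=False,
)
model.fit(X, y)
print(model.predict_proba(X)[:3])   # class probabilities for the first few rows

Even if you left all of these parameters at their defaults, CatBoost's out-of-the-box settings are already strong for many tabular problems; the point here is simply where each of the ideas above shows up in the API.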
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Generative Deep Learning, with David Foster
Today, bestselling author David Foster provides a fascinating technical introduction to cutting-edge Generative A.I. concepts including variational autoencoders, diffusion models, contrastive learning, GANs and (my favorite!) "world models".
David:
• Wrote the O'Reilly book “Generative Deep Learning”; the first edition from 2019 was a bestseller while the second edition was released just last week.
• Is a Founding Partner of Applied Data Science Partners, a London-based consultancy specialized in end-to-end data science solutions.
• Holds a Master’s in Mathematics from the University of Cambridge and a Master’s in Management Science and Operational Research from the University of Warwick.
Today’s episode is deep in the weeds on generative deep learning pretty much from beginning to end and so will appeal most to technical practitioners like data scientists and ML engineers.
In the episode, David details:
• How generative modeling is different from the discriminative modeling that dominated machine learning until just the past few months.
• The range of application areas of generative A.I.
• How autoencoders work and why variational autoencoders are particularly effective for generating content.
• What diffusion models are and how latent diffusion in particular results in photorealistic images and video.
• What contrastive learning is.
• Why “world models” might be the most transformative concept in A.I. today.
• What transformers are, how variants of them power different classes of generative models such as BERT architectures and GPT architectures, and how blending generative adversarial networks with transformers supercharges multi-modal models.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Business Intelligence Tools, with Mico Yuk
Today's guest is the straight shooter Mico Yuk, who pulls absolutely no punches in her assessment of, well, anything! ...but particularly about vendors in the business intelligence and data analytics space. Enjoy!
Mico:
• Is host of the popular Analytics on Fire Podcast (top 2% worldwide).
• Co-founded the BI Brainz Group, an analytics consulting and solutions company that has taught analytics, visualization, and data storytelling courses to over 15,000 students, including at major multinationals like Nestlé, FedEx, and Procter & Gamble.
• Authored the "Data Visualization for Dummies" book.
• Is a sought-after keynote speaker and TV-news commentator.
In this episode, Mico details:
• Her BI (business intelligence) and analytics framework that persuades executives with data storytelling.
• What the top BI tools are on the market today.
• The BI trends she’s observed that could predict the most popular BI tools of the coming years.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
XGBoost: The Ultimate Classifier, with Matt Harrison
XGBoost is typically the most powerful ML option whenever you're working with structured data. In today's episode, world-leading XGBoost XPert (😂) Matt Harrison details how it works and how to make the most of it.
Matt:
• Is the author of seven best-selling books on Python and Machine Learning.
• His most recent book, "Effective XGBoost", was published in March.
• Teaches "Exploratory Data Analysis with Python" at Stanford University.
• Through his consultancy MetaSnake, he’s taught Python at leading global organizations like NASA, Netflix, and Qualcomm.
• Previously worked as a CTO and Software Engineer.
• Holds a degree in Computer Science from Stanford.
Today’s episode will appeal primarily to practicing data scientists who are keen to learn about XGBoost or keen to become an even deeper expert on XGBoost by learning about it from a world-leading educator on the library.
In this episode, Matt details:
• Why XGBoost is the go-to library for attaining the highest accuracy when building a classification model.
• Modeling situations where XGBoost should not be your first choice.
• The XGBoost hyperparameters to adjust to squeeze every bit of juice out of your tabular training data and his recommended library for automating hyperparameter selection.
• His top Python libraries for other XGBoost-related tasks such as data preprocessing, visualizing model performance, and model explainability.
• Languages beyond Python that have convenient wrappers for applying XGBoost.
• Best practices for communicating XGBoost results to non-technical stakeholders.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Automating Industrial Machines with Data Science and the Internet of Things (IoT)
Despite poor lighting on my face in today's video version (my bad!), we've got a fascinating episode with the brilliant (and well-lit!) Allegra Alessi, who details how data science is automating industrial machines.
Allegra:
• Is Product Owner for IoT (Internet of Things) devices at BOBST, a Swiss industrial manufacturing giant.
• Previously, she worked as a Product Owner and Data Scientist for Rolls-Royce in the UK and as a Data Scientist for Alstom, the enormous train manufacturing company, in Paris.
• She holds a Master’s in Engineering from Politecnico di Milano in Italy.
In this episode, Allegra details:
• How modern industrial machinery depends on data science for real-time performance analytics, predicting issues before they happen, and fully automating operations.
• The tech stack her team uses to build data-driven IoT platforms.
• The key methodologies she uses to be effective at product management.
• The kinds of data scientists that might be ideally suited to moving into a product role.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
LLaMA: GPT-3 performance, 10x smaller
By training (relatively) small LLMs for (much) longer, Meta AI's LLaMA architectures achieve GPT-3-like outputs at as little as a thirteenth of GPT-3's size. This means cost savings and much faster execution time.
LLaMA, a clever nod to LLMs (Large Language Models), is Meta AI's latest contribution to the AI world. Guided by the Chinchilla scaling laws, LLaMA adopts a principle that veers away from the norm: unlike its predecessors, which boasted hundreds of billions of parameters, LLaMA emphasizes training smaller models for longer durations to achieve enhanced performance.
The Chinchilla Principle in LLaMA
The Chinchilla scaling laws, introduced by Hoffmann and colleagues, postulate that, for a given compute budget, training a smaller model on more data can yield better performance than training a larger model on less. LLaMA, with its 7-billion- to 65-billion-parameter models, is a testament to this principle. For perspective, GPT-3 has 175 billion parameters, making the smallest LLaMA model just a fraction of its size.
Training Longer for Greater Performance
Meta AI's LLaMA pushes the boundaries by training these relatively smaller models for significantly longer than conventional approaches. This also contrasts with earlier top models like Chinchilla, GPT-3, and PaLM, which relied on training data that is not publicly available. LLaMA, however, uses entirely publicly available data, including datasets like English Common Crawl, C4, GitHub, and Wikipedia, adding to its appeal and accessibility.
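As a rough, back-of-the-envelope illustration (my own, not from the LLaMA paper or this post), the widely cited Chinchilla heuristic of roughly 20 training tokens per model parameter shows just how far past the "compute-optimal" point the LLaMA models were trained; the token counts below are the approximate figures reported for LLaMA:

# Approximate Chinchilla compute-optimal ratio: ~20 training tokens per parameter.
CHINCHILLA_TOKENS_PER_PARAM = 20

# Approximate parameter counts and training-token counts reported for LLaMA.
llama_models = {
    "LLaMA 7B":  (7e9,  1.0e12),
    "LLaMA 13B": (13e9, 1.0e12),
    "LLaMA 65B": (65e9, 1.4e12),
}

for name, (params, tokens_trained) in llama_models.items():
    chinchilla_optimal = params * CHINCHILLA_TOKENS_PER_PARAM
    print(f"{name}: Chinchilla-optimal ≈ {chinchilla_optimal / 1e12:.2f}T tokens; "
          f"trained on ≈ {tokens_trained / 1e12:.1f}T "
          f"({tokens_trained / chinchilla_optimal:.1f}x the heuristic optimum)")

The smaller LLaMA models in particular are trained several times past the heuristic's optimum, deliberately trading extra training compute for models that are cheaper and faster at inference time.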
LLaMA's Remarkable Achievements
LLaMA's achievements are notable. The 13-billion-parameter model (LLaMA 13B) outperforms GPT-3 on most benchmarks despite having roughly 13 times fewer parameters, which means LLaMA 13B can offer GPT-3-like performance while running on a single GPU. The largest LLaMA model, 65B, competes with giants like Chinchilla 70B and PaLM, and it arrived before GPT-4 had even been released.
This approach signifies a shift in the AI paradigm – achieving state-of-the-art performance without the need for enormous models. It's a leap forward in making advanced AI more accessible and environmentally friendly. The model weights, though intended for researchers, have been leaked and are available for non-commercial use, further democratizing access to cutting-edge AI.
LLaMA not only establishes a new benchmark in AI efficiency but also sets the stage for future innovations. Building on LLaMA's foundation, models like Alpaca, Vicuna, and GPT4All have emerged, fine-tuned on carefully curated instruction datasets to exceed even LLaMA's performance. These developments herald a new era in AI, where size doesn't always equate to capability.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
AutoML: Automated Machine Learning
AutoML with Erin LeDell — it rhymes! In today's episode, H2O.ai's Chief ML Scientist guides us through what Automated Machine Learning is and why it's an advantageous technique for data scientists to adopt.
Dr. LeDell:
• Has been working at H2O.ai — the cloud A.I. firm that has raised over $250m in venture capital and is renowned for its open-source AutoML library — for eight years.
• Founded Women in Machine Learning & Data Science (WiMLDS), which now has 100+ chapters worldwide.
• Co-founded R-Ladies Global, a community for genders currently underrepresented amongst R users.
• Is celebrated for her talks at leading A.I. conferences.
• Previously was Principal Data Scientist at two acquired A.I. startups.
• Holds a Ph.D. from UC Berkeley focused on ML and computational stats.
Today’s episode is relatively technical, so it will primarily appeal to technical listeners, but it would also provide context to anyone who’s interested in understanding how key aspects of data science work are becoming increasingly automated.
In this episode, Erin details:
• What AutoML — automated machine learning — is and why it’s an advantageous technique for data scientists to adopt.
• How the open-source H2O AutoML platform works.
• What the “No Free Lunch Theorem” is.
• What Admissible Machine Learning is and how it can reduce the biases present in many data science models.
• The new software tools she’s most excited about.
• How data scientists can prepare for the increasingly automated data science field of the future.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Subword Tokenization with Byte-Pair Encoding
When working with written natural language data as we do with many natural language processing models, a step we typically carry out while preprocessing the data is tokenization. In a nutshell, tokenization is the conversion of a long string of characters into smaller units that we call tokens.
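As a quick illustration (assuming you have Hugging Face's transformers library installed, and using GPT-2's byte-level BPE vocabulary), here's roughly what subword tokenization looks like in practice: common words survive as single tokens, while rarer or longer words get split into reusable subword pieces:

from transformers import GPT2TokenizerFast

# GPT-2 ships with a byte-level byte-pair-encoding (BPE) tokenizer.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

for text in ["tokenization", "The llama grazed contentedly."]:
    tokens = tokenizer.tokenize(text)   # the subword strings
    ids = tokenizer.encode(text)        # the integer IDs a model actually consumes
    print(f"{text!r} -> {tokens} -> {ids}")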
TEDx Talk: How Neuroscience Inspires A.I. Breakthroughs that will Change the World
My first TED-format talk is live! In it, I use (A.I.-generated!) visuals to color how A.I. will transform the world in our lifetimes, with particular emphases on climate change, food security, and healthcare innovations.
Thanks to Christina, Banu, and everyone at TEDxDrexelU for inviting me to speak, organizing a slick event, and masterfully editing the footage of my talk.
Thanks to Ed, Andrew, and Shaan at Nebula.io for providing invaluable feedback on drafts of my talk. It's only due to your constructive criticism that the final version turned out as well as it did. Thanks as well to Steven and Alex at Wynden Stark for kindly covering the travel costs of any employees that came down to Philadelphia to see the talk in-person.
Finally, thanks to Taya and Hannah at OpenAI for providing me with early access to custom images from their DALL-E 2 model. These were critical to my being able to effectively convey the narrative I yearned to tell.
Data Science Interviews with Nick Singh
For an episode all about tips for crushing interviews for Data Scientist roles, our guest is Nick Singh — author of the bestselling "Ace the Data Science Interview" book and creator of the DataLemur SQL interview platform.
Nick:
• Co-authored “Ace the Data Science Interview”, an interview-question guide that has sold over 16,000 copies since it was released last year.
• Created the DataLemur platform for interactively practicing interview questions involving SQL queries.
• Worked as a software engineer at Facebook, Google, and Microsoft.
• Holds a BS in engineering from the University of Virginia.
Today's episode is ideal for folks who are looking to land a data science job for the first time, level-up into a more senior data science role, or perhaps land a data science gig at a new firm.
In this episode, Nick details:
• His top tips for success in data science interviews.
• Common misconceptions about data science interviews.
• How to become comfortable with self-promotion and increase your chances of landing your dream job.
• Strategies for when interviewers ask if you have any questions for them.
• The subject areas and skills you should master before heading into a data science interview.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Open-Ended A.I.: Practical Applications for Humans and Machines
In today's remarkable episode, Dr. Kenneth Stanley uses evidence from his machine learning research on Open-Ended A.I. and evolutionary algorithms to inform how you as a human can achieve great life outcomes.
Ken:
• Co-authored "Why Greatness Cannot be Planned", a genre-defying book that leverages his ML research to redefine how a human can optimally achieve extraordinary outcomes over the course of their lifetime.
• Was until recently Open-Endedness Team Leader at OpenAI, one of the world’s top A.I. research organizations.
• Led Core A.I. Research for Uber A.I.
• With Prof. Gary Marcus and others, founded A.I. startup Geometric Intelligence, which was acquired by Uber.
• Was Professor of Computer Science at the University of Central Florida.
• Holds a dozen patents for ML innovations, including open-ended and evolutionary (especially neuroevolutionary) approaches.
Today’s episode does get fairly deep into the weeds of ML theory at points, so it may be best suited to technical practitioners. That said, the broad strokes of the episode could be not only informative but genuinely life-perspective-altering for any curious listener.
In this episode, Ken details:
• What genetic ML algos are and how they work effectively in practice.
• How the Objective Paradox (that explicitly pursuing an ambitious objective can prevent you from achieving it) is common across ML and human pursuits.
• How an approach called Novelty Search can lead to better outcomes than pursuing an explicit objective, again both for machines and humans.
• What Open-Ended A.I. is and its intimate relationship with AGI, a machine with the same learning potential as a human.
• His vision for how A.I. could transform life for humans in the coming decades.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Data Mesh
"Data Mesh" may be the trendiest term in data science. What is it and how will its Distributed A.I. transform your organization? The founder of the Data Mesh concept herself, Zhamak Dehghani, explains in this episode.
Zhamak:
• Authored the O'Reilly Media book "Data Mesh" and also co-authored an O’Reilly book on software architecture.
• Is newly the CEO and founder of a stealth tech startup reimagining the future of the data developer experience through the Data Mesh.
• Previously worked as a software engineer, software architect, and as a technology incubation director.
• Holds a Bachelor of Engineering in Computer Software from Shahid Beheshti University in Iran and a Master's in Information Technology Management from the University of Sydney in Australia.
Today’s episode should be broadly interesting to anyone who’s keen to get a glimpse of the future of how organizations will work with data and A.I.
In this episode, Zhamak details:
• What a data mesh is.
• Why data meshes are essential today and will be even more so in the coming years.
• The biggest challenges of distributed data architectures.
• Why now was the right time for her to launch her own data mesh startup.
• Her tricks for keeping up with the rapid pace of tech progress.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Upskilling in Data Science and Machine Learning
This week, iconic Stanford University Deep Learning instructor and entrepreneur Kian Katanforoosh details how ML powers his EdTech platform Workera, enabling you to systematically fill gaps in your data science skills.
Kian:
• Is Co-Founder and CEO of Workera, a Bay Area education technology company that has raised $21m in venture capital to upskill workers, with a particular early focus on upskilling technologists like data scientists, software developers, and machine learning specialists.
• Is a lecturer of computer science at Stanford University (specifically, he teaches the extremely popular CS230 Deep Learning course alongside Prof. Andrew Ng, one of the world’s best-known data scientists).
• Was awarded Stanford’s highest teaching award.
• Is also a founding member of DeepLearning.AI, a platform through which he’s taught over three million students deep learning.
• Holds a Master's in Math and Computer Science from CentraleSupélec.
• Holds a Master's in Management Science and Engineering from Stanford.
By and large, today’s episode will appeal to any listener who’s keen to understand the latest in education technology, but there are parts here and there that will specifically appeal to practicing technologists like data scientists and software developers.
In this episode, Kian details:
• What a skills intelligence platform is.
• Four ways that machine learning drives his skills intelligence platform.
• What frameworks and software languages they selected for building their platform and why.
• What he looks for in the data scientists and software engineers he hires.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Geospatial Data and Unconventional Routes into Data Careers
This week, the remarkably well-read Christina Stathopoulos details open-source software for working with geospatial data... as well as how you can navigate your data-career path, no matter what your background.
Christina:
• Has worked at Google for nearly five years in several data-centric roles.
• For the past year, she’s worked as an Analytical Lead for Waze, the popular crowdsourced navigation app owned by Google.
• Is also an adjunct professor at IE Business School in Madrid, where she teaches courses on business analytics, machine learning, data visualization, and data ethics.
• Previously worked as a data engineer at media analytics giant Nielsen.
• Holds a Master's in Business Analytics and Big Data from IE Business School and a Bachelor's in Science, Technology, and Society from North Carolina State University.
Today’s episode will appeal to a broad audience of technical and non-technical listeners alike.
In this episode, Christina details:
• Geospatial data and open-source packages for working with it.
• Her tips for getting a foothold in a data career if you come from an unconventional background.
• Guidance to help women and other underrepresented groups thrive in tech.
• The hard and soft skills most essential to success in a data role today.
• Her #bookaweekchallenge and her top data book recommendations.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Venture Capital for Data Science
Keen to get the inside scoop on the Venture Capital industry and tech startup investing? Sarah Catanzaro, an eloquent VC who specializes in growing the value of Data Science companies, is our guest this week.
Sarah:
• Is a General Partner at Amplify Partners, a Bay Area venture capital firm that specializes in investing in early-stage start-ups that are pioneering new applications of data science, analytics, and machine learning.
• Previously she worked as an investor at Canvas Ventures, as Head of Data at Mattermark, and as an embedded analyst at Palantir.
• She holds a Bachelor of Science degree from Stanford University.
Today’s episode will appeal to anyone who’s keen to understand investing in early-stage start-ups.
In this episode, Sarah details:
• What venture capital is and how it differs from private equity investment.
• How to go from a data science idea to obtaining funding.
• How to pick winning investments.
• What start-ups can do to survive or raise capital in the current economic climate.
• The lessons she’s learned from ten years of experience in the field of data science.
• How to break into the field of venture capital yourself.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
MLOps: Machine Learning Operations
Analogous to the role DevOps plays for software development, MLOps enables efficient ML training and deployment. MLOps expert Mikiko Bazeley is our guide!
Mikiko:
• Is a Senior Software Engineer responsible for MLOps at Intuit Mailchimp.
• Previously held technical roles at a range of Bay Area startups, with responsibilities including software engineering, MLOps, data engineering, data science, and data analytics.
• Is a prominent content creator on MLOps – across live workshops, her YouTube channel, her personal blog, and the NVIDIA blog.
Today’s episode will appeal primarily to hands-on practitioners such as data scientists and software engineers.
In this episode, Mikiko details:
• What MLOps is.
• Why MLOps is critical for the efficiency of any data science team.
• The three most important MLOps tools.
• The four myths holding people back from MLOps expertise.
• The six most essential MLOps skills for data scientists.
• Her productivity tricks for balancing software engineering, content creation, and her athletic pursuits.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.