In many situations, it's impractical (or even impossible!) to run A.I. in the cloud. In today's episode, Shirish Gupta details when to run A.I. locally instead and how Neural Processing Units (NPUs) make it practical.
Today's episode is about efficiently designing and deploying A.I. applications that run on the edge. Our guide on that journey is SuperDataScience Podcast fan Shirish! Here's more on him:
• Has spent more than two decades at the global technology juggernaut Dell Technologies, working from its Austin, Texas headquarters.
• Has held senior systems engineering, quality engineering, and field engineering roles.
• For the past three years, has been Director of AI Product Management for Dell’s PC Group.
• Holds a Master’s in Mechanical Engineering from the University of Maryland.
Today’s episode should appeal to anyone who is involved with or interested in real-world A.I. applications.
In this episode, Shirish details:
• What Neural Processing Units (NPUs) are and why they're transforming A.I. on edge devices.
• Four clear, compelling reasons to consider moving A.I. workloads from the cloud to your local device.
• The "A.I. PC" revolution that's bringing A.I. acceleration to everyday laptops and workstations.
• What kinds of Large Language Models are best suited to local inference on A.I. PCs.
• How the Dell Pro AI Studio toolkit will drastically reduce enterprise A.I. deployment time.
• Plenty of real-life A.I. PC examples, including how a healthcare provider achieved physician-level accuracy with a custom vision model.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.