Today's video introduces the Chain Rule — arguably the single most important differentiation rule for ML. It underpins several of the most ubiquitous ML algorithms, including gradient descent and backpropagation.
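As a quick preview of the idea (the video itself works through it properly), here is a minimal, hypothetical Python sketch that checks the chain rule numerically: for a composite function y = f(g(x)), the derivative dy/dx equals dy/du times du/dx. The example functions f and g below are illustrative choices, not taken from the video.

```python
# Minimal sketch (illustrative functions, not from the video):
# numerically verify the chain rule for y = f(g(x)).

def g(x):
    return 3 * x + 1          # inner function, u = g(x)

def f(u):
    return u ** 2             # outer function, y = f(u)

def derivative(fn, x, h=1e-6):
    """Central-difference estimate of fn'(x)."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

x = 2.0
u = g(x)

# Chain rule: dy/dx = (dy/du) * (du/dx)
chain_rule = derivative(f, u) * derivative(g, x)

# Direct derivative of the composite for comparison
direct = derivative(lambda x_: f(g(x_)), x)

print(chain_rule, direct)     # both come out to roughly 42.0
```

Both estimates agree, which is exactly the point: differentiating the outer and inner functions separately and multiplying gives the derivative of the whole composition — the mechanism gradient descent and backpropagation rely on.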
Gradient descent and backprop will be covered in great detail later in my "Machine Learning Foundations" video series. This video is critical for understanding those applications.
New videos are published every Monday and Thursday as part of my "Calculus for ML" course, which is available on YouTube.
More detail about my broader "ML Foundations" curriculum and all of the associated open-source code is available on GitHub.