Workbook
Introduction to Neural Networks
Build a complete mental model of deep learning from the ground up — from the 1943 mathematical neuron to modern training techniques. Master the architecture, the mathematics, and the practical toolkit every deep learning practitioner needs.
Chapters
Foundations of Neural Networks
From a 1943 dinner-table conversation to the architecture powering today's AI systems — this chapter builds neural networks from first principles. We trace the 80-year history, construct the artificial neuron mathematically, and develop the multi-layer perceptron that overcomes the limits of a single neuron. Along the way we examine activation functions, the bias term, and why deep learning is a distinct discipline within machine learning.
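As a preview of the neuron construction this chapter works through, here is a minimal sketch of an artificial neuron: a weighted sum of inputs plus a bias term, passed through a sigmoid activation. The function name and sample values are illustrative only.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus the bias term...
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...squashed into (0, 1) by the sigmoid activation function
    return 1.0 / (1.0 + math.exp(-z))

# Two inputs, two weights, one bias: z = 0.4*1.0 + (-0.2)*0.5 + 0.1 = 0.4
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))  # sigmoid(0.4), roughly 0.599
```

Without the bias term, the neuron's output would be forced through sigmoid(0) = 0.5 whenever all inputs are zero; the bias lets it shift its activation threshold, which is why the chapter treats it as a distinct component.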
Training Neural Networks
This chapter covers the full training loop: measuring error with loss functions, minimizing it through gradient descent and backpropagation, and accelerating convergence with adaptive optimizers like Adam. We then tackle the practical challenges every practitioner faces — vanishing gradients, overfitting, and proper data splitting — and the techniques that address them: dropout, batch normalization, regularization, and early stopping.
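The loss-then-gradient-descent loop described above can be sketched in a few lines. This toy example fits a single weight to data by plain (non-adaptive) gradient descent on mean squared error; the learning rate, epoch count, and data are illustrative, and the gradient is computed by hand rather than by backpropagation through layers.

```python
# Toy training loop: fit y = w * x by gradient descent on MSE.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated with true w = 2
w = 0.0    # initial parameter
lr = 0.05  # learning rate (step size)

for epoch in range(200):
    # Loss L = mean((w*x - y)^2); its derivative dL/dw = mean(2*(w*x - y)*x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Step against the gradient to reduce the loss
    w -= lr * grad

print(round(w, 3))  # converges to 2.0 on this data
```

Optimizers like Adam replace the fixed `lr * grad` step with a per-parameter step scaled by running estimates of the gradient's mean and variance, which is what accelerates convergence on real networks.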