<div>
<div>Chapter 1: Optimization and neural networks</div>
<div>Subtopics:</div>
<div>How to read the book</div>
<div>Introduction to the book</div>
<div><br></div>
<div>Chapter 2: Hands-on with One Single Neuron</div>
<div>Subtopics:</div>
<div>Overview of optimization</div>
<div>A definition of learning</div>
<div>Constrained vs. unconstrained optimization</div>
<div>Absolute and local minima</div>
<div>Optimization algorithms with a focus on Gradient Descent</div>
<div>Variations of Gradient Descent (mini-batch and stochastic)</div>
<div>How to choose the right mini-batch size</div>
<div><br></div>
<div>Chapter 3: Feed Forward Neural Networks</div>
<div>Subtopics:</div>
<div>A short introduction to matrix algebra</div>
<div>Activation functions (identity, sigmoid, tanh, swish, etc.)</div>
<div>Implementation of one neuron in Keras</div>
<div>Linear regression with one neuron</div>
<div>Logistic regression with one neuron</div>
<div><br></div>
<div>Chapter 4: Regularization</div>
<div>Subtopics:</div>
</div>