Neural Networks: Zero to Hero - multilayer perceptron.


Details
Building makemore Part 2: MLP
We are following a course created by Andrej Karpathy on building neural networks, from scratch, in code.
We start with the basics of backpropagation and build up to modern deep neural networks, like GPT. In my opinion, language models are an excellent place to learn deep learning: even if you eventually intend to move to other areas like computer vision, most of what you learn will transfer immediately. This is why we dive into and focus on language models.
Prerequisites: solid programming (Python), intro-level math (e.g. derivatives, Gaussians).
---
This is the third event in this series.
We implement a multilayer perceptron (MLP) character-level language model. In this video we also introduce many basics of machine learning (e.g. model training, learning rate tuning, hyperparameters, evaluation, train/dev/test splits, under/overfitting, etc.).
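To give a taste of what we will build, here is a minimal sketch (in plain NumPy, not the course's actual PyTorch code) of one forward pass of an MLP character-level model: embed a context of previous characters, pass the concatenated embeddings through a tanh hidden layer, and produce a probability distribution over the next character. All sizes and names below are illustrative, not taken from the video.

```python
import numpy as np

# Toy vocabulary; the video uses a context of 3 previous characters.
vocab = ['.', 'a', 'b', 'c']
vocab_size = len(vocab)
block_size = 3   # how many previous characters we condition on
emb_dim = 2      # size of each character embedding
hidden = 8       # hidden-layer width

rng = np.random.default_rng(42)
C  = rng.normal(size=(vocab_size, emb_dim))            # embedding table
W1 = rng.normal(size=(block_size * emb_dim, hidden))   # hidden layer weights
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, vocab_size))             # output layer weights
b2 = np.zeros(vocab_size)

def forward(context_ids):
    """One forward pass: context of character ids -> next-character probabilities."""
    emb = C[context_ids].reshape(-1)     # look up and concatenate embeddings
    h = np.tanh(emb @ W1 + b1)           # hidden activations
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

probs = forward([0, 1, 2])  # context: '.', 'a', 'b'
print(probs.shape)          # one probability per character in the vocabulary
```

In the session we will see how to train these parameters with backpropagation and gradient descent, and how to evaluate the model on held-out data.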
If you missed the previous sessions, you can catch up by watching the earlier lessons.
The full course is available at:
https://karpathy.ai/zero-to-hero.html
---
✅ We will follow the course program, discuss the code, explain fragments that are unclear, and learn together.
✅ After the session there will be time for dinner, making new connections in the AI world, and sharing your thoughts.
Of course, you could simply watch the video at home, but learning in a group lets you grasp ideas faster by asking questions, and understand them more deeply by explaining what you know to others.
