We're happy to co-host Seth for the second part in his ongoing Deep Learning Series!
Learning rates. Momentum. Adding hidden layers. We’ve seen these terms thrown around when learning about neural nets. But what do they really mean? How can we make these notions precise and code them up? In this workshop, we’ll extend the neural-net-from-scratch example we began in the first Meetup of this year. We’ll learn a bunch of techniques to maximize the effectiveness of our neural nets, discovering which tricks matter and which don’t. Attendees will leave feeling more confident that they know “what is going on” with the tricks used to tune neural nets.
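To give a flavor of what “making these notions precise” looks like, here is a minimal sketch (not taken from the workshop materials) of gradient descent with momentum, minimizing the toy function f(w) = w². The function name, the learning rate `lr`, and the momentum coefficient `beta` are illustrative choices, not anything prescribed by the series.

```python
# A minimal, illustrative sketch of gradient descent with momentum on
# f(w) = w^2, whose minimum is at w = 0. All names and hyperparameter
# values here are assumptions for the example, not workshop code.
def gd_momentum(grad, w0, lr=0.1, beta=0.9, steps=200):
    w, v = w0, 0.0
    for _ in range(steps):
        # v accumulates an exponentially decaying sum of past gradients,
        # so consistent gradient directions build up "velocity".
        v = beta * v + grad(w)
        # Step against the accumulated direction rather than the raw gradient.
        w = w - lr * v
    return w

# Gradient of f(w) = w^2 is 2w; starting from w = 5.0 the iterates
# spiral in toward the minimum at 0.
w_final = gd_momentum(lambda w: 2 * w, w0=5.0)
print(w_final)
```

With `beta = 0`, this reduces to plain gradient descent; increasing `beta` lets the iterate coast through shallow regions but can also cause it to overshoot and oscillate, which is exactly the kind of trade-off the workshop examines.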
Seth has done quantitative consulting and data science for 5 years, in the consulting industry, internally for Capital One, and at Trunk Club, and is now Senior Data Scientist at Metis. A former aspiring mathematician, he has recently endeavored to understand the math behind neural nets and backpropagation, and, finding no satisfactory, mathematically rigorous explanations online (and not yet owning "Machine Learning Refined (http://mlrefined.com/)"), he has started writing his own at