We're developers and researchers interested in Machine Learning, Probabilistic Graphical Models, Neural Networks, and Natural Language Processing. This meetup brings together machine learning practitioners and researchers to discuss our projects and hear about each other's work.
Title: Sparsity in the neocortex, and its implications for machine learning
Most deep learning networks today rely on dense representations. This is in stark contrast to our brains, in which activity is extremely sparse. In this talk, I will first discuss what is known about the sparsity of activations and connectivity in the neocortex. I will also summarize new experimental data on active dendrites, branch-specific plasticity, and structural plasticity, each of which has surprising implications for how we think about sparsity. In the second half of the talk, I will discuss how these insights from the brain can be applied to practical machine learning applications. I will show how sparse representations can give rise to improved robustness, continuous learning, powerful unsupervised learning rules, and improved computational efficiency.
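To make the contrast with dense representations concrete: one common way to enforce sparse activations in a network layer (explored in Numenta's work on sparse representations) is a k-winners-take-all step, which keeps only the k strongest activations and zeroes the rest. A minimal NumPy sketch, purely illustrative (the function name and values are hypothetical, not from the talk):

```python
import numpy as np

def k_winners(x, k):
    """Keep the k largest activations; zero out everything else (k-winners-take-all)."""
    winners = np.argpartition(x, -k)[-k:]  # indices of the k largest values
    out = np.zeros_like(x)
    out[winners] = x[winners]
    return out

dense = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.8])
sparse = k_winners(dense, k=2)
# only the two largest activations (0.9 and 0.8) remain nonzero
```

With k much smaller than the layer width, most units stay silent for any given input, loosely mirroring the sparse activity levels observed in the neocortex.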
Subutai Ahmad is the VP of Research at Numenta, with experience in computational neuroscience, deep learning, computer vision, and entrepreneurship. His current research is focused on creating a detailed theory of the neocortex and applying its concepts to machine learning. Subutai holds an A.B. in Computer Science from Cornell University and a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign.
Here's a neuroscience-focused paper:
Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex
Here's a more ML-focused paper:
How Can We Be So Dense? The Benefits of Using Highly Sparse Representations