Deep Learning Representations by Yoshua Bengio


Date: November 22nd, 2012
Time: 5:30pm - 7:30pm
Location: 1253 McGill College, Suite 150, Montreal QC

This presentation will be in English; please feel free to ask your questions in French or English.

Speaker: Yoshua Bengio

PhD in CS from McGill University, Canada, 1991, in the areas of HMMs, recurrent and convolutional neural networks, and speech recognition. Post-doc 1991-1992 at MIT with Michael Jordan. Post-doc[masked] at Bell Labs with Larry Jackel, Yann LeCun, and Vladimir Vapnik. Professor at U. Montreal (CS & operations research) since 1993. Canada Research Chair in Statistical Learning Algorithms since 2000. Fellow of the Canadian Institute for Advanced Research since 2005. NSERC industrial chair since 2006. Co-organizer of the Learning Workshop since 1998. NIPS Program Chair in 2008, NIPS General Chair in 2009. Urgel-Archambault Prize in 2009. Fellow of CIRANO. Current or previous associate/action editor for the Journal of Machine Learning Research, IEEE Transactions on Neural Networks, Foundations and Trends in Machine Learning, Computational Intelligence, and Machine Learning. Author of 2 books and over 200 scientific papers, with over 12000 Google Scholar citations as of 2012.


Deep Learning of Representations


Yoshua Bengio will give an introduction to the area of Deep Learning, to which he has been one of the leading contributors. The field aims at learning representations of data at multiple levels of abstraction. Current machine learning algorithms are highly dependent on feature engineering (manual design of the representation fed as input to a learner), and it would be of high practical value to design algorithms that can do good feature learning on their own. The ideal features disentangle the unknown underlying factors that generated the data. It has been shown, both through theoretical arguments and empirical studies, that deep architectures can generalize better than architectures that are too shallow. Since a 2006 breakthrough, a variety of learning algorithms have been proposed for deep learning and feature learning, mostly based on unsupervised learning of representations, often by stacking single-level learning algorithms. Several of these algorithms are based on probabilistic models, but interesting challenges arise in handling the intractability of the likelihood itself, and alternatives to maximum likelihood have been successfully explored, including criteria based on purely geometric intuitions about the manifolds and the concentration of probability mass that characterize many real-world learning tasks. Representation-learning algorithms are being applied to many tasks in computer vision, natural language processing, speech recognition and computational advertising, and have won several international machine learning competitions, in particular thanks to their capacity for transfer learning, i.e., to generalize to new settings and classes.
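
The abstract's key recipe, stacking single-level learners to build a deep representation, is concrete enough to sketch in code. Below is a minimal toy version using greedy layer-wise training of denoising autoencoders in plain numpy; the class name, layer sizes, corruption level, and learning rate are illustrative assumptions, not Bengio's actual implementation.

```python
# Minimal sketch: greedy layer-wise stacking of denoising autoencoders.
# Everything below (names, sizes, rates) is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """One level: corrupt the input, encode it, and learn (with tied
    weights) to reconstruct the clean input from the code."""

    def __init__(self, n_in, n_hidden, corruption=0.3, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_in, n_hidden))
        self.b_h = np.zeros(n_hidden)  # hidden-unit biases
        self.b_v = np.zeros(n_in)      # reconstruction biases
        self.corruption = corruption
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b_h)

    def train_step(self, x):
        # Masking noise: randomly zero a fraction of the inputs.
        mask = rng.random(x.shape) > self.corruption
        h = self.encode(x * mask)
        x_hat = sigmoid(h @ self.W.T + self.b_v)
        # Gradients of the mean squared reconstruction error.
        err = x_hat - x
        d_v = err * x_hat * (1.0 - x_hat)       # at the reconstruction
        d_h = (d_v @ self.W) * h * (1.0 - h)    # backpropagated to the code
        self.W -= self.lr * ((x * mask).T @ d_h + d_v.T @ h) / len(x)
        self.b_v -= self.lr * d_v.mean(axis=0)
        self.b_h -= self.lr * d_h.mean(axis=0)
        return (err ** 2).mean()

# Greedy layer-wise pretraining: train level 1 on the raw data, then
# train level 2 on level 1's codes, and so on.
X = rng.random((256, 64))          # toy unlabeled data
sizes = [64, 32, 16]
layers, inp = [], X
for n_in, n_hid in zip(sizes[:-1], sizes[1:]):
    dae = DenoisingAutoencoder(n_in, n_hid)
    for _ in range(50):
        dae.train_step(inp)
    inp = dae.encode(inp)          # codes become the next level's input
    layers.append(dae)

print("deep representation shape:", inp.shape)   # (256, 16)
```

Each level sees only the codes produced by the level below it, which is what makes the procedure greedy and lets it run entirely on unlabeled data; a supervised learner can then be trained on the top-level codes.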

 

Agenda for this event:
5:30 - 6:00 pm Registration, networking
6:00 - 6:05 pm Welcome and intro
6:05 - 6:45 pm Deep Learning Representations by Yoshua Bengio
6:45 - 7:00 pm Q&A

Please RSVP to this meetup with your full name; this will significantly speed up the registration process on the day of the event.

Google volunteers wearing Google Montreal t-shirts will be present at this event to answer any questions you may have.



 


  • Homam B.

    Cool presentation. Very well explained; however, I was hoping to see some results from applying this model in industry.

    I worked with the tech group at EA and I'm interested in seeing this model applied in game applications.

    November 23, 2012

    • Alexandre A.

      Interesting! Do you happen to know of any paper or information on where they apply it and on their results? (awesome presentation btw)

      1 · November 23, 2012

    • Yoshua B.

      There is already a first paper out, on match-making, by O. Delalleau (he did his PhD in my lab and now works for Ubisoft): http://www.iro.umontr...

      November 23, 2012

  • JC V.

    Very interesting presentation! Thanks to Mr. Bengio and to Google!
    Now I need to go deeper in my learning of Deep Learning :o)

    November 23, 2012

  • A former member

    Very interesting lecture! Thank you!

    November 23, 2012

  • Claude C.

    Where can we find the slides from the talk?

    1 · November 23, 2012

  • Sacha L.

    Really excellent talk! When Yoshua explains, all the concepts are clear. Deep learning, yes, but deep understanding also. Thank you, Yoshua.

    1 · November 23, 2012

    • Carole

      I agree with Sacha!

      1 · November 23, 2012

    • Claude C.

      Well said! Deep learning and deep teaching!

      November 23, 2012

  • A former member

    Very interesting lecture. I found interesting ideas and got a better understanding of deep learning. Machine learning is a very interesting theme and is very relevant to augmented reality. Yoshua Bengio showed an interesting contrast between approaches: solving programming questions from small to big, versus machine learning from big to small. The split into levels is an interesting approach. Thanks to this lecture I found interesting ideas that I can use in my work.
    But I have some questions. Is it possible to choose the levels automatically, or is human help required? Sometimes finding a good criterion is the real question.

    November 23, 2012

  • Claude C.

    An excellent overview of deep learning by one of the pioneers of the field.

    November 22, 2012

  • Adrian T.

    Great presentation, and the topics covered were very interesting. It showed us formulae and algorithms used in Bengio's research that explained how deep learning helps us find the right image in Google, as well as the workings behind neural networks and their practical applications, such as voice recognition and reducing the errors made when saying a command to your phone. It got me thinking about ideas that I thought were never feasible without the research done on deep learning. Great job! :)

    November 22, 2012

  • Yoshua B.

    Excellent ;-)

    November 22, 2012

  • Lukas T.

    It was nice, although I did not learn much about how to train deep belief networks. A significant part of the presentation was spent on overly simple topics, and then the rest was quite advanced; I was missing something in between. Perhaps one or two slides on perceptrons and neural networks would have been nice. It was more about applications than theory. So: a general review, good for a general audience, not so much for ML researchers. That said, I loved the dropout trick (sketched briefly after the comments below) and am looking forward to trying it on my own algorithms.

    November 22, 2012

  • Corey C.

    Very interesting. Thanks for the great talk!

    November 22, 2012
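
One technical point from the comments above is worth unpacking: the dropout trick Lukas mentions. Below is a minimal sketch of the idea as commonly described (randomly zeroing hidden units during training, scaling activations at test time); the function name, the numpy setup, and the 0.5 rate are illustrative assumptions, not anything from the talk itself.

```python
# Minimal sketch of the dropout trick: during training, each hidden unit
# is zeroed with probability p, so no unit can rely on specific others
# being present; at test time all units are kept and the activations are
# scaled to match their expected training-time value.
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, train=True):
    """Apply dropout to a batch of hidden activations h."""
    if train:
        mask = rng.random(h.shape) >= p   # keep each unit with prob 1 - p
        return h * mask                   # dropped units output exactly 0
    return h * (1.0 - p)                  # test time: scale, do not drop

h = rng.random((4, 8))                    # toy hidden-layer activations
print(dropout(h, train=True))             # about half the entries zeroed
print(dropout(h, train=False))            # all entries, scaled by 0.5
```

The random mask acts as a strong regularizer: because any unit may vanish on a given training step, hidden units cannot co-adapt to each other.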
