Deep Learning is currently a major and growing trend in data analysis and prediction, and the main fuel of a new era of AI. Google, Facebook and others have shown tremendous success in pushing image, object and speech recognition to the next level.
But Deep Learning can also be used for many other things! The list of application domains keeps growing.
Although rooted in neural network research dating back to the 1950s, the current trend in Deep Learning is unstoppable, and new approaches and improvements are presented almost every month.
We would like to meet and discuss the latest trends in Deep Learning, Neural Networks and Machine Learning, and reflect on the latest developments, both in industry and in research.
The Vienna Deep Learning meetup is positioned at the crossover between research and industry, with a focus both on novel methods, which are published at a fast pace, and on interesting new applications in the startup and industry world. We usually have two speakers from academia, startups or industry, complemented by a "latest news and hot topics" section. Occasionally we do tutorials about software frameworks and how to use Deep Learning in practice. Each evening ends with networking and discussions over drinks and snacks.
Please find all slides of our past meetups, links to photos, some video recordings, and a wealth of resources on Deep Learning tutorials and more here: https://github.com/vdlm/meetups
Note that this meetup has an intermediate to advanced level (we gave introductions to Deep Learning and neural networks only in the early sessions, but we try to recap the most important concepts regularly).
Dear Deep Learners,
We start the new year with two exciting topics: Explainable Deep Learning and an extensive report from the number one AI conference, NeurIPS:
Explainable Neural Symbolic Learning
by Ahmad Haj Mosa (AI Researcher, PwC Austria), Fabian Schneider (PoC-Engineer & Researcher, PwC Austria)
Explainable Deep Learning is becoming more and more important. We show that explainability and performance of a model are not in an orthogonal but in a mutually beneficial relationship. Additionally, we propose a neural-symbolic framework for implementing this relationship by using "Object Oriented Learning" and Haskell.
Report from NeurIPS conference
by Rene Donner (Head of Machine Learning & Engineering, Contextflow)
Reporting from the recent NeurIPS conference (formerly NIPS, the largest AI conference in the world), we will present new trends and methods, including advances which are easily applicable in your current projects:
* Interpreting neural networks as differential equations
* New regularization techniques
* DL based image registration
* Distributed training
* Video to Video GANs
* Image generation from text
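As a taste of the first topic above: the "neural networks as differential equations" view observes that a residual block, x_{k+1} = x_k + h * f(x_k), is exactly one Euler step of the ODE dx/dt = f(x), so a very deep stack of small-step residual blocks approximates integrating that ODE. A minimal sketch (our own toy illustration, not code from the talk), using the dynamics f(x) = -x whose exact solution from x(0) = 1 is exp(-t):

```python
import math

def f(x):
    # Toy "layer" defining the dynamics dx/dt = -x.
    return -x

def residual_stack(x, depth, h):
    # Apply `depth` residual blocks with step size h.
    # Each block is one explicit Euler step of dx/dt = f(x).
    for _ in range(depth):
        x = x + h * f(x)
    return x

# Integrate from t=0 to t=1 with 1000 residual blocks of step 0.001.
approx = residual_stack(1.0, depth=1000, h=0.001)
exact = math.exp(-1.0)
print(abs(approx - exact))  # small: the deep stack tracks the ODE solution
```

This correspondence is what lets ODE solvers stand in for very deep networks (the "Neural ODE" line of work presented at NeurIPS).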
After the talks there will be time for networking and discussions. We thank Thomas Faast, Innovation Management at Fachhochschule Technikum Wien, for hosting us and providing drinks and snacks.
If you have hot topics to present or announcements to make, please let us know beforehand.
Looking forward to kicking off the 2019 Deep Learning Meetup season,
Tom, Alex, Rene, Jan