What we're about

A meetup for academics, professionals, and hobbyists interested in applications and the latest developments in machine learning, and in AI more broadly. We talk about:

• Computer vision, speech recognition, text mining, generative design

• New papers that we're excited about, and software that we use

• Cool applications of AI & machine learning, and how we made them

We strive to focus on the science & technology side, as opposed to the commercial side.

We typically meet the first Monday of every month.

We're always looking for interesting presentations. If you have a topic you want to talk about, anything from 10 to 45 minutes long, please email gtrent@gmail.com. Talks are explicitly *not* commercial: we organize this meetup because we are passionate about AI & ML, not to promote a product or service.

If your organization would like to host us or sponsor food & drink, please let us know.

Our official Twitter hashtag is #MLBerlin (https://twitter.com/search?q=%23MLBerlin).

VISIT US AT: http://machinelearning.berlin/

Upcoming events (3)

Compression & efficient processing of DNNs; more

Needs a location

Talk 1: Compression & efficient processing of DNNs

Speaker: Simon Wiedemann (Fraunhofer HHI)

Abstract: Part of the success of deep learning is due to DNN models becoming larger and larger over time, allowing them to solve more complex problems. However, this comes at the cost of consuming enormous amounts of compute power, so much so that progress along current lines is rapidly becoming economically, technically, and environmentally unsustainable. In this talk I will introduce some of the most powerful techniques for reducing both the memory and the computational complexity of DNNs. I will also talk about how one can reduce the information content of DNN parameters by applying information-theoretic principles, and subsequently compress them to minimal size using the latest compression techniques from video coding.

Bio: Simon has worked for the past 4 years at Fraunhofer HHI as a project manager and research associate on topics related to efficient communication and processing of DNNs. He has published several papers on this topic and is one of the main contributors to DeepCABAC, a compression engine for DNNs that has been selected as the core technology for MPEG's upcoming standard for neural network compression. He is currently finishing his PhD and will start his own company in October 2020, helping reduce the cost of running large DNN models in production by providing cutting-edge optimization tools for DNNs.

--

Talk 2: TBD
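The abstract above mentions reducing the information content of DNN parameters and then compressing them. As a rough illustration of that general idea (fewer distinct parameter values means lower entropy and a smaller compressed size), here is a minimal sketch that quantizes a weight matrix and passes the result through a general-purpose entropy coder. The layer shape, 4-bit grid, and use of zlib are assumptions for illustration; this is not the DeepCABAC pipeline discussed in the talk.

```python
# Minimal sketch: uniform quantization of DNN weights followed by
# general-purpose entropy coding. Illustrates the general idea only,
# not the DeepCABAC algorithm selected for the MPEG standard.
import zlib
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 512x512 weight matrix standing in for a real layer.
weights = rng.normal(0.0, 0.05, size=(512, 512)).astype(np.float32)

def quantize(w, num_levels=16):
    """Map weights onto num_levels uniformly spaced values (a 4-bit grid)."""
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (num_levels - 1)
    indices = np.round((w - w_min) / step).astype(np.uint8)  # integer code per weight
    dequantized = w_min + indices * step                     # reconstructed weights
    return indices, dequantized

indices, w_hat = quantize(weights)

raw_bytes = weights.tobytes()
coded_bytes = zlib.compress(indices.tobytes(), level=9)

print(f"float32 size:    {len(raw_bytes) / 1e6:.2f} MB")
print(f"compressed size: {len(coded_bytes) / 1e6:.2f} MB")
print(f"max abs error:   {np.abs(weights - w_hat).max():.4f}")
```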

ML Group Berlin - topics TBD

Needs a location

Talk 1: TBD

--

Talk 2: TBD

ML Group Berlin - topics TBD

Needs a location

Talk 1: TBD

--

Talk 2: TBD

Past events (71)

Photos (35)