What we're about

We love machine intelligence and deep learning. The field has seen exciting developments, from human-level translation, audio transcription, and object recognition to music composition and general-domain reinforcement learning. We are passionate about learning and sharing knowledge in the community by hosting regular paper-reading sessions and other events.

As a Canberra-based community, we look forward to collaborating with local researchers, organisers, and business owners.

To our sponsors:

Thanks to Trellis Data (http://www.trellisdata.com.au), we have a regular gathering place. They are all about deep learning, and are bringing the best Australian deep learning research into government and everyday life.

Upcoming events (4)

Speeding up the Transformer: Sparse Transformer Layers

Trellis Data Office

The Transformer layer is widely used in recently published papers.

This paper adds a few improvements to the Transformer layer, and the authors call the result the "Sparse Transformer".

It's faster, but can it match or exceed the modelling capability of the dense version?

Paper Link: https://arxiv.org/pdf/1904.10509.pdf
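The core idea is easy to sketch: instead of letting every query attend to all previous positions (quadratic cost), the Sparse Transformer restricts attention to a structured subset of them, such as a strided pattern. Below is a minimal NumPy illustration of a causal strided mask; the function name and parameters are illustrative assumptions for discussion, not the paper's actual implementation.

```python
import numpy as np

def strided_sparse_mask(n: int, stride: int) -> np.ndarray:
    """Causal strided attention mask (illustrative sketch):
    position i attends to (a) the previous `stride` positions and
    (b) every stride-th earlier position, rather than all i+1
    previous positions as in full causal attention."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):  # causal: only j <= i
            local = (i - j) < stride            # recent neighbourhood
            strided = (i - j) % stride == 0     # fixed-stride lookback
            if local or strided:
                mask[i, j] = True
    return mask

m = strided_sparse_mask(16, 4)
print(int(m.sum()))  # 82 attended pairs vs 136 for full causal attention
```

With stride near sqrt(n), each row keeps roughly O(sqrt(n)) entries instead of O(n), which is where the speed-up comes from; whether that sparsity costs modelling capability is the question for the session.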

Deep Dive on Self-supervised Learning

Trellis Data Office

Related papers and materials:

Representation Learning with Contrastive Predictive Coding
https://arxiv.org/abs/1807.03748

Self-Supervised Learning from a Multi-View Perspective
https://openreview.net/pdf?id=-bdp_8Itjwp

Video:
https://www.youtube.com/watch?time_continue=2707&v=rjZCjosEFpI&feature=emb_logo

Is Google really pushing the limits of self-supervised learning?

Trellis Data Office

Meta AI (formerly known as FAIR) has made strong progress in self-supervised learning for computer vision and audio signal processing.

However, Google's DeepMind claims to have set a new record in visual representation learning.

Share your thoughts: is this a true breakthrough or another fad?

Paper title: Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?

Authors: Nenad Tomasev, Ioana Bica, Brian McWilliams, Lars Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrovic

Blog link: https://syncedreview.com/2022/01/17/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-186/

GLIDE: Is this a better way to generate photorealistic images?

Trellis Data Office

Paper title: GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models

Authors: Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, Mark Chen

Link: https://arxiv.org/pdf/2112.10741.pdf

Past events (117)

Magic Cropping for Seamless Photo Editing

Trellis Data Office

Photos (18)