What we're about

We love machine intelligence and deep learning. The field has seen exciting developments, from human-level translation, audio transcription, and object recognition to music composition and general-domain reinforcement learning. We are passionate about learning and sharing knowledge in the community by hosting regular paper reading sessions and other events.
As a Canberra-based community, we look forward to collaborating with local researchers, organisers, and business owners.

To our sponsors:

Thanks to Trellis Data, we have a regular gathering place. They are all about deep learning, bringing the best Australian deep learning research into government and everyday life.

Upcoming events (4+)

Paper Discussion: Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields

level 2/53 Blackall St

abstract:
Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF’s learned mapping from spatial coordinates to colors and volumetric density. However, these grid-based approaches lack an explicit understanding of scale and therefore often introduce aliasing, usually in the form of jaggies or missing scene content. Anti-aliasing has previously been addressed by mip-NeRF 360, which reasons about sub-volumes along a cone rather than points along a ray, but this approach is not natively compatible with current grid-based techniques. We show how ideas from rendering and signal processing can be used to construct a technique that combines mip-NeRF 360 and grid-based models such as Instant NGP to yield error rates that are 8%–76% lower than either prior technique, and that trains 22× faster than mip-NeRF 360.

demo: https://jonbarron.info/zipnerf/
paper: https://arxiv.org/pdf/2304.06706.pdf
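
To build some intuition for the aliasing the abstract refers to, here is a small 1D NumPy sketch of our own (purely illustrative, not anything from the Zip-NeRF code): querying a high-frequency signal at one point per pixel produces spurious low-frequency structure, whereas averaging samples across each pixel's footprint, loosely analogous to reasoning about sub-volumes along a cone instead of points along a ray, largely removes it.

# Toy 1D aliasing demo (our own sketch, not the Zip-NeRF algorithm).
import numpy as np

def scene(x):
    # "Scene content" with detail far finer than the render resolution: 300 cycles on [0, 1].
    return np.sin(2 * np.pi * 300 * x)

n_pixels = 32                               # coarse rendering resolution
centers = (np.arange(n_pixels) + 0.5) / n_pixels
footprint = 1.0 / n_pixels                  # interval each pixel is responsible for

# Naive: one query per pixel centre. The 300-cycle signal masquerades as a
# low-frequency pattern with large amplitude (aliasing).
point_sampled = scene(centers)

# Anti-aliased: average stratified samples across each footprint, i.e. prefilter
# the signal so that sub-pixel detail averages out before it is sampled.
offsets = ((np.arange(256) + 0.5) / 256 - 0.5) * footprint
prefiltered = scene(centers[:, None] + offsets[None, :]).mean(axis=1)

print("mean |point sampled| :", np.abs(point_sampled).mean())   # large, spurious structure
print("mean |prefiltered|   :", np.abs(prefiltered).mean())     # much smaller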

Paper Discussion: Shap·E: Generating Conditional 3D Implicit Functions

level 2/53 Blackall St

abstract:
We present Shap·E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap·E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap·E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point·E, an explicit generative model over point clouds, Shap·E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. We release model weights, inference code, and samples at https://github.com/openai/shap-e.

code: https://github.com/openai/shap-e
paper: https://arxiv.org/pdf/2305.02463.pdf
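
As a reading aid, here is a heavily simplified PyTorch sketch of the two-stage recipe the abstract describes: an encoder that maps a 3D asset to the parameters of an implicit function, and a conditional diffusion model trained on those latent parameters. The module names, dimensions, pooling, and noise schedule below are our own placeholders, not the released Shap·E code; the linked repository contains the real models and inference notebooks.

# Toy two-stage sketch (assumptions throughout, not OpenAI's implementation).
import torch
import torch.nn as nn

LATENT_DIM = 1024   # hypothetical size of the implicit-function parameter vector

class AssetEncoder(nn.Module):
    # Stage 1: deterministically map a point-cloud asset to implicit-function parameters.
    def __init__(self, point_dim=6, latent_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(point_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))
    def forward(self, points):                 # points: (B, N, point_dim)
        return self.net(points).mean(dim=1)    # pool over points -> (B, latent_dim)

class LatentDenoiser(nn.Module):
    # Stage 2: predict the noise added to a latent, conditioned on a text embedding.
    def __init__(self, latent_dim=LATENT_DIM, cond_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + cond_dim + 1, 1024), nn.ReLU(),
                                 nn.Linear(1024, latent_dim))
    def forward(self, noisy_latent, t, cond):
        return self.net(torch.cat([noisy_latent, cond, t[:, None]], dim=-1))

# One simplified diffusion training step on encoder outputs:
encoder, denoiser = AssetEncoder(), LatentDenoiser()
points, text_emb = torch.randn(8, 2048, 6), torch.randn(8, 512)   # stand-in batch
with torch.no_grad():
    latent = encoder(points)                   # stage-1 encoder is frozen in stage 2
t = torch.rand(8)                              # diffusion time in [0, 1]
noise = torch.randn_like(latent)
noisy = torch.sqrt(1 - t)[:, None] * latent + torch.sqrt(t)[:, None] * noise
loss = ((denoiser(noisy, t, text_emb) - noise) ** 2).mean()
loss.backward()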

Paper Discussion: On the Spectral Bias of Neural Networks

level 2/53 Blackall St

Abstract from the paper:
"Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy. In this work we present properties of neural networks that complement this aspect of expressivity. By using tools from Fourier analysis, we highlight a learning bias of deep networks towards low frequency functions – i.e. functions that vary globally without local fluctuations – which manifests itself as a frequency-dependent learning speed. Intuitively, this property is in line with the observation that over-parameterized networks prioritize learning simple patterns that generalize across data samples. We also investigate the role of the shape of the data manifold by presenting empirical and theoretical evidence that, somewhat counter-intuitively, learning higher frequencies
gets easier with increasing manifold complexity."

paper: https://arxiv.org/pdf/1806.08734.pdf
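
A quick way to see the frequency-dependent learning speed described above is a toy experiment (our own setup and hyperparameters, not the paper's): fit a small MLP to the sum of a low- and a high-frequency sinusoid and watch which component of the residual shrinks first. The expected outcome is that the low-frequency residual drops much earlier in training.

# Toy spectral-bias demo (illustrative setup, not from the paper).
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(0, 1, 512).unsqueeze(1)
low = torch.sin(2 * torch.pi * 2 * x)          # 2-cycle component
high = torch.sin(2 * torch.pi * 30 * x)        # 30-cycle component
y = low + high

net = nn.Sequential(nn.Linear(1, 256), nn.Tanh(),
                    nn.Linear(256, 256), nn.Tanh(),
                    nn.Linear(256, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5001):
    opt.zero_grad()
    pred = net(x)
    loss = ((pred - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        # Unnormalised projection of the residual onto each component:
        # whichever stays large has not been learned yet.
        res = (y - pred).detach().squeeze()
        err_low = (res * low.squeeze()).mean().abs().item()
        err_high = (res * high.squeeze()).mean().abs().item()
        print(f"step {step:5d}   low-freq residual {err_low:.3f}   high-freq residual {err_high:.3f}")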

Paper Discussion: LoRA: Low-Rank Adaptation of Large Language Models

level 2/53 Blackall St

Fine-tuning a pre-trained model on domain-specific data can often improve its performance. LoRA provides an efficient way to fine-tune a model. Let's dig into it.

LoRA paper: https://arxiv.org/pdf/2106.09685.pdf
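
The core idea is simple to sketch: keep the pretrained weight matrix W frozen and learn only a low-rank update BA, so fine-tuning optimises far fewer parameters while the effective weight becomes W + (alpha / r) * BA. Below is a minimal PyTorch illustration of that idea (our own sketch with illustrative rank and scaling values, not the authors' released code).

# Minimal LoRA-style linear layer (illustrative sketch).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():              # freeze the pretrained weights
            p.requires_grad_(False)
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)   # small random init
        self.B = nn.Parameter(torch.zeros(out_f, r))          # zero init: update starts at zero
        self.scale = alpha / r

    def forward(self, x):
        # Frozen base projection plus the scaled low-rank update.
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Wrap an existing projection and fine-tune only A and B.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")           # 12,288 instead of 590,592

Zero-initialising B means the wrapped layer starts out identical to the pretrained one, so training begins exactly from the pre-trained model's behaviour.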

Past events (175)

Paper Discussion: PaLM-E: An Embodied Multimodal Language Model

level 2/53 Blackall St