Code and Vision meet Art and Efficiency


Details
We have a broad range of talks this month! AutoCodeRover is an exciting code tool (launched soon after the devin.ai demos); ReFT and LLM2LLM are recent papers; Google's Gemini-Flash & PaliGemma have been released; and we'll also have a fireside chat with a well-known AI artist.
"Fireside Chat with an Artist" - @niceaunties
Stable Diffusion and its offspring have captured the public's imagination with their output. But, no surprise really, the most interesting images and videos are coming from artists who have been quick to adopt this new technology. We'll be hosting a chat with a well-known artist based in Singapore (NB: no cameras allowed for this segment).
"AutoCodeRover: Autonomous Software Engineering" - Yuntong Zhang
In this talk, Yuntong will discuss recent work on automatically resolving GitHub issues using language-model agents. AutoCodeRover is a tool that takes in a reported issue and attempts to automatically generate a patch that resolves it. The talk will focus on the design considerations behind AutoCodeRover, and how it brings techniques from software debugging into LLM-based programming frameworks.
"How to Learn More Efficiently? Exploring ReFT and LLM2LLM" - Fangyuan Yu
ReFT achieves substantial gains in parameter efficiency, outperforming conventional PEFT methods by roughly a factor of 10. LLM2LLM uses targeted iterative data augmentation to address data-distribution imbalances, leading to a similar ten-fold improvement in data efficiency. We'll also share early experiments from our work-in-progress paper that builds on these two approaches.
"Vision Models (VLMs) from Google" - Martin Andrews
Google I/O seemed to be almost entirely GenAI-based this year. Martin will look at Gemini-Flash (a much faster and cheaper Gemini model) and PaliGemma (an open-source vision model) to see where we're heading as token costs go down and capabilities increase.