About us
From the curious beginner to the advanced practitioner, this is a group for anyone interested in learning and sharing ideas in AI. We run events to get people together and to support all the great AI initiatives taking place in Queensland. Proudly supported by the Queensland AI Hub.
Follow us on Twitter
Join us on Facebook
Reach out on LinkedIn
Upcoming events (2)

AI PAPER READING CLUB || April Edition || Scaling Agent Systems
Microsoft HQ - Brisbane, Level 28, 400 George Street, Brisbane, AU

AI Paper Reading Club - Monthly Meetup
Join us for our monthly AI Paper Reading Club, a relaxed and welcoming space for anyone curious about the cutting edge of machine learning and artificial intelligence. Whether you're here for deep dives into the math behind the models or prefer to focus on the practical impact of applied research, this event has something for you.
Each session features a volunteer presenter who picks a recent or classic paper to unpack, ranging from rigorous theoretical work to industry-shaping applications.
Bring your questions, your insights, or just your curiosity. There’s no pressure to present, and all backgrounds are welcome.
We believe in learning together at our own pace: no gatekeeping, no ego, just AI enthusiasts helping each other grow.

Paper Title: "Towards a Science of Scaling Agent Systems"
Paper Link: https://arxiv.org/pdf/2512.08296v2
Presenter: Conor O'Neill
Abstract: "Agents, language model (LM)-based systems that are capable of reasoning, planning, and acting, are becoming the dominant paradigm for real-world AI applications. Despite this widespread adoption, the principles that determine their performance remain underexplored, leaving practitioners to rely on heuristics rather than principled design choices. We address this gap by deriving quantitative scaling principles for agent systems. We first formalize a definition for agentic evaluation and characterize scaling laws as the interplay between agent quantity, coordination structure, model capability, and task properties. We evaluate this across four diverse benchmarks: Finance-Agent, BrowseComp-Plus, PlanCraft, and Workbench, spanning financial reasoning, web navigation, game planning, and workflow execution. Using five canonical agent architectures (Single-Agent System and four Multi-Agent Systems: Independent, Centralized, Decentralized, Hybrid), instantiated across three LLM families, we perform a controlled evaluation spanning 180 configurations, standardizing tools, prompt structures, and token budgets to isolate architectural effects from implementation confounds. We derive a predictive model using empirical coordination metrics, including efficiency, overhead, error amplification, and redundancy, that achieves cross-validated R² = 0.524, enabling prediction on unseen task domains by modeling task properties rather than overfitting to a specific dataset. We identify three dominant effects: (1) a tool-coordination trade-off: under fixed computational budgets, tool-heavy tasks suffer disproportionately from multi-agent overhead. (2) a capability saturation: we observe that coordination yields diminishing or negative returns (β̂ = −0.404, p < 0.001) once single-agent baselines exceed an empirical threshold of ∼45%.
(3) topology-dependent error amplification: independent agents amplify errors 17.2× through unchecked propagation, while centralized coordination contains this to 4.4×. Crucially, coordination benefits are task-contingent. Centralized coordination improves performance by 80.8% on parallelizable tasks like financial reasoning, while decentralized coordination excels on dynamic web navigation (+9.2% vs. +0.2%). Yet for sequential reasoning tasks, every multi-agent variant we tested degraded performance by 39–70%. The framework predicts the optimal coordination strategy for 87% of held-out configurations. Out-of-sample validation on GPT-5.2, released after our study, achieves MAE=0.071 and confirms four of five scaling principles generalize to unseen frontier models, providing a quantitatively predictive framework for agentic scaling based on measurable task properties."

We thank Microsoft for generously sponsoring the venue for this event. Their support makes it possible for us to bring the AI community together, share knowledge, and grow as a collective.
Please RSVP via our Humanitix link to confirm your attendance.
30 attendees
AI Agent Autopsy - Frameworks & Harnesses
The Precinct, 315 Brunswick St, Fortitude Valley, QLD, AU

AI Agents are being discussed with reckless abandon: they are touted as both the solution to all problems and the end of humanity. Every day we wake to news of phenomenal new abilities or corporate disasters.
All of which makes it difficult to understand what they really are, and where they should be used.
In this talk, John Hawkins (Chief AI Officer and Machine Learning Researcher) will take us through the spectrum of AI Agents. We will discuss the historical ideas in AI research that led to the current notion of an AI Agent. We will then pull apart the programming pieces and look at the innards of an AI Agent to understand how they are built and where their abilities and flaws come from. We will discuss the reasons to build an agent and the potential issues you will encounter. Expect to hear about a range of agentic coding frameworks such as LangGraph, Pydantic AI, CrewAI and the OpenAI Agents SDK.
We finish the talk by diving into the innards of the more autonomous agentic applications like Claude Code and OpenClaw to pull apart the components that make up an Agent Harness.
About John:
John Hawkins is Chief AI Officer at Intersect AI, where he helps organisations unlock value through bespoke AI solutions.
John brings over 20 years of experience applying machine learning, statistics, and data science across a wide range of industries - from finance and insurance to media and biomedical research.
He has built real-time predictive systems for customer engagement, fraud detection, and scientific applications, and has published 30+ peer-reviewed research papers.
He is the author of the upcoming book Data Science First: Building AI Powered Applications With Language Models (Wiley, 2026), and previously wrote Getting Data Science Done.
John also contributes to ongoing research with the Pingla Institute and the Transitional AI Research Group at UNSW.

51 attendees
Past events (138)


