About us
From the curious beginner to the advanced practitioner, this is a group for anyone interested in learning and sharing ideas in AI. We run events to get people together and to support all the great AI initiatives taking place in Queensland. Proudly supported by the Queensland AI Hub.
Featured event

Why Speech-to-Speech APIs Fail When Voice AI Needs to Evaluate
Google shipped Gemini Live. OpenAI launched the Realtime API. The pitch is seductive: stream audio in, get audio back. One WebSocket, sub-second latency. But what happens when your AI needs to evaluate a human, not just chat with them?
Join Niraj Kothawade as he walks through the architecture decisions behind MasterPrep AI — a voice AI platform that interviews and assesses candidates in real-time for enterprise hiring — and why he rejected speech-to-speech in favour of a server-side orchestration pipeline. Niraj will cover how state machines detect candidate behavioural patterns in real-time, why LLMs are unreliable at enforcing hard limits, and how the pipeline enables capabilities like AI plagiarism detection that speech-to-speech makes impossible.
In this talk, you will learn:
- Why speech-to-speech APIs break down when voice AI needs to evaluate, not just converse
- How server-side state machines detect behavioural patterns like Solution Traps and Logic Gaps in real-time
- The cost reality of audio tokens vs text tokens at scale
- What the pipeline unlocks that speech-to-speech can't — structured feedback, AI plagiarism detection, and deterministic control
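To make the architectural contrast concrete, here is a minimal sketch of the server-side orchestration idea the talk describes. All names (`InterviewStateMachine`, `pipeline_turn`, the `max_followups` limit) are illustrative assumptions, not MasterPrep AI's actual API; the point is that hard limits are enforced by deterministic server code between the speech-to-text and LLM steps, never delegated to the model.

```python
from dataclasses import dataclass

@dataclass
class InterviewStateMachine:
    """Hypothetical server-side state machine for one interview session."""
    max_followups: int = 3       # hard limit enforced in code, not by the LLM
    followups_asked: int = 0
    state: str = "questioning"

    def on_transcript(self, text: str) -> str:
        """Advance the interview based on the candidate's transcribed answer."""
        if self.state == "questioning":
            if self.followups_asked >= self.max_followups:
                self.state = "wrap_up"       # deterministic transition
                return "move_to_next_topic"
            self.followups_asked += 1
            return "ask_followup"
        return "end_interview"

def pipeline_turn(audio_chunk: bytes, sm: InterviewStateMachine) -> str:
    # 1. STT: audio -> text (stubbed here as a plain decode)
    transcript = audio_chunk.decode("utf-8", errors="ignore")
    # 2. The state machine decides the next action *before* any LLM call
    action = sm.on_transcript(transcript)
    # 3. An LLM would phrase the response; 4. TTS would voice it (both stubbed)
    return action
```

Because the transcript passes through server code at every turn, the same hook can feed evaluation logic (pattern detection, plagiarism checks) that a closed speech-to-speech loop never exposes.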
About Niraj
Niraj Kothawade is a product leader and founder of MasterPrep AI, a voice AI platform for candidate interviews and assessment. He has 15+ years of experience building products at scale across companies including Deputy, Flipkart, and Yahoo. Find him on LinkedIn and X.
Upcoming events (2)

AI Agent Autopsy - Frameworks & Harnesses
The Precinct, 315 Brunswick St, Fortitude Valley, QLD, AU
AI Agents are being discussed with reckless abandon: they are touted as both the solution to all problems and the end of humanity. Every day we wake to news of phenomenal new abilities or corporate disasters.
All of which makes it difficult to understand what they really are, and where they should be used.
In this talk, led by John Hawkins (Chief AI Officer and Machine Learning Researcher), we will go through the spectrum of AI Agents. We will discuss the historical ideas in AI research that led to the current notion of an AI Agent. We will then pull apart the programming pieces and look at the innards of an AI Agent to understand how they are built and where their abilities and flaws come from. We will discuss why you might build an agent and the potential issues you will encounter. Expect to hear about a range of agentic coding frameworks such as LangGraph, Pydantic AI, Crew AI and the OpenAI Agent SDK.
We finish the talk by diving into the innards of more autonomous agentic applications like Claude Code and OpenClaw to pull apart the components that make up an Agent Harness.
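The core loop those frameworks wrap can be sketched in a few lines. This is a framework-free illustration under stated assumptions: the "model" is a stub that stands in for a real LLM call, and all names (`fake_model`, `run_agent`, the `TOOLS` table) are hypothetical, not the API of any framework mentioned above.

```python
def fake_model(history):
    """Stand-in for an LLM: decides the next step from the conversation so far."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "add", "args": {"a": 2, "b": 3}}     # request a tool call
    return {"final": f"The answer is {history[-1]['content']}"}

# The harness owns the tools and the loop; the model only proposes actions.
TOOLS = {"add": lambda a, b: a + b}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):                 # the harness bounds the loop
        decision = fake_model(history)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])  # execute the tool
        history.append({"role": "tool", "content": str(result)})
    return "step limit reached"
```

The design point the talk pulls apart is exactly this split: the model chooses, the harness executes, bounds, and records, which is where both the abilities and the flaws of agents come from.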
About John:
John Hawkins is Chief AI Officer at Intersect AI, where he helps organisations unlock value through bespoke AI solutions.
John brings over 20 years of experience applying machine learning, statistics, and data science across a wide range of industries - from finance and insurance to media and biomedical research.
He has built real-time predictive systems for customer engagement, fraud detection, and scientific applications, and has published 30+ peer-reviewed research papers.
He is the author of the upcoming book Data Science First: Building AI Powered Applications With Language Models (Wiley, 2026), and previously wrote Getting Data Science Done.
John also contributes to ongoing research with the Pingla Institute and the Transitional AI Research Group at UNSW.
158 attendees
Why Speech-to-Speech APIs Fail When Voice AI Needs to Evaluate
The Precinct, 315 Brunswick St, Fortitude Valley, QLD, AU
25 attendees
Past events (139)