Details
We're back! After a long hiatus, the NYC Product Engineering Group is firing up again — and we wanted to come back with something worth showing up for.

What to expect: Drinks, good conversation, and a room full of people who care about building great products. We missed this.

Something big is happening to the way we interact with software — and most of us aren't prepared for it.
You already talk to AI every day. You ask ChatGPT to draft emails, plan trips, debug code. But then it tells you to go open an app and click through five menus to actually do the thing. The smartest interface you've ever used... can't push a button.
Meanwhile, every app you use still works the same way it did ten years ago: menus, tabs, settings pages, and a search bar that doesn't understand what you actually mean.
What if that's all about to break?

The way we think about UX is about to change for good.
On-device AI models are shipping on every phone and laptop this year. Apple has put a 3B parameter LLM on-device with iOS 26. Google has Gemini Nano running locally on Android. Smaller models like MiniLM-L6-v2 can do embedding-based intent matching in under 10ms.
The point: you can understand what a user wants — fast, private, free — without a cloud round-trip. And that changes everything about how apps can work. The implications for how we use apps and websites are wild.
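To make the embedding-matching idea concrete, here is a minimal sketch of how an utterance could be routed to an intent by vector similarity. The intent names, example utterances, and threshold are all illustrative; a real system would replace the bag-of-words `embed` stub with a small on-device sentence-embedding model (the kind of thing MiniLM-class models do in milliseconds), but the matching logic stays the same.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real sentence embedding: a bag-of-words vector.
    # Real embeddings capture meaning, not just shared words, but the
    # cosine-similarity matching below is identical either way.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical intents, each keyed by an example utterance.
INTENTS = {
    "reorder_last_purchase": "reorder my last order again",
    "track_package": "where is my package track shipment",
    "update_payment": "change my payment card details",
}

def match_intent(utterance: str, threshold: float = 0.2):
    vec = embed(utterance)
    best, score = max(
        ((name, cosine(vec, embed(ex))) for name, ex in INTENTS.items()),
        key=lambda pair: pair[1],
    )
    return best if score >= threshold else None

print(match_intent("reorder the thing I bought last week"))
# → reorder_last_purchase
```

Because everything here is local arithmetic over precomputed vectors, the lookup cost is a handful of dot products — which is why this kind of matching can run on-device, fast and private, with no network call at all.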

In this talk, Andrew Paul Simmons will explore:
• Why the "just add a chatbot" approach that every company is trying right now is completely wrong
• What happens when AI understands your intent before you start clicking — and skips you straight to the screen that matters
• Live demos showing the pattern in action across healthcare, photo editing, project management, and e-commerce
• The design principles behind Chat+Tactile — a new interaction pattern where conversation and visual UI become a single flow
• How a four-layer processing pipeline — ML, embeddings, disambiguation, LLM fallback — keeps 90% of intent handling on-device and under 10ms
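The fallback pipeline in the last bullet can be sketched as a chain of handlers where cheap local layers answer first and the cloud LLM only runs for the long tail. Every function below is a hypothetical stub standing in for a real component (on-device classifier, embedding index, disambiguation prompt, cloud LLM); the structure, not the stubs, is the point.

```python
from typing import Callable, Optional

def ml_classifier(utterance: str) -> Optional[str]:
    # Layer 1: tiny on-device classifier for high-frequency intents.
    known = {"reorder": "reorder_flow", "track order": "tracking_screen"}
    return known.get(utterance)

def embedding_match(utterance: str) -> Optional[str]:
    # Layer 2: embedding similarity against example utterances.
    # (Crude keyword check here, purely for illustration.)
    if "bought" in utterance or "order" in utterance:
        return "order_history"
    return None

def disambiguate(utterance: str) -> Optional[str]:
    # Layer 3: when several intents score closely, ask the user to pick.
    return None  # nothing ambiguous to resolve in this sketch

def llm_fallback(utterance: str) -> str:
    # Layer 4: cloud LLM for the long tail — slow and costly, so it
    # only runs when all three local layers pass.
    return "llm_resolved_intent"

LAYERS: list[Callable[[str], Optional[str]]] = [
    ml_classifier, embedding_match, disambiguate, llm_fallback,
]

def route(utterance: str) -> str:
    for layer in LAYERS:
        result = layer(utterance)
        if result is not None:
            return result
    raise RuntimeError("unreachable: the final layer always answers")

print(route("reorder"))                    # → reorder_flow
print(route("find the jacket I bought"))   # → order_history
print(route("sing me a sea shanty"))       # → llm_resolved_intent
```

The ordering is what keeps the claimed 90% of traffic local: each layer is strictly cheaper than the next, so the expensive fallback is only paid for utterances the fast paths genuinely cannot resolve.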

This talk is for anyone who:
• Builds products and wonders how AI should actually fit into the experience
• Uses apps every day and suspects there's a better way
• Is excited about where ML meets design — on-device models, edge inference, and what it all means for the products people actually use
• Wants to see live demos, not slides full of bullet points
This is our first event back, and we want to make it a good one. Come hang out, see something new, and argue about whether menus deserve to survive.

Andrew Paul Simmons is a product engineer who previously architected mobile frameworks for Uber One at Uber.

He's the creator of chatile.ai, a conversational navigation framework for apps and websites, and has been writing about the convergence of chat and visual UI since 2023. He spends an unreasonable amount of time thinking about why it takes four screens to reorder something you already bought.
