Separating Noise From Signal: AI Experiments That Actually Help Us Build Faster
Summary:
At an insurance company, we noticed it was taking too long to get new ideas into production. Our IT director believed that fostering a stronger engineering culture could make a real difference, but for that to happen, leadership needed to step back and give teams the space and support to improve things themselves.
We set up a dedicated enablement team to support development by building internal tools, sharing knowledge, and removing friction. The impact? We went from releasing once a month to as many as eight times a day. Operational work dropped from 70% to 25%, and it's still going down. Most of these improvements came directly from the teams themselves.
By the end of 2024, those same teams started asking: Can AI help us move even faster?
That’s when the experiments began. We rolled out GitHub Copilot in VS Code, hired a few AI engineers, and ran small experiments to automate repetitive or time-consuming work, supported along the way by people like Patrick Debois and teams at Microsoft.
Some of those experiments stayed small. Others evolved into stable, production-ready tools. For example, we’re now using an AI-based code reviewer that checks whether code follows our internal guidelines.
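To give a flavour of the pattern, here is a minimal sketch of that kind of reviewer: send a diff plus the guidelines to an LLM and surface its comments. This is a simplified illustration, not our production tool; the guideline text, model name, and git invocation are placeholders.

    # Minimal sketch of an AI code reviewer: feed a diff plus internal
    # guidelines to an LLM and print the review comments it returns.
    # Guidelines and model below are placeholders, not our real setup.
    import subprocess
    from openai import OpenAI  # pip install openai

    GUIDELINES = """
    - Public methods must have docstrings.
    - No business logic in controllers.
    - New endpoints need integration tests.
    """  # placeholder: real guidelines live in an internal handbook

    def review_diff(diff: str) -> str:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any capable chat model works here
            messages=[
                {"role": "system",
                 "content": "You are a code reviewer. Flag only violations "
                            "of these guidelines:\n" + GUIDELINES},
                {"role": "user", "content": diff},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # Review whatever is staged in the current git repository.
        diff = subprocess.run(["git", "diff", "--cached"],
                              capture_output=True, text=True).stdout
        print(review_diff(diff) if diff else "Nothing staged to review.")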
In this talk, I’ll walk you through what worked, what didn’t, and what was just “meh” in our AI transition.
I’ll also compare our experience with that of a development company where we’re running the same experiments, to understand how AI is really affecting the development lifecycle.
The experiments will continue while I prepare the talk, so new findings and insights will be added along the way.
Bio:
Pascal Dufour is an Agile (test) consultant with a passion for People and Experiments, who likes technology that makes sense. He is currently working on PST and on the side projects http://scrumrows.org/home and http://productpracticescanvas.org
World Champion Software Testing 2016.
