Details

Schedule
5.30pm: Networking, food and drinks
6.00pm: Housekeeping and event overview
6.05pm: Functional Regression Test Automation - A Practical, Holistic V-Model Approach
Food and drinks provided by Hudson. Thank you, Hudson, for your support!

Summary
Functional Regression Test Automation
A Practical, Holistic V-Model Approach
Session Proposal
Abstract
The software testing community has no shortage of automation tools — but it does have a shortage of free, coherent, end-to-end automation approaches.
Most teams assemble test automation from a collection of excellent but narrowly focused tools: one for UI interaction, another for APIs, another for mocking, another for reporting, and so on. The result is often technically impressive but fragile, hard to maintain, poorly understood by non-technical stakeholders, and expensive to evolve.
This talk presents a practical, production-proven approach to functional regression test automation that treats automation as a system, not a collection of tools.
Based on over two years of development and two years of real-world use at Energy Australia, the session walks through how a cross-functional team designed and built a holistic automation framework that enables testers, developers, and business stakeholders to share a common understanding of what is tested, how it is tested, and what the results mean.

***

Problem Statement
Modern test automation commonly fails in predictable ways:

  • Tests are written for machines, not people
  • Business stakeholders cannot understand what is covered or why
  • Automation requires specialist engineers for day-to-day maintenance
  • Results are technically detailed but practically opaque
  • UI and API testing feel like entirely different disciplines
  • Test suites become brittle, inconsistent, and expensive to own

Despite the abundance of tools, there is no widely adopted free, holistic solution that addresses these problems together.

***

The Approach
At Energy Australia, a team of testers, SDETs, developers, business analysts, and product owners defined a set of non-negotiable goals that any automation solution had to meet. These goals shaped every design decision and acted as a constant filter for complexity.
The framework had to be:

  • Simple — understandable by non-technical stakeholders
  • Atomic — tests executable and analysable in isolation
  • Maintainable — primarily maintained by testers, not engineers
  • Visible — results meaningful at both business and technical levels
  • Consistent — the same patterns across teams and technologies
  • Usable — flexible execution for different testing contexts
  • Robust — resilient to application failure and change

Anything that failed one or more of these criteria was discarded.

***

What This Session Covers
Rather than focusing on tools, this talk focuses on design decisions and trade-offs, illustrated with real examples.
Attendees will see:

  • How a single automation model supports both UI and API testing without context switching
  • How local, test-controlled mocking enables true atomic tests
  • How a strict separation between test intent and technical implementation prevents step-definition sprawl
  • How a V-model–inspired component hierarchy removes brittle UI selectors
  • How test results can simultaneously support:
      • Business risk assessment
      • Developer defect reproduction
      • Test suite maintenance
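To make two of these ideas concrete — local, test-controlled mocking and the separation of test intent from technical implementation — here is a minimal, illustrative sketch. It is not the framework presented in the talk; all names (`BillingSteps`, `current_rate_cents`, and so on) are hypothetical, and Python's standard `unittest.mock` stands in for whatever mocking mechanism a real suite would use.

```python
# Illustrative sketch only: intent/implementation separation plus a
# local, test-controlled mock. All names are hypothetical examples.
from unittest.mock import Mock

# --- Technical implementation layer (owned by engineers) ---
class BillingSteps:
    def __init__(self, rate_api):
        # The dependency is injected, so a test can supply a local mock
        # instead of relying on a shared environment.
        self.rate_api = rate_api

    def quote_for_usage(self, kwh):
        # The API call and the calculation live in one place; tests
        # never touch these details directly.
        rate_cents = self.rate_api.current_rate_cents()
        return kwh * rate_cents / 100  # quote in dollars

# --- Test intent layer (owned by testers) ---
def test_quote_uses_current_rate():
    # Given the tariff is 30 cents per kWh (mocked locally, so the
    # test is atomic and repeatable in isolation)
    rate_api = Mock()
    rate_api.current_rate_cents.return_value = 30

    # When a quote is requested for 100 kWh
    quote = BillingSteps(rate_api).quote_for_usage(100)

    # Then the quote is $30.00
    assert quote == 30.0

test_quote_uses_current_rate()
```

Because the test expresses only intent (given/when/then) and the implementation class owns every technical detail, a change to the rate API touches one class rather than every test — the kind of structural decision the session examines.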

Live demonstrations are used selectively to show test structure, behaviour, and reporting, not to “watch tests run”.

***

What This Is Not
This is not:

  • A tutorial on a specific tool
  • A “look how clever this framework is” demo (though it is easy to come across that way without meaning to)
  • A claim that this is the correct solution

The framework has limitations, sharp edges, and known design compromises — all of which are discussed openly, including what is currently being rewritten based on lessons learned.

***

Intended Audience
This session is aimed at:

  • Testers frustrated with brittle or over-engineered automation
  • Developers who don’t trust or can’t interpret test results
  • Leads and managers trying to understand the real cost of automation
  • Anyone interested in treating automation as a long-term product, not a short-term deliverable

The content is accessible to non-technical attendees while still offering depth for experienced engineers.

***

Outcomes for Attendees
After this session, attendees should be able to:

  • Evaluate their own automation against clear, practical criteria
  • Identify where complexity is coming from — and why
  • Apply structural ideas (not code) to improve existing test suites
  • Have better conversations between testing, development, and business roles

***

Presenter Background
Mathieu Walker has over 25 years’ experience in software test automation, with hands-on involvement dating back to the mid-1980s. The framework presented has been in continuous production use for over two years within a large, multi-team organisation and continues to evolve.

About our group
All Things Testing Brisbane Meetup aims to share knowledge in the software community about testing practices, ideas, tools and methodologies. All are welcome, whether you are a tester, a developer, or any other role. Only passion and enthusiasm are required.
