Innovate QA November In Person Meetup: Playwright MCP and AIGate
Details
We’re excited to host our November In Person meetup at the Apex Bellevue Office! 🚀 This will be an incredible opportunity to connect with peers in the QA, testing, and AI communities while diving into how organizations are testing the next wave of AI-powered products.
We will have two speakers:
📅 Agenda
- 5:00 – 6:00 PM | Networking & Appetizers
- 6:00 – 7:00 PM | Deepak Kamboj: Supercharging Test Automation with Playwright MCP: Servers, Browser Extensions & Self-Healing Tests
- 7:00 – 7:30 PM | Wayne Roseberry: AIGATE: A Framework for Evaluating AI Systems
- 7:30 – 8:00 PM | Networking
Come for the insights, stay for the connections (and food!). Space is limited—don’t miss your chance to join us for this special evening in downtown Bellevue.
📌 Topic: “Supercharging Test Automation with Playwright MCP: Servers, Browser Extensions & Self-Healing Tests”
Modern software demands reliable, scalable, and intelligent test automation. Playwright has already become the go-to framework for end-to-end testing, but by combining it with the Model Context Protocol (MCP), we unlock a new era of productivity.
In this session, we’ll explore:
- How to use the Playwright MCP Server to expose powerful testing tools to agents and developers (see the config sketch after this list).
- How the Playwright MCP Browser Extension makes test authoring and debugging seamless.
- How to tackle flaky tests, one of the hardest challenges in test automation, with a custom Playwright Reporter that offers auto-generation and self-healing capabilities (see the reporter sketch below).
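For reference, connecting an MCP-capable client to the Playwright MCP server is typically a small config entry. This is a minimal sketch assuming the published `@playwright/mcp` package and a client that reads an `mcpServers` map; the exact file name and location vary by client:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Once registered, the client can invoke the server’s browser tools (navigating pages, clicking elements, taking snapshots, and so on) on the agent’s behalf.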
Expect live insights, real-world scenarios, and practical guidance you can bring back to your teams to accelerate testing velocity while reducing flaky failures.
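The session covers the speaker’s own reporter, but for orientation, here is a minimal sketch of Playwright’s custom reporter API, the extension point such a tool builds on. The flakiness heuristic (a test passing only after a retry) follows Playwright’s standard definition; the self-healing hook is a hypothetical placeholder, not the speaker’s implementation:

```typescript
// flaky-test-reporter.ts
import type { Reporter, TestCase, TestResult } from '@playwright/test/reporter';

class FlakyTestReporter implements Reporter {
  private flakyTests: string[] = [];

  onTestEnd(test: TestCase, result: TestResult) {
    // A test that needed a retry to pass is flaky by Playwright's definition.
    if (result.status === 'passed' && result.retry > 0) {
      this.flakyTests.push(test.title);
    }
  }

  onEnd() {
    if (this.flakyTests.length === 0) return;
    console.log(`Detected ${this.flakyTests.length} flaky test(s):`);
    for (const title of this.flakyTests) {
      console.log(`  - ${title}`);
      // A self-healing reporter could trigger locator repair or
      // test regeneration here (left as a placeholder).
    }
  }
}

export default FlakyTestReporter;
```

Register it alongside your usual reporters in `playwright.config.ts`, e.g. `reporter: [['list'], ['./flaky-test-reporter.ts']]`, with `retries` set above zero so flaky runs can surface.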
📌 Topic 2: “AIGATE: A Framework for Evaluating AI Systems”
AI Guidance, Augmentation, Tolerances, and Enforcement analysis (AIGATE) is a methodology for analyzing the safe design and use of AI-based systems. It provides a framework for identifying potential risks and harms in a system, along with a way to describe design choices that avoid, mitigate, and test for those harms.
The process is documented here:
https://waynemroseberry.github.io/assets/AI%20Guidance%20Augmentation%20Tolerances%20and%20Enforcement%20analysis.pdf