AI vs Human: Find AI flaws, gaps, and quality debt together
Details
To participate in the event, please complete your free registration here
AI has quietly entered every testing workflow. Test cases. Test data. Edge cases. Strategies.
The output looks impressive. The quality often isn’t.
This meetup will help testers think critically and responsibly while working with AI. In this practical session we will identify weaknesses in AI-generated testing outputs and find ways to improve them.
### 🧠 What to expect:
This is a practical meetup, so please be ready to open your mic and collaborate in small groups.
### 1️⃣ Opening talk by the meetup guide Rahul Parwal (20 min)
### 2️⃣ Solo activity (20 min)
- Pick a sample feature (e.g. login, search, checkout, API)
- Use an AI tool to generate test cases, edge cases, automation code, or test data
- Identify at least 3 weaknesses in the AI output
- Submit:
  - Original prompt
  - Improved prompt
  - List of failures and gaps
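As a taste of what the solo activity looks like, here is a minimal, entirely hypothetical sketch: a toy `login` function plus the kind of happy-path-only test an AI tool often produces, followed by the human-added checks that close its gaps. The function and test names are invented for illustration only.

```python
# Hypothetical login checker, invented purely for this exercise.
def login(username: str, password: str) -> bool:
    """Return True only when the credentials match the demo account."""
    return username == "alice" and password == "s3cret"

# A typical AI-generated test: it covers only the happy path.
def test_login_success():
    assert login("alice", "s3cret") is True

# Human-added checks for weaknesses the AI output commonly misses:
# wrong password, empty inputs, and case sensitivity.
def test_login_rejects_bad_credentials():
    assert login("alice", "wrong") is False
    assert login("", "") is False
    assert login("ALICE", "s3cret") is False  # usernames are case-sensitive here
```

Spotting that the first test says nothing about rejection behavior is exactly the kind of weakness you would record in your "failures and gaps" list.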
### 3️⃣ Group Activity (35 min)
Each team:
- Receives a sample problem statement
- Uses AI to generate a test strategy
- Reviews and fixes AI mistakes
- Adds human-only insights (risks AI missed)
- Submits:
  - The final test strategy
  - An “AI got this wrong…” list
🕒 Submissions: If time runs short during the meetup, participants will have up to 24 hours after the event to submit their work.
🏆 Winners: Two categories — Best Solo Submission and Best Group Submission.
AI summary
By Meetup
A practical meetup for testers to identify AI-generated testing flaws and collaboratively craft an improved AI-assisted test strategy.
