What we’re about
Are you curious? Do you like to explore the latest trends and want to grow your automation skills?
We are a bunch of engineers from all walks of life with one thing in common: a passion for all things QA. We love learning new skills, expanding our knowledge of all things testing, and spending time in the company of our fellow engineers.
To feed our addiction to testing, we hold quarterly workshops (don't forget your laptop!) and monthly (virtual) fireside chats, presentations, and round tables.
We hunger for knowledge, so please reach out to our organizers if you have a skill or tool you'd love to share, a topic you feel especially passionate about, or if you want to participate as an SME in one of our round tables. If speaking in public is not your thing but you have a topic you want to learn more about, please let us know. We have a vast network of contacts, so chances are good we know someone who can help us hone our craft.
Looking forward to seeing you at our next event, as a speaker or a participant!
Your organizers,
Sven Thirion
Karlin Kappala
David Tan
Website: http://testaholics-anonymous.com
Email: testaholics.anonymous@gmail.com
LinkedIn: https://www.linkedin.com/company/testaholicsanonymous
Slack: https://testaholicsanonymous.slack.com
YouTube: https://www.youtube.com/@testaholics
Upcoming events (1)
Ensuring Quality in the Age of Generative AI: Beyond Traditional Metrics
The rapid adoption of generative AI is transforming industries, with companies embedding AI models and services into their products at unprecedented rates.
With the barrier to entry for AI tools now lower than ever, it is critical to establish quality standards, ethical guidelines, and comprehensive testing practices that address usability, fairness, safety, and robustness. These attributes require innovative testing approaches, combining manual exploratory methods with automated strategies.
Join Carlos Kidman as he explores the risks and biases inherent in AI development pipelines. This session will cover techniques to evaluate a model's behavior, robustness against security threats, and fairness across diverse user scenarios. By applying these methods to real-world challenges and state-of-the-art models, you'll gain actionable insights into designing and testing responsible AI systems that align with customer expectations.
Key Takeaways:
- Understanding AI risks and biases throughout the development lifecycle.
- Techniques for assessing model behavior, robustness, and fairness.
- Applying new quality attributes to ensure AI systems are usable, ethical, and secure.
Whether you're a developer, tester, or AI enthusiast, this session will equip you with the tools to define and test responsible AI systems that stand out in today's competitive landscape.
If you are able, please join us online or in person for an engaging presentation followed by a lively networking session with food and drinks. Don't miss this chance to connect and collaborate!
Location: 200 East Randolph Street, Suite 3700, Chicago, IL 60601
Check in at the front desk and meet us on the 38th floor.
Online: Join the Teams meeting
Meeting ID: 255 734 287 153
Passcode: wn6gQ68o
Presenter info: Carlos Kidman is a Senior Quality Architect at SeekWell and was formerly the Director of Engineering and AI at Qualiti.ai and an Engineering Manager at Adobe. He is also an instructor at Test Automation University, with courses on architecture, design, containerization, and machine learning. He is the founder of QA at the Point, the testing and quality community in Utah, and does consulting, workshops, and speaking events all over the world. He has a YouTube channel, builds open-source software like Pylenium and PyClinic, and is an ML/AI practitioner. He loves soccer and Barcelona, anime, gaming, and building products in his free time.