
Details

How can you automatically generate high-quality tests for existing codebases? In this talk, Dr. Michael Oberparleiter demonstrates AI-powered approaches to creating both unit and end-to-end tests using Large Language Models (LLMs), achieving high code coverage and comprehensive business logic validation across diverse projects.

Unit Test Generation: The speakers employ a divide-and-conquer strategy that provides models with targeted repository context for deep codebase understanding. A containerized feedback loop enables iterative test refinement based on execution results, continuously improving test quality. They also compare performance across leading commercial and open-weights LLMs to identify optimal solutions.
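The containerized feedback loop described above can be sketched as follows. This is an illustrative sketch only, not the speakers' implementation: the LLM call and the sandboxed test run are replaced by stand-in functions, and the example test strings are invented for demonstration.

```python
# Sketch of an iterative test-refinement loop: generated tests are
# executed, and failure output is fed back to the model until the
# suite passes or a round limit is reached.

def generate_tests(source, feedback=None):
    """Stand-in for an LLM call; a real system would prompt a model
    with repository context plus any previous execution feedback."""
    if feedback is None:
        return "assert add(2, 2) == 5"  # deliberately wrong first draft
    return "assert add(2, 2) == 4"      # corrected after feedback

def run_in_sandbox(test_code):
    """Stand-in for executing tests in a container and capturing output."""
    def add(a, b):
        return a + b
    try:
        exec(test_code, {"add": add})
        return True, "all tests passed"
    except AssertionError:
        return False, "AssertionError in generated test"

def refine(source, max_rounds=3):
    """Regenerate tests until they pass, feeding results back each round."""
    feedback = None
    for _ in range(max_rounds):
        tests = generate_tests(source, feedback)
        ok, output = run_in_sandbox(tests)
        if ok:
            return tests
        feedback = output  # execution results flow back into the prompt
    raise RuntimeError("tests still failing after max_rounds")

print(refine("def add(a, b): return a + b"))
```

Running tests inside a container both isolates untrusted generated code and makes the pass/fail signal reproducible, which is what allows the loop to converge.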

End-to-End Test Generation: Starting from simple natural-language descriptions of user flows, they demonstrate how combining specialized LLMs for visual application understanding and test code generation produces comprehensive, maintainable Playwright test suites. This approach delivers tests that follow best practices, reducing manual testing effort while maintaining quality standards.

Timeline
18:00–18:30 Doors Open
18:30–19:15 Talk: Rise of the AI testers
19:15–... Food & Drinks, Networking Time
Hosted by TNG Technology Consulting GmbH
