In an effort to devote more time to automated testing, Chegg has been working with its manual test engineers to start building automated test cases. We believe exploratory testing is invaluable, but it needs to be supplemented with a robust suite of automated tests that covers the existing code base.
I'd like to share the slide deck I presented at Chegg. It details the steps we need to take to make better use of our testing time: not only validating the current release, but also building out automated tests with an eye toward the future.
Some of the topics covered are:
• independent deployments
• who should make the go/no-go decision?
• catalog of manual & automated tests in one place
• DRY - don't repeat yourself
• testable code (a brief sketch follows this list)
• 80/20 rule
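To give a taste of the "testable code" topic, here's a minimal sketch of the idea in Python. The names here (PriceClient, get_textbook_price) are hypothetical illustrations, not APIs from the deck: the point is that logic which accepts its dependencies as arguments can be exercised in a unit test without touching a real service.

```python
class PriceClient:
    """Production client; talks to a real pricing service."""
    def fetch(self, isbn: str) -> float:
        raise NotImplementedError("calls the pricing service in production")

def get_textbook_price(isbn: str, client: PriceClient) -> float:
    # Business logic is separated from I/O: the client is injected,
    # so a test can substitute a stub instead of a live service.
    price = client.fetch(isbn)
    return round(price * 1.08, 2)  # e.g., apply tax

# In a unit test, a stub stands in for the network call:
class StubClient(PriceClient):
    def fetch(self, isbn: str) -> float:
        return 10.00

assert get_textbook_price("978-0-13-468599-1", StubClient()) == 10.80
```

Code structured this way is what makes a growing automated suite practical: each piece can be verified quickly and in isolation, release after release.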