MoT Cork: Performance Testing Tools - Vegeta and Locust
Details
We are pleased to announce a Meetup featuring two talks, one on Vegeta and one on Locust.
Agenda
- Lana Vidrashchuk will open the Meetup
- Michael Murphy will talk about Vegeta
- Stephen Meehan will talk about Locust
- Lana Vidrashchuk will host the Q&A
About Stephen Meehan
I started out as a tester further back than I care to admit, but most days I still feel like it's day one since there is so much to learn. I love to hear others' take on testing and how teams can best deliver quality software in a timely and sustainable way that brings real value to the end user. I have a special interest in test automation and enjoy sharing what I've learned with others.
Locust Abstract
Performance testing is critical to the success of most software projects, increasingly so with the popularity of mobile apps and the need to scale services to meet exponential growth in demand.
Even if our customers find no issues with the functionality of our apps, if performance is not good enough then the perception of the product is poor and, effectively, as far as the customer is concerned, the app is not functional.
Recently I was tasked with leading the performance testing effort for a mobile app. Having had limited experience in this area, I found it difficult and frustrating at times, but the subject turned out to be very interesting, and it was one of the best initiatives I've been involved with from a learning and growth perspective.
In this talk, I want to take you on my journey from novice to the point where I could contribute to the success of a performance testing initiative, with some detours where things didn't go to plan. I'll talk about our goal, how we evaluated the alternative performance testing frameworks, and how our choice - Locust.io - proved to meet the needs of our project. I hope it will be beneficial, especially for those starting out on this journey.
I'll also do a quick demo of Locust.io to give you an idea of what it's all about.
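For anyone who hasn't come across Locust before, a minimal locustfile looks roughly like the sketch below; the host and endpoint are placeholders rather than the app from the talk.

```python
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Placeholder host: point this at the system under test.
    host = "https://example.com"
    # Each simulated user waits 1-3 seconds between tasks.
    wait_time = between(1, 3)

    @task
    def load_home_page(self):
        # Locust records response times and failures for each request.
        self.client.get("/")
```

Running `locust -f locustfile.py` starts the web UI, where you pick the number of users and the spawn rate before kicking off a test.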
About Michael Murphy
I have worked in software for six years with companies such as Dell and Glasslewis. My primary focus tends to be on the backend of large-scale systems in a dotnet environment and the approach to handling data at that scale. I've worked on both AWS and Azure and enjoy dipping into DevOps on occasion. In my spare time I like to run.
Vegeta Abstract
## Background of how we utilise it
* We primarily use it to find our peak throughput and to prepare for seasonal traffic around Black Friday.
* It also tends to surface infrastructure issues, e.g. being throttled on services like DynamoDB when under load.
* We can monitor behaviour via tools like Splunk or Datadog.
## The Setup
* Initially used locally in a Docker container: a shell script runs and alternates the traffic between slow and fast phases, giving the containers time to scale out and simulating quiet and busy periods (a rough sketch follows this list).
* Locally, we tend to max out at around 1,000 RPS.
* To truly push the load, we started to distribute the requests via Kubernetes, introducing more pods and splitting the traffic across them.
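As an illustration only, a rough Python equivalent of that alternating script might look like the sketch below; the rates, durations, and targets file are placeholder values, not the ones from the talk.

```python
import subprocess

# Hypothetical phases: a slow phase so the containers can scale out,
# then a fast phase to push towards peak throughput.
PHASES = [
    ("warm-up", 50, "120s"),
    ("peak", 500, "120s"),
]

for name, rate, duration in PHASES:
    print(f"--- {name}: {rate} RPS for {duration} ---")
    subprocess.run(
        ["vegeta", "attack", "-targets=targets.txt",
         f"-rate={rate}", f"-duration={duration}", "-output=results.bin"],
        check=True,
    )
    # Summarise each phase before moving on to the next.
    subprocess.run(["vegeta", "report", "results.bin"], check=True)
```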
## Dynamic traffic
* Vegeta is built in Go, and some teams work with it directly at that level.
* My chosen approach to generating more dynamic requests was to produce them via a dotnet tool. Vegeta can read targets continuously via lazy mode, or it can take a large JSON file containing the requests.
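The generator in the talk is a dotnet tool, but as a hedged illustration of the idea, the Python sketch below emits targets in Vegeta's JSON format (one object per line), suitable for piping into something like `vegeta attack -format=json -lazy`; the endpoint and payload are made up.

```python
import base64
import json
import sys
import uuid

# Illustrative only: emits one JSON target per line, in the shape Vegeta's
# JSON target format expects; the URL and body are placeholders.
def make_target(sequence):
    body = json.dumps({"orderId": str(uuid.uuid4()), "sequence": sequence})
    return {
        "method": "POST",
        "url": "https://example.com/api/orders",
        "header": {"Content-Type": ["application/json"]},
        # Request bodies are base64 encoded in JSON targets.
        "body": base64.b64encode(body.encode()).decode(),
    }

for i in range(10_000):
    sys.stdout.write(json.dumps(make_target(i)) + "\n")
```

With lazy mode, Vegeta reads each target as it needs it instead of loading the whole set up front.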
## Metrics
* A little on the metrics that we can gather from Vegeta.
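As a small sketch of what post-processing could look like, the snippet below reads the JSON report Vegeta can produce; it assumes a results file from a previous `vegeta attack -output=results.bin` run, and the field names follow `vegeta report -type=json` output.

```python
import json
import subprocess

# Assumes results.bin came from an earlier `vegeta attack -output=results.bin` run.
report = subprocess.run(
    ["vegeta", "report", "-type=json", "results.bin"],
    capture_output=True, text=True, check=True,
)
metrics = json.loads(report.stdout)

# Latency values are reported in nanoseconds.
print(f"requests:     {metrics['requests']}")
print(f"success rate: {metrics['success'] * 100:.2f}%")
print(f"p99 latency:  {metrics['latencies']['99th'] / 1e6:.1f} ms")
print(f"status codes: {metrics['status_codes']}")
```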
## Demo tool
