It seems that a lot of developer teams fall into one of two categories when it comes to end-to-end testing:
- Have no end-to-end tests because they are too hard to create
- Have too many end-to-end tests, creating several full-time jobs' worth of maintenance
Neither is practical: the first causes the team to rely on manual testing for every release, and the second is not worth the effort of creating and maintaining all the end-to-end tests. Some tech books and blogs state that you should simply cover all your acceptance criteria with end-to-end tests. It sounds good in theory, but they clearly underestimate the work it takes in practice to maintain these tests. If it costs several team members' full-time attention to maintain the tests, it is often a clear sign that you have introduced too many end-to-end tests and should instead narrow your focus to what is important to cover with tests.
This post will cover a pragmatic way to work with end-to-end tests in your team, from the preparation to writing the tests themselves.
Step one: Create a Testing Strategy
The team should agree on how to test the software before releases and establish procedures for it.
You should agree on which kinds of tests to use to cover the system's functionality and which can/should be automated. My recommendation from my testing guide is to follow the testing pyramid for the split between unit, integration and end-to-end tests: 80 % of the tests should be unit tests, 15 % integration tests and 5 % UI tests. Of course, these ratios can be shifted for your specific case, but the key point is that unit tests are faster and easier to maintain than integration and UI tests.
For my recommended procedures, check my continuous delivery post, where I recommend creating feature environments for every pull request and doing the testing there, shipping to production as soon as the pull request is merged to master. That way your team can easily do 5+ releases a day, and with even more confidence than a traditional “bulk-stage” of many features, because the delta in every release is smaller, making errors easier to spot.
Step two: Define the top 5 use cases of your system
Arrange a meeting with the testers, product owners and other relevant stakeholders with the agenda of defining the top 5 use cases of your system. Why is it relevant to know the top 5 use cases? Because these are the ones you should write automated end-to-end tests for. This exercise forces you to define what is critical in your application, so you know what to cover with tests. After this meeting, you should create tasks for the different end-to-end tests to be made and other corresponding tasks, such as getting test data.
Step three: Create a simple smoke test
To get started with a simple end-to-end test covering the application, you should create a smoke test. A smoke test ensures that you can log in, see the front page and log out. That's it; don't make it more complex, as the purpose here is only to assert that the page loads correctly. At this point, your unit and integration tests should keep you covered.
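A minimal smoke test along these lines could look like the following Protractor spec. This is a sketch: the element ids (e2e-username, e2e-frontPage, etc.), the credentials and the login flow are hypothetical and must match your own application.

```javascript
// smoke.e2e-spec.js — minimal smoke test: log in, see the front page, log out.
const { browser, element, by } = require('protractor');

describe('smoke test', () => {
  it('can log in, see the front page and log out', async () => {
    await browser.get('/login');
    await element(by.id('e2e-username')).sendKeys('e2e-user');
    await element(by.id('e2e-password')).sendKeys('test-password');
    await element(by.id('e2e-loginBtn')).click();

    // The front page should be visible after logging in.
    expect(await element(by.id('e2e-frontPage')).isPresent()).toBe(true);

    await element(by.id('e2e-logoutBtn')).click();
    expect(await element(by.id('e2e-loginBtn')).isPresent()).toBe(true);
  });
});
```

This spec needs a running application and browser, so it only runs as part of your Protractor suite.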
Step four: Getting the test data
To implement the top 5 use cases you should create the necessary test data. By grooming the tasks for the 5 end-to-end tests, it should be clear what test data is needed. To keep the tests as stable as possible, the test users and test data used in the end-to-end tests should be 100 % separate from other usages such as manual testing. The test data should be created in the database, and the data and credentials should be documented in a central place for the team to access.
Step five: Create Seed Scripts to prepare the test data for running automatic end-to-end tests
To ensure that the end-to-end tests run from a consistent initial state, you should use a seed script to seed the test data before the tests are run. If a test stops halfway through, it should not be able to break the tests run after it.
One way is to create a dedicated endpoint on the back end to seed the test data. You can guard it so that only the end-to-end test user can call it. If you need data from external systems you don't control, you can stub out the external call and return the needed test data instead.
Step six: Implement the top 5 use cases as end-to-end tests
Now we are ready to implement the top 5 use cases as end-to-end tests. These should only cover the happy-path (“sunshine”) scenarios, to keep them focused on the most important features. Prioritize the use cases and implement the most important ones first. At the beginning of every test, call the seed endpoint to ensure a consistent initial state.
Because end-to-end tests are fragile by nature, you should code them in an anti-fragile way. To make the tests more anti-fragile, you need to make them mimic the behavior of a real tester by:
- Waiting a reasonable amount of time for an element to be visible before clicking on it, etc. Blue-harvest has action helpers for this, where you can click on elements with slow.click('#e2e-loginbtn') to ensure that the browser waits a reasonable time for the element to be visible and clickable.
- Retrying expects using my retry expects helper. This enables you to re-run an expectation every couple of seconds for up to N times, which is useful if you e.g. are waiting for an HTTP call to finish before expecting.
- Using ids as e2e selectors and prefixing them with “e2e” to make it very visible that they are used in e2e tests, so your team members don't break your tests!
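A retry-expect helper along these lines can be sketched with plain async/await. The name retryExpect and the defaults are hypothetical; the idea is simply to re-run an assertion callback until it passes or the attempts run out.

```javascript
// Hypothetical retry helper: re-runs an assertion callback every
// `intervalMs` milliseconds until it passes or `maxTries` is exhausted,
// then surfaces the last failure.
async function retryExpect(assertion, maxTries = 10, intervalMs = 2000) {
  for (let attempt = 1; ; attempt++) {
    try {
      await assertion();
      return; // assertion passed
    } catch (err) {
      if (attempt >= maxTries) throw err; // give up on the last attempt
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
}
```

In a test you would wrap the expectation, e.g. await retryExpect(async () => expect(await list.count()).toBe(3)), so a slow HTTP call gets a few chances to finish before the test fails.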
Creating your Protractor tests with these tools should make them very stable, because timing and changing element selectors are normally the biggest causes of flakiness in end-to-end tests.
Bonus tip: implementing the above features will give you a lot of promise nesting if you don't use async/await, so make sure to use it. And if you use async/await, make sure to turn off the control flow by setting SELENIUM_PROMISE_MANAGER: false in your Protractor config.
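The relevant part of the Protractor config looks like this (the rest of the config is elided):

```javascript
// protractor.conf.js: disable the WebDriver control flow so that
// Protractor plays well with native async/await.
exports.config = {
  SELENIUM_PROMISE_MANAGER: false,
  // ...the rest of your Protractor config
};
```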
For a complete example of how to implement Protractor end-to-end tests using these guidelines, check my GitHub.
Step seven: Run the end-to-end tests on your deployment pipeline
These end-to-end tests should run right after you have deployed your code. In my preferred setup, I deploy code to a feature environment and run the end-to-end tests on it right after deployment, to ensure that the new code has been deployed successfully. Deployment servers such as Octopus Deploy support automatic rollback in case of deployment errors. If you trigger your end-to-end tests from Octopus, you can make it roll back automatically if the end-to-end tests fail. I would only do this when I'm confident that the end-to-end tests are working reliably, as false positives would otherwise be annoying.
To avoid potential concurrency problems, you should only allow your deployment pipeline to run one end-to-end test run at a time. This ensures that no other deployment is changing the test data at the same time. Of course, you can work on running tests in parallel if this becomes a bottleneck, for example by having a pool of identical end-to-end test users with data to take from on every test run.
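Such a pool of test users can be sketched as a simple acquire/release structure. The class name and user ids are hypothetical; the point is that each parallel run gets its own user and data, so runs never interfere with each other.

```javascript
// Hypothetical pool of identical e2e test users, so parallel pipeline
// runs never share (and corrupt) the same test data.
class TestUserPool {
  constructor(users) {
    this.available = [...users];
  }

  // Take a free user for a test run; throws when the pool is exhausted.
  acquire() {
    if (this.available.length === 0) {
      throw new Error('No free e2e test users - wait for a run to finish');
    }
    return this.available.pop();
  }

  // Return the user to the pool when the test run is done.
  release(user) {
    this.available.push(user);
  }
}
```

In practice the pool would live in shared state (e.g. a database table) that the pipeline checks out users from, but the acquire/release contract stays the same.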
Step eight: Maintenance of the end-to-end tests
From here, the work is to maintain these end-to-end tests. End-to-end tests are kind of like getting a puppy: at first, it doesn't seem to cost much, but you need to nurture and take care of it for a long time. If the maintenance overhead is not a big problem in your team, you might want to extend the end-to-end tests to even more use cases, but I would recommend only doing that when you are ready for more puppies.
In this post, we looked at a pragmatic way of introducing end-to-end tests to your project. The basic idea is to focus on the important parts of the application; because end-to-end tests can be expensive to maintain, you start out by only implementing end-to-end tests for the top 5 use cases. To implement these, you get the needed test data, seed the data on every run, and code your tests in a robust way. Lastly, you run your end-to-end tests in your deployment pipeline and use them to verify that every deployment went through successfully.
I recommend the book Continuous Delivery, which explains how to build an efficient delivery pipeline, including end-to-end testing in a pragmatic way.