Cypress has become the preferred tool for UI testing of Angular apps among many Angular experts.
It offers great improvements over Selenium-based testing tools: the built-in retry mechanism for assertions and commands (e.g. clicking an element) makes tests behave more like a real user, and the user-friendly GUI makes it easy to trace what is going on in the tests.
But used wrongly, you will still face some of the classic end-to-end/UI test problems such as flakiness and slow test execution.
This post builds on the Cypress best practices, combined with my experience from helping companies with Cypress testing.
Before you start reading, I want you to know that this post comes with a fee… It’s free, but if you like it, please share it with your teammates and/or on Twitter and LinkedIn 🙂
TLDR
The most common beginner mistakes I see with Cypress testing are:
- Not making commands/assertions retry-able
- Not creating page objects
- Not using dedicated test selectors
- Not making tests deterministic
- Too many end to end tests
- Not running the tests on each PR (CI)
- Not knowing how to debug Cypress tests
- Not mocking out external dependencies
- Not retrying fragile tests in flaky environments
This is part of the testing training in Angular Architect Accelerator – a course helping you become an Angular architect and beyond. If you want to learn more about this and get to the fun phase of your software developer career as fast as possible, I recommend you check it out and register for the free warmup workshop.
#1: Not making commands/assertions retry-able
This is the #1 reason people complain about Cypress being “flaky”, as it throws away the main stability benefit of Cypress.
Let’s consider how Cypress’s retry mechanism works: Cypress only retries the last command/assertion.
For that reason, you risk flakiness if you chain two commands in a row.
The Cypress docs use the example:
- Consider testing a todo list app. One todo item is already created and you want to write a test that adds another item (todo B)
- The todo B item is added with a 100 ms delay
- The test spec looks like this:
```javascript
cy.get('.new-todo').type('todo B{enter}')
cy.get('.todo-list li')         // queries immediately, finds 1 <li>
  .find('label')                // retried, retried, retried with 1 <li>
  .should('contain', 'todo B')  // never succeeds with only 1st <li>
```
As you see above, this will cause flakiness.
It finds the first `li` item (the existing todo item) but will retry the `should` assertion only on a nested selector of that first `li` element, which existed when the commands/assertions first ran. We can retry the last selector forever, but it will never register the todo B item, even after it is added to the DOM, as it is locked in on the first `li` item.
Solution: You either need to combine commands or alternate commands and assertions
To fix the problem above, we must either combine commands (so we retry the whole todo item selector) or alternate commands and assertions, so we only continue when the current command has passed an assertion.
Combine commands:
To combine commands, we can simply combine the todo item element selector into one `cy.get`:
```javascript
cy.get('.new-todo').type('todo B{enter}')
cy.get('.todo-list li label')   // 1 query command
  .should('contain', 'todo B')  // assertion
```
Alternate commands and assertions
We can also fix the problem by alternating each command with an assertion, ensuring we only continue to the next command when the current one has passed its assertion:
```javascript
cy.get('.new-todo').type('todo B{enter}')
cy.get('.todo-list li')         // command
  .should('have.length', 2)     // assertion
  .find('label')                // command
  .should('contain', 'todo B')  // assertion
```
#2: Not creating page objects
A core aspect of a maintainable testing architecture is to split the test specifications from the low-level DOM access and commands. That gives you an easier overview of the test specifications and enables you to update the test implementation without touching the test specs.
The common mistake I see is repeating DOM selectors and having big cluttered spec files.
The solution is simply to create a class for each page (a page object) that encapsulates the selectors and commands needed for that page. The page objects then simplify the spec files by acting as a user-friendly abstraction.
In a Todo app, the todo list page might have a page object such as:
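The sketch below is illustrative: the class and method names are my assumptions, and the selectors reuse the todo classes from the earlier examples.

```javascript
// A hypothetical page object for the todo list page.
// All DOM selectors live here, so the spec files never touch them directly.
class TodoListPage {
  // selectors kept in one place
  static get newTodoInput() {
    return cy.get('.new-todo');
  }

  static get todoItemLabels() {
    return cy.get('.todo-list li label');
  }

  // commands expressed in the domain language of the page
  static addTodo(title) {
    TodoListPage.newTodoInput.type(`${title}{enter}`);
  }

  static expectTodoWithTitle(title) {
    TodoListPage.todoItemLabels.should('contain', title);
  }
}
```

A spec then reads as `TodoListPage.addTodo('todo B')` followed by `TodoListPage.expectTodoWithTitle('todo B')`, without a single raw selector.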
Note here how I am making the properties and methods static: we normally keep page objects stateless anyway, and thus don’t want to bother with instantiation.
With the page object in place, the spec file reduces to a few intention-revealing page object calls, with no raw DOM selectors in sight.
#3: Not using dedicated test selectors
One of the reasons GUI testing is normally considered fragile is that the DOM can change a lot. A way to make selections more stable is to use a dedicated selector for GUI tests, which informs developers that an element is involved in a GUI test.
The Cypress docs mention that it is beneficial to use `data-*` selectors over CSS selectors, e.g. classes, as they inform developers that the element is used in a GUI test and won’t be affected by e.g. styling changes. I recommend using `data-test` as your GUI test selector, as it is framework-agnostic and portrays a clear intent that the element is used in a GUI test.
This is then used in the corresponding page object, which selects on the attribute instead of on classes or element structure.
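A minimal sketch, assuming a hypothetical `new-todo-input` attribute value:

```javascript
// In the template (illustrative):
// <input data-test="new-todo-input" placeholder="What needs to be done?" />

// In the page object, select on the dedicated test attribute, so styling or
// markup refactorings don't break the test:
const getNewTodoInput = () => cy.get('[data-test=new-todo-input]');
```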
#4: Not making tests deterministic
I sometimes see engineers wanting to build a “testing robot”: a testing framework that fills out a page with random values and follows random pathways. It is an attempt to fulfill the utopia of a robot that automatically tests the app regardless of DOM changes and can fill in random data.
This makes the tests fragile and slow to write, because you need to maintain a complicated test engine that only grows in complexity. That normally causes the E2E tests to become fragile, and once they become fragile, they get ignored in the development process and lose all their value.
You need to remember that writing end-to-end tests is an investment decision, and it needs to be cheaper than doing manual testing to be profitable. When teams create such a beast of a test engine, I have more often than not seen it cause more costs and stress than just doing the manual testing would. Don’t get me wrong: it can be fun as hell to build such a tool, but most likely it is not a profitable investment.
Also, conditional testing has some limitations if your conditionals depend on GUI state. With a client-side rendered app, you can’t do it without flakiness and/or explicit waits, as you don’t know when the app is completely rendered.
The solution
What I invite you to do instead, is to write a few, highly stable, simple, and deterministic end-to-end tests.
Deterministic tests mean no conditionals in the test code. Each test execution will go through the same test flow.
Quality > quantity when it comes to testing, especially GUI testing: we would rather have a few high-value tests that run stably than a huge test engine to maintain.
You might choose to do conditional tests anyway, and there might be reasonable use cases for it, but my advice is still to have your most important use cases covered in the simplest way before you do anything fancy, which leads us to…
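To make that concrete, here is a sketch of a deterministic flow; the seeding endpoint and selectors are assumptions, not part of any real API:

```javascript
// Anti-pattern: "if the list already has items, delete them first" is a
// conditional on GUI state, which is flaky in a client-side rendered app.

// Deterministic version: seed a known state, then run one fixed flow.
const seedTodos = (todos) =>
  cy.request('POST', '/api/test/todos', todos); // hypothetical seeding endpoint

const addsATodoFromEmptyList = () => {
  seedTodos([]); // every run starts from the same, empty list
  cy.get('.new-todo').type('todo B{enter}');
  cy.get('.todo-list li label').should('contain', 'todo B');
};
```

Every execution of this test walks the exact same path, so a failure always means the same thing.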
#5: Too many end-to-end tests
Again, we need to remember that automatic testing is an investment decision. The higher up we get in the testing pyramid, the more expensive the tests become to develop and maintain, but the yield is also higher because of the bigger, i.e. more realistic, test scope.
By following the proportions laid out in the testing pyramid, we get a nicely covered application, and we invest the right amounts of each test type.
What often happens is that teams want to test “everything” with end-to-end tests. I have never seen a stable testing suite come out of that. In fact, I often see this pattern:
- The team introduces end-to-end testing and wants to test “everything”
- The team spends months/years building a suite without running it on the CI, i.e. without using it
- The team finally gets the suite “ready” and starts running it on the CI/using it
- CI builds become flaky, as too many end-to-end tests are hard to maintain
- The team stops running the e2e tests on the CI because they have become too “busy”, and thus the e2e suite no longer provides any value but just keeps a lot of people busy
To avoid this from happening, I recommend the following formula for testing:
- Initially, E2E test only the sunshine scenarios of the most important use cases of the system. I recommend starting by defining the top 5 use cases in your system (use your PO for help) and getting them covered with a simple and stable suite
- Integration tests covering the sunshine scenarios for all use cases. This can be done both with Cypress and with Angular unit testing tools such as Jasmine/Karma
- Unit tests for the business logic critical to use cases. Because these files are critical to the use cases, we want to test them with 100% coverage. That means 100% coverage on services and pure functions containing business logic
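As an illustration, business logic kept in a hypothetical pure function like the one below is cheap to cover exhaustively:

```javascript
// A hypothetical pure function holding business logic. It is trivial to unit
// test to 100% coverage because it has no DOM, HTTP, or framework dependencies.
const toggleTodo = (todos, id) =>
  todos.map((todo) =>
    todo.id === id ? { ...todo, done: !todo.done } : todo
  );
```

A Jasmine/Karma spec can then hit every branch of such a function with plain input/output assertions.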
#6: Not running the tests on each PR (CI)
Again, this comes down to getting value out of an E2E testing suite. If it is not integrated into the software development flow, the E2E tests provide close to no value.
For that reason, I recommend that you set up the CI to run the e2e tests as soon as you have written your first test.
I recommend having a smoke test suite that runs on each code check-in and a longer suite that runs twice daily on the master branch (e.g. 10 am and 10 pm).
The E2E tests might not yet run stably enough to be used as a merge check on pull requests. Until you get to that level, you can run them as a pre-push git hook locally and/or make them an optional merge check on pull requests, so flaky tests won’t block merges. Ideally, you run the tests on the CI and make them a mandatory merge check on pull requests once they have proven to be stable/trustworthy.
#7: Not knowing how to debug Cypress tests
Cypress tests run in the same JS context as your application. That means you can simply write debugger statements or create breakpoints by opening DevTools in the Cypress browser. From there, you can break at the places you need to debug.
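For example, you can break mid-chain by placing a `debugger` statement inside a `.then` callback (the selector is illustrative):

```javascript
// Break in the middle of a command chain to inspect the yielded subject.
const inspectTodoList = () =>
  cy.get('.todo-list').then((listElement) => {
    debugger; // pauses here when DevTools is open in the Cypress browser
    return listElement; // hand the subject back to the chain
  });
```

Cypress also provides the `cy.debug()` and `cy.pause()` commands for the same purpose.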
If you use NgRx, you can also add a meta-reducer that runs for E2E tests only, logging actions and state for easier tracing.
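A minimal sketch of such a meta-reducer; how you enable it per environment is an assumption here:

```javascript
// Logs every dispatched action and the resulting state, so the Cypress
// console/video shows exactly what the store did when a test failed.
const e2eActionLogger = (reducer) => (state, action) => {
  const nextState = reducer(state, action);
  console.log('[E2E]', action.type, nextState); // visible in the Cypress browser console
  return nextState;
};

// Registered only for E2E builds, e.g. (illustrative flag):
// const metaReducers = environment.e2e ? [e2eActionLogger] : [];
```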
Also, when running the Cypress tests on the CI, make sure to run them in headless mode and record a video. Publish this video upon CI E2E test failure, e.g. as a build artifact, so you can easily troubleshoot why a test failed.
#8: Not mocking out external dependencies
To get the most “bang for the buck” test scope, I recommend that you run your Cypress tests together with your BFF (backend for frontend) BUT mock out external calls from the BFF/backend. We often have critical business logic in the BFF that we want to test in combination with the frontend to have a proper end-to-end test.
Running them together requires that your BFF can run in a test mode or accept a test header on requests, indicating that the backend should stub out its external dependencies.
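With newer Cypress versions, one way to sketch the frontend side of this is to stamp every outgoing request with a test header via `cy.intercept`; the header name is an assumption, and the BFF must be built to recognize it:

```javascript
// Mark all requests as E2E traffic so the BFF stubs out its external
// dependencies. Call this in a beforeEach hook.
const enableBackendTestMode = () =>
  cy.intercept({ url: '**' }, (req) => {
    req.headers['x-e2e-test'] = 'true'; // hypothetical header the BFF checks for
  });
```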
This might not be necessary for everyone, eg. if your staging environment is already fast and stable, but if you are dealing with a fragile staging environment (often seen in financial institutions), this is a must for running stable E2E tests.
#9: Not retrying fragile tests in flaky environments
As a worst-case scenario, if you are dealing with a flaky environment, you can use the cypress-plugin-retries plugin to retry failed tests. This is not meant as a way to be lazy and sweep flaky tests under the carpet; it should only be used when the flakiness is out of your control, e.g. a flaky environment/dependencies that you can’t mock out (as previously mentioned).
Remember, Cypress tests only automate what a manual tester would already do. A manual tester might experience flaky behavior on the first try, then retry and see the test pass, concluding the status as green, i.e. some flakiness is accepted.
If you are running Cypress against an environment that should not tolerate flake, e.g. production, don’t use this plugin. There, tests should fail loudly and raise alarms.
This is how you set up retry for your flaky tests:
1) Install
To set this up, you first install the plugin from npm:
```
npm i -D cypress-plugin-retries
```
2) Setup plugin
In your `support/index.js` file, you set up the plugin:

```javascript
// Retry failed tests:
require('cypress-plugin-retries')
```
3) Configure amount of retries
Either set a default in your Cypress environment config:
```json
{
  "env": {
    "RETRIES": 2
  }
}
```
Or set the number of retries for a specific test:
```javascript
it('test', () => {
  Cypress.currentTest.retries(2)
})
```
The number of retries should match the threshold for a manual tester.
And voilà: when a test flakes, you will now see Cypress retry it instead of failing the run.
Conclusion
In this post, we covered the nine most common mistakes I see companies make with Cypress E2E testing, which make the return on investment from testing less than optimal:
- Not making commands/assertions retry-able
- Not creating page objects
- Not using dedicated test selectors
- Not making tests deterministic
- Too many end to end tests
- Not running the tests on each PR (CI)
- Not knowing how to debug Cypress tests
- Not mocking out external dependencies
- Not retrying fragile tests in flaky environments
If you liked this post, you owe it to your team members, LinkedIn and Twitter followers to share it, so that we together can make E2E testing a great experience!
Do you want to become an Angular architect? Check out Angular Architect Accelerator.