Set Up End-to-End Tests with Puppeteer and Mocha

How to log in before the main test suite and run several suites in parallel browser instances

Elena Sufieva · Feb 13

Recently, I managed to set up end-to-end tests with Puppeteer and Mocha in our project, and I want to talk about the general process and the techniques for working with these tools.

By end-to-end (e2e, in the rest of the article) tests I mean automated scripts that walk through your app and check the behavior of interface elements, network requests, and other parts of the application system.

These can be complex scenarios, like logging in and creating an order for a pizza, but also much simpler cases, like app-bar navigation or using a filter tool.

Puppeteer is an extraordinarily popular library, and it is most often found in examples in combination with Jest.

However, at some point we faced some not-so-convenient aspects of Jest: asynchronous tests failing due to timeouts, and complicated dependency management when updating one of the libraries.

The universality of Jest and the decisions it made for us also sometimes became an obstacle, as for e2e tests we wanted to have more fine-grained control over their execution.

(I admit, I tried Jest for this purpose about six months ago, and possibly setup and debugging have become much easier since then.)

Note: my example reflects a rather specific setting, but I hope that the description of this particular case of using Puppeteer will help you in your work and experiments.

This article does not pretend to be a generalization or an example of a universal approach to writing e2e tests.

I made a branch in my Sandbox project on Github to share the boilerplate code for the test setup with basic examples.

Please note that there is neither business logic nor real components there; it serves only as a demo of the approach.

However, I think the very skeleton of the main.ts file could be helpful to those who are struggling with similar problems.

I hope so very much.

Test setup and grouping

So, here is the initial state of our project, which needs e2e testing.

This is a web application written in TypeScript and React which provides different levels of access to users with different rights and roles.

There are more roles than the two traditional ones, ‘user’ and ‘admin’.

Depending on the user’s identity, a certain interface is rendered and completely different scenarios are available, so authentication is necessary for our test scripts to work correctly.

We assume that in the future the monolithic app can be broken apart, and potentially every final type of application delivered to a particular type of user can become such a part.

If the different parts of our application can be mapped to different user types, the same can be done with tests.

As a result, I came to this scheme: we have a common entry point that starts the process for all the available scripts (main.ts), and test folders corresponding to each type of user:

(Image: tests are grouped by user type)

We decided to write tests for different user types as independent suites.

In my example, we would have 3 directories within the common E2eTests folder: for Doctor, Parent, and Admin user types.

Also, we would have a testUtils folder with an authentication helper function and some test data.
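Putting the description above together, the layout might look roughly like this (a sketch, not the project's exact file names):

```text
E2eTests/
├── main.ts        entry point: browser pool + one Mocha run per user type
├── Doctor/        test suites for the Doctor user type
├── Parent/        test suites for the Parent user type
├── Admin/         test suites for the Admin user type
└── testUtils/     login helper function and shared test data
```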

So, what about the main entry point? I wanted to make a short main function that would launch 3 browser instances for the three users, run all tests with Mocha, and then exit successfully or with an error.

I didn’t try a lot of ways to run several browser instances in parallel: you can find various libraries or code snippets while googling the problem.

The first solution I tried worked well, so for the moment it is the generic-pool library that creates a factory for our Chromium instances, and I use this answer from one of the Puppeteer issues:

(Code: create a pool of browser instances)

Also, we need a function to create a Mocha instance.

I like this feature of Mocha very much: you can start your test run from code, using conditional options that would be impossible, or at least not so simple, from an npm script.

So, in our main setup, we create 3 Mocha instances and give each of them the folder with tests targeted at a concrete user:

(Code: create a Mocha instance and give it the appropriate files)

As we intended to use these tests in our CI, I wanted to get a failure if any of the tests failed.

I decided to use the Promise.all mechanism to control all the test suites: the main function creates 3 Promises for the 3 user types and waits for the resolve/reject results:

(Code: the part of the main function where suite promises are created)

This is the function that creates a Promise: it takes the previously prepared browser and Mocha instances and runs the tests, reporting results by user type:

(Code: run a test suite and resolve/reject the Promise)

Chalk is useful for colorizing the results: we use the GitLab console, and a colorful log is easier to read there.

You may have noticed I didn’t show any example of the test code itself.

This is because there are plenty of clean and useful test examples in Puppeteer tutorials.

There are basic examples of test cases in the repo.

Meanwhile, there is one very important test that is crucial for running all the other suites: authentication.



This script uses the standard login form in the app interface and resolves its Promise when authentication has succeeded.

It is a before hook for the suite, so it is a part of the suite Promise:

(Code: the ‘login’ helper function)

Finally, our main function looks like this:

(Code: the main function, which would be started by the CI script)

There is always room for improvement, but this basic configuration is enough to cover our app with simple e2e tests.
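The entry-point logic can be sketched like this; `runSuiteFor` is an illustrative stand-in, not the project's real helper, for "acquire a browser from the pool, build the Mocha instance, run the suite":

```typescript
// One Promise per user type; Promise.all rejects as soon as any
// suite fails, which is exactly what we want CI to see.
export async function main(
  runSuiteFor: (userType: string) => Promise<void>
): Promise<void> {
  const userTypes = ['Doctor', 'Parent', 'Admin'];
  await Promise.all(userTypes.map((userType) => runSuiteFor(userType)));
}
```

In main.ts it would be started as `main(realRunner).then(() => process.exit(0)).catch((err) => { console.error(err); process.exit(1); });` so the CI job exits non-zero on any failing suite.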

Bonus: configure GitLab CI for running e2e tests

I mentioned we use GitLab CI, so I will add a .yml example of configuring a job for our tests.

It’s a simplified version of the configuration, without cache and other stages and jobs which could be running in your CI as well.

There is one thing that you should manage when running Chromium in a GitLab runner: the browser needs a bunch of Chromium dependencies available at launch.

It is possible to prepare them in advance by installing all the necessary packages, or Chromium itself, on the machine that runs the tests.

In our example, we install the dependencies before the script:

(Code: a simple version of the .yml file)

The log of our test results would look like this:

(Image: the test results log in the terminal/GitLab console)

That’s all! I realize this story may seem to describe a specific situation, and the solution with this authentication could look a little strange.
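A sketch of such a job; the image tag and the job/script names are assumptions, and the package list (taken from Puppeteer's troubleshooting notes for Debian-based images) may need adjusting for your base image:

```yaml
e2e-tests:
  stage: test
  image: node:12
  script:
    # Install Chromium's shared-library dependencies before launching Puppeteer
    - apt-get update
    - apt-get install -yq libx11-xcb1 libxcomposite1 libxcursor1 libxdamage1 libxi6 libxtst6 libnss3 libcups2 libxss1 libxrandr2 libasound2 libatk1.0-0 libatk-bridge2.0-0 libpangocairo-1.0-0 libgtk-3-0
    - npm ci
    - npm run test:e2e   # starts the main.ts entry point
```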

However, this setup currently fits our project’s needs, and I thought it would be nice to share it with you.

Sometimes you find yourself googling a very specific question or digging through the infinite GitHub issues.

Maybe this article will help you at that moment.

Thank you for your time and attention.
