- 1. Is it possible to test applications faster?
- 2. Parallel running of tests
- 3. Creating preconditions using APIs or using a database directly
- 4. Converting Tests to Lower Level Tests
- 5. Breaking down tests into smaller packages and grouping
- 6. Reducing the number of similar tests
- 7. Changing the test structure – matching the test to the tested functionality
- 8. Appropriate use of waits
- 9. Dedicated database for testing
- 10. Use of headless mode
- 11. Suitable equipment for testing
- 12. Summary
Is it possible to test applications faster?
The dynamically developing technology and growing user requirements make the design of efficient and reliable web applications increasingly complex, and ensuring the proper functioning of the system requires a great deal of effort not only from the development team, but also from the test team. A good set of tests helps to address these challenges. Unfortunately, a focus on quality does not always go hand-in-hand with speed.
It is not only test managers and project managers who dislike the long wait for test results. This is also problematic for developers – the lengthy application testing process may be one of the reasons for delays in software development. For the sake of the project, tests should be carried out as soon as possible (using common sense – they also should be reliable and give real information about the tested application). Below are some ways to speed up testing without compromising quality.
Parallel running of tests
Running tests in parallel is the best and most effective way to significantly reduce testing time. By default, in many tools, tests are carried out in a sequential way (only one test is performed at a time, and once it is completed, another one is launched). In general, this option depends on the test runner used and the options it provides for this purpose.
For example, xUnit uses a concept called test collections, which defines how tests may run in parallel. By default, each test class is a unique collection, and tests within the same class will not run in parallel – for that to happen, they must be split into two or more classes. The tool also lets you define custom collections via special attributes, making it easy to group tests and run them according to context. Other test runners offer similar functionality. In Playwright, for instance, you can run tests in parallel both within one test file and across the whole project using configuration parameters; this option is enabled by default.
How much time can be saved with this approach? Let’s assume (ignoring machine performance, server latency, network bandwidth, and other factors affecting testing performance) that we have a collection of 100 tests, each taking an average of one minute. Running them sequentially takes 1 hour and 40 minutes. Running them in two threads (2 tests at the same time) cuts the run time in half – to 50 minutes. Adding more threads reduces the execution time even further.
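The arithmetic above can be sketched in a few lines of Python. This is purely illustrative – real runners such as pytest-xdist or Playwright manage worker processes for you – but it shows the idealized relationship between worker count and wall-clock time:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def expected_runtime(test_count: int, avg_minutes: float, workers: int) -> float:
    """Idealized wall-clock time: tests are split evenly across workers."""
    return math.ceil(test_count / workers) * avg_minutes

def run_suite(tests, workers: int):
    """Run independent test callables concurrently, mimicking a parallel runner."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda test: test(), tests))

# 100 one-minute tests: 100 minutes sequentially, 50 minutes on two workers.
assert expected_runtime(100, 1.0, 1) == 100.0
assert expected_runtime(100, 1.0, 2) == 50.0
```

In practice the curve flattens as workers compete for CPU, network, and the application under test, which is exactly why the "potential problems" below matter.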
Potential problems
Unfortunately, running tests in this way is not the golden mean, where you just need to change the configuration and increase the number of threads. Be aware of potential problems related to running tests in parallel, for example:
- one test may affect others (e.g., through changes to the database or through resource consumption / load). Such tests should be identified and run sequentially (before or after the collection of tests performed in parallel). This gives every test the isolation it needs, so they do not interfere with each other while running;
- computer / server performance – select the maximum number of tests performed in parallel to avoid performance issues.
Analyzing the tests, dividing them into two groups (those that can run in parallel and those that must run sequentially), and choosing the right maximum number of concurrent tests can be a challenge, but the change pays off: testing time drops significantly. Not only does this free up time for other activities, but above all it delivers test feedback earlier, letting you address potential errors and failures more quickly.
Creating preconditions using APIs or using a database directly
When testing the user interface, it is often necessary to prepare the configuration of the tested application or certain data required by the test (usually described as preconditions). These actions should be performed through the API or, where possible, directly in the database. This increases both the speed of tests (direct HTTP requests or data modifications in the database are faster than "clicking" through the interface) and their stability.
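As a minimal sketch of the database variant – using an in-memory SQLite database and a hypothetical `users` table, not any real application schema – seeding a precondition directly instead of clicking through a registration form might look like this:

```python
import sqlite3

def seed_test_user(conn: sqlite3.Connection, login: str) -> int:
    """Create the user a UI test needs directly in the database,
    skipping the registration form entirely."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, login TEXT)"
    )
    cur = conn.execute("INSERT INTO users (login) VALUES (?)", (login,))
    conn.commit()
    return cur.lastrowid

conn = sqlite3.connect(":memory:")
user_id = seed_test_user(conn, "testUser")
# The UI test can now start with this user already in place.
```

The API variant is analogous: a direct HTTP POST to a setup endpoint replaces the same sequence of UI clicks.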
Potential problems
Here, too, we face potential problems, such as the need to keep tests in sync with the API or the database schema. While the API is easy to maintain (in most cases it is a matter of adding or removing fields in requests), with a database you need to know its structure and the relationships between its tables.
Converting Tests to Lower Level Tests
Performing tests through the user interface is generally slower and less reliable than lower-level tests such as API or unit tests. According to the test pyramid, there should be fewer UI/E2E tests than tests on lower levels.
UI tests require the browser to run and interact with it, which equals more time and resources. The interface rendering also takes time; it takes longer for the page to load properly for various reasons (e.g., a loaded application server or database). Sometimes the page may behave unpredictably (the component will not load properly, the dynamic element will not appear, etc.). This can not only lengthen the time of testing, but also make it unstable.
If you have multiple UI tests, consider converting them to lower-layer tests where possible. For example, a test of a page available only to a logged-in user (e.g., an account panel) can be performed entirely on the UI layer – including logging in. However, you can implement the login through the API, so that the UI layer is used only to verify the specific page. Moving the login "lower" speeds up the test, because the UI part starts with an already logged-in user going directly to the desired page.
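The shape of that idea can be sketched as follows. Everything here is a placeholder – the `/api/login` endpoint, the token field, and the cookie name are hypothetical, and `fake_post` stands in for a real HTTP client such as `requests.post`:

```python
def api_login(post, login: str, password: str) -> str:
    """Log in over HTTP instead of through the login form; `post` is any
    callable with a requests-like signature returning a parsed response."""
    response = post("/api/login", json={"login": login, "password": password})
    return response["token"]

def start_authenticated_ui_test(browser_cookies: dict, token: str) -> None:
    """Inject the session token so the UI test opens the product page
    already authenticated, skipping the login screens entirely."""
    browser_cookies["session"] = token

# Fake transport standing in for a real HTTP call to the backend.
fake_post = lambda url, json: {"token": f"token-for-{json['login']}"}
cookies: dict = {}
start_authenticated_ui_test(cookies, api_login(fake_post, "testUser", "testPassword"))
```

With a real stack, the injected cookie would go into the browser context (e.g., Playwright's `context.add_cookies`) before navigating straight to the page under test.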
Breaking down tests into smaller packages and grouping
Grouping tests does not itself affect the performance of tests which have already been created. The goal here is to enable the selection of specific tests or sets of tests due to their properties (tested area, functionality, type, priority, etc.). Thanks to the appropriate specification of test cases, we get an option to run only those tests that are needed at a given point in time. The main split can be done by isolating stability, smoke and regression tests, and providing tests with additional attributes will allow you to start them in almost any configuration (e.g., due to the above-mentioned area and type of tests).
It is not always necessary to run the entire set of tests. Often we want to test only a selected part of the system or a specific functionality. Searching for and running such tests manually would be very time-consuming, and with a large test set some may be missed. Filtering by attributes lets you select exactly the tests needed in a given situation. For example, when you have an online store and are working on a new product type, you can run only the tests related to adding a product to the basket or wish list before deciding on a full regression.
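Real runners expose this through markers or traits (pytest's `-m` expressions, xUnit traits, Playwright's `--grep`); the underlying mechanism can be sketched in plain Python:

```python
from typing import Callable, Iterable

def tagged(*tags: str):
    """Attach tags (area, type, priority...) to a test function."""
    def mark(test: Callable) -> Callable:
        test.tags = set(tags)
        return test
    return mark

def select(tests: Iterable[Callable], *wanted: str) -> list:
    """Pick only the tests carrying every requested tag."""
    return [t for t in tests if set(wanted) <= getattr(t, "tags", set())]

@tagged("regression", "basket")
def test_add_to_basket(): ...

@tagged("smoke", "login")
def test_login(): ...

# Run only the basket-related tests before deciding on a full regression.
basket_run = select([test_add_to_basket, test_login], "basket")
```

The test names and tags are invented for illustration; the point is that a consistent tagging scheme makes any subset of the suite selectable on demand.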
Split testing – case study
Let me give you a real-world example. While working on a project for a customer from the bookmaking industry, we ran regression tests, among other suites, during each release.
Challenge:
At the very beginning, the tests were not categorized beyond a few general tags describing the test type (regression, smoke), its ID in TestRail, and the author. Running all of them took about 2 days, followed by a meticulous process of analyzing failures. The client decided to change its approach and make running tests and the subsequent analysis easier and faster.
Solution:
Together with other teams, we developed a process for analyzing tests and assigning them to the teams responsible for a given application area. We marked them with new tags, corresponding among other things to the tested area, and created documentation – tests were unified and described. In addition, API and UI tests were separated. At a later stage, we analyzed which tests were stable and which were not (flaky tests); we moved the unstable ones out of the regression suite into a separate section and updated them before adding them back to the regression set. In the next stage, we analyzed the UI tests and converted them to API tests whenever possible.
Result:
Thanks to these solutions, we were able to reduce the time needed to perform all tests to almost 18 hours, and run additional tests, e.g., for specific areas of the application, so that we did not have to run a full regression every time.
Reducing the number of similar tests
It may turn out that a set of test cases contains tests verifying practically identical functionality with different data (e.g., adding 1 product to the basket, adding 5 products to the basket, adding a product from the "household devices" category, adding a product from the "consoles" category). If the second test case does not verify additional criteria (e.g., a 5% discount applied when the customer has at least 5 products in the cart), consider whether it should be kept at all. This happens especially in large systems with many test cases, where as the suite grows you can lose track and create a test case that already exists. Periodic analysis, updating, and verification of test cases will help identify such duplicates, and further investigation will tell you whether to keep, modify, or remove each one.
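When the variants genuinely differ only in data, one option is to collapse them into a single data-driven test. A sketch using the standard `unittest` module and a toy `add_to_basket` function (both invented for illustration):

```python
import unittest

def add_to_basket(basket: list, product: str, quantity: int) -> list:
    """Toy stand-in for the system under test."""
    basket.extend([product] * quantity)
    return basket

class BasketTests(unittest.TestCase):
    # One parametrized test replaces several near-identical cases that
    # differ only in the data; each case still reports separately.
    CASES = [("kettle", 1), ("kettle", 5), ("console", 1)]

    def test_add_products(self):
        for product, quantity in self.CASES:
            with self.subTest(product=product, quantity=quantity):
                basket = add_to_basket([], product, quantity)
                self.assertEqual(len(basket), quantity)
```

Cases that verify extra criteria (like the 5-product discount) should stay as separate, explicitly named tests rather than being folded into the data table.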
Changing the test structure – matching the test to the tested functionality
Tests should not only be stable and fast, but also properly written – each should test a specific application area or functionality. In a well-designed test, the number of steps needed to fully verify the functionality should be as small as possible, to avoid verifying things outside the test’s scope. For example, when testing adding a product to the basket (assuming the user must be logged in), we should not assert whether the user has actually been logged in – that belongs to the tests of the store’s login area. It is particularly easy to fall into this trap when creating tests in a BDD process (for example with SpecFlow), where steps are often simply copied from an existing test into a new one without adapting them to the new context.
Let’s consider a test of user login to the application:
Given I navigate to the homepage
And I click the Login button
When I log in as 'testUser' with password 'testPassword'
Then I assert that the My Account button is displayed
And the 'testUser' name is displayed
You can use its steps and add them at the beginning of the test of placing a product in the basket:
Scenario: Add the product to the basket
Given I navigate to the homepage
And I click the Login button
When I log in as 'testUser' with password 'testPassword'
Then I assert that the My Account button is displayed
And the 'testUser' name is displayed
When I navigate to a test product page
And I add the product to the basket
Then the Basket should contain the test product
The steps verifying a correct login are not relevant in the context of adding a product to the basket – moreover, the login test is duplicated, and its value becomes questionable, because login would be re-verified in every test written this way.
Instead, it should be modified accordingly to include only the function of adding a product to the basket:
Scenario: Add a product to the basket
When I navigate to a test product page
When I log in as 'testUser' with password 'testPassword'
And I add the product to the basket
Then the Basket should contain the test product
This optimizes the test both for speed and for the functionality it actually covers.
Referring back to the section on creating preconditions, the login can be moved there and performed through the API, so the test starts directly from navigation to the product page with the user already logged in:
Background: User is Logged In
Given I login as a test user through API

Scenario: Add the product to the basket
When I navigate to a test product page
And I add the product to the basket
Then the Basket should contain the test product
Going a step further, as with test conversion: if nothing needs to be checked on the UI side, you can try converting the whole test to the API level and speed up its execution even more.
Appropriate use of waits
The content of the application can be displayed statically or dynamically. AJAX technology, in which communication with the server takes place in an asynchronous manner without the need to reload the entire document, is used for the dynamic display of content. Thanks to this, the elements are loaded on the page at different intervals.
Unfortunately, this behavior forces the right approach to interacting with dynamically loaded elements. Using the static Thread.Sleep() method, which suspends code execution for a specified period of time, is not recommended. If the desired element loads faster than the declared pause, the program idles for the remainder of the specified time, needlessly extending the test; if the element is not loaded within that time, the test fails, since many factors affect element loading (server load, network bandwidth, etc.) and it can sometimes take several seconds. Fixed waits therefore hurt both the duration and the stability of tests.
Many automation frameworks (e.g., Selenium, Cypress, Playwright, or Spock) contain built-in solutions for this, and external libraries such as Awaitility can extend other tools with the same capability. The idea is to implement methods that block further code execution until a specific condition is met (conditional waits). An additional advantage of this approach is that such methods can be defined globally and reused without duplicating code in tests, which also makes the repository easier to maintain.
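The general shape of such a conditional wait is a polling loop with a deadline. This is a generic sketch – Selenium's `WebDriverWait`, Playwright's auto-waiting, and Awaitility give you the same thing ready-made:

```python
import time

def wait_until(condition, timeout: float = 10.0, interval: float = 0.2):
    """Poll `condition` until it returns a truthy value, then return that
    value immediately – no fixed sleep is wasted. Raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Returns as soon as the condition holds, instead of always sleeping 10 s.
element = wait_until(lambda: "page-loaded")
```

Unlike `Thread.Sleep()`, the wait ends the moment the element appears, and a genuinely missing element still fails fast at the timeout.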
Dedicated database for testing
The undoubted advantage is the separation of test data from the other environments used in the project, so that processes do not disturb each other – a sudden deployment to an environment shared by the whole team, or changes made manually, will not affect test results. To create such a database, you can use a copy of the production database, which gives you a test environment very similar to the one used by the application’s end users.
When using a separate database, remember to manage it properly – for example, resetting the data when the tests are completed.
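The setup-and-reset pattern can be sketched with a context manager (using SQLite and an invented `orders` table for illustration; with a real dedicated database the create/reset steps would run migrations and truncate tables instead):

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def test_database():
    """Give each test run a fresh database and wipe the data afterwards,
    so one run never leaks state into the next."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
    try:
        yield conn
    finally:
        conn.execute("DELETE FROM orders")  # reset data when tests complete
        conn.close()

with test_database() as db:
    db.execute("INSERT INTO orders (item) VALUES ('book')")
```

Most test frameworks offer fixtures or setup/teardown hooks that are the natural home for exactly this kind of lifecycle.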
Use of headless mode
Headless mode means that the browser is launched without a graphical interface. Its operation and functions remain unchanged compared to the classic browser, but the resources are not additionally burdened with the rendering of the web application and the launch of the browser interface itself. Running tests using this mode will allow you to reduce the time of tests compared to running tests using a typical browser. This is especially useful when we run multiple tests at the same time – browsers with an active GUI use many more resources than those running in headless mode. An additional advantage is the option of using them to conduct tests on the server or container side, and hence a simple way to include them in the CI/CD process.
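A common convention is to make headless the default and opt into a visible browser only for local debugging. A minimal sketch (the `headless` keyword matches what Playwright for Python's `launch()` expects; Selenium uses a browser option flag instead):

```python
def launch_options(headed: bool = False) -> dict:
    """Build browser launch options: headless by default (no GUI rendering,
    runs fine on servers and in CI containers); pass headed=True only when
    you want to watch the test locally."""
    return {"headless": not headed}
```

Usage would be something like `browser = playwright.chromium.launch(**launch_options())`, with a CLI flag or environment variable flipping `headed` during debugging.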
Suitable equipment for testing
Performing tests, in particular E2E tests in browsers, requires considerable computing power: tests perform many operations (launching a browser instance, rendering the application, executing the test code, etc.). Running them on poorly performing hardware, especially in parallel, can not only lengthen their execution but also make them unstable. The simplest solution is to upgrade the computer or server on which the tests run, which will increase their speed (and the number of tests that can run in parallel).
If there are reasons why updating the hardware is impossible, using cloud services might be a good idea.
Summary
The right test automation tool will certainly help streamline the whole process, but it is the automation tester, as the person responsible for the tests, who should follow good practices and promote them in the project. With the above tips, you can speed up the execution of automated tests, so that information about the reliability of the tested application is available sooner. This matters from a business perspective – the competition does not rest on its laurels, and customers and end users expect the application to run smoothly and reliably, with any fixes delivered as soon as possible. A swift testing process helps meet those expectations.
It is not obligatory to use all the listed methods at the same time, but using each of them will reduce the time of testing by several dozen minutes or even hours when we run lots of tests.
It is important to periodically review the test set and discuss optimization options from every angle. Each member of the team responsible for software development should be able to suggest improvements; this gives diversified feedback (from both technical and business points of view), making it possible to hold in-depth discussions, draw the right conclusions, and use them to optimize the tests.