Functional testing is considered standard and is carried out in the vast majority of projects. The need for it is intuitive: the programmer adds new functionality, and the tester checks whether it works properly. But how can we justify the need for non-functional testing, and what exactly are non-functional tests? In this article, I will take a more detailed look at the types of non-functional tests, how they differ from functional testing, why it is important to verify the non-functional requirements of a software system, and the risks of skipping such tests in projects.
Functional testing – functional requirements
Functional testing focuses on the most basic requirements. This type of testing aims to check whether the functional aspects of software applications work as intended. The project team prepares the requirements, and the client or employer knows, at least roughly, what they want to get. These requirements may change during the process itself; nevertheless, we still have the target in front of our eyes and know where we are headed. A tester, programmer, business analyst or even the end user can test the application against the functional requirements, and there is usually no need to acquire complex theoretical knowledge about how to test. The application needs to be “clicked through,” checking whether it works according to the business requirements.
Non-functional testing
Non-functional tests are also referred to as quality tests. This aspect is more complicated than in the case of functional tests. Their scope is most often summed up in a short phrase: while functional tests check what the system does, non-functional tests check how well the system works. Let’s start with a breakdown based on the ISTQB Test Analyst syllabus.
Non-functional tests can check non-functional aspects such as:
- usability
- security
- reliability
- performance
- maintainability
- portability
- compatibility
Importance of non-functional tests
Only companies that are aware of the issue and of the risks of not testing decide to carry out non-functional tests and verify non-functional requirements. However, not everything can be predicted or found. Even giants learn this the hard way from time to time: LinkedIn had the data of 700 million users (93% of its accounts) leaked and put up for sale, and Facebook (together with related applications such as WhatsApp, Messenger, and Instagram) experienced its biggest-ever outage in October 2021, remaining inaccessible for almost 6 hours. That failure generated losses of about $6 billion! The case of public-sector projects is particularly interesting. Although they are subject to numerous legal requirements, and are therefore much harder to implement, they frequently suffer failures such as:
- Security vulnerabilities – such as a vulnerability in the e-Toll system that allowed user data to be downloaded.
- Performance problems – such as those faced by government servers.
- Unsuitability for users – the low intuitiveness of the highway-tolling application has drawn criticism from many drivers.

So why aren’t non-functional tests performed? In small (e.g., up to 10 people) and short-term projects, even when awareness of non-functional testing is high, the reason for skipping them is obvious: the costs involved.
Non-functional tests
Performance tests
Among the non-functional tests mentioned earlier, awareness of two types in particular is growing, and they are worth a closer look.
The first, with a lower barrier to entry, is performance testing. The subject seems quite complicated, but after learning a few tools (e.g., JMeter, Gatling), it turns out that you can relatively quickly start running basic performance tests that will catch serious problems in an application. The following types of performance tests can be distinguished (a minimal load-test sketch follows the list):
- Load testing – testing the performance of the system according to different load levels, falling within the assumed range,
- Overloading (stress testing) – testing performance beyond the maximum expected load or with resource availability lower than the minimum requirements,
- Scalability testing – checking the system’s ability to increase or decrease the resources used depending on the load,
- Spikes – testing the system’s ability to process a spike in load and then return to a typical load,
- Endurance – testing the stability of the system under a given load over a period that is adequate and proportional to production realities,
- Concurrency – testing situations in which many users perform the same action at the same time,
- Throughput – determining the maximum load the system can handle while meeting performance requirements.
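To make this concrete, below is a minimal load-test sketch using Gatling’s Java DSL (one of the tools mentioned above). The target URL, user count, and response-time threshold are illustrative assumptions, not recommendations:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

// A minimal Gatling load test: ramps up virtual users against an assumed
// endpoint and asserts a hypothetical response-time requirement.
public class BasicLoadSimulation extends Simulation {

    // Assumption: the system under test lives at this base URL.
    HttpProtocolBuilder httpProtocol = http.baseUrl("https://example.com");

    // Each virtual user requests the home page, then pauses to simulate think time.
    ScenarioBuilder scn = scenario("Basic load")
            .exec(http("home page").get("/"))
            .pause(1);

    {
        setUp(
                // Load testing: ramp up to 100 users over 60 seconds (assumed profile).
                scn.injectOpen(rampUsers(100).during(60))
        ).protocols(httpProtocol)
         // Assumed requirement: 95th percentile response time under 800 ms.
         .assertions(global().responseTime().percentile3().lt(800));
    }
}
```

Swapping `rampUsers` for other injection profiles is how the same scenario becomes a spike, endurance or overload test.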
Requirements for performance testing
When developing new applications, it is difficult to give exact requirements for performance tests. They are usually based on defined production profiles, on predictions, or on experience with similar applications already in production. It is easier to define requirements when implementing a newer version of the software, because one can then refer to production data such as the number of queries per second, bandwidth, or computing power. One of the risks in performance testing is inappropriate execution (e.g., by inexperienced people without the requisite competences), which can result in key system components not being analyzed at all or in results being misinterpreted. Another big risk is conducting tests on test environments, which usually differ from production in configuration, data volume and capacity. A rough way to turn production data into a test target is shown below.
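One simple way to derive a concurrency target from production metrics is Little’s Law: the average number of requests in flight equals the arrival rate multiplied by the average response time. A small sketch, with purely illustrative numbers:

```java
// Deriving a rough concurrency target from production metrics via Little's Law:
// concurrency = arrival rate x average response time.
// All numbers below are illustrative assumptions, not real measurements.
public class LoadTargetEstimate {
    public static void main(String[] args) {
        double requestsPerSecond = 250.0;     // assumed production arrival rate
        double avgResponseTimeSeconds = 0.4;  // assumed average response time

        // Average number of requests in flight at any moment.
        double concurrentRequests = requestsPerSecond * avgResponseTimeSeconds;

        System.out.printf("Estimated concurrency to simulate: %.0f requests in flight%n",
                concurrentRequests); // 250 * 0.4 = 100
    }
}
```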
Security tests
The second, much broader field is security. Security tests check whether the system sufficiently protects data and functionality from third-party attacks. The importance of such tests is constantly growing. Who hasn’t heard of various hacks, leaks, frauds or thefts resulting from insufficient security or low user awareness?
However, it isn’t easy to find a tester who deals with functional testing on a daily basis and who can also conduct security tests from time to time. That’s why security audits are most often (if not always) outsourced after a system (or part of it) has been developed. The field is very broad, and a good place to start learning is the OWASP Top 10 – a document that lists the ten most common categories of security risk. Some types of security tests are as follows (a small illustrative check follows the list):
- Vulnerability scanning – performed using tools that scan a program to look for known vulnerabilities in its components,
- Security scanning – focuses on the security of application configurations, networks and other systems,
- Penetration testing – a process that simulates an actual attack,
- Security audit – an extensive, structured application inspection process that follows defined standards,
- Risk assessment – analyzing and identifying the biggest security risks,
- Security health check – a combination of security scanning, penetration testing and risk assessment to determine the level of current security measures and their effectiveness.
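As a small taste of security scanning, the sketch below requests a page and checks for a few well-known security-related HTTP response headers. It is only a surface-level check against an assumed endpoint; real security scanning relies on dedicated tools:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

// A surface-level security check: verify that an assumed endpoint sends
// a few commonly recommended security headers. Real scans go far deeper.
public class SecurityHeaderCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com")) // assumed system under test
                .GET()
                .build();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());

        // Headers commonly recommended for hardening HTTP responses.
        List<String> expectedHeaders = List.of(
                "Strict-Transport-Security",
                "Content-Security-Policy",
                "X-Content-Type-Options");

        for (String header : expectedHeaders) {
            boolean present = response.headers().firstValue(header).isPresent();
            System.out.println(header + ": " + (present ? "present" : "MISSING"));
        }
    }
}
```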
Usability tests
These tests check whether the user can easily learn how to use the application and whether their experience will be positive. Aspects such as aesthetics, intuitiveness, and accessibility for people with disabilities also come into play here. It’s a good idea to keep this in mind and focus on these issues from the very beginning of the project, especially in agile projects, where we have constant contact with the client and a real opportunity to influence changes to the functionality being created. Indiscriminately adding functionalities according to the requirements of the client (who is not always technically minded) is not the best approach. Conducting usability testing only after the application is completed can result in a significant increase in project costs due to additional work.
Reliability tests
Reliability testing means verifying whether a system can operate correctly for a specified period under specified conditions. However, it is difficult to simulate production conditions on test environments over a long enough period, which is why, among other things, such testing can take place partially on production servers. An example of reliability testing is placing a new version of an application on one of many servers (usually a less frequently used one) while leaving the previous version on the others – an approach commonly known as a canary release. After an assumed period of monitoring, the new version is rolled out to the remaining servers. This approach must be accompanied by appropriate configuration so that, if problems occur with the new version on a single machine, traffic is automatically redirected to the others. The reliability characteristics are:
- Maturity – the level of fulfillment of reliability requirements,
- Availability – the degree to which the system is operational and accessible when needed (a small worked example follows this list),
- Fault tolerance – the ability of a system to continue operating during a failure,
- Recoverability – the ability to recover from a disaster, measured by recovery time and the amount of lost data.
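Availability is commonly quantified as MTBF / (MTBF + MTTR), i.e., mean time between failures over total time including repair. A tiny worked sketch with assumed figures:

```java
// Availability = MTBF / (MTBF + MTTR).
// The figures below are illustrative assumptions, not measurements.
public class AvailabilityEstimate {
    public static void main(String[] args) {
        double mtbfHours = 720.0; // assumed mean time between failures (30 days)
        double mttrHours = 2.0;   // assumed mean time to repair

        double availability = mtbfHours / (mtbfHours + mttrHours);
        System.out.printf("Estimated availability: %.4f (%.2f%%)%n",
                availability, availability * 100); // ~99.72%
    }
}
```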
Maintainability tests
Maintainability testing allows you to analyze how complex the system will be to maintain in the future. Such tests are rather rarely conducted, despite the fact that maintenance of the program will (as assumed) last longer than its development. Prevention plays an important role from the beginning of the project – systematic code review and up-to-date documentation. It is very likely that in the future the system will be maintained or developed by people other than those who created it. For this reason, the transparency of code and documentation is key.
The main objectives of the maintainability tests are:
- Minimizing the cost of application maintenance,
- Minimizing the application downtime needed for maintenance.
Portability testing
Portability testing helps determine the level of complexity involved in moving a software component or application from one environment to another. Here, too, it is rare to single these tests out as a separate phase of the testing process. Agile frameworks such as Scrum naturally foster portability testing: in an iterative approach to software development, portability is exercised with every release of a successive version (moving the application from the development/testing environment to production), even as often as every 2 weeks. Other situations in which it is worth conducting portability testing include switching from Internet Explorer to Chrome, moving from one version of a database to another, or extending the program’s installability to successive versions of Windows or macOS. Portability can be measured by the amount of work required to move a software component from one environment to another. The characteristics of portability are:
- Installability – the capability to install the program on a new system,
- Adaptability – the capability of the software to be adapted to different target environments,
- Substitutability – the capability to replace an existing software module.
Compatibility tests
Compatibility tests verify the capability of different programs to coexist in the same environment, as well as the ability of a program to work under different configurations. A very popular case is cross-browser testing, which verifies the operation of an application in different browsers, such as Chrome, Firefox, Internet Explorer, Safari or Opera. Another case is testing mobile applications on many different devices. This approach identifies problems that customers may encounter when using less common operating systems or browsers. Compatibility can be tested according to the following dimensions (a cross-browser sketch follows the list):
- hardware
- operating system
- software
- network
- browser
- mobile device
- software version
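As an illustration, the sketch below runs the same smoke check in two browsers using Selenium WebDriver (a common choice for cross-browser testing, though not one the article prescribes). The URL is an assumption, and the browsers must be installed locally:

```java
import java.util.List;
import java.util.function.Supplier;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Run the same smoke check in two browsers to spot compatibility issues.
// The target URL is an illustrative assumption.
public class CrossBrowserSmokeTest {
    public static void main(String[] args) {
        List<Supplier<WebDriver>> browsers =
                List.of(ChromeDriver::new, FirefoxDriver::new);

        for (Supplier<WebDriver> factory : browsers) {
            WebDriver driver = factory.get();
            try {
                driver.get("https://example.com"); // assumed application URL
                System.out.println(driver.getClass().getSimpleName()
                        + " -> title: " + driver.getTitle());
            } finally {
                driver.quit(); // always release the browser session
            }
        }
    }
}
```

In a real suite, each browser run would assert on page behavior rather than just print the title, and the device/OS matrix would be driven by the dimensions listed above.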
Test cases for non-functional software testing
Non-functional testing focuses on aspects of the software that are not tied to specific functions – such as performance, usability, security, and scalability. When creating test cases for non-functional testing, it is important to consider the various scenarios that can impact the overall performance and user experience of the software.
Non-functional testing tools
Non-functional testing tools can help a great deal in the testing process. Various tools are available for testing non-functional aspects (stress testing, volume testing, performance, and so on). Some examples of non-functional testing tools are JMeter, Loadster, and LoadRunner.
Testing non-functional requirements – key takeaways
- Non-functional testing is a type of testing that checks non-functional requirements (performance, usability, security, and scalability).
- Non-functional tests are also referred to as quality tests.
- It is important to recognize the significance of non-functional testing and understand the potential consequences of skipping this type of testing in a project.
Non-functional software testing – summary
Functional testing is very important, and in many projects no one questions its relevance. Functional testing verifies functional requirements; ongoing manual testing, together with the development and maintenance of regression tests, practically exhausts the subject. However, we also need to be aware of the importance of non-functional testing and of the risks involved in skipping it.
Non-functional aspects are important. However, even at the test planning stage, there is often no thorough analysis of which non-functional tests should be carried out, with the result that only some bugs are found. The risks are high: an online store that stops responding on Black Friday, a leak of confidential customer data, a site blocked by ransomware, an unintuitive application or unattractive design that discourages users, payments that stop working after a new version is deployed… All of these can prove far more costly than investing in systematic non-functional testing.