Quality assurance plays a vital role in software development. An effective QA process ensures a high-quality product and saves money because errors identified early in the development life cycle are cheaper to fix. QA engineers use different tests to evaluate software: integration, load, regression, accessibility, and many others. To help you better understand these concepts, we have put together a list of the most common types of testing techniques.
Manual vs. Automated Testing
Before diving deeper into various types of tests, it is important to distinguish between the two main approaches to performing testing: manual and automated. Manual testing means that human testers execute test cases manually. During automated testing, QA automation engineers write code to simulate certain actions, and tests are then performed by software tools.
Each of these approaches has its advantages and disadvantages. Here is a brief overview:
Pros of Manual Testing
- Expensive automation tools are not required
- Can be applied to the majority of test types
- Cost-effective for small projects
- Easily adapts to changes in requirements
- Provides accurate feedback on user experience
Cons of Manual Testing
- Time-consuming and ties up human resources
- Higher risk of human error during testing
- Problematic to apply to performance tests
- Less reliable on large projects because it is hard to achieve the same test coverage as automated testing
Pros of Automated Testing
- Cost-effective for large projects
- Faster in most cases
- More accurate results and better test coverage
- Less prone to human errors
- Well-suited to load and stress testing
Cons of Automated Testing
- Requires more initial investment in tools and hiring qualified professionals
- Impossible to test user experience
- Not well suited for projects where requirements change frequently
- Cannot find errors that are beyond the scope of its code
Usually, QA teams incorporate both manual and automated tests into the QA process to ensure the best results. The right balance of these two approaches depends on the scope of a project, the functionality that needs testing, and available resources.
White Box, Black Box, and Gray Box Testing
Another way to categorize testing activities is by access to source code. There are three main techniques:
- White box testing
- Black box testing
- Gray box testing
In white box testing, also known as glass box or clear box testing, software code is visible to testers. They examine the application's internal structure to verify input-output flows, identify poorly optimized code parts, and discover security vulnerabilities.
Black box testing is a method that analyzes the functionality of an application from an end-user perspective without peering into its internal design. Testers do not know the code structure or implementation details and rely only on functional requirements. In other words, they know what the application is supposed to do but not how it does it.
And finally, gray box testing combines both techniques. QA analysts know some of the application's code and internal structure, as well as all requirements and specifications. Therefore, the testing is performed from the user's perspective but with access to internal information.
Deciding which testing technique to adopt depends on the goals for the tests and the project's nature.
Positive vs. Negative Testing
In addition to the approaches described above, we can distinguish between negative and positive testing.
Positive testing means the QA team uses valid input data and compares the output with the expected results. Let us take a look at a simplified example. Suppose testers need to test a text box where a user will enter their phone number. According to the requirements, the system should accept only numeric values. As the QA team has decided on a positive testing technique, they input only numbers and see if it works.
During negative testing, on the other hand, testers use invalid input data. For our phone number example, negative testing would mean entering letters instead of digits. Such tests help make sure that the system does not accept improper data and shows the correct error message.
Usually, QA teams combine positive and negative testing approaches to identify more bugs and enhance the quality of the final product.
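The phone-number scenario above can be sketched as a tiny automated check. This is a minimal illustration: the `validate_phone` function and its "7 to 15 digits" rule are hypothetical, invented just for this example.

```python
import re

def validate_phone(value: str) -> bool:
    """Hypothetical validator: accept only strings of 7 to 15 digits."""
    return bool(re.fullmatch(r"\d{7,15}", value))

# Positive tests: valid input should be accepted.
assert validate_phone("5551234567")
assert validate_phone("1234567")

# Negative tests: invalid input should be rejected.
assert not validate_phone("555-ABC-1234")   # letters instead of digits
assert not validate_phone("")               # empty input
assert not validate_phone("123")            # too short
```

The positive cases confirm the field does what the requirements say; the negative cases confirm it rejects improper data instead of silently accepting it.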
Now that we know the differences between the main software testing techniques, we will look at the most common types of testing.
Functional Testing
In functional testing, QA analysts examine various components of an application against requirement specifications. In simple terms, testers verify that the application does what it is supposed to do. During functional testing, QA teams test each feature and make sure that the actual output matches the expected result. For example, they might check whether users can successfully log in to the application once they provide valid credentials.
Whether functional testing is done manually or automatically, QA professionals usually follow these steps:
- Analyze the functional requirements
- Identify testing goals
- Create test scenarios
- Prepare input data
- Design test cases
- Execute test cases
- Compare actual and expected results
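The steps above can be sketched in code: prepare input data, design test cases with expected results, execute them, and compare actual with expected output. The `login` function below is a hypothetical stub standing in for a real authentication endpoint.

```python
# Hypothetical system under test: a stub login function standing in
# for the real application's authentication feature.
VALID_USERS = {"alice": "s3cret"}

def login(username: str, password: str) -> bool:
    return VALID_USERS.get(username) == password

# Test cases prepared from the requirements: input data plus expected result.
test_cases = [
    {"user": "alice",   "password": "s3cret", "expected": True},   # valid credentials
    {"user": "alice",   "password": "wrong",  "expected": False},  # bad password
    {"user": "mallory", "password": "s3cret", "expected": False},  # unknown user
]

# Execute each case and compare the actual output with the expected result.
for case in test_cases:
    actual = login(case["user"], case["password"])
    assert actual == case["expected"], f"Failed: {case}"
```

In a real project, the test cases would come from the functional requirements rather than being hard-coded next to the implementation.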
We can distinguish different types of functional tests depending on where the application stands in the software development life cycle. The following are several types of functional tests.
Unit tests are the first level of functional testing. QA teams perform such tests to ensure that individual parts of an application work correctly.
Suppose developers build a calculator app. One unit test will check software behavior when the user attempts to sum up two numbers. Other unit tests will verify how subtraction, multiplication, and division work. Of course, this example is highly simplified. QA professionals might run hundreds of unit tests in complex software products to examine each functionality.
Unit testing identifies bugs at the earliest stages of the software development life cycle.
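For the calculator example, a unit test suite might look like the sketch below, written with Python's standard `unittest` module. The `add` and `divide` functions are stand-ins for the app's real code.

```python
import unittest

# Stand-ins for the calculator app's real functions.
def add(a, b):
    return a + b

def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("cannot divide by zero")
    return a / b

class CalculatorTest(unittest.TestCase):
    """Each test checks one unit of behavior in isolation."""

    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_divide(self):
        self.assertEqual(divide(10, 4), 2.5)

    def test_divide_by_zero_raises(self):
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)

# Run the suite without exiting the interpreter.
unittest.main(argv=["calculator_test"], exit=False)
```

Each test exercises a single behavior, so when one fails, it points directly at the broken unit.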
The next level of functional testing is called integration testing. During integration tests, QA analysts validate if two or more components work together properly. For example, testers check that users can successfully log in to an e-commerce website after adding some items to their carts. Testers ensure that the integration between these two functionalities—log in and cart—works as expected.
When QA teams use incremental integration testing, they start with integration tests on two modules and add a third module if everything looks good. The testers continue to add modules one by one until they have tested the entire application. In non-incremental or big bang testing, all modules are integrated at once and tested together.
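The login-plus-cart scenario can be illustrated with a small integration test. Both `Cart` and `Session` are hypothetical modules invented for this sketch; the point is that the test exercises two components together rather than each in isolation.

```python
# Two hypothetical modules of an e-commerce app, tested together.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class Session:
    def __init__(self):
        self.user = None
        self.cart = Cart()

    def log_in(self, user):
        self.user = user  # items added before login must survive

# Integration test: items placed in the cart before login
# are still there after the user logs in.
session = Session()
session.cart.add("book")
session.log_in("alice")
assert session.user == "alice"
assert session.cart.items == ["book"]
```

A unit test on `Cart` alone would not catch a bug where logging in resets the session state; only the combined test does.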
When testers are done with unit and integration tests, they can proceed with system testing to evaluate how the complete and fully integrated product works. For example, testing the system of an e-commerce website would include verifying the following:
- If it launches in all major browsers
- If users can register, log in, and log out
- If users can search for specific products or filter them based on various parameters
- If it is possible to add items to the cart
- If users can complete payment
System testing is often used interchangeably with end-to-end testing, which also involves evaluating a fully integrated application. However, end-to-end testing focuses on a smooth workflow through real user scenarios, while system testing verifies the complete product against the requirements.
Regression tests are used to verify that recent changes in application code, like a bug fix or a new feature, do not negatively impact existing functionality. For example, after developers add the option to pay with rewards points on an e-commerce application, the QA team has to check if other payment options, such as credit card payment or bank transfer, still work correctly.
Smoke testing refers to preliminary tests that verify whether all the application's main features work correctly. QA teams often use smoke tests after a new build to evaluate the system's basic functionality quickly before running more thorough and expensive tests. These tests are also used after deployment to ensure that the software works as expected in the new environment.
Sanity tests are performed on a specific part of an application after a new build with minor code or function modifications. The purpose is to verify that earlier reported bugs have been fixed, new issues have not appeared, and changes have not affected associated modules. Like smoke testing, sanity tests do not give QA analysts detailed results, but they help quickly validate that the changed part of an application still works.
Acceptance testing refers to formal tests usually performed by customers. These tests help determine if an application meets all business requirements and is ready to be delivered. Acceptance testing mimics user behavior to verify that all functions work correctly and the system's performance is acceptable.
Functional testing is a core part of the QA process. The tests described above help ensure that the product meets all the business requirements.
Non-Functional Testing
When developing a high-quality product, verifying that it works is not enough. An application also needs to be secure, reliable, and user-friendly. In addition to functional testing, QA teams ensure that all aspects of an application work as desired by running various non-functional tests. Here is a brief overview of the most commonly used types of non-functional testing.
With performance tests, QA analysts evaluate the overall performance of the product. They analyze parameters such as response time, processing speed, data transfer rate, memory utilization, and network bandwidth usage of the developed application.
Load tests are used to simulate demand on an application and analyze its performance. They help verify that the software will work smoothly under the expected load. For example, the QA team measures the response time for a particular action performed by a certain number of concurrent users within a set duration. Load testing helps identify such issues as slow page load times or application crashes.
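The idea of concurrent users and response-time measurement can be sketched with a thread pool. This is a toy simulation, not a real load-testing tool: `handle_request` is a stand-in that sleeps instead of calling a live application.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for a real request to the application under test."""
    start = time.perf_counter()
    time.sleep(0.01)                      # simulated server work
    return time.perf_counter() - start    # response time in seconds

# Simulate 50 concurrent users, each issuing one request.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(lambda _: handle_request(), range(50)))

print(f"max response time: {max(latencies):.3f}s, "
      f"mean: {sum(latencies) / len(latencies):.3f}s")
```

In practice, QA teams use dedicated tools for this (JMeter, Locust, and similar) that generate far more realistic traffic and collect much richer metrics.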
Stress tests are used to analyze system stability by simulating heavy loads beyond standard operational capacity. Testing teams perform stress tests to determine breaking points or safe usage limits and check how the system behaves under extreme conditions.
Stability tests help verify that a software application can continuously function for a stated period of time without failure. During these tests, QA professionals try to identify such issues as memory leaks, systems slowing down, software crashes, or unexpected restarts.
During scalability testing, testers examine how the application can handle an increasing workload (user traffic, data volume, number of specific requests, etc.). They analyze such factors as response time, memory usage, or network usage.
Security testing helps uncover an application's vulnerabilities and identify potential security risks. QA teams run this type of test to find all flaws and weaknesses in the software system that might result in a loss of data, revenue, or company reputation.
Penetration testing is a subtype of security testing that uses simulated cyberattacks to identify security flaws in the application and evaluate the effectiveness of protective mechanisms. In other words, testers conduct authorized hacking attempts.
Compatibility tests ensure that a software application can work across different browsers, operating systems, devices, or networks. These tests also help QA teams verify that a new product is compatible with other software applications. Compatibility tests can be divided into two subgroups:
- Backward compatibility tests check compatibility with older versions of the software or hardware
- Forward compatibility tests check compatibility with the newest versions of software or hardware
Recovery tests help verify an application's ability to recover from software or hardware crashes, power failure, database overload, external servers not responding, network failures, etc. During these tests, the QA team analyzes such things as the time needed to resume operations and how easily lost data can be retrieved.
Volume tests or flood tests allow us to evaluate how the software behaves if a massive volume of data is added to the database. With these tests, QA analysts verify the system's capacity and analyze the impact on response time.
Compliance testing, also known as conformance testing, helps ensure that the system complies with relevant internal and external regulations, standards, and laws. Compliance testing is an audit that controls:
- Whether the development process meets the company's guidelines
- Whether detailed functional and non-functional testing is performed
- Whether project documentation is complete and correct
- Whether the product will be subject to complaints from regulatory authorities
Globalization testing is necessary when software is used worldwide. Such tests help ensure that the application is stable, all features are available, and the data is represented correctly in all locales. In other words, this type of testing verifies that the product works in any region.
Localization tests examine if the software is adapted for use in a specific locale. During localization testing, the QA team checks the language, date and currency formats, time zones, appropriateness of the product name translation, and so on.
Accessibility tests help ensure that the application works for all users, including those with hearing impairment, reading disabilities, color blindness, or other physical differences. While performing accessibility testing, QA analysts check that all images have alt tags, the color contrast of each visual element is sufficient, the application is compatible with screen reader tools or special keyboards, etc.
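One of the checks above, verifying that every image has alt text, can be automated with a short script. This sketch uses Python's standard `html.parser`; the sample page is made up for the example.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that are missing a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing_alt.append(attr_map.get("src", "<no src>"))

# A made-up page fragment: one image with alt text, one without.
page = """
<img src="logo.png" alt="Company logo">
<img src="banner.png">
"""

checker = AltTextChecker()
checker.feed(page)
print("images missing alt text:", checker.missing_alt)  # → ['banner.png']
```

Real accessibility testing goes much further (contrast ratios, keyboard navigation, screen-reader behavior), usually with dedicated tools and manual review.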
Non-functional tests are just as important as functional tests. An effective testing strategy combines both types of testing to create the highest quality products.
Other Types of Software Testing
There are many more types and subtypes of software testing than we can cover in this article. However, we want to mention three additional types of testing that are becoming popular in software development companies, especially those following Agile methodology.
When using an exploratory testing technique, testers do not follow strict test cases but rely on their intuition and experience. Testers usually do not have in-depth knowledge of the application's requirements or internal structure but explore and investigate the software on the fly. This approach helps testers find issues they might have missed if the team relied only on scripted tests.
Ad-hoc testing is another unstructured testing technique. Similar to exploratory testing, ad-hoc tests do not follow strict test cases and do not require thorough documentation. However, QA analysts have detailed knowledge of the product during ad-hoc tests. These tests can be done randomly at any stage of the software development life cycle and help uncover bugs that cannot be found by following a formal testing process.
Usability or User Experience (UX) testing is a technique used to evaluate a product by testing it on a small group of target end-users. The goal of this testing is to verify that the software is user-friendly. It allows companies to answer questions such as:
- Is it easy to learn how to use the application?
- Is the navigation effective?
- Is the user interface aesthetically pleasing?
- Are the error messages helpful?
If done properly, usability testing provides unbiased feedback on the developed application and allows companies to make adjustments before the product is released.
We have briefly described the most common test types and approaches to software testing. It can be challenging to grasp all the nuances of these tests when you are just starting your QA journey. However, we hope this article gives you a better understanding of the techniques testers use in their day-to-day work. If you want to learn more about software testing, consider enrolling in our intensive Manual QA training.