Integration testing (sometimes called Integration and Testing, abbreviated “I&T”) is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before system testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
Purpose
The purpose of integration testing is to verify the functional, performance, and reliability requirements placed on major design items. These “design items”, i.e. assemblages (or groups of units), are exercised through their interfaces using black-box testing, with success and error cases simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces. Test cases are constructed to verify that all components within an assemblage interact correctly, for example across procedure calls or process activations; this is done after the individual modules have been tested in isolation, i.e. unit testing. The overall idea is a “building block” approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages.
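As a minimal sketch of the black-box idea (the module and function names here are hypothetical stand-ins, not from any particular codebase), an integration test drives an assemblage only through its public interface and feeds it both success and error inputs:

```python
import json
import unittest


# Stand-ins for two separately unit-tested modules; in a real project these
# would be imported from the application rather than defined inline.
def parse_order(raw):
    data = json.loads(raw)  # raises a ValueError subclass on malformed input
    return {"item": data["item"], "qty": int(data["qty"])}


def validate_order(order):
    return bool(order["item"]) and order["qty"] > 0


class OrderAssemblyTest(unittest.TestCase):
    """Black-box: drives the assemblage only through its interface."""

    def test_success_case(self):
        order = parse_order('{"item": "widget", "qty": 3}')
        self.assertTrue(validate_order(order))

    def test_error_case(self):
        # Malformed input should surface as an error at the interface.
        with self.assertRaises(ValueError):
            parse_order("not json")


if __name__ == "__main__":
    unittest.main()
```

The test never inspects the internals of either module; it only checks that the two behave correctly when wired together.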
Some different types of integration testing are big bang, top-down, and bottom-up.
Big Bang
In this approach, all or most of the developed modules are coupled together to form a complete software system, or a major part of it, which is then used for integration testing. The Big Bang method can save time in the integration testing process; however, if test cases and their results are not recorded properly, the integration process becomes much more complicated and may prevent the testing team from achieving the goal of integration testing.
A type of Big Bang integration testing is called Usage Model testing, which can be applied to both software and hardware integration testing. The premise behind this type of integration testing is to run user-like workloads in integrated, user-like environments. Testing in this manner proves out the environment itself, while the individual components are proven indirectly through their use. Usage Model testing takes an optimistic approach, because it expects to find few problems with the individual components. The strategy relies heavily on the component developers to do the isolated unit testing for their product; its goal is to avoid redoing the testing already done by the developers and instead flush out problems caused by the interaction of the components in the environment. For integration testing, Usage Model testing can be more efficient and can provide better test coverage than traditional focused functional integration testing. To be efficient and accurate, care must be taken in defining user-like workloads that create realistic scenarios for exercising the environment. This gives confidence that the integrated environment will work as expected for the target customers.
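A minimal sketch of the usage-model idea, using a hypothetical in-process store as a stand-in for a real integrated environment: the test replays a realistic user session and checks only the end-to-end outcome, trusting component-level behavior to the developers' unit tests.

```python
import unittest


# Hypothetical integrated system under test; in real Usage Model testing
# this would be the actual deployed components, not a stand-in class.
class Store:
    def __init__(self):
        self._cart = []

    def add_to_cart(self, item, price):
        self._cart.append((item, price))

    def checkout(self):
        total = sum(price for _, price in self._cart)
        self._cart.clear()
        return total


class UserWorkloadTest(unittest.TestCase):
    def test_typical_shopping_session(self):
        # Replay a realistic session: add items, check out, come back.
        store = Store()
        store.add_to_cart("widget", 9.99)
        store.add_to_cart("gadget", 24.50)
        self.assertAlmostEqual(store.checkout(), 34.49)
        # Checking out an empty cart is also a plausible user event.
        self.assertEqual(store.checkout(), 0)


if __name__ == "__main__":
    unittest.main()
```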
Top-down and Bottom-up
Bottom-Up Testing is an approach to integration testing in which the lowest-level components are tested first and then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy has been tested.
All the bottom- or low-level modules, procedures, or functions are integrated and then tested. Once the lower-level modules have been integration tested, the next level of modules is formed and can be used for integration testing. This approach is helpful only when all or most of the modules at the same development level are ready. It also helps to determine how much of the software has been developed, making it easier to report testing progress as a percentage.
- Top-Down Testing is an approach to integration testing in which the top-level integrated modules are tested first, and each branch of the module tree is tested step by step until the end of the related module is reached.
- Sandwich Testing is an approach that combines top-down testing with bottom-up testing.
The main advantage of the Bottom-Up approach is that bugs are more easily found. With Top-Down, it is easier to find a missing branch link.
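The two directions rely on opposite kinds of scaffolding, sketched below with hypothetical names: top-down testing substitutes a stub for a lower-level module that is not finished yet, while bottom-up testing uses a driver to exercise a lower-level module before its real callers exist.

```python
import unittest


# Real low-level module, already built (exercised bottom-up).
def tax_for(amount):
    return round(amount * 0.07, 2)


# Real top-level module (exercised top-down); it depends on a lower-level
# pricing service that is not finished yet.
def invoice_total(amount, pricing):
    return amount + pricing.tax_for(amount)


class PricingStub:
    """Stub standing in for the unfinished lower-level pricing module."""

    def tax_for(self, amount):
        return 1.00  # canned answer, just enough to exercise the caller


class TopDownTest(unittest.TestCase):
    def test_invoice_total_with_stubbed_pricing(self):
        self.assertEqual(invoice_total(10.00, PricingStub()), 11.00)


class BottomUpDriverTest(unittest.TestCase):
    """Driver: calls the low-level module directly; no caller needed yet."""

    def test_tax_for(self):
        self.assertEqual(tax_for(100.00), 7.00)


if __name__ == "__main__":
    unittest.main()
```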
Limitations
Any conditions not covered by the specified integration tests, beyond confirming the execution of the design items, will generally not be tested.
DevOps Integration Testing
Integration tests validate behavior between components and are most often written by developers. They can involve checking behavior of web services, database calls, or other API interactions. Integration tests are much slower than unit tests because they involve a significant amount of “ceremony”: standing up connections, handling authentication, and dealing with service and network latency.
Integration tests should generally avoid granular validations (those are best left to unit tests) and should instead focus on more significant validations. For example, for a payroll service, a good integration test asks: when the service is invoked with valid data, do I get a correct response? The integration test should not cycle through every test case for validating the payroll algorithm; that is the responsibility of the unit tests!
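A sketch of that coarse-grained check, assuming a hypothetical payroll HTTP endpoint and the third-party `requests` library: the test confirms that one valid request yields a sane response, and leaves the exhaustive algorithm cases to the unit tests.

```python
import unittest

import requests  # third-party HTTP client, assumed to be installed

# Hypothetical test-environment endpoint, not a real service.
PAYROLL_URL = "http://payroll.test.internal/api/v1/pay"


class PayrollServiceIntegrationTest(unittest.TestCase):
    def test_valid_request_returns_correct_response(self):
        # One representative valid payload, not the full case matrix.
        resp = requests.post(
            PAYROLL_URL,
            json={"employee_id": 42, "hours": 40, "rate": 25.0},
            timeout=10,  # real services have real latency; always bound it
        )
        self.assertEqual(resp.status_code, 200)
        body = resp.json()
        # Coarse validation only: the detailed payroll math is unit-tested.
        self.assertEqual(body["gross_pay"], 1000.0)


if __name__ == "__main__":
    unittest.main()
```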
In a DevOps environment, the responsibility for testing falls on developers, testers, and a variety of other team members, but each of these groups is generally responsible for different stages of testing. For example, developers perform both unit testing and integration testing, dedicated testers perform system tests, and various user groups perform user acceptance tests.
In an Agile or DevOps environment where continuous delivery pipelines are common, integration testing should be carried out as each module is completed or adjusted. For example, in many continuous delivery pipeline environments, it’s not uncommon to have multiple code deployments per developer per day. Running a quick set of integration tests at the end of each development phase prior to deployment should be a standard practice in this type of environment.
To test efficiently in this manner, a new component must be tested either against existing completed modules in a dedicated test environment or against stubs and drivers. It is generally a good idea to keep a library of stubs and drivers for each application module, organized in a folder or library, to enable quick, repeatable integration testing. Keeping stubs and drivers organized this way also makes it easy to apply iterative changes, keeping them up to date and effective for your ongoing testing needs.
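One way to keep such a library usable, sketched here with hypothetical names, is to give each application module's stubs a dedicated, importable home so every integration test pulls in the same copy:

```python
# stubs/payroll_stubs.py -- hypothetical shared stub module, kept next to
# the application module it doubles for.


class PayrollServiceStub:
    """Canned stand-in for the payroll service, reused across tests."""

    def __init__(self, gross_pay=1000.0):
        self.gross_pay = gross_pay
        self.calls = []  # record invocations so tests can assert on them

    def pay(self, employee_id, hours, rate):
        self.calls.append((employee_id, hours, rate))
        return {"gross_pay": self.gross_pay}
```

A test that needs the payroll dependency then simply imports `PayrollServiceStub`, and updating the stub in one place keeps every integration test that uses it current.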
Another option to consider is service virtualization, a solution originally developed around 2002. It creates a virtual environment that simulates module interaction with existing resources, for testing purposes in a complex enterprise DevOps or Agile environment.
In any development cycle, bugs and issues inevitably arise. To manage them across many developers and teams, consider using good issue-tracking software to record, track, and manage reported issues throughout the lifecycle of issue resolution. Jira and Remedy are both solid choices for issue tracking and have a long history of refinement.
Integration Testing Best Practices
Here are a few integration testing best practices:
Do Integration Testing Before or After Unit Testing – Continually merge source-code updates from all developers on a team into a shared mainline. This continual merging prevents a developer's local copy of a software project from drifting too far afield as new code is added by others, avoiding catastrophic merge conflicts.
In the Waterfall days of software development, you absolutely had to do integration testing after unit testing. But today, you have a lot more flexibility to choose the right time to perform integration testing.
Separate Unit Testing Suites From Integration Testing Suites – While integration tests can be run whenever you need them, they shouldn't be run at the same time as unit tests. Developers need space to work on the business logic in the code by running unit tests and getting immediate feedback, which ensures broken code isn't committed to the mainline. If their test suite takes a long time to finish (and unit tests should run quickly), they may end up committing bad code or stop running tests altogether, which can also lead to unit tests not being properly maintained.
Keeping your test suites separate allows developers to run fast unit tests locally and leaves the long integration testing process to the build server in a separate test suite.
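With pytest, for example, a custom marker makes the split mechanical (the `integration` marker name is a common convention, not something built into pytest, and should be registered under `markers` in `pytest.ini`):

```python
import pytest


def test_payroll_rounding():
    # Plain unit test: fast, no external dependencies.
    assert round(37.5 * 25.0, 2) == 937.5


@pytest.mark.integration  # custom marker separating the slow suite
def test_payroll_service_round_trip():
    # Talks to a real service; runs on the build server, not on every save.
    ...
```

Developers then run `pytest -m "not integration"` locally for fast feedback, while the build server runs `pytest -m integration` as part of the pipeline.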
Log as Much as Possible – If a problem arises during a unit test, it's fairly easy to identify the cause and fix the issue. But because of the scope and complexity of integration tests, which usually span several modules and hardware components, identifying the cause of an integration failure is much more difficult. To get around this, log your progress. Logging helps you analyze a failure, maintain a record of potential causes, and rule out other explanations, narrowing down the true cause.
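A minimal sketch of that habit using Python's standard `logging` module (the flow and values are invented for illustration): record each step as the test crosses a module boundary, so a failure log shows how far the flow got and what data it carried.

```python
import logging
import unittest

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("integration")


class CheckoutFlowTest(unittest.TestCase):
    def test_checkout_flow(self):
        log.info("step 1: building order")
        order = {"item": "widget", "qty": 3}
        log.info("step 2: pricing order %r", order)
        total = order["qty"] * 9.99
        log.info("step 3: charging total %.2f", total)
        # On failure, the log shows which step ran last and what data
        # crossed each module boundary.
        self.assertAlmostEqual(total, 29.97)


if __name__ == "__main__":
    unittest.main()
```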