What about integration tests?

In my previous two blog posts I talked a lot about unit tests and how to write valuable unit tests. But what about integration tests? When should you write these, and how do you write valuable integration tests? I’ll try and answer these questions in this blog post.

Integration Testing


Integration tests operate at a higher level of abstraction than unit tests. The key difference is that integration tests exercise real external dependencies, which makes them much slower to run. They are needed to test the application services layer, which interacts with external systems. You may have direct control over some of these systems, such as a local database, while others are third-party systems over which you have no direct control.
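
To make that layering concrete, here is a minimal sketch in Python (the class and method names are my own, purely illustrative): a domain model with no external dependencies, and an application service that wires it to a database.

  # Domain model: pure logic with no external dependencies (ideal for unit tests).
  class Order:
      def __init__(self, total: float):
          self.total = total

      def apply_discount(self, percent: float) -> None:
          self.total -= self.total * percent / 100

  # Application service: talks to the database, so it needs integration tests.
  class OrderService:
      def __init__(self, db_connection):
          self.db = db_connection

      def place_order(self, total: float, discount_percent: float) -> int:
          order = Order(total)
          order.apply_discount(discount_percent)
          cursor = self.db.execute(
              "INSERT INTO orders (total) VALUES (?)", (order.total,)
          )
          self.db.commit()
          return cursor.lastrowid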

In my previous blog post I explained why the Collaboration Verification style is a poor fit for unit testing. When it comes to integration testing, however, collaboration testing comes into its own: integration testing is all about verifying the collaboration between your application and external systems.

So what is the best way to write integration tests? Vladimir Khorikov argues there is little value in writing integration tests that do not work with the external dependencies directly. If you mock out external dependencies such as databases, all you are really testing is boilerplate code, so the chance of catching a regression is small. Mocks also increase the maintenance cost of the test. Together, these two issues mean this type of integration test is usually not worth writing.

If you work directly with the external dependency (such as a database) in your integration tests, the tests will be slower but will be much more valuable overall. They have a high chance of catching regression errors and a low maintenance cost, and tend to produce few false positives.
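
As a sketch of what this looks like in practice, here is an integration test for the OrderService example above, written against a real database. I use Python's built-in sqlite3 module purely to keep the example self-contained; as the rules below note, a real suite should use the same database engine and version as production.

  import sqlite3

  def test_place_order_persists_the_discounted_total():
      # A real (in-process) database and no mocks; assumes the OrderService
      # sketch above. SQLite is used here only for brevity.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

      service = OrderService(conn)
      order_id = service.place_order(total=100.0, discount_percent=10.0)

      # Verify the collaboration by reading the row back out of the database.
      (total,) = conn.execute(
          "SELECT total FROM orders WHERE id = ?", (order_id,)
      ).fetchone()
      assert total == 90.0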

When writing integration tests you need to consider the type of external dependency you are dealing with. If you have full control over the dependency, for example an internal database or file system, you should test that dependency directly in the integration tests. If you do not have control over the dependency, for example a third-party web service or a customer's system, you will most likely need to substitute it with test doubles (such as mocks), although you may be able to test it directly if the external system is stable enough.
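
For the uncontrolled case, here is a minimal sketch using Python's unittest.mock. The checkout function and payment gateway are hypothetical; the point is that the third-party boundary is the only collaborator replaced with a test double.

  from unittest.mock import Mock

  # Hypothetical application code that charges a third-party payment gateway.
  def checkout(gateway, amount: float) -> str:
      response = gateway.charge(amount=amount)
      return response["status"]

  def test_checkout_charges_the_payment_gateway():
      # The gateway is a third-party system we do not control, so it is the
      # one collaborator we substitute with a mock.
      gateway = Mock()
      gateway.charge.return_value = {"status": "approved"}

      assert checkout(gateway, amount=42.0) == "approved"
      gateway.charge.assert_called_once_with(amount=42.0)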

For further information on the avoidance of mocks, see Khorikov's blog post: http://enterprisecraftsmanship.com/2016/11/15/when-to-include-external-systems-into-testing-scope/

Some general rules to follow when writing integration tests:
  • verify collaborations at the very edges of your system
  • only use mocks if you don’t have direct control over the external system—once you use mocks your tests become less valuable
  • use the same type and version of database in your integration tests as in production
  • to isolate integration tests from each other, run them sequentially and remove data left after test execution
  • the best way to do test data cleanup when testing the database is to wipe out all test data before test execution—each test will then create the data it needs (see the sketch after this list)
  • each developer should have their own version of the database; i.e. their own instance—the DB schema and reference data should be in a version control system
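
Here is a minimal pytest sketch of the wipe-before-execution rule, again using an in-memory SQLite database as a stand-in for the developer's own test database instance:

  import sqlite3
  import pytest

  # Stand-in for the developer's own test database instance.
  _conn = sqlite3.connect(":memory:")
  _conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

  @pytest.fixture
  def db():
      # Wipe test data before the test runs, not after: a test that crashes
      # part-way through can never leave stale data behind for the next run.
      _conn.execute("DELETE FROM orders")
      _conn.commit()
      return _conn

  def test_creates_exactly_the_data_it_needs(db):
      db.execute("INSERT INTO orders (total) VALUES (?)", (25.0,))
      db.commit()
      assert db.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1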

Overall Approach


I advocate this practical approach to automated unit and integration tests:
  • employ unit testing to verify all possible cases in your domain model
  • with integration tests, check only a single happy path per application service method—if there are any edge cases that cannot be covered with unit tests, check them as well (a sketch follows this list)
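
Putting that together with the earlier OrderService example: the discount edge cases are covered by fast unit tests on the domain model, while a single happy-path integration test per service method (like the one shown earlier) exercises the real database.

  # Unit tests cover every case in the domain model: fast, no dependencies.
  def test_discount_reduces_total():
      order = Order(total=200.0)
      order.apply_discount(25.0)
      assert order.total == 150.0

  def test_zero_discount_leaves_total_unchanged():
      order = Order(total=200.0)
      order.apply_discount(0.0)
      assert order.total == 200.0

  # Meanwhile, one integration test per application service method checks the
  # happy path against the real database (see the earlier OrderService test).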

Note: this approach to integration testing, where we use the actual external dependencies where possible, exercises a large amount of code and reduces the need to write full end-to-end automated test cases. I plan to return to this topic in a future blog post.

Over my last three blog posts I’ve written a lot about the best way to tackle the task of writing automated unit and integration tests. There are three key concepts that I would like you to take away:

  1. Only write valuable unit and integration tests. By doing this you will reduce the overall number of unit and integration tests that need to be written and maintained.
  2. Design the code so that the tests will be maintainable and valuable. Writing tests and good code design go hand in hand.
  3. For integration tests, only use mocks when you do not have full control over the external system.
