
Unit and integration test guidelines for Spring Boot

When you have a legacy product that has been around for over 10 years, is written with old technologies, and has no automated tests, how do you improve quality? Sometimes the best solution is to rewrite it!
That was the situation we were in with a product written with JBoss. It had become difficult to maintain and debug, so we decided to rewrite it using the Spring Boot framework and add unit and integration tests as we went. This is not a simple undertaking and it's still a work in progress, but I feel we made the right decision.
As few of the developers had much experience writing unit and integration tests, I wrote up the guidelines shown below. They were written for a Java-based Spring Boot project that uses the JUnit framework, but they are general enough to apply to most languages and frameworks.

  1. Test the behaviour of the class, not its implementation.
  2. Don't test implementation details (e.g. private methods).
  3. Use descriptive names for your tests that explain what you are testing. Don't be afraid to use long test names. For example:
     • Not very descriptive test case name: testFindUser()
     • More descriptive test case name: findByUserIdShouldReturnOneSysUser()
  4. Each test should have the format: set up test data (Given), do the action you are testing (When), check the result (Then). See the example integration test below. Note that you do not need to add in the comments when you actually create the test cases:

@Test
public void newSysUserShouldBeSavedToRepository() {
    // Given
    final long count = repository.count();
    SysUser sysUser = new SysUser();

    // When
    repository.save(sysUser);

    // Then
    assertThat(repository.count()).isEqualTo(count + 1);
}

  5. Only test one thing per test case. You should only assert the result in the Then section of the test.
  6. Use the AssertJ library to write assertions in a fluent, readable way.
  7. If you need to run a method before each test case (e.g. to set up test data), use the @Before annotation.
  8. Make sure that the test fails if the assert condition is not true. For each test you write, temporarily change the data so the assertion should fail and check that it does. This way you can be sure the test will catch a regression.
  9. Test both positive and negative scenarios. For example:
     1. Positive scenario: findByUserIdShouldReturnOneSysUser()
     2. Corresponding negative scenario: findByUserIdWhenUserNonExistentShouldReturnNull()
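The guidelines above can be sketched in one self-contained example. This is only an illustration: the in-memory repository and the SysUser fields here are stand-ins I made up so the sketch runs without Spring, JUnit or AssertJ on the classpath; in the real project you would use @Test methods, a Spring Data repository, and assertThat(...) instead of the plain checks shown.

import java.util.HashMap;
import java.util.Map;

public class SysUserRepositoryTestSketch {

    static class SysUser {
        final String userId;
        SysUser(String userId) { this.userId = userId; }
    }

    // Minimal stand-in for a Spring Data repository (illustrative only).
    static class InMemorySysUserRepository {
        private final Map<String, SysUser> store = new HashMap<>();
        long count() { return store.size(); }
        void save(SysUser user) { store.put(user.userId, user); }
        SysUser findByUserId(String userId) { return store.get(userId); }
    }

    // Positive scenario: saving a new user increases the count by one.
    static void newSysUserShouldBeSavedToRepository() {
        InMemorySysUserRepository repository = new InMemorySysUserRepository();
        // Given
        final long count = repository.count();
        SysUser sysUser = new SysUser("jane");
        // When
        repository.save(sysUser);
        // Then -- a single assertion, only in the Then section
        if (repository.count() != count + 1) throw new AssertionError("user was not saved");
    }

    // Corresponding negative scenario: an unknown user id returns null.
    static void findByUserIdWhenUserNonExistentShouldReturnNull() {
        // Given
        InMemorySysUserRepository repository = new InMemorySysUserRepository();
        // When
        SysUser result = repository.findByUserId("no-such-user");
        // Then
        if (result != null) throw new AssertionError("expected null for unknown user");
    }

    public static void main(String[] args) {
        newSysUserShouldBeSavedToRepository();
        findByUserIdWhenUserNonExistentShouldReturnNull();
        System.out.println("both scenarios passed");
    }
}

Note how each test method exercises exactly one behaviour and carries a name that reads as a sentence, so a failure report tells you what broke without opening the code.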
We found the following talk and corresponding example code by Phil Webb useful:
Video on testing Spring Boot applications:
The source code for the video is on GitHub:
Some tips from the talk:
  • Use AssertJ for assertions. Spring Boot 1.4 and above has it built in.
  • Constructor injection makes it easier to test.
  • Prefer plain JUnit when writing unit tests; i.e. try not to involve Spring.
  • When you do need to use Spring (for integration tests), use helper classes such as TestEntityManager.
  • TestEntityManager is useful to setup test data when testing repositories.
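The constructor-injection tip is worth a small sketch. The class and method names below (UserService, UserRepository, activeUserCount) are illustrative, not from the talk; the point is that when a dependency arrives through the constructor, a plain unit test can hand in a fake with no Spring context at all.

import java.util.Arrays;
import java.util.List;

public class ConstructorInjectionSketch {

    interface UserRepository {
        List<String> findAllUserIds();
    }

    // In the real project this would be a Spring @Service; constructor
    // injection works the same way with or without the annotation.
    static class UserService {
        private final UserRepository repository;
        UserService(UserRepository repository) { this.repository = repository; }
        long activeUserCount() { return repository.findAllUserIds().size(); }
    }

    public static void main(String[] args) {
        // A plain-JUnit-style unit test: pass a hand-rolled fake, no Spring needed.
        UserRepository fake = () -> Arrays.asList("alice", "bob");
        UserService service = new UserService(fake);
        if (service.activeUserCount() != 2) throw new AssertionError();
        System.out.println("unit test passed without Spring");
    }
}

Because the dependency is just a constructor argument, the fake can be a lambda; field injection would force you to start a Spring context or use reflection to get the fake in.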

