
How I got rid of step by step test cases

In my last blog post I told you what I think is wrong with step by step test cases. In this blog post I’ll tell you how I got rid of step by step test cases at the company I work for. When I joined Yambay about 18 months ago, the company was following a fairly traditional waterfall style development approach. They had an offshore test team who wrote step by step test cases in an ALM tool called Test Track. Over the past 18 months we have moved to an agile way of developing our products and have gradually got rid of step by step test cases.

User Stories and how I use them to test

Getting rid of step by step test cases didn’t happen overnight. Initially we replaced regression test cases and test cases for new features with user stories that have acceptance criteria. The key to using a user story to cover both requirements and testing is to make sure that the acceptance criteria cover all test scenarios. Often product owners and/or business analysts only cover typical scenarios. It’s the tester's role to uncover edge cases and negative scenarios. If your user story covers these, then it can be used as both a requirement to develop the feature and to test the feature once it’s developed, and there is one source of truth.

Most people will be familiar with the user story format “AS A … I WANT TO … SO THAT …”. But the real meat of a user story is the acceptance criteria. Again, people have different ways of specifying these. The format I prefer is the Gherkin one: GIVEN … WHEN … THEN
How does this work in practice? We now use Jira to manage software development and to write both user and technical stories. The simplest way to show which acceptance criteria have been tested is to add another column to the acceptance criteria table and tick each criterion off as you test.

Here is a real-life example of a user story:

AS A dispatcher I WANT the ability to send an SOS message to a field engineer SO THAT I can alert them of an emergency
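The acceptance criteria for this story appear as a table in the original post. As a purely hypothetical sketch of what Gherkin-style criteria for a story like this might look like (scenario details are invented for illustration, not taken from the actual product):

```
# Typical scenarios
GIVEN an engineer is logged in to the mobile app
WHEN the dispatcher sends an SOS message to that engineer
THEN the SOS alert is displayed immediately on the engineer's device

GIVEN an engineer has received an SOS alert
WHEN the engineer acknowledges the alert
THEN the dispatcher can see that the alert was acknowledged

GIVEN the dispatcher has sent an SOS message
WHEN the message is delivered
THEN the delivery time is recorded against the message

# Edge cases and negative scenarios
GIVEN an engineer's device is offline
WHEN the dispatcher sends an SOS message
THEN the alert is delivered as soon as the device reconnects

GIVEN the dispatcher composes an SOS message
WHEN the message text is empty
THEN the send button is disabled

GIVEN an engineer is not assigned to the dispatcher's region
WHEN the dispatcher searches for SOS recipients
THEN that engineer does not appear in the list
```

Written this way, the same criteria serve as the requirement, the test, and a tick-off checklist once a "tested" column is added.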

As you can see, the first three acceptance criteria specify typical scenarios, while the last three specify edge cases and negative scenarios. It’s important to capture all the scenarios if you want to use user stories for testing.

Because we test directly with the user stories they always reflect the current state of the product, and the business and developers alike have found them to be a valuable snapshot of how the product works. It would be difficult to wade through reams of step by step test cases but user stories are much more accessible to other parts of the team. In addition, overall maintenance costs are much reduced as we do not need to maintain a separate set of regression step by step test cases.

Other Types of Testing

Once we had moved to a more agile, user-story-based way of testing, I made a list of the other types of testing that we do. When I did, I realised that we didn’t need step by step test cases for any of them.

Let’s have a look at these other types of testing:
Shakedown testing - we need to run a formal shakedown before releasing to customers and also for new internal releases. These were written up as step by step test cases in Test Track and I was keen to get rid of them. I had already been using mind maps for exploratory testing and it occurred to me that mind maps would be suitable for defining a shakedown test as well.
Here is an example:
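The example itself is a mind map image. As a rough, hypothetical text outline (the branch names below are invented for illustration, not taken from the actual map), a shakedown map for a mobile product might branch something like this:

```
Shakedown
├── Install / upgrade
│   ├── Fresh install
│   └── Upgrade from previous version
├── Login
│   ├── Valid credentials
│   └── Invalid credentials
├── Core workflows
│   ├── Receive job
│   ├── Complete job
│   └── Send message
└── Sync
    ├── Online
    └── Offline then reconnect
```

Each branch is a prompt rather than a script: the tester decides how to exercise it.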

We use a mind map tool called iThoughts (by Toketaware) but there are lots of other tools available.

In practice these work well but they assume the tester has a good knowledge of the system as there isn’t a lot of detail. It’s a similar case for user stories. For this approach to work you need intelligent testers who understand the system under test well.

Performance and load testing - previously done on an ad-hoc basis. These are now automated tests that are stored in source control.
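As a minimal sketch of the kind of automated load check that can live in source control: the example below uses only the Python standard library and spins up a throwaway local server so it is self-contained. The concurrency level, request count, and endpoint are invented for illustration; a real check would point at a test environment and assert against agreed latency targets.

```python
import statistics
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """Stand-in for the system under test; always answers 200 OK."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def timed_request(_):
    """Issue one GET and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urlopen(f"http://127.0.0.1:{port}/") as resp:
        resp.read()
    return time.perf_counter() - start

# Fire 50 requests with 10 concurrent workers.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(timed_request, range(50)))

server.shutdown()

print(f"requests: {len(latencies)}")
print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
```

Because it is just a script, it versions alongside the product code and can run in CI on every build rather than ad hoc.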

Mobile OS version testing - because we develop Windows, iOS and Android apps we need to check that the apps still work correctly when there are new mobile OS versions. These are similar to the shakedown tests and we also use mind maps.

Regression testing - I will discuss this more in the next section but the set of product user stories can be used to define and execute regression testing.

So far I haven’t mentioned exploratory testing, but we do that too, and I’ve created an exploratory test session template that we use when we want to document test sessions.
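The template itself isn't shown in the post. As a hypothetical sketch, loosely in the style of session-based test management, a lightweight session sheet might capture fields like these (field names invented for illustration):

```
Charter:      Explore <feature/area> to discover <risk/question>
Tester:       ...
Date / time:  ...
Duration:     e.g. 60-90 minutes
Environment:  build number, device, OS version
Notes:        what was tried, observations, open questions
Bugs raised:  issue keys (e.g. Jira IDs)
Follow-up:    areas that need another session
```

The point is just enough structure to make sessions reportable without turning them back into scripted test cases.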

So if I list out all the types of testing that we do, you can see that none of them actually requires step by step test cases. This also means that no test management or ALM tool is required.

Regression Testing

At the moment we use a set of product user stories to regression test the product but ideally regression testing would be automated as much as possible. See my blog post Let's stop writing automated end to end tests through the GUI for my thoughts on that. Over time, I want to get to the point where we only use the user stories for new product features and regression testing is handled by a combination of automated tests, shakedowns and exploratory testing.

Efficiencies Generated

Moving to this style of testing has meant that we are now much more efficient:
  • We now only maintain one set of assets, which is valuable to the business as well as to testers 
  • Time savings (not writing step by step test cases) means that I have more time to do real testing 
  • No test management tool is required 
  • We have effectively replaced an offshore test team with one in house tester

Comments


  1. Were you able to capture any metrics around how much time it saved your team to take this approach?

    Two of the things I get a lot of value from test management tools are reporting and assigning testing work to team members. I'm able to provide a quick visual representation of what is automated and what is not, what is passing and failing, platform coverage, and how far along we are with regression, which my clients love. Have you figured out a way to communicate this data and assign out work on large teams?

    1. Thanks for your comment, Elysia. It's difficult to quantify exactly how much time I have saved using this approach, but I have much more time now to do real testing. The time I would have spent writing step by step test cases I now spend doing exploratory testing or fleshing out user stories, which are valuable to the business as well as to the testers.

      Test management tools can be useful for creating reports and assigning work but the problem with them is that they require you to write test cases. We report testing progress at a higher level i.e. at the user story level. I create a spreadsheet with the user stories that were in scope for a particular testing task and report on pass/fail and defects raised in there. We give these to our customers. It's a good high level overview. I also indicate what's automated in a spreadsheet.

      In terms of assigning testing work, that's easily managed through Jira. We use Jira for managing our development work in sprints and we simply assign testing to the relevant person as we go.

  2. Very cool post and I really stand by this approach to testing. Super efficient, and it allows you to spend time on the important aspects. Check out my post on a similar sentiment

