
WeTest Conference 2017, Wellington

I attended the 2017 WeTest Conference in Wellington, NZ on Monday 18 September. Wellington has a great test community, and the conference itself had a friendly vibe and lots of interesting talks.




The most interesting talk was “Machine Learning for Testers” by Kathryn Hempstalk. She described what machine learning is (in simple terms, an algorithm plus data that produces a model), gave us some examples, and added the caveat that it will not solve every business problem. She recommended testing from the start of the process.
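To make that “algorithm plus data produces a model” definition concrete, here is a minimal sketch of the idea in Python with scikit-learn. It is my own illustration, not an example from the talk, and it assumes scikit-learn is installed:

```python
# Minimal sketch of "algorithm + data = model" (my illustration, not Kathryn's).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Data: the classic iris measurements and their species labels.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Algorithm: a decision tree learner.
algorithm = DecisionTreeClassifier()

# Model: what you get once the algorithm has seen the data.
model = algorithm.fit(X_train, y_train)

# Testing from the start means asking early: does the model hold up
# on data it has never seen?
print("Accuracy on unseen data:", model.score(X_test, y_test))
```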


The most practical talk was by Daniel Mcclelland on debugging proxies. While I’ve used debugging proxies such as Fiddler and Charles before, Daniel gave us some tips on using breakpoints to simulate network failures that I’ve already put into practice.
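Fiddler and Charles set these breakpoints through their UIs, but the same idea can be scripted. The sketch below is my own, not from Daniel's talk: it uses a mitmproxy addon to answer a hypothetical endpoint with a 503 so you can watch how the app copes with a network failure.

```python
# fail_api.py - rough mitmproxy addon that simulates a network failure,
# similar in spirit to breakpointing a request in Fiddler or Charles.
# (My own sketch; the endpoint path below is made up.)
from mitmproxy import http

FAILING_PATH = "/api/orders"  # hypothetical endpoint to break

def request(flow: http.HTTPFlow) -> None:
    # Intercept matching requests before they reach the real server
    # and reply with a 503, so the app's error handling gets exercised.
    if FAILING_PATH in flow.request.pretty_url:
        flow.response = http.Response.make(
            503,
            b"Service Unavailable (simulated by proxy)",
            {"Content-Type": "text/plain"},
        )
```

Run it with `mitmdump -s fail_api.py` and point the app at the proxy.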


The most inspiring talk was Angie Jones’s closing keynote, “Owning Our Narrative”. She took us on a journey through contemporary music history, drawing an analogy between musicians and testers and how both need to adapt to changing technologies.


If you would like to see more details on the presentations, check out http://www.wetest.co.nz/wetest-2017/. All in all, it was a very worthwhile conference with good networking opportunities.

PS I can recommend the QT Museum hotel as a great place to stay if you visit Wellington. It’s a quirky boutique hotel on the Wellington waterfront with a good French restaurant, Hippopotamus.
