
WeTest Conference 2017, Wellington

I attended the 2017 WeTest Conference in Wellington, NZ on Monday 18 September. Wellington has a great testing community, and the conference itself had a friendly vibe and lots of interesting talks.




The most interesting talk was “Machine Learning for Testers” by Kathryn Hempstalk. She described what machine learning is (in simple terms, an algorithm plus data that produces a model) and gave us some examples, along with the caveat that it will not solve all business problems. She recommends testing from the start of the process.
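To make the “algorithm plus data produces a model” idea concrete, here is a minimal sketch using scikit-learn. This is my own illustration, not an example from the talk, and the tiny dataset (hours tested vs. bugs found) is entirely made up:

    # A minimal sketch (my own example, not from the talk): algorithm + data -> model.
    # Assumes scikit-learn is installed (pip install scikit-learn).
    from sklearn.tree import DecisionTreeClassifier

    # Data: a tiny, invented labelled dataset of [hours_tested, open_bugs] -> release decision.
    features = [[1, 9], [2, 7], [8, 1], [10, 0]]
    labels = ["hold", "hold", "ship", "ship"]

    # Algorithm: a decision tree learner.
    algorithm = DecisionTreeClassifier()

    # Algorithm + data produces a model.
    model = algorithm.fit(features, labels)

    # The model can now make predictions on unseen data.
    print(model.predict([[9, 2]]))

The point of the caveat stands: the model is only as good as the data it was trained on, which is why testing from the start of the process matters.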


The most practical talk was by Daniel Mcclelland on debugging proxies. While I’ve used debugging proxies such as Fiddler and Charles before, Daniel gave us some tips on using breakpoints to simulate network failures, which I’ve already put into practice.
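For anyone who prefers a scripted equivalent to setting breakpoints by hand in Fiddler or Charles, the same kind of failure simulation can be done with mitmproxy. The sketch below is my own illustration rather than something from the talk; the target path is hypothetical and it assumes a recent mitmproxy (version 6 or later):

    # Rough sketch of simulating a backend failure with mitmproxy (my substitution
    # for the manual breakpoint technique, not Daniel's tool of choice).
    # Run with: mitmdump -s simulate_failure.py
    from mitmproxy import http

    # Hypothetical target: any request whose URL contains this path gets failed.
    TARGET_PATH = "/api/orders"

    def request(flow: http.HTTPFlow) -> None:
        if TARGET_PATH in flow.request.pretty_url:
            # Short-circuit the request and return a server error, so the app
            # under test sees a failure without the real backend being touched.
            flow.response = http.Response.make(
                503,
                b"Simulated network failure",
                {"Content-Type": "text/plain"},
            )

The nice thing about breakpoints in Fiddler or Charles is that you can do the same interactively, without writing any script at all.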


The most inspiring talk was Angie Jones’s closing keynote, “Owning Our Narrative”. She took us on a journey through contemporary music history and drew an analogy between musicians and testers: both need to adapt to changing technologies.


If you would like to see more details on the presentations, check out http://www.wetest.co.nz/wetest-2017/. All in all, a very worthwhile conference with good networking opportunities.

PS I can recommend the QT Museum hotel as a great place to stay if you visit Wellington. It’s a quirky boutique hotel on the Wellington waterfront with a good French restaurant, Hippopotamus.
