What’s the problem?
I have not been a fan of Selenium WebDriver since I wrote a set of automated end-to-end tests for a product that had an admittedly complicated user interface. It was quite difficult to write meaningful end-to-end tests, and the suite we ended up with was non-deterministic, i.e. it failed randomly.
Selenium WebDriver may be useful for very simple eCommerce-type websites, but for most real-world products it's just not up to scratch. It is prone to race conditions in which Selenium believes that the UI has updated when, in fact, it has not. When this happens, the automated check fails randomly. While there are techniques for reducing these race conditions, in my experience it is difficult to eradicate them completely, which means that automated checks written with Selenium are inherently flaky, or non-deterministic. Maintenance of these automated checks becomes a full-time job, as it is very time-consuming to determine whether a failing check is actually a defect or just a flaky test. Appium (a variant of Selenium developed for testing mobile apps) and the other Selenium variants suffer from the same flakiness.
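The usual technique for reducing these race conditions is the explicit wait: instead of assuming the UI has updated, the check blocks until a condition becomes true or a timeout expires. As a hedged sketch (the `driver` variable, the `order-row-42` locator, and the timeout are made up for illustration, and the `Duration`-based constructor is the Selenium 4 form), it looks something like this inside a test method:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Wait up to 10 seconds for the UI to actually catch up,
// rather than assuming it already has.
WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
WebElement row = wait.until(
    ExpectedConditions.visibilityOfElementLocated(By.id("order-row-42")));
```

Even with explicit waits everywhere, a condition can become true just long enough to pass the wait and then change again before the next step runs, which is why the flakiness is so hard to eliminate entirely.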
Why are we still doing it?
These issues are well known, so why do we keep using Selenium? My theory is that an industry trend, in which former “manual” testers become “automation” testers, is driving the use of Selenium and its variants. It doesn't make sense for these automation testers to write unit and integration tests, as those should be written as the code is developed, but they can write end-to-end (E2E) automated tests once the code is written. There are some problems with this approach: testers generally have weaker coding skills than developers, and they work in isolation. The whole team is responsible for quality, not just the testers. There is also a belief that you should automate existing manual E2E test cases. I believe this is the wrong approach. I have discussed this in my article “Who Should be Writing Automated Tests?” in the June edition of Testing Trapeze: http://www.testingtrapezemagazine.com/magazine/june-2017/
What’s the solution?
So if we are not going to write automated E2E tests using Selenium or its variants, then what do we do about regression testing a product? As you will know from my previous blog posts, I advocate building quality in from the start. In my previous three blog posts I discuss how to write a set of valuable unit and integration tests that protect against regressions. Writing integration tests that exercise the external dependencies reduces the need for E2E tests and ensures that a lot of code is covered. However, you probably do want to look at your product as a whole and see if there are meaningful ways to test it end to end.
One product that I test is a three-tier system designed for electricity network operators; it has field clients written as Windows or iOS apps running on mobile devices. The field clients communicate with an intermediary server, which in turn communicates with a customer back-office system. All messaging is done via XML and JSON. Trying to automate end-to-end testing using Appium is too painful to contemplate.
Given that messaging between the three tiers is the core of the system, we are looking at tools that can test those interfaces, as well as building up unit and integration tests for each tier. The tool that looks most promising is REST Assured.
REST Assured is a Java Domain Specific Language (DSL) for simplifying testing of REST-based services, built on top of HTTP Builder. It supports POST, GET, PUT, DELETE, OPTIONS, PATCH and HEAD requests, and can be used to validate and verify the responses to these requests. It is implemented in Groovy, but you write your tests in plain Java.
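To give a feel for the DSL, here is a hedged sketch of a REST Assured check; the base URI, the `/api/jobs/42` endpoint, and the expected `status` field are invented for illustration and would be replaced by a real interface between two of the tiers:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

// Hypothetical check: fetch a job from the intermediary server
// and verify both the HTTP status and a field in the JSON body.
given()
    .baseUri("https://intermediary.example.com")
    .accept("application/json")
.when()
    .get("/api/jobs/42")
.then()
    .statusCode(200)
    .body("status", equalTo("DISPATCHED"));
```

The given/when/then structure reads like a specification of the interface, which is exactly what you want when the messaging between tiers is the core of the system.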
It can be used to test both JSON and XML responses. XML response bodies can also be verified against an XML Schema (XSD). Here is a blog post with examples: http://www.hascode.com/2011/10/testing-restful-web-services-made-easy-using-the-rest-assured-framework/
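Schema validation itself needs nothing beyond the JDK, so it is easy to see what verifying a response body against an XSD involves. This is a minimal sketch using `javax.xml.validation`; the `job` schema and the sample messages are made up for illustration:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class XsdValidationExample {
    // A tiny, made-up schema: a <job> element containing an int id and a status.
    static final String XSD =
        "<?xml version=\"1.0\"?>" +
        "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">" +
        "  <xs:element name=\"job\">" +
        "    <xs:complexType><xs:sequence>" +
        "      <xs:element name=\"id\" type=\"xs:int\"/>" +
        "      <xs:element name=\"status\" type=\"xs:string\"/>" +
        "    </xs:sequence></xs:complexType>" +
        "  </xs:element>" +
        "</xs:schema>";

    // Returns true if the XML message conforms to the schema.
    public static boolean isValid(String xml) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(XSD)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // A well-formed message that matches the schema.
        System.out.println(isValid(
            "<job><id>42</id><status>DISPATCHED</status></job>")); // true
        // Wrong type for id and missing status: fails validation.
        System.out.println(isValid(
            "<job><id>not-a-number</id></job>")); // false
    }
}
```

A check like this catches a whole class of regressions (a tier silently changing its message format) without ever driving the GUI.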
The idea is that if each tier (e.g. the field client) has a strong set of unit and integration tests and we also add automated tests for the interfaces between the tiers, we will have a strong suite of automated tests that will protect against regressions.
Conclusion
Obviously you will need to tailor your automation solutions to the products you are developing. I’ve given an example of a product where the cost of writing automated E2E tests through the GUI is prohibitive, and a valuable solution is to test the interfaces between tiers using a tool such as REST Assured. Of course you still need to have both unit and integration tests on the individual tiers as well.
There is another interesting tool comparable to REST Assured (though in an early phase): https://github.com/intuit/karate
Thanks for adding that; I'll check it out. Always good to see new tools being created.
I think the problem is an order-of-operations issue. By the time most companies hire QA engineers or automation engineers, they have been working with an uneven level of test coverage, depending on features and system complexity. Often the core engineering team has worked together for some time, and the level of trust in each other's code is high, so code reviews are quick and minimal as well. Then you reach a point where you hire more engineers whose work you cannot trust right off the bat, or the company goes after a different vertical in its domain, and hiring quality assurance people becomes a priority. The issue is that the QA person comes into the system and sees the inconsistencies in the code, especially legacy code, and while they can advocate for more unit and component testing, the tech debt makes it unreasonable to reach the level of coverage that would satisfy a QA engineer's standards. So new code is written with some level of test education from QA, but this is usually new code on top of core components. At this point E2E tests are invaluable: they make visible, in a psychological way, the need to refactor core functionality, and they surface long-hidden bugs left by the lack of a QA department or person, so the QA engineer can advocate for adding unit tests during the refactoring planning stages of upcoming code cycles.
I agree in principle that automation through the UI is not a great solution, but it is a handy tool in the QA engineer's arsenal to get across the point that quality exists at all levels, and that refactoring legacy code, while painful, allows newer engineering team members to gain a sense of ownership of the system, product, and company.
Rediff does look interesting. It would be worth considering as part of a test automation strategy. Good luck with the concept.
I understand what you are saying, Phillipe, and I agree that E2E testing does find a lot of issues and should be used to advocate for better test processes. However, that E2E testing does not need to be automated to gain that benefit.
The danger with adding automated testing through the UI for legacy code is that it's a band-aid that stops the team from addressing the real issues.