Test automation in Agile and why it fails

It’s fairly safe to say that quite a lot of test automation efforts fail. It is also safe to say that without test automation an agile team will fail. So how can you make sure that, while doing agile, your test automation does not fail and take your agile team down with it? One way to answer this question is by looking at why test automation so often fails within agile environments.

When I talk about test automation in this post, I am referring to testing that is done to reduce the amount of manual regression work: so-called functional test automation, or automated regression testing.

Moving target

Test automation quite often does not receive the attention it needs and deserves, even in agile teams. Quite a few test automation efforts start off too late and without the appropriate preparation, resulting in organic test automation driven by a moving target. The moving target is the system under test, which, in agile, is constantly in flux. Each sprint new features are added and bugs are fixed, and quite often it is not clear at the start of a project where it is going to end up. Writing automated scripts that will stand the test of time against such a fluid environment is difficult. It is even more difficult when the base on which the automation is built is weak.

Quite often test automation lags behind what is being delivered within an iteration. This is somewhat logical: it is difficult to test, let alone automatically test, what has not been built yet. Ideally, while manually testing the new features as a tester, you are already pondering how to automate them so that you do not have to do the tedious work more than once. Given enough time within your iteration you might actually automate some of the features, but from what I have seen so far, not all features will be covered by test automation within one iteration. So if these tests are not all automated, what happens to them in the next iteration? Are they omitted? Are they picked up and automated retrospectively?

If you do not keep track of what has been automated during an iteration, for both your current iteration and your previous one, how can you rely on your test automation? You cannot be sure what your tests actually cover, so a bug can easily slip through the net of your automated tests.

This moving target you are testing needs to be traced and tested solidly, repeatedly and in a trustworthy way!

Definition of Done

In the majority of the Definitions of Done (DoDs) I have seen, one of the items is something referring to “tests automated”. What I have thus far not seen, however, is the team putting as much care into the automation code as they do into the production code. Quite a lot of DoDs refer to certain coding standards, yet these standards often seem not to apply to functional test automation. Isn’t your functional automation code also just code? If so, why should it not be covered in code reviews, be written according to some useful guidelines and standards, and preferably use a framework to make the code sustainable?

Test automation is just writing code

I have seen several automation efforts within agile teams where test automation was done without proper thought having been put into it. A tool was chosen based on nothing more than members of the team having heard of it or having had good experiences with it. No base or framework was chosen to keep the code clean. Since you are writing code, you should follow the same rules as the rest of the software developers. Do not assume that your code, merely because it consists of tests, should not be hooked up to some form of framework. If you want your tests to survive more than a few iterations, considering reuse of your code would be logical.
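As an illustration, here is a minimal sketch of what such reuse can look like in Java, assuming Selenium WebDriver as the automation tool; the post does not prescribe a tool, and the class, locator and method names here are hypothetical:

    // A reusable "page module" sketch, assuming Selenium WebDriver.
    // All names (LoginPage, the element ids) are made up for the example.
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class LoginPage {
        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        // One reusable entry point instead of the same click-and-type
        // sequence duplicated across dozens of test scripts.
        public void logInAs(String username, String password) {
            driver.findElement(By.id("username")).sendKeys(username);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login-button")).click();
        }
    }

Every test that needs a logged-in user can now call logInAs, so when the login screen changes, the fix lives in one place instead of in every script.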

By the way, coding standards do not need to be too complicated. In 2009 “Agile in a flash” came up with a coding standard that could work for all languages and for most environments:

Coding Standards - agileinaflash.blogspot.com

All of the points mentioned above are “logical” when writing an application that is supposed to go into production. However, when looking at a lot of (agile) projects, these logical “best practices” seem to be totally forgotten when it comes to test automation.

Succeed in test automation

So, how do you succeed in your test automation? How do you make it work? The answer seems clear to me: test automation is not like writing code, it is writing code. Since it is the same, treat it the same way!

Do your code reviews, follow some form of standard, use a (simple) framework to make writing tests easier, and create reusable modules in your automation code. In other words, treat your functional test automation with the same respect as your production-grade code. Who knows, you might want to run your tests against your production environment some day!

When setting up your initial test automation environment and framework, don’t be shy: ask the developers in your team for tips, tricks and suggestions. They have quite likely gone through those setup steps more often than you have, so use their knowledge. Asking them for their insights and ideas not only helps you, it also helps them feel more responsible for putting in their two cents on the test automation side. They will get a clearer idea of what you intend to achieve, so they might be more willing to help keep their code testable; they might even enjoy helping you write the test scripts!
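To show what treating test code like production code can look like in practice, here is a sketch of a small, review-friendly test that uses the LoginPage module from earlier; JUnit 5, ChromeDriver and the URL are assumptions made for the sake of the example:

    // A test written to the same standards as production code:
    // small, readable, with setup and teardown kept out of the test body.
    // JUnit 5 and ChromeDriver are assumptions, not part of the original post.
    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class LoginTest {
        private WebDriver driver;

        @BeforeEach
        void startBrowser() {
            driver = new ChromeDriver();
        }

        @Test
        void validUserLandsOnDashboard() {
            driver.get("https://example.test/login"); // hypothetical URL
            new LoginPage(driver).logInAs("alice", "secret");
            assertTrue(driver.getCurrentUrl().contains("/dashboard"));
        }

        @AfterEach
        void stopBrowser() {
            driver.quit();
        }
    }

A test like this is short enough to review in a minute, which is exactly the point: if it went through the same review as production code, the duplication and magic values would be caught early.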

Resources

Some informational resources where you can find ideas on how to set up a test automation framework:

What did I get out of today’s testing dojo

It’s funny to see how difficult it is to get a group of people who work with one another daily to talk freely and share their ideas, even when their manager is not present and they are amongst their peers.

During today’s testing dojo, which again was supposed to last an entire day focusing fully on working with FitNesse, we started off with a talk about what we aim to achieve with test automation at our customer’s site. I tried to enthuse the group by pushing them to think about the possible difference between “test automation” and “computer-aided testing”: if there are differences, what does one mean and what does the other mean? From there I hoped to gain insight into what they think we should aim to achieve, and of course whether or not their ideas make sense to us, as the leads on implementing test automation.

A real discussion on this unfortunately never took flight; moreover, the two people we have been working with most closely on the implementation remained the most silent of all. I am still not sure what caused this silence on their side: natural shyness, cultural pressure, or something else. Instead I ended up pulling some keywords out of the group and discussing my thoughts on them. Not too bad either, but I do not believe I should have been the one talking this much about the subject.

The second part where I hoped to create a bit of discussion was on what the group believes to be good practices in test automation. This also took some pains on my side, along with some poking, probing and planting the occasional seed, but some discussion did arise. After a while one of them remarked that, in the end, it seemed that everything that can be considered a good or best practice in test automation also holds for manual functional testing.

This insight led me nicely back to clarifying the first point of what we are aiming to do: are we trying to remove manual testing altogether, or trying to create more free space and time to enable them to do more and different manual testing? I do believe I got the picture across that we are not trying to take manual testing away, but rather trying to help them remove repetitive work, since repetitively testing the same items and the same or similar functionality is quite likely to create a form of feature-blindness.

The term feature-blindness seemed to be a new concept for a big part of the group; however, I managed to explain the concept fairly easily by example.

In the end the morning session was not exactly what I had hoped it would be, but it clearly did get across the points I wanted to make: think about what you want to test; describe for yourself why you want to automate something and then read it back to figure out whether it indeed still makes sense to automate it; keep your tests small, self-contained and reusable. Refactor your FitNesse tests into reusable scenarios, but also keep an eye out for over-complicating things by making everything a scenario: do not make a scenario for the sake of making it, only create one if you indeed have several identical tests which need different input data. And, most important of all as far as I am concerned in functional test automation: Keep It Simple and Stupid. Even fancy stuff you should be able to keep simple, readable and brief. If you fail at a first attempt, don’t worry; move on and come back at a later stage to refactor your test.
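To make the scenario advice a bit more concrete, here is a minimal sketch of what such a refactoring could look like in FitNesse (Slim) wiki syntax; the fixture keywords, field names and credentials are made up for the example:

    !|scenario|log in as _ with _|username,password|
    |enter|@username|into field|user|
    |enter|@password|into field|password|
    |press|login|

    !|script|login page|
    |log in as|admin|with|secret123|
    |log in as|guest|with|welcome1|

The scenario only earns its keep because the same steps run with different input data; a step sequence used in just one test can simply stay inline.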

One not-so-nice thing about today’s dojo was that, for the second time in a row, the second part of the day was rudely disturbed by some very unexpected downtime of our test environments. We were told in advance that one of the environments would be taken down for urgent maintenance and patching; unfortunately both environments went down during this change, which resulted in us sending the group off earlier than anticipated.

Main takeaway for me: I really enjoy doing these knowledge-sharing and coaching sessions. I like them a lot and see them as a great bonus to my work as a consultant, especially since they make me (and hopefully my colleagues) think about why I am doing things the way I am doing them.