Test automation metrics – what do you report on?

Metrics

One of the fun things about test automation is that, since you no longer have to run all the tests manually, you can spend some extra time coming up with test metrics. Test metrics are tricky to do well in any situation, but in a situation where there is an abundance of metrics, such as a test automation setup, choosing the right ones becomes the key first step. What are the metrics to look at? Code coverage? Number of tests passed versus number of tests failed? Duration of the tests over time? Number passed now versus number passed in previous runs? Newly automated tests added since the last run? You can keep dreaming up new metrics, but which ones will actually make sense and be representative? And of course, how do you ensure you do not spend ages ploughing through your data to gather these metrics manually?
If you just take a test automation tool off the shelf, it probably offers an immense number of options to measure and report on, but the risk is always that you start generating reports and metrics that are not quite representative or, even worse, give a tainted view of the actual situation. So how do you make sure you don’t end up with a jungle of metrics?

Audience

The first thing you need to know is who the audience for your metrics is. There is a huge difference in what different levels in an organisation consider useful metrics. One manager may be mainly interested in the time spent automating versus the time gained by automating, i.e. the extra time now available for testing other things, the things that matter, while a test manager might be more interested in which functional areas of the application are covered and to what extent they are covered.

Type of metrics

I will not attempt to dream up the perfect metric; for every environment and situation one metric may be better than another. It all depends on the context, the people you are reporting to, the targets of each particular business area, and so on.

What I do want to touch upon is the awesome power you have with metrics coming out of automation. Since your tests can run rapidly and often, there are lots of runs that can be measured. In other words, you can gather a lot of data, a lot of historical data. When reporting on metrics like the number of tests passed versus the number failed, it will generally be a snapshot of some test run. Why limit the metric to a snapshot when you have living data at hand?
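
To make that historical angle concrete, here is a minimal sketch of collecting per-run totals, assuming each run drops a JUnit-style XML report into its own directory; the results/*/report.xml layout and file names are only an assumption for illustration:

    import xml.etree.ElementTree as ET
    from pathlib import Path

    def run_totals(report_file):
        """Sum tests, failures and errors from one JUnit-style XML report."""
        root = ET.parse(report_file).getroot()
        total = failed = 0
        # The root may be a single <testsuite> or a <testsuites> wrapper; iter() covers both.
        for suite in root.iter("testsuite"):
            total += int(suite.get("tests", 0))
            failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))
        return total, failed

    # Assumed layout: one directory per run, e.g. results/2021-03-01/report.xml
    history = []
    for report in sorted(Path("results").glob("*/report.xml")):
        total, failed = run_totals(report)
        history.append((report.parent.name, total - failed, failed))

    for run, passed, failed in history:
        print(f"{run}: {passed} passed, {failed} failed")
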

The strongest thing to show to any manager is a trend line. Need to report on the number of tests passed versus failed, or the number of tests added to the automation suite? Need to report metrics on code coverage? All of these metrics can be turned into a trend line. Show the “upwards trend” and managers are generally happy, often without even knowing what they are looking at.

There are of course some pitfalls; the main mistake I have made was presenting a downwards-sloping trend line. That looks like a bad trend, and even though it can be a perfectly fine one, the sight of a trend line going down generally makes managers nervous, because they expect things to always go up.

Be prepared to explain a downwards trend, because sometimes you cannot escape a downwards or flattening trend line!

Graph examples

Below are two graphs, both based on the same data and with a trend line fitted to that same data. Each chart, however, tells a slightly different story due to the style of trend line chosen.

Upwards trend

Making the numbers seem a bit more positive than they really are by using an exponential trend line.

The exponential trend line paints a strong picture; when using it, however, be prepared to explain why the trend is still upwards despite the lack of growth at about two thirds of the way through the graph. That is a difficult story to tell.

Linear trend line

The linear trend line gives an indication of the overall trend: when it is close to flat-lining you know you have a problem, but when it is too steep you may also have a problem!

The linear trend line is usually well understood by most people, at least in my experience. It shows the gradual, overall progress being made on your metrics. Since it is a straight line, it quite often prevents questions about what happened in a particular “dip” period.
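
As an illustration of the linear case, here is a small sketch that fits a first-degree trend line over a pass-rate history with numpy and matplotlib; the numbers are invented purely for the example:

    import numpy as np
    import matplotlib.pyplot as plt

    # Invented pass rates (%) over a series of runs; replace with your own history.
    pass_rate = np.array([62, 65, 64, 70, 71, 69, 74, 78, 77, 81], dtype=float)
    runs = np.arange(len(pass_rate))

    # Fit a first-degree polynomial; the sign of the slope is the direction of the trend.
    slope, intercept = np.polyfit(runs, pass_rate, 1)

    plt.plot(runs, pass_rate, marker="o", label="pass rate")
    plt.plot(runs, slope * runs + intercept, linestyle="--",
             label=f"linear trend ({slope:+.1f}%/run)")
    plt.xlabel("run")
    plt.ylabel("tests passed (%)")
    plt.legend()
    plt.show()
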

Since there is an abundance of data, if you have set up your automation properly, there is also the possibility to combine data: for example, setting the trend of passed/failed tests against the trend of new tests added or, even more interestingly, against new functionality added to the system under test.
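
A rough sketch of such a combination, again with invented numbers, could simply plot two series in one chart so that a flattening “passed” line can be read against a growing suite:

    import matplotlib.pyplot as plt

    # Invented numbers: total tests in the suite and tests passed, per run.
    suite_size = [40, 44, 50, 55, 61, 68, 70, 75]
    passed     = [38, 40, 43, 47, 50, 52, 55, 57]
    runs = range(len(suite_size))

    plt.plot(runs, suite_size, marker="s", label="tests in suite")
    plt.plot(runs, passed, marker="o", label="tests passed")
    plt.xlabel("run")
    plt.ylabel("number of tests")
    plt.legend()
    plt.show()
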

Be aware!

One big warning though: when playing around with the numbers you may be tempted to make them look nicer than they are, or to focus only on the good things. However tempting this may be, don’t prettify your numbers or graphs; make sure they always tell a true story. If you manipulate the graphs, you are not only trying to fool your manager, but also yourself. Metrics should be useful for you as well as for the managers.

In a follow-up post I am currently working on, I will give some clearer examples of mashing up data into a useful automation report and of how to interpret and present the data in specific contexts.

Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: “There are three kinds of lies: lies, damned lies and statistics.”
– Mark Twain’s Own Autobiography: The Chapters from the North American Review

–Edit–

A follow-up to this post can be found here: Test automation metrics – mashing up non-test data

Are we afraid of test automation?

After having spent the last nine months working on a test automation implementation for a team I am not running myself, I have started to wonder what the main success factors for test automation are, besides the obvious one: achieving the goal you set out with in the first place.

One of the main factors that will help test automation be a success in your organisation is to not treat test automation as a goal, but to use it as a means to achieve a goal. The goal should not be something like “have x% of the regression suite automated”; your goal should be something test automation can help you achieve, for example freeing up time for the testers to focus on the important things rather than having to spend most of their time on regression testing.

Another very important thing to keep in mind when implementing test automation is that you need to guard against trying to automate everything. Technically it may be possible, but it is hardly ever a good idea to want to automate everything. Quite a few things require human intervention or human interpretation and cannot simply be reduced to a Boolean (which is what test automation is in the end: something is either true or false).

Look & feel testing, or validating, should in most cases not be automated for example. This for the simple reason that, despite it being possible, will more often than not raise either false positives or more likely, false negatives. Since these tests often require some form of image recognition and comparison a change of screen resolution or a small CSS change (font difference for example) will make the test fail, resulting in either maintenance or tests being made redundant.

For me, however, the main sign of success is that the automated tests are actually used and actively maintained.

Having a test automation framework and relying on it to do its job is not good enough. Just like any other software, automated tests also want some TLC, or at least some attention and maintenance.

Something that is still often seen in test automation is tests that run but fail with non-deterministic errors, in other words errors that are not really caused by the test cases but are also definitely not a bug in the system under test.

If you show some tender loving care to your automation suite, these errors will be spotted, investigated and fixed. More often, however, these errors will be spotted, someone will “quickly” attempt to fix them and fail, after which the “constantly failing test” will be deemed useless and switched off.
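
One simple way to give such a suspect test some attention, rather than switching it off, is to rerun it a few times and see whether its outcome is stable; a small sketch of the idea, with a deliberately flaky dummy test, could look like this:

    import random

    def rerun(test, attempts=5):
        """Run a test callable several times and report whether its outcome is stable."""
        outcomes = []
        for _ in range(attempts):
            try:
                test()
                outcomes.append("pass")
            except AssertionError:
                outcomes.append("fail")
        verdict = "stable" if len(set(outcomes)) == 1 else "FLAKY"
        return verdict, outcomes

    # Deliberately flaky dummy test: fails roughly one time in three, like a timing-dependent check.
    def sometimes_fails():
        assert random.random() > 0.33

    print(rerun(sometimes_fails))
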
Besides non-deterministic errors, there is another thing I have seen happen a lot in the past.

Some automation engineers spend a lot of time and effort building a solid framework with clean and clear reporting. They add a lot of test cases, connect the setup to a continuous integration environment and make sure all tests keep working and running.
They then go ahead building more and more tests, adding all kinds of nice new tests and possibilities. What often gets forgotten, however, is to involve the rest of the department: they do not show and tell to the (manual?) testers, and they do not share with the developers. So the developers keep their unit tests to themselves, and if they do some functional testing they do it manually and sloppily. The (manual) testers go about their usual business of testing the software as they have always done, not considering how much of the test automation suite they could use and abuse to make their lives easier. They will spend time on data seeding, manually. They will spend time on look and feel verification in several browsers, manually.

All this time the developers and (manual) testers could have been using the automation framework and the existing tests in it to make their lives a lot easier.
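
For example, a tester could reuse the automation suite’s API helpers to seed data for a manual session instead of clicking it together by hand; the endpoint and fields below are purely hypothetical:

    import requests

    BASE_URL = "https://test-env.example.com/api"  # hypothetical test environment

    def seed_customer(name, balance):
        """Create a customer through the application's API instead of clicking through screens."""
        response = requests.post(f"{BASE_URL}/customers",
                                 json={"name": name, "balance": balance},
                                 timeout=10)
        response.raise_for_status()
        return response.json()["id"]

    # Seed a few accounts so a manual exploratory session can start from a known state.
    for name in ["alice", "bob", "carol"]:
        print(seed_customer(name, balance=100))
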

While writing this down it starts to sound silly and unlikely to me, yet I have seen it happen time and time again. What is it that makes developers, but especially testers, afraid of or hesitant to use automated tests?
I love using test automation to make my life easier when doing manual testing, despite having a very strong dislike of writing code.

Test automation should be seen as a tool; it is there to make life a hell of a lot easier. It cannot replace manual testing, but it can take away all the repetitiveness and tediousness of manual testing. It can help you get an idea of what state a system is in before you even start testing, and it can also help you, a lot, in getting the system into all kinds of states!