Test automation metrics – mashing up non-test data

In my previous post on reporting on test automation, one of the main points I made was: use the vast amount of data you have. In this post I will elaborate on that a bit more; in fact, I am going to go one step further and say: don’t just use your own data! Generally an organisation has more data at hand than just the data you create with test automation. Two very powerful sources are release boards and bug metrics.

Bug metrics

Bug metrics are quite often a logical item to look at. However, keep in mind that, generally speaking, test automation will not have a very high bug detection percentage. The bug detection percentage (BDP), curiously enough, is something you do not often read about, but it is actively measured all over the place: it is the number of bugs found before production expressed as a percentage of the total number of bugs found, both before and in production. In other words:

BDP = bugs found before production / (bugs found before production + bugs found in production) × 100%

[Figure: an example of a BDP graph, with bugs found pre-production, bugs found in production and the grand total of bugs found per run. Please keep in mind, this is simply sample data and not from an actual project.]

The problem I see in using the BDP to show the effectiveness of test automation is that the objective of test automation is generally not to find new bugs, but to ensure that all existing functionality in a new release works as expected, i.e. as it did on previous builds or releases. In short, test automation is excellent at running a regression test, but not at finding new problems. Of course it is possible to define a bug detection percentage for test automation specifically, i.e. base the numbers solely on test automation. This, however, will not really give any insight into the effectiveness of test automation, considering the low number of bugs found by test automation in general.

So what can we do with bug data and test automation? Frankly, I am convinced that, generally speaking, test automation will not benefit from metrics based on bugs. Use other data instead, such as new features introduced, lines of code changed, or the number of tasks or user stories completed, and set that off against the growth of your test set.
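As a minimal sketch of that calculation (the counts below are made-up sample figures, not from an actual project), computing the BDP per release is trivial once you have the bug counts:

```python
def bug_detection_percentage(pre_production_bugs: int, production_bugs: int) -> float:
    """BDP: bugs caught before production as a percentage of all bugs found."""
    total = pre_production_bugs + production_bugs
    if total == 0:
        return 0.0  # no bugs found at all; interpret as you see fit
    return 100.0 * pre_production_bugs / total

# Made-up sample data per release: (bugs found pre-production, bugs found in production)
releases = {"1.0": (40, 10), "1.1": (35, 7), "1.2": (42, 5)}

for release, (pre, prod) in releases.items():
    print(f"Release {release}: BDP = {bug_detection_percentage(pre, prod):.1f}%")
```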

Test automation and User Stories

In agile environments it is generally considered (and thankfully so) a good thing to automate the crap out of everything being built. There is usually quite some data available on the number of stories or tasks coming out of a sprint, or on the story points committed to per sprint. A graph setting off the story points (or complexity points, if you will) against the number of tests that have been automated can give a nice insight into the “coverage” of your automation. Add to this the number of bugs appearing in production and you have a sweet showcase of how your agile effort is performing overall.

[Figure: story points vs. automated tests vs. production issues]

A graph like this contains a lot of information, so it will need a bit of explanation when showing it to the team and especially to managers. On a per-sprint basis this graph shows:

  • Blue bars: the number of story points delivered per sprint (the actual delivery, not the committed amount!), set off against the left vertical axis
  • White bars: the cumulative number of automated tests run on these stories and run as regression, set off against the right vertical axis
  • Black line: a trend line of the overall number of automated tests
  • Green line: the actual number of production issue occurrences after releasing a sprint's worth of code to production

A graph like this is relatively easy to set up and maintain, and it paints a very strong picture. It clearly shows the progress of the team effort (a growing number of story points taken up, fewer production issues over time) and at the same time gives a very clear insight into the added value of test automation within the team, by setting the rising coverage off against the declining production issues.
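As a rough sketch of how such a chart could be put together (the sprint names and numbers below are invented sample data, and the exact styling is of course up to you), something like the following matplotlib snippet produces the bars-on-two-axes layout described above:

```python
import matplotlib.pyplot as plt
import numpy as np

# Invented sample data, one entry per sprint
sprints           = ["S1", "S2", "S3", "S4", "S5", "S6"]
story_points      = [18, 21, 25, 24, 30, 32]      # delivered, not committed
automated_tests   = [40, 95, 150, 230, 310, 400]  # cumulative number of automated tests
production_issues = [9, 7, 8, 5, 3, 2]            # issues found after each release

x = np.arange(len(sprints))
fig, ax_left = plt.subplots()
ax_right = ax_left.twinx()  # second vertical axis for the test counts

# Bars: story points on the left axis, cumulative automated tests on the right axis
ax_left.bar(x - 0.2, story_points, width=0.4, color="tab:blue", label="Story points delivered")
ax_right.bar(x + 0.2, automated_tests, width=0.4, color="lightgrey", label="Automated tests (cumulative)")

# Black trend line over the automated tests, green line for production issues
trend = np.poly1d(np.polyfit(x, automated_tests, 1))
ax_right.plot(x, trend(x), color="black", label="Trend of automated tests")
ax_left.plot(x, production_issues, color="green", marker="o", label="Production issues")

ax_left.set_xticks(x)
ax_left.set_xticklabels(sprints)
ax_left.set_ylabel("Story points / production issues")
ax_right.set_ylabel("Automated tests")
fig.legend(loc="upper left")
plt.show()
```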

Test automation metrics – what do you report on?

Metrics

One of the fun things about test automation is that, since you do not have to run all the tests manually, you can spend some extra time coming up with test metrics. Test metrics are tricky to do well in any situation, but in a situation where there is an abundance of possible metrics, such as a test automation setup, choosing the right metrics becomes the key first step. What are the metrics to look at? Code coverage? Number of tests passed vs. number of tests failed? Duration of the tests over time? Number passed now vs. number passed in previous runs? Newly automated tests added since the last run? You can keep dreaming up new metrics, but which ones will actually make sense and be representative? And of course, how do you ensure you do not spend ages ploughing through your data to gather these metrics manually?
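One way to avoid the manual ploughing is to pull the numbers straight out of the test runner's output. As a minimal sketch, assuming your tool writes JUnit-style XML reports (most runners can), something like this collects the basic pass/fail counts per run:

```python
import glob
import xml.etree.ElementTree as ET

def collect_run_metrics(report_dir: str) -> dict:
    """Sum up pass/fail/skip counts from all JUnit-style XML reports in a directory."""
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}
    for report in glob.glob(f"{report_dir}/*.xml"):
        root = ET.parse(report).getroot()
        # A report may have a <testsuites> wrapper or a single <testsuite> root
        suites = root.findall("testsuite") if root.tag == "testsuites" else [root]
        for suite in suites:
            for key in totals:
                totals[key] += int(suite.get(key, 0))
    totals["passed"] = totals["tests"] - totals["failures"] - totals["errors"] - totals["skipped"]
    return totals

# Hypothetical example: reports written to ./test-reports by the nightly run
print(collect_run_metrics("test-reports"))
```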
[Image borrowed from khanmjk]

If you just take a test automation tool off the shelf, it probably has an immense number of options to measure and report on, but the risk is always that you start generating reports and metrics that are not quite representative or, even worse, give a tainted view of the actual situation. So how do you make sure you don’t end up with a jungle of metrics?

Audience

The first thing you need to know is: who is the audience for your metrics? There is a huge difference in what different levels in an organisation consider useful metrics. One manager may be mainly interested in the time spent on automating versus the time won by automating, i.e. the extra time now available for testing other stuff, the stuff that matters, while a test manager might be more interested in which functional areas of the application are covered and to what extent they are covered.

Type of metrics

I will not attempt to dream up the perfect metric; in every environment and situation a different metric might be the better one. It all depends on the context, the people you are reporting to, the targets of each particular business area, etc.

What I do want to touch upon is the awesome power you have with metrics coming out of automation. Since your tests can run rapidly and often, there are lots of runs that can be measured. In other words, you can gather a lot of data, a lot of historical data. A metric like the number of tests passed versus the number failed is generally reported as a snapshot of a single test run. Why limit the metric to a snapshot when you have living data at hand?
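Keeping that history around does not have to be complicated. As a minimal sketch (the file name and fields are just an assumption, not a prescribed format), appending a one-line summary per run to a CSV already gives you the living data to draw trends from later:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

HISTORY_FILE = Path("automation_history.csv")  # hypothetical location for the run history

def record_run(passed: int, failed: int, duration_seconds: float) -> None:
    """Append a single run summary so trends can be drawn over all runs later."""
    new_file = not HISTORY_FILE.exists()
    with HISTORY_FILE.open("a", newline="") as handle:
        writer = csv.writer(handle)
        if new_file:
            writer.writerow(["timestamp", "passed", "failed", "duration_seconds"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), passed, failed, duration_seconds])

# Example: call this at the end of every automated run
record_run(passed=312, failed=4, duration_seconds=1845.0)
```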

The strongest thing to show to any manager is a trend line. Need to report on the number of tests passed vs. failed, or the number of tests added to the automation suite? Need to report on code coverage? All of these metrics can be turned into a trend line. Show the “upwards trend” and managers are generally happy, without even knowing what they are looking at.

There are of course some pitfalls; the main mistake I have made was showing a downwards-sloping trend line. That seems like a bad trend, but even though it can be a perfectly fine trend, the sight of a trend line going down generally makes managers nervous: they expect things to always go up.

Be prepared to explain a downwards trend, because sometimes you cannot escape a downwards or flattening trend line!

Graph examples

Below are two graphs, both based on the same data, each with a trend line fitted to that data. When you look at them, however, each tells a slightly different story due to the style of trend line chosen for the chart.

Upwards trend

Making the numbers seem a bit more positive than they really are by using an exponential trend line.

The exponential trend line paints a strong picture; however, when using it, be prepared to explain why, despite the lack of growth at about two thirds of the way into the graph, the trend still points upwards. This is a difficult story to tell.

Linear trend line

The linear trend line gives an indication of the overall trend: when it is close to flat-lining you know you have a problem, but when it is too steep you may also have a problem!

The linear trend line is, at least in my experience, the one best understood by most people. It shows the gradual, overall progress being made on your metrics. Since it is a straight line, it quite often prevents questions about what happened in a “dip” period.
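To make the difference concrete, here is a small sketch of fitting both styles of trend line to the same series of per-run pass counts (the numbers are invented sample data, and the exponential fit is done on the log of the values, which is one common way to do it):

```python
import numpy as np

# Invented sample data: number of passed tests per run
runs   = np.arange(1, 11)
passed = np.array([20, 28, 35, 50, 62, 80, 82, 83, 90, 110])

# Linear trend: passed ≈ a * run + b
a, b = np.polyfit(runs, passed, 1)
linear_trend = a * runs + b

# Exponential trend: passed ≈ exp(c * run + d), fitted on the log of the values
c, d = np.polyfit(runs, np.log(passed), 1)
exponential_trend = np.exp(c * runs + d)

for run, lin, exp_ in zip(runs, linear_trend, exponential_trend):
    print(f"run {run:2d}: linear {lin:6.1f}  exponential {exp_:6.1f}")
```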

Since there is an abundance of data, if you have set up your automation properly, there is also the possibility to combine data: for example, setting off the trend of passed/failed tests against the trend of new tests added or, even more interestingly, against new functionality added to the system under test.
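Building on the run history sketched earlier (the same hypothetical automation_history.csv, with the same caveat that the format is just an assumption), combining two trends is then a matter of reading the history back and fitting a line through each series:

```python
import csv
import numpy as np

# Read back the per-run history written by record_run() above
with open("automation_history.csv", newline="") as handle:
    rows = list(csv.DictReader(handle))

passed = np.array([int(row["passed"]) for row in rows])
failed = np.array([int(row["failed"]) for row in rows])
runs   = np.arange(len(rows))

pass_rate  = passed / (passed + failed)  # trend of passed vs. failed
suite_size = passed + failed             # rough proxy for the growth of the suite

# One linear trend per series; compare the slopes rather than the raw numbers
pass_rate_slope  = np.polyfit(runs, pass_rate, 1)[0]
suite_size_slope = np.polyfit(runs, suite_size, 1)[0]

print(f"Pass rate changes by {pass_rate_slope:+.3%} per run")
print(f"Suite grows by {suite_size_slope:+.1f} tests per run")
```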

Be aware!

One big warning though: when playing around with the numbers, you may be tempted to make them look nicer than they are, or to focus only on the good things. However tempting this may be, don’t prettify your numbers or graphs; make sure they always paint a true picture. If you manipulate the graphs, you are not only trying to fool your manager, but also yourself. Metrics should be useful for you as well as for your managers.

In a follow-up post I am currently working on, I will give some clearer examples of mashing up data into a useful automation report, and of how to interpret and present the data in specific contexts.

Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: “There are three kinds of lies: lies, damned lies and statistics.”
– Mark Twain’s Own Autobiography: The Chapters from the North American Review

–Edit–

A follow-up to this post can be found here: Test automation metrics – mashing up non-test data