Test automation in Agile and why it fails

It’s fairly safe to say that quite a lot of test automation efforts fail. It is equally safe to say that without test automation an agile team fails. So how can you make sure that, while working agile, your test automation does not fail and, with it, your team? One way to answer this question is by looking at why test automation so often fails within agile environments.

When I talk about test automation in this post, I am referring to testing done to reduce the amount of manual regression work: so-called functional test automation or automated regression testing.

Moving target

Test automation quite often does not receive the attention it needs and deserves, and that includes agile teams. Quite a few test automation efforts start off too late and without the appropriate preparation, resulting in organic test automation driven by a moving target. The moving target is the system under test, which, in agile, is constantly in flux. Each sprint new features are added and bugs are fixed, and quite often it is not clear at the start of a project where it is going to end up. Writing automated scripts that will stand the test of time against such a flexible environment is difficult. It is even more difficult when the base on which the automation is built is weak.

Quite often test automation runs behind on what is being delivered within an iteration. That is somewhat logical: it is difficult to test, let alone automatically test, what has not been built yet. Ideally, while manually testing the new feature(s) as a tester, you are already pondering how to automate them so that you do not have to do the tedious work more than once. Given enough time within your iteration, you might actually be able to automate some of the features; from what I have seen so far, though, generally not all features will be covered by test automation within one iteration. So if these tests are not all automated, what happens to them in the next iteration? Are they omitted? Are they picked up and automated retrospectively?

If you do not keep track of what has been automated, for both your current iteration and your previous ones, how can you rely on your test automation? You cannot be sure what exactly it covers, so a bug can easily slip through the net of your automated tests.

This moving target you are testing needs to be traced and tested solidly, repeatedly and in a trustworthy way!

Definition of Done

In the majority of the Definitions of Done I have seen, one of the items is something along the lines of “tests automated”. What I have not yet seen, however, is the team putting as much care into the automation code as they do into the production code. Quite a lot of DoDs refer to certain coding standards, yet these standards often seem not to apply to functional test automation. Isn’t your functional automation code also just code? If so, why should it not be covered by code reviews, be written according to some useful guidelines and standards, and ideally sit on top of a framework that keeps the code sustainable?

Test automation is just writing code

I have seen several automation efforts within agile teams where test automation was done without proper thought being put into it. A tool was chosen based on little more than team members having heard of it or having had good experiences with it before. No base or framework was chosen to keep the code clean. Since you are writing code, you should follow the same rules as the rest of the software developers. Do not assume that, because they are merely tests, your scripts should not be hooked up to some form of framework. If you want your tests to survive more than a few iterations, considering reuse of your code is only logical.
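To make that a bit more concrete, here is a minimal sketch of what reusable automation code can look like, written in Python with Selenium WebDriver. The page, locators and URL are made up for illustration; it is the structure, a small page object that every test can reuse, that matters.

from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Wraps the (hypothetical) login screen so every test reuses the same locators."""

    URL = "https://example.com/login"  # hypothetical system under test

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login_as(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()


def test_valid_user_reaches_dashboard():
    driver = webdriver.Firefox()
    try:
        LoginPage(driver).open().login_as("tester", "secret")
        assert "Dashboard" in driver.title  # one clear assertion per test
    finally:
        driver.quit()

When the login screen changes, only the page object changes; the tests themselves stay untouched, which is exactly the kind of reuse that keeps automation alive across iterations.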

By the way, coding standards do not need to be too complicated. In 2009 “Agile in a flash” came up with a coding standard that could work for all languages and for most environments:

Coding Standards - agileinaflash.blogspot.com

All of the above-mentioned points are “logical” when writing an application that is supposed to go into production. Yet when you look at a lot of (agile) projects, these logical “best practices” seem to be completely forgotten when it comes to test automation.

Succeed in test automation

So, how do you succeed in your test automation? How do you make it work? The answer seems clear to me: test automation is not merely like writing code, it is writing code. Since it is the same, treat it the same way!

Do your code reviews, follow some form of standard, use a (simple) framework to make writing tests easier, and create reusable modules in your automation code. In other words, treat your functional test automation with the same respect as your production-grade code. Who knows, you might want to run your tests against your production environment some day! When setting up your initial test automation environment and framework, don’t be shy: ask the developers in your team for tips, tricks and suggestions. They have quite likely gone through those setup steps more often than you have, so use their knowledge. Asking for their insights and ideas not only helps you, it also helps them feel responsible for adding their five pennies’ worth on the test automation side. They will get a clearer idea of what you intend to achieve, which makes them more likely to help keep their code testable; they might even enjoy helping you write the test scripts!

Resources

Some informational resources where you can find ideas on how to set up a test automation framework:

Test automation metrics – what do you report on?

Metrics

One of the fun things about test automation is that, since you no longer have to run all the tests manually, you can spend some extra time coming up with test metrics. Test metrics are tricky to do well in any situation, but in a situation where there is an abundance of data, such as a test automation setup, choosing the right metrics becomes the key first step. What should you look at? Code coverage? Number of tests passed versus number failed? Duration of the tests over time? Number passed now versus number passed in previous runs? Newly automated tests added since the last run? You can keep dreaming up new metrics, but which ones actually make sense and are representative? And of course, how do you make sure you do not spend ages ploughing through your data to gather these metrics manually?
If you just take a test automation tool off the shelf, it probably has an immense number of options to measure and report on, but the risk is always that you start generating reports and metrics that are not quite representative or, even worse, give a tainted view of the actual situation. So how do you make sure you don’t end up with a jungle of metrics?

Audience

The first thing you need to know is: who is the audience for your metrics? There is a huge difference in what different levels of an organisation consider useful. One manager may mainly be interested in the time spent automating versus the time gained by automating, i.e. the extra time now available for testing other stuff, the stuff that matters, while a test manager might be more interested in which functional areas of the application are covered and to what extent.

Type of metrics

I will not attempt to dream up the perfect metric; in every environment and situation one metric might be better than another. It all depends on the context, the people you are reporting to, the targets of each particular business area, and so on.

What I do want to touch upon is the power you have with metrics coming out of automation. Since your tests can run rapidly and often, there are lots of runs that can be measured. In other words, you can gather a lot of data, a lot of historical data. A metric like the number of tests passed versus the number failed is generally reported as a snapshot of a single test run. Why limit the metric to a snapshot when you have living data at hand?

The strongest thing to show any manager is a trend line. Need to report on the number of tests passed versus failed, or on the number of tests added to the automation suite? Need to report on code coverage? All of these metrics can be turned into a trend line. Show an upwards trend and managers are generally happy, sometimes without even knowing what they are looking at.
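As an illustration, here is a minimal sketch of turning a handful of historical runs into a trend line. The run data is invented; in practice you would pull it from your tool’s result files or a small database.

import numpy as np

# (run number, tests passed, tests failed) per nightly run -- invented numbers
runs = [(1, 40, 12), (2, 44, 10), (3, 43, 13), (4, 50, 9), (5, 55, 8)]

run_numbers = np.array([r[0] for r in runs], dtype=float)
pass_rates = np.array([passed / (passed + failed) for _, passed, failed in runs])

# Fit a straight (linear) trend line: pass_rate ~= slope * run + intercept
slope, intercept = np.polyfit(run_numbers, pass_rates, 1)

print(f"pass rate per run : {np.round(pass_rates, 2)}")
print(f"linear trend slope: {slope:+.3f} per run "
      f"({'upwards' if slope > 0 else 'flat or downwards'})")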

There are of course some pitfalls. The main mistake I have made was presenting a downwards-sloping trend line. It looks like a bad trend, and even when it is a perfectly healthy one, the sight of a line going down generally makes managers nervous; they expect things to always go up.

Be prepared to explain a downwards trend, because sometimes you cannot escape a downwards or flattening trend line!

Graph examples

Below are graphs plotting the same data, each with a trend line fitted to that data. When you look at them, however, the charts each tell a slightly different story due to the style of trend line chosen.

Upwards trend

Making the numbers seem a bit more positive than they really are by using an exponential trend line.

The exponential trend line paints a strong picture. When using it, however, be prepared to explain why the trend is still upwards despite the lack of growth at about two thirds of the graph. That is a difficult story to tell.

Linear trend line

The linear trend line gives an indication of the overall trend: when it is close to flat-lining you know you have a problem, but when it is too steep you may also have a problem!

The linear trend line is usually well understood by most people, at least in my experience. It shows the gradual, overall progress being made on your metrics, and since it is a straight line, questions about what happened in a “dip” period can often be prevented.
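For what it is worth, here is a small sketch of fitting both styles of trend line to the same (again invented) run data, to show how the chosen style changes the story:

import numpy as np

runs = np.arange(1, 9, dtype=float)
passed = np.array([10, 14, 20, 27, 35, 36, 36, 44], dtype=float)  # note the flat spell mid-way

# Linear: passed ~= a * run + b
a, b = np.polyfit(runs, passed, 1)

# Exponential: passed ~= A * exp(k * run), fitted as a straight line on log(passed)
k, log_A = np.polyfit(runs, np.log(passed), 1)

print(f"linear trend     : {a:+.1f} tests per run")
print(f"exponential trend: {np.exp(k) - 1:+.1%} growth per run")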

Since there is an abundance of data, provided you have set up your automation properly, there is also the possibility to combine data: for example, setting the trend of passed/failed tests against the trend of new tests added or, even more interestingly, against new functionality added to the system under test.
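A tiny sketch of why that combination matters, once more with invented numbers: a raw “tests passed” count can look perfectly stable while the suite keeps growing, and only the combined view shows the real story.

passed_per_run = [40, 42, 42, 43, 43]
suite_size_per_run = [45, 50, 58, 66, 75]  # new tests keep being added each run

for run, (passed, total) in enumerate(zip(passed_per_run, suite_size_per_run), 1):
    print(f"run {run}: passed={passed:3d}  suite={total:3d}  "
          f"pass rate={passed / total:5.1%}")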

Be aware!

One big warning though: when playing around with the numbers you may be tempted to make them look nicer than they are, or to focus only on the good things. However tempting this may be, do not prettify your numbers or graphs; make sure they always paint a true picture. If you manipulate the graphs, you are not only trying to fool your manager, but also yourself. Metrics should be useful for you as well as for the managers.

In a follow-up post I am currently working on, I will give some clearer examples of mashing up data into a useful automation report, and of how to interpret and present the data in specific contexts.

Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: “There are three kinds of lies: lies, damned lies and statistics.”
– Mark Twain’s Own Autobiography: The Chapters from the North American Review

–Edit–

A follow up on this post can be found here: Test automation metrics – mashing up non-test data

How do you test for SEO

Where I mention SEO and search engine optimization in this post, I am referring to optimizing a website for natural (organic) search: showing up high in the search result lists without paying for it.

While on holiday I spent some time talking to a local entrepreneur who makes his money through his own website. We got to talking about his site, and about its translations in particular. Since the site was translated from Greek into English, French, German, Italian and Russian, I had a quick hunch that his meta keywords would not be in order for all the separate languages (not all of his translations have this problem, by the way).

When asked to test a site specifically for SEO, what are the things to look at? As I mentioned above, there are a few tell-tale signs to check when you start your testing, especially when the site has been translated:

  • lang – this should be set to the actual language of the page you are testing
  • meta-keywords – these should be in the same language as the lang set in the header
  • meta-description – this should be in the same language as the lang set in the header
  • alt text for images – these should be in the same language as the lang set in the header
  • page-specific URLs – these should be in the same language as the lang set in the header

Please note that this is just a subset of what needs to be looked at when testing a site for SEO.
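Several of these checks lend themselves to a quick script. Below is a rough sketch, assuming Python with requests and BeautifulSoup, that pulls the Russian page discussed below and compares the declared lang with the meta keywords and description. The “contains Cyrillic” test is a crude heuristic for illustration, not real language detection.

import re
import requests
from bs4 import BeautifulSoup

url = "http://www.corfu-villa.gr/ru.html"  # the Russian page discussed below
soup = BeautifulSoup(requests.get(url).text, "html.parser")

def contains_cyrillic(text):
    """Crude heuristic: does the text contain at least one Cyrillic character?"""
    return bool(text) and re.search(r"[\u0400-\u04FF]", text) is not None

html_tag = soup.find("html")
keywords = soup.find("meta", attrs={"name": "keywords"})
description = soup.find("meta", attrs={"name": "description"})

print("declared lang            :", html_tag.get("lang") if html_tag else None)
print("keywords look Russian    :", contains_cyrillic(keywords.get("content")) if keywords else "no keywords")
print("description looks Russian:", contains_cyrillic(description.get("content")) if description else "no description")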

Based on the aforementioned website, I will give some examples of what to look for when testing for search engine optimization.

HEADER

Looking at the header of the Russian version I indeed saw exactly what I assumed I would see:

<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="ru-ru" lang="ru-ru" >
<head>
<base href="http://www.corfu-villa.gr/ru.html" />
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<meta name="robots" content="index, follow" />
<meta name="keywords" content="corfu villa, villa corfu, seaside villa corfu, pool villa corfu, villa with sunset view corfu, villa rental corfu, corfu villa rentals" />
<meta name="description" content="Corfu villa. Two elegant seaside pool villas located in Chalikounas Corfu with amazing panoramic sunset view. See photos check 2010 availability and book online with our paypal secure system." />
<meta name="generator" content="" />
<title>Корфу Вилла | бассейн вилла с потрясающим видом на закат | villa.gr Корфу</title>

If you only glance at it and are not sure what to look for, this seems fine; from a search engine point of view, however, this header is a bit of a drama.

The language is set to Russian in the first line, and the title is in Russian, in the Cyrillic alphabet. The SEO issue, however, sits between the language declaration and the title: the keywords and description are in English. A Russian visitor trying to find a “seaside villa in Corfu” will probably not use the English words for it; the keywords used will more likely be something like “вилла на Корфу с видом на море”.

Just for fun, here are the result pages for the two searches: the English search on Google.ru and the Russian search on Google.ru. For the Cyrillic search, the first page does not contain any links to our test site. For the English search, however, the site is the first to surface underneath the paid links. The problem I see with that result page is that the URL we get back is the main URL rather than the Russian one.

The header of the homepage of the site is just the beginning of testing it for search engine optimization.

IMAGES

Since this is a site aimed at renting out a villa with amazing views in a fairly decadent location, it is quite visually driven. As a tester you might not pay too much attention to the images, but when testing the search engine optimization, the images should be looked at as well.

Sticking to the example of the Russian version of this site I grabbed another piece of the source code:

<div title="Corfu Villa Boxes" id="boxes">
<div title="Corfu Villa Gallery. Click to view more photos" id="left_box">
<div class="module">
<table border="0" cellspacing="0" cellpadding="0" width="100%">
<tbody>
<tr>
<td><a href="/ru/gallery.html"><img src="/templates/corfuvilla/images/gallery.jpg" border="0" alt="Corfu Villa Gallery" /></a></td>
</tr>
</tbody>
</table>
</div>
</div>
</div>

Within this snippet you will notice an issue similar to the one in the header meta tags. The page is supposed to be in Russian, yet the title of the div is in English. The alt text of the image is also in English. Both of these should be in Russian in order for this page to be properly indexed by the search engines in that language. If you really want to make an effort, the image file name should also be in Russian.
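These checks can be automated in the same crude way as the header sketch above. The snippet below flags every image alt text and div title on the Russian page that does not contain any Cyrillic characters; again, this is purely an illustrative heuristic.

import re
import requests
from bs4 import BeautifulSoup

url = "http://www.corfu-villa.gr/ru.html"
soup = BeautifulSoup(requests.get(url).text, "html.parser")

def looks_russian(text):
    return bool(text) and re.search(r"[\u0400-\u04FF]", text) is not None

for img in soup.find_all("img"):
    if not looks_russian(img.get("alt")):
        print(f"suspect img alt  : {img.get('alt')!r} (src={img.get('src')})")

for div in soup.find_all("div", title=True):
    if not looks_russian(div.get("title")):
        print(f"suspect div title: {div.get('title')!r}")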

The use of the images themselves on the rendered HTML page already gave an indication that SEO and translation were not well thought through, or at least not fully implemented.

(Image: the “Corfu Villa Gallery” graphic, with its caption text in English)

The text on the image should of course also have been translated. When, as a tester, you see mistakes like this on a website, it should quickly give you the idea that the SEO has not been done properly, and quite likely neither has the translation of the site. Ideally you would want the text on this image to be configurable in the CMS and tied to the language of the page.

Of course there are more things to look at when testing for SEO, but I will stop here for now.

Throw-away test automation

I quite often tell clients that their approach to test automation is not sustainable. That got me thinking: does test automation always have to be sustainable and reusable?

It all depends on the goal you are trying to meet, I guess. If your goal is long-term cost efficiency, shortening the timelines of regression testing and, through that, getting to a more rapid release cycle, then yes, you will need to focus on the reusability of your automation suite.

However, there are plenty of instances where you want to automate something to make life easier right here, right now. Most testers, I hope, know the feeling of having to plough through tedious, repetitive work: setting up data for a test, going through the login flow of an application to get to the feature you want to test, and so on. For actions like that, automation works very well. In fact, you can quite often use the simplest form of automation, record and playback, without having to adjust your scripts for maintainability or reusability.

Tools like Selenium IDE and AutoIt are excellent for exactly this: quickly automating something to make life easier right here, right now. Funnily enough, a lot of testers do not realize these tools can be used that way. When talking with colleagues about test automation, they quite often think of big test automation packages, like QTP or Rational Robot, and sometimes they ask how much you need to know about software development and writing code to automate things. In most of these conversations I let myself get sucked into the tool talk and end up discussing the difficulties of setting up a test automation framework.
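To show how small such a throw-away script can be, here is a sketch in that spirit; the URL and element ids are hypothetical, and Selenium IDE could record much the same for you. In contrast to the page-object sketch earlier, nothing here is built for reuse; it just gets you past the boring part.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Click through the login flow and jump straight to the feature under test,
# then leave the browser open so the manual testing can start from there.
driver = webdriver.Firefox()
driver.get("https://example.com/login")
driver.find_element(By.ID, "username").send_keys("tester")
driver.find_element(By.ID, "password").send_keys("secret")
driver.find_element(By.ID, "login-button").click()
driver.get("https://example.com/reports/monthly")  # the screen you actually want to test
# No assertions, no teardown: this script exists purely to save a few minutes of clicking.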

In future conversations I am going to try to explain to my colleagues and fellow testers that automation does not need to be a big operation; it does not need to be reusable and maintainable, at least not always, depending on your goals. As long as your goal is to make life easier here and now, there is no need to build something awesome.

For a lot of things, a simple script, either hand-written or simply recorded, can be more than enough to reach your goal. When you are done with your task you can throw the script away, although preferably you are a bit smart about it and dump it somewhere in a repository; you might have to do the same task again.

Is testing the dumping grounds of IT?

The other day I was talking to a few developers on an assignment about getting testers added to their scrum team, and the response I got disturbed me. They told me that in their experience most testers do not work together with the team; they work against development, trying to get everything fully tested even though they know that is not feasible, and thereby delay projects. On top of that, they told me that most of the testers they have worked with are part of the dumping grounds of the IT industry. By that they meant that in their view most testers are not good enough to be developers, so they decided to become testers instead (<sarcasm> cause, come on, testing is not that difficult, anyone can do that! </sarcasm>).

I was shocked to hear there are still a lot of developers out there who believe that testers are the dumping grounds of the IT industry, but I was even more shocked by their experiences with testers working in an “us versus them” mode instead of working as a team, as part of a joint effort with a shared focus and goal.

What is it that still makes testers often work against developers instead of with them?

Most testers I have worked with over the past years agree that working side by side with development is the most effective and efficient way of working; that way you both keep track of your joint goal: getting the software out on time, on budget and according to what your customer (or end-user, for that matter) wants and needs. Together you try to add value to the software.

So is it true that there are still a lot of testers out there who do not see the big picture and try to prove their worth by working against development, looking for bugs that are not relevant, i.e. hunting bugs for the sake of finding one, no matter what the value of that bug is to the end-user or customer, just so they can triumphantly point out to a developer: “See! There are bugs in your code, you did it wrong!”? Unfortunately I fear there are still too many testers who think and work this way, not to mention all the developers who seem not to understand the added value of a good tester to the team and to the developers’ work.

Fortunately there is a wonderful contrast out there as well, in the form of this blog post by Nathan Lusher, which shows that there are indeed good testers out there who weigh in on a project, prove the value of testing and, with that, show that testers are not (or at least not everywhere) the dumping grounds of IT.

In my experience, there are a lot of very good, inspired and knowledgeable testers out there who see the added value of working together, in a team, with a shared goal, a shared approach and shared respect. If testers want to earn the respect of developers, I believe it is up to many testers to start by showing respect to the developers and, where needed, increasing their technical knowledge in order to be able to counterbalance a developer’s viewpoint. You get what you give!