Do you really need that test automation tool?

3 tips for making a good tool selection

As a test consultant I often visit clients who want to purchase a test automation tool. They regularly ask me the question “which tool should we buy or use?”. I usually answer with a counter-question: why do you want a test automation tool? The reaction I get is fairly predictable: wide eyes, frowning brows, bewilderment on their faces that a consultant would ask such a silly question, followed by the (invariably failing) attempt to come up with an answer. The answers I do get tend to go in the direction of:

  • “I have heard or read good things about it”
  • “An acquaintance of mine works with it too”.

Or, even more dangerous:

  • “At my previous employer we also used a test automation tool”.

The best answer I still get every now and then is:

  • “We saw a demo of a tool and it looked very promising”.

The first principle is that you must not fool yourself — and you are the easiest person to fool.

– Richard Feynman

These answers are clear indicators that the real problem is not clear, and with that, the goal the client hopes to achieve with the test automation tool is not clear either.

What is the problem that needs to be solved?

Why do I ask that particular question? Surely there must be a good reason why the client is asking for a test automation tool?

In many cases the stated reason for looking for test automation tools is not the real, deeper reason. By asking why someone is looking for test automation tools, you go looking for the underlying problem. I want to know what prompted the wish to implement a tool and, above all, what the client hopes to achieve with it.

I want to provoke (or invite) my conversation partner to bring everything to the surface.

In many cases a test automation tool is, at first, a band-aid. It is symptom treatment, because the actual problem

  • is too big
  • lies outside your sphere of influence
  • cannot be solved in the short term.

Treating symptoms does not solve the real problem. If conscious symptom treatment is the best that can be achieved at this moment, then it may well be a good idea to get started with a tool. Often, however, a test automation tool appears to offer a solution for something that is not the root of the problem, and so the real problem is not taken away, let alone solved.

There are a number of things to look at when you are going to select a test automation tool.

What do you need to know before selecting a test automation tool?

  1. Define the root of the problem you are trying to solve with the test automation tool
  2. Ask yourself the question “will a tool actually solve this problem?”
  3. Do we have the knowledge, skills and financial means to follow through and really solve the problem?

It is possible to fail in many ways… while to succeed is possible only in one way.

– Aristotle

How did teaching test automation work out?

A while ago I wrote a post about a set of workshops I was asked to set up for functional testers, test coordinators and test managers, to get them familiar and acquainted with test automation and performance testing. I pre-selected a set of tools which I wanted to go through with the participants. The slides I used for this can be found on SlideShare:


Apologies for the slides being in Dutch. I may come up with an English version as well; the training, however, was in Dutch.

This part of the evening already created a lot of discussion and questions. One of the nicest questions was the obvious one: “is coding really needed when working on test automation? I thought test automation tools and software were advanced enough nowadays to no longer require code to work well?” This question is a fairly obvious one, considering the tools I selected for the training: Sikuli and AutoIt SciTE. Both tools require a lot of coding in order to be usable at all.

The hands-on experiences

After we had covered the theoretical bits we moved on to the hands-on bit. As visible on the last slide, a few (simple) assignments were prepared for the participants. The first one was executing a calculation with the Microsoft built-in calculator. Fairly straightforward and rudimentary, I thought. A few participants had prior coding knowledge, so they did this exercise with AutoIt rather than with Sikuli; the majority of the group, however, attempted the task with Sikuli.

Sikuli

Since Sikuli is mainly image based, almost everyone started by actually clicking on the START button and going through the Windows Start Menu. After a while I thought it worthwhile to show the group how to launch an application from within Sikuli through the RUN command of Windows. This of course immediately raised the question why I would prefer that over manipulating the mouse. The answer is simply cross-platform (or at least cross-Windows-version) test automation: the <WIN>+R key combo has existed since Windows 95, if I recall correctly, and it still works in Windows 8.1, so a script using it is both backwards and forwards compatible.
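As an illustration, a minimal Sikuli (Jython) sketch of that launch sequence; a sketch only, where the .png names are placeholders for screenshots you would capture yourself:

    # Sikuli script: launch the calculator via <WIN>+R
    # instead of clicking through the Start Menu
    type("r", KeyModifier.WIN)    # open the Windows Run dialog
    wait(1)                       # give the dialog a moment to appear
    type("calc" + Key.ENTER)      # start the built-in calculator

    # from here the exercise continues image-based
    click("button_2.png")
    click("button_plus.png")
    click("button_2.png")
    click("button_equals.png")
    assert exists("result_4.png")  # check the expected outcome is shown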

Sikuli turned out to be a hit with the participants. They barely noticed they were actually writing code, and at some point I saw a fairly experienced test manager explain some basics of Python coding to at least two other test managers. None of them had prior coding experience, not even the one doing the explaining.

AutoIt

AutoIt, with full access to and use of the Windows API, was however a bit more of a stretch. It turns out that AutoIt's BASIC-based language is a lot more difficult for non-coding testers to understand than Python. The first assignment, manipulating the Windows Calculator, was still doable for most, although it took a lot more explaining and demonstrating than Sikuli did.
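For comparison, a rough AutoIt sketch of the same calculator exercise; a sketch only, since the window title varies per Windows version and language, so “Calculator” may need adjusting:

    ; AutoIt: the same exercise, driven through the Windows API
    Run("calc.exe")              ; start the built-in calculator
    WinWaitActive("Calculator")  ; wait until its window has focus
    Send("2{+}2{ENTER}")         ; type the calculation
    Sleep(500)
    ; reading the result back requires ControlGetText() plus the control
    ; IDs from the AutoIt Window Info tool, which is exactly where the
    ; non-coding testers started to struggle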

The second assignment, calculating the square root of a number in Excel, proved really difficult for most. I had hoped they would see the use of the Windows API and thus also come up with using the Office API for this, but apparently I overestimated how easy AutoIt would be for them.
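What I had in mind was something along these lines; a sketch assuming Excel is installed, with ObjCreate() being AutoIt's gateway to the Office COM API:

    ; AutoIt: square root of a number via the Excel COM interface
    Local $oExcel = ObjCreate("Excel.Application")
    $oExcel.Visible = True                 ; show what is happening
    $oExcel.Workbooks.Add                  ; open a new, empty workbook
    $oExcel.ActiveSheet.Range("A1").Value = 16
    $oExcel.ActiveSheet.Range("B1").Formula = "=SQRT(A1)"
    MsgBox(0, "Result", $oExcel.ActiveSheet.Range("B1").Value)  ; shows 4
    $oExcel.Quit()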

My takeaways

Next time I do an evening like this, I will introduce AutoIt separately, after people have gained some experience with Sikuli, and guide them a bit more with the AutoIt assignments instead of letting them go off on their own.
Overall the test automation evening was really great and I do believe everyone had great fun and actually got a bit of an idea of what it is that attracts me to test automation.

Are we building shelf-ware or a useful test automation tool?

Frustration and astonishment inspired this post. There is currently a big regression testing cycle going on within the organization. Over the past 4 months we have worked hard with the testers to establish a sizable base of automated tests; however, the moment regression started, everyone seemed to drop the automation tools and revert to what they have always done: open Excel and tick the check-boxes of the scripted tests.

Considering that we have already set up a solid base with a custom fixture, enabling the tests, or checks if you will, to do exactly what the tester wants them to do and what a tester would do manually while following the prescribed scripts, and that we have written out a fair share of these prescribed scripts in FitNesse, what is stopping them from using this setup?

Are we automating for the sake of automating?

While working on this extremely flexible setup, with FitNesse and with Selenium WebDriver and White as the drivers, I have started wondering more and more why we are automating in this organization. The people responsible for testing do not seem to be picking up on the concept of test automation. They all state loudly that it is needed and that it is great that we are doing it, but when regression starts they immediately go back to manual checks. I say manual checks on purpose, since the majority of testing here is done fully scripted, and most of these scripts leave nothing to the tester's imagination. That makes these tests checks rather than tests: checks we can execute automatically, repeatedly and consistently with tools such as FitNesse.
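To make that concrete: a hypothetical example of what one of those scripted checks could look like once rewritten as a FitNesse/Slim script table (the page, fixture and field names are invented for illustration; the real fixture drives Selenium WebDriver and White):

    |script|login page|
    |open browser at|http://test-environment/login|
    |enter username|testuser01|
    |enter password|secret|
    |press login|
    |check|welcome message|Welcome, testuser01|

Each row maps to a method on a fixture class (openBrowserAt, enterUsername, welcomeMessage, and so on), so the tester reads and writes plain table rows while the fixture does the driving.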

How do you make testers aware that a lot of the scripted tests should not be done manually?

Let me be clear on this: I am a firm believer in both manual and automated testing. They both have their value and should be used together. Automated testing is not here to take away manual testing; rather, it is here to support the testers in their work. Automated testing should be complementary to manual testing. Thus far in this organization I have seen manual testing happening, and I have seen (and experienced) a lot of effort being put into writing out the automated tests in FitNesse. However, there has been no clear cooperation between the two, despite the people writing the automated tests being the same individuals who are responsible for executing the manual tests (which they have rewritten in FitNesse in order to build automated tests).

We have tried coaching on the job, we have tried dojos, but alas, I still see a hell of a lot of manual checks happening instead of FitNesse doing these checks for them. What is it that makes people not realize the potential of an automation tool? Thus far I have come up with several possible causes:

  • In our test dojos we mainly focused on how to write tests in FitNesse rather than on what you can achieve with test automation. This has led me to the idea that we urgently need to organize another workshop or dojo in which the focus is on the advantages of automated tests.
  • Another reason could be that test automation was not initiated by this team; it was put upon them as a responsibility. The team we are currently creating this fixture for is a typical end-of-the-line, bottom-of-the-testing-chain team: everything they get to test is thrown over a wall and left to them to see whether it works appropriately. Most of them do not seem to have consciously chosen to be testers; instead they have accidentally rolled into the software testing field. Some of them have adapted very well to this and clearly show affinity and aptitude for testing; others, however, would in my opinion be better off choosing a different occupation. It is exactly the latter group that currently needs to be pulling this test automation effort.
There are more reasons I could go into here, but I believe these two to be the main issues at hand that can actually be addressed.

So what will make people use automation tools properly?

The moment I can answer this one with a general rule of thumb, I will sell it to the highest bidder. Within this organization, however, there does not seem to be a simple solution just yet. As I have written before, there is not yet one sole ambassador for test automation in this organization. And even if there were, we would still need to cause a shift in the general mindset of the testers. Rather than just walking through their predefined set of instructions in Excel, they need to consider for themselves what is already covered by the automated tests and how they can supplement those tests with manual testing.

We will need to find a way to get the testers to step out of their comfort zone and learn how to utilize tools other than Excel and MS Word. Maybe organizing a testing competition will work: see who can cover the most tests in the shortest time and with the highest accuracy?

I am not a great believer in measuring things in testing, but maybe inventing some nice measurements will help the testers see the light. For example: “how often can you test the same flow with different input in a certain timeframe?”
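Running the same flow with many different inputs is exactly what a FitNesse decision table expresses; a hypothetical sketch (fixture and column names invented):

    |calculate discount|
    |order total|customer type|discount?|
    |100|regular|0|
    |100|gold|10|
    |2500|gold|250|

Every row is another execution of the same underlying flow, something no tester would enjoy clicking through five times in Excel.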

Did we build shelf-ware or did we add value to the testing chain?

At the moment I often ask myself whether I am building shelf-ware or actually building a useful automation tool (I try to stay away from terms like framework, since that might only increase the distance between the tool and the testers). Whenever I play around with the FitNesse/WebDriver/White setup we currently have running, I see an incredibly versatile test automation tool which can make life a lot easier for those who have to test the software regularly and repeatedly (not just testers; developers, product owners and others can easily use this setup too).

It is completely environment agnostic: if needed we can (and in the past have) run the same tests we run in a test environment in production as well. It is easy to build new test cases/scripts or scenarios (I seem to have lost track of which would be the safe term to choose; they all have their own subconscious connotations) since it is a wiki. All tests are human readable; if you can read an Excel sheet, then reading the tests in FitNesse with Slim, the way we built it, should be child's play.

Despite all these great advantages, the people who should be using it are not.

Reading all this back makes me consider one more thing: we started building this setup with these tools based on a request from higher management. The tool selection was done by the managers (or team leads if you will) and not by the team itself. Did we miss out on the one thing the IT industry has taught us? Did we build something we all want, but not what our customer wants and needs? I hope not; for one thing, I am quite sure this is what they need: an easy-to-use tool to automate all the tedious, repetitive check work.

The question that remains: is this what our customer, or to be more exact, our customer's end user, the tester, wants?