Do you really need that test automation tool?

3 tips to arrive at a good tool selection

As a test consultant I often visit clients who want to purchase a test automation tool. They regularly ask me: "Which tool should we buy or use?" I usually answer with a counter-question: why do you want a test automation tool? The reaction I get is fairly predictable: wide eyes, furrowed brows, bewilderment that a consultant would ask such a naive question, followed by a (consistently failing) attempt to answer it. The answers I get tend to go in the direction of:

  • "I have heard or read good things about it"
  • "An acquaintance of mine works with it too".

Or, even more dangerous:

  • "At my previous employer we also used a test automation tool".

The best answer, which I still get every now and then, is:

  • "We had a demo of a tool and it looked very promising".

The first principle is that you must not fool yourself — and you are the easiest person to fool.

– Richard Feynman

These answers are clear indicators that the real problem is not clear, and that therefore the goal the client hopes to achieve with the test automation tool is not clear either.

What is the problem that needs to be solved?

Why do I ask that particular question? Surely there must be a good reason why this client is asking for a test automation tool?

In many cases the stated reason for looking for test automation tools is not the real, deeper reason. By asking why someone is looking for test automation tools, you go in search of the underlying problem. I want to know what prompted the wish to implement a tool and, above all, what the client hopes to achieve with it.

I want to provoke (or invite) my conversation partner to bring everything to the surface.

In many cases a test automation tool is, at first, a stopgap. It is symptom treatment, because the actual problem

  • is too big
  • lies outside your sphere of influence
  • cannot be solved in the short term.

Treating symptoms does not solve the real problem. If deliberate symptom treatment is the highest achievable goal at this moment, then it may well be a good idea to get started with a tool. Often, however, a test automation tool appears to offer a solution for something that is not the root of the problem, and so the real problem is not addressed, let alone solved.

There are a number of things you should look at when selecting a test automation tool.

What do you need to know before selecting a test automation tool?

  1. Define the root of the problem you are trying to solve with the test automation tool
  2. Ask yourself: "will a tool actually solve this problem?"
  3. Do we have the knowledge, skills and financial means to follow through and really solve the problem?

It is possible to fail in many ways… while to succeed is possible only in one way.

– Aristotle

How did teaching test automation work out?

A while ago I wrote a post about a set of workshops I was asked to set up for functional testers, test coordinators and test managers, to get them familiar and acquainted with test automation and performance testing. I pre-selected a set of tools which I wanted to go through with the participants. The slides I used for this can be found on SlideShare:

Apologies for the slides being in Dutch. I may come up with an English version as well; the training, however, was given in Dutch.

This part of the evening already generated a lot of discussion and questions. One of the nicest questions was the obvious one: "Is coding really needed when working on test automation? I thought test automation tools and software were advanced enough nowadays to no longer require code to work well?" This question is fairly obvious considering the tools I selected for the training: Sikuli and AutoIt (with its SciTE editor). Both tools require a lot of coding in order to be usable at all.

The hands-on experiences

After we had covered the theoretical bits we moved on to the hands-on part. As visible on the last slide, a few (simple) assignments were prepared for the participants. The first one was executing a calculation with the Microsoft built-in calculator. Fairly straightforward and rudimentary, I thought. A few participants had prior coding knowledge, so they did this exercise with AutoIt rather than with Sikuli; the majority of the group, however, attempted the task with Sikuli.


Since Sikuli is mainly image based, almost everyone started by actually clicking on the START button and going through the Windows Start Menu. After a while I thought it worthwhile to show the group how to launch an application from within Sikuli through the RUN command of Windows. This of course immediately raised the question why I would prefer that over manipulating the mouse. The answer is simply the concept of cross-platform (or at least cross-Windows-version) test automation: the <WIN>+R key combination has existed since Windows 95, if I recall correctly, so it is backwards compatible, and since you can still use it in Windows 8.1 it is forwards compatible as well.
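To make the idea concrete, here is a small sketch of launching via the Run dialog. In an actual Sikuli script the key presses would be sent with something like type(Key.WIN, "r") followed by typing the command; in this illustration (mine, not from the training material) the key sequence is returned as plain data so the concept can be shown without a running GUI.

```python
# Hypothetical sketch: launching an app via the Windows Run dialog instead of
# clicking through a version-specific Start menu. The returned list models the
# key presses a Sikuli script would send.
def launch_via_run_dialog(command):
    """Key sequence to start `command` through WIN+R.

    WIN+R works the same on every Windows version since 95, which is
    exactly why it beats image-matching the Start button.
    """
    return ["press WIN+R", "type " + command, "press ENTER"]

print(launch_via_run_dialog("calc"))
```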

Sikuli turned out to be a hit with the participants. They barely noticed they were actually writing code, and at some point I saw a fairly experienced test manager explain some basics of Python coding to at least two other test managers. None of them had prior coding experience, not even the one doing the explaining.


AutoIt, with full access to the Windows API, was however a bit more of a stretch. It turns out the BASIC-based language of AutoIt is a lot more difficult for non-coding testers to understand than Python. The first assignment, manipulating the Windows Calculator, was still doable for most, although it took a lot more explaining and demonstrating than Sikuli did.

The second assignment, calculating the square root of a number in Excel, proved really difficult for most. I had hoped they would see the use of the Windows API and thus come up with using the Office API for this, but apparently I overestimated how easy AutoIt was for them to use.

My takeaways

Next time I run an evening like this, I now know to introduce AutoIt separately, after people have gained some experience with Sikuli, and to guide them a bit more through the AutoIt exercises instead of letting them go on their own.
Overall the test automation evening was really great, and I do believe everyone had great fun and actually got a bit of an idea of what it is that attracts me to test automation.

Test automation – Finding your starting point

The other day I had a coaching session with a colleague who, for his assignment, needs to start setting up test automation within an agile environment. He is very interested in test automation and the possibilities it offers, but his knowledge of the subject, as well as of coding, is very limited.

In preparation for this first coaching session I pondered where to start with him, and I ended up starting with the, to me, obvious points:

  • just start with a first step
  • KISS
  • DRY

Just start with the first step

My colleague was struggling with how to start and, more importantly, where to start with test automation. This is an issue I have often faced myself in the past. You have this huge task of automating a system lying ahead of you; there is so much to do and so little time. Where do you start?

Over the years I have come up with a simple strategy for that: just start with the basics and move on from there.

In other words, start with the basic functionality you will need for any test you are about to execute. In most systems that means setting up a simple sequence: validating the home or landing page, going to the login screen, validating that it loads properly, and then logging in.

Login screen


Once you have this starting point it often becomes easier and more logical to move ahead and start automating other functions and functionalities. In this particular case the page shown after logging in consists of a dashboard, so we scripted the verification of all required items on the dashboard. Once this was done my colleague again asked: what to do now?

Again we kept it simple and straightforward: I asked him to name another functionality he often uses, one that would immediately add value to the test and make life easier in testing in general. He came up with logging off.

Over the course of this hands-on coaching session we ended up writing several scripts which, put together in a scenario, form a very simple and fast sanity check on the application under test. It can immediately be reused in Continuous Integration and on every new code drop they do on any environment.
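The scenario we ended up with can be sketched roughly like this. The step functions below are stand-ins for the real automation scripts (which drive the actual application); the names and return values are illustrative, not the colleague's actual code.

```python
# Hypothetical sketch of composing individual scripts into one sanity check.
# Each step function stands in for a real automation script and returns True
# on success.
def open_application():      return True   # start app, verify landing page loads
def log_in(user, password):  return True   # submit credentials, reach dashboard
def verify_dashboard():      return True   # check all required dashboard items
def log_off():               return True   # log off, verify login screen returns

def sanity_check():
    """Run the scripted steps in order; stop at the first failure."""
    if not open_application():
        return "FAIL: open application"
    if not log_in("testuser", "secret"):
        return "FAIL: log in"
    if not verify_dashboard():
        return "FAIL: verify dashboard"
    if not log_off():
        return "FAIL: log off"
    return "PASS"

print(sanity_check())
```

A check shaped like this is trivial to hook into a CI pipeline: it either prints PASS or names the first step that broke.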

Keep it simple and don’t repeat yourself.

Once we got through the initial start we went on to two other very important things you need to keep in mind when automating (or writing any code, for that matter): KISS and DRY.


Keep It Simple


KISS is well known to mean "Keep It Simple, Stupid"; the main thing I take from it is the "Keep It Simple" part. For test automation it is important to keep things simple: you need to be able to maintain the suite, and others need to be able to understand what you are doing and why you did things a certain way. Therefore I rigorously stick to keeping things as simple as the circumstances allow. I try to follow the KISS rule in all aspects of test automation: naming, coding, hierarchy, refactoring, running, and so on. Some of the ways this shows: I keep scripts short and concise, I make sure script names clearly state what the script (supposedly) does, and I keep my variables clearly named (e.g. I stay away from statements like i = 1 and instead give i a meaningful name).
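A tiny illustration of that naming point (the example is mine): the same loop twice, once with a throwaway index and once with a name that says what is actually being counted.

```python
rows = ["header", "data1", "data2"]

# Hard to maintain: what does i stand for a month from now?
i = 0
for r in rows:
    i += 1

# Self-explanatory: the name carries the intent.
processed_row_count = 0
for row in rows:
    processed_row_count += 1

print(processed_row_count)
```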


DRY, or Don't Repeat Yourself, is a well-known practice in software development and should be used within test automation as well. I apply DRY both in the scripts and in my daily work. When starting with automation on any project, among the first things I put on my list of things to automate sooner rather than later are all functions and functionalities I have to use more than rarely. In other words, starting an application and logging in I generally automate as quickly as possible, even on a manual testing project!
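Applied to scripts, DRY looks something like this sketch (names are illustrative): the start-and-login steps every test repeats live in one helper, so a change to the login flow is made in exactly one place.

```python
# Sketch of DRY in a test suite: the repeated setup steps are modelled as a
# list of strings here, standing in for the real automation actions.
def start_and_log_in(user="testuser"):
    """Everything every test needs before it can do anything useful."""
    return [
        "start application",
        "wait for login screen",
        "log in as " + user,
    ]

def test_create_report():
    session = start_and_log_in()          # no copy-pasted login steps
    session.append("create report")
    return session

def test_change_settings():
    session = start_and_log_in("admin")   # same helper, different user
    session.append("change settings")
    return session
```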

Don't Repeat Yourself


One of the reasons I am a strong advocate of test automation is that I am lazy. Why would I want to start an application and log in to it manually, and thus execute a whole bunch of steps myself, several times (sometimes tens or hundreds of times a day), when I can automate that same flow in a few minutes and not have to deal with it anymore? Quite often, starting and logging in to an application is also faster automated than done manually, so it not only saves me repetitive work, it speeds up my work as well!

In other words, the DRY principle also works outside of the test automation code, in your daily work. If a task is repetitive, chances are you can automate it!

Difference between performance testing and functional automated testing

For the last few months I have been working on performance testing quite a lot, and when discussing it with colleagues I started to notice that it can easily be confused with test automation. In discussions with customers and sales people I ran into the question: "What is the exact difference between the two? Both are a form of automated testing in the end."

Performance testing == automated testing… ?

Both performance testing and automated testing are indeed a form of executing scripted checks with a tool. The most obvious difference is the objective of running the test and analysing the outcomes. If they are indeed so similar, does that mean you can use your automated tests to also run performance tests, and vice versa?

What is the difference?

I believe the answer is both easy and challenging to explain. The main difference lies in the verifications and assertions done in the two test types. In functional test automation (let's at least call it that for now), the verifications and assertions are all aimed at validating that the full functionality, as described in the specification, behaves as expected. In performance testing, the verifications and assertions are more or less focused on validating that all data, and especially the expected data, is loaded.

A lot of the performance tests I have executed over the past year or so have not used the graphical user interface. Instead the tests use the communications underneath the GUI: XML, JSON or whatever else passes between server and client. In these performance tests the functionality of the application under test is still exercised, so a functional walkthrough does get executed; my assertions, however, do not necessarily validate that, and definitely not on a level that would be acceptable for normal functional test automation. In other words, most of the performance tests cannot (easily or blindly) be reused as functional test automation.
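The contrast can be sketched as the same call asserted two ways (this is an illustrative stub, not a specific tool's API): the functional check compares every field against the specification, while the performance check mostly verifies that the expected data arrived, and fast enough.

```python
# Stubbed server response standing in for whatever passes between client and
# server (XML, JSON, ...). Field names and values are made up for illustration.
response = {"status": 200, "elapsed_ms": 240,
            "body": {"customer": "Jansen", "orders": [101, 102]}}

def functional_assertions(resp):
    # every field compared against the specified expectation
    assert resp["status"] == 200
    assert resp["body"]["customer"] == "Jansen"
    assert resp["body"]["orders"] == [101, 102]
    return "functionally correct"

def performance_assertions(resp, max_ms=500):
    # only: did the expected data load, and within an acceptable time?
    assert resp["status"] == 200
    assert "orders" in resp["body"]
    assert resp["elapsed_ms"] <= max_ms
    return "fast enough"
```

The performance assertions would happily pass on a response with the wrong order numbers, which is exactly why such tests cannot blindly be reused as functional checks.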

Now you might think: "So can we put functional test automation to work as a performance test? If the other way around cannot easily be done, maybe it will work this way?"

In my experience the answer is similar to the one for using performance tests as functional test automation. It can be done, but it will not really give you the leverage in performance testing you would quite likely want. Running functional test automation generally requires the application itself to run. If the application is a web application, you might get away with running the browser headless (i.e. just the rendering engine, not the full GUI version of the browser) to avoid needing a load of virtual machines to generate even a little bit of load. When the system under test is a client/server application, however, the functional test automation generally requires the actual client to run, making any kind of load really expensive.

How can we utilize the functional test automation for performance testing?

performance and testautomation combined


One of the wonderful possibilities is combining functional testing, performance testing and load testing. By adjusting the functional test automation to record not only pass/fail but also render times of screens and objects, the functional test automation suite turns into a performance monitor. You start your load generator to test the server response times under load; once the target load is reached, you start the functional test automation suite to walk through a solid test set and measure the actual times it takes, on a warm or hot system, to run everything through a fully rendered environment. This gives wonderful insight into what end users may experience during heavy load on the server.
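The adjustment amounts to wrapping each functional step in a timer. A minimal sketch (the step function is a stub standing in for the real GUI interaction):

```python
import time

def timed_step(name, action):
    """Run a functional step and record both pass/fail and how long
    the result took to appear."""
    start = time.perf_counter()
    passed = action()
    elapsed = time.perf_counter() - start
    return {"step": name, "passed": passed, "seconds": elapsed}

def open_dashboard():
    # stand-in for the real screen interaction; the sleep simulates render time
    time.sleep(0.01)
    return True

result = timed_step("open dashboard", open_dashboard)
```

Collect these records while the load generator holds the target load, and the functional suite doubles as an end-user response-time monitor.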

Selecting performance test tooling – Part 4

Decision making time

Decision making process


Given that I am not too fond of Sikuli, and that SilkTest is disqualified because it cannot deal with the remote application, the decision was tough and yet simple. I have an immediate need that requires fulfilling, and besides that, the customer wishes to look ahead, assuming we will be working on test automation in the (near) future for regression testing and end-to-end testing.

The choice was made not to go for the very affordable commercial tool at this point in time, but rather to go the open source road. Sikuli it is.

Experiences with Sikuli Script

As stated above, Sikuli was not my preferred tool, since it depends heavily on screen captures. However, once I was actually working with it I started to take a liking to it; it has grown on me by now. Scripting in it can be both difficult and extremely easy.

I decided to approach the functional measuring with Sikuli as a pure test automation project, albeit a bit less reusable since it depends on screenshots. Starting where I generally start, starting the application and logging in to the system, was simple enough, although still not exactly intuitive. The startup code looks something like this:

cmd = 'mstsc.exe "C:\\Program Files\\RemotePackages\\AppAccNew.rdp"'

def startApp():
    from time import time
    startTime = time()                           # remember when we started, for timing
    Log.elog('Starting Acceptatie RDP Session')  # Log is our own logging helper
    App.open(cmd)                                # launch the RDP session via the Sikuli API

On top of a separate, reusable login (and logoff and shutdown) routine, I also built a nice set of helpful methods: for measuring the time between an action and the result, for verifying that the expected area is indeed selected, and quite a few others. These look a bit odd in my eyes due to the screen captures inline in the code.
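The action-to-result timing helper works roughly like this hypothetical reconstruction. In the real Sikuli script the result check is a wait on a screen capture; here it is a predicate polled until it returns True or a timeout passes.

```python
import time

def measure_action(action, result_appeared, timeout=10.0, poll=0.05):
    """Perform `action`, then poll `result_appeared` until it returns True.

    Returns the elapsed seconds between the action and the result appearing;
    raises if the result never shows up within `timeout` seconds.
    """
    start = time.perf_counter()
    action()
    while time.perf_counter() - start < timeout:
        if result_appeared():
            return time.perf_counter() - start
        time.sleep(poll)
    raise TimeoutError("result did not appear within %.1fs" % timeout)
```

In the Sikuli version, `action` would be a click on a captured image and `result_appeared` an exists() check on the expected screen region.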

The moment the basic functions were there (e.g. click on something and expect some result, with a timer in between), the rest was fairly straightforward. We now have a bunch of functional tests which, instead of doing a functional verification, focus on the duration of the calls; for the rest it is not very far from actual functional automation through Sikuli.


All in all it took some getting used to the combination of script and screenshots, but now that everything is fully up and running the scripting is fast and easy to do. I am quite impressed with what Sikuli can do.