Do you really need that test automation tool?

3 tips for arriving at a good tool selection

As a test consultant I often visit clients who want to purchase a test automation tool. They regularly ask me the question "which tool should we buy or use?". I usually answer with a counter-question: why do you want a test automation tool? The reaction I get to this is fairly predictable: big eyes, frowned brows, bewilderment on their faces that a consultant would ask such a silly question, followed by the (invariably failing) attempt to give an answer. The answers I do get tend to go in the direction of:

  • "I have heard or read good things about it"
  • "An acquaintance of mine works with it too".

Or, even more dangerous:

  • "At my previous employer we also used a test automation tool".

The nicest answer, which I also still get every now and then, is:

  • "We had a demo of a tool and it looked very promising".

The first principle is that you must not fool yourself — and you are the easiest person to fool.

– Richard Feynman

These answers are clear indicators that the real problem is not clear, and with that, the goal the client hopes to achieve with the test automation tool is not clear either.

What is the problem that needs to be solved?

Why do I ask exactly that question? Surely there must be a good reason why the client is asking for a test automation tool?

In many cases the reason for looking for test automation tools is not the real, deeper reason. By asking why someone is looking for test automation tools, you go looking for the underlying problem. I want to know what prompted the wish to implement a tool and, above all, what the client hopes to achieve with it.

I want to provoke (or invite) my conversation partner to bring everything to the surface.

In many cases a test automation tool is, at first, a stopgap. It is symptom treatment, because the actual problem

  • is too big
  • lies outside your sphere of influence
  • cannot be solved in the short term.

Treating symptoms does not solve the real problem. If conscious symptom treatment is the highest achievable for now, then it may well be a good idea to get started with a tool. Often, however, a test automation tool appears to offer a solution for something that is not the root of the problem, and so the real problem is not taken away, let alone solved.

There are a number of things you need to look at when you are going to select a test automation tool.

What do you need to know before you select a test automation tool?

  1. Define the root of the problem you are trying to solve with the test automation tool
  2. Ask yourself the question: "will a tool actually solve this problem?"
  3. Do we have the knowledge, skills and financial means to follow through and really solve the problem?

It is possible to fail in many ways… while succeeding is possible in only one way.

– Aristotle

How did teaching test automation work out?

A while ago I wrote a post about a set of workshops I was asked to set up for functional testers, test coordinators and test managers, to get them familiar and acquainted with test automation and performance testing. I pre-selected a set of tools which I wanted to go through with the participants. The slides I used for this can be found on SlideShare:


Apologies for the slides being in Dutch; the training was in Dutch. Possibly I will come up with an English version as well.

This part of the evening already created a lot of discussion and questions. One of the nicest questions was the obvious one: "is coding really needed when working on test automation? I thought test automation tools and software were advanced enough nowadays to no longer require code to work well?" This question was fairly obvious, considering the tools I selected for the training: Sikuli and AutoIt SciTE. Both tools require a lot of coding in order to be at all usable.

The hands-on experiences

After we had had the theoretical bits we moved on to the hands-on bit. As visible on the last slide, there were a few (simple) assignments prepared for the participants. The first one was executing a calculation with the Microsoft built-in calculator; fairly straightforward and rudimentary, I thought. A few participants had prior coding knowledge, so they did this exercise with AutoIt rather than with Sikuli; the majority of the group, however, attempted to execute the task with Sikuli.

Sikuli

Since Sikuli is mainly image based, almost everyone started by actually clicking on the START button and going through the Windows Start Menu. After a while I thought it worthwhile to show the group how to launch an application from within Sikuli through the RUN command of Windows. This of course immediately raised the question why I would prefer that over manipulating the mouse. The answer is simply the concept of cross-platform (or at least cross-Windows-version) test automation: the <WIN>+R key combo has existed since Windows 95, if I recall correctly, and is thus backwards compatible, and since you can still use it in Windows 8.1 it is forwards compatible as well.

Sikuli turned out to be a hit with the participants. They barely noticed they were actually writing code, and at some point I saw a fairly experienced test manager explain some basics of Python coding to at least two other test managers. None of them had prior coding experience, not even the one doing the explaining.

AutoIt

AutoIt, with its full access to and use of the Windows API, was however a bit more of a stretch. It turns out that AutoIt's BASIC-based language is a lot more difficult for non-coding testers to understand than Python. The first assignment, manipulating the Windows Calculator, was still doable for most, although it took a lot more explaining and showing than Sikuli did.

The second assignment, calculating the square root of a number in Excel, proved really difficult for most. I had hoped they would see the use of the Windows API and thus also come up with using the Office API for this, but apparently I overestimated how easy AutoIt would be for them.

My takeaways

Next time I do an evening like this, I now know to introduce AutoIt separately, after people have gained some experience with Sikuli, and to guide them a bit more with the AutoIt assignments instead of letting them go on their own.
Overall the test automation evening was really great, and I do believe everyone had great fun and actually got a bit of an idea of what it is that attracts me to test automation.

Test automation – Finding your starting point

The other day I had a coaching session with a colleague who, for his assignment, needs to start setting up test automation within an agile environment. He is very interested in test automation and the possibilities it gives you, but his knowledge of the subject, as well as of coding, is very limited.

In preparation for this first coaching session with him I was pondering where to start, and I ended up starting with the, to me, obvious points:

  • just start with a first step
  • KISS
  • DRY

Just start with the first step

My colleague was struggling with how to start and, more importantly, where to start with test automation. This is an issue I have often faced myself in the past as well. You have this huge task of automating a system lying ahead of you, there is so much to do and so little time. Where do you start?

Over the years I have come up with a simple strategy for that: just start with the basics and move on from there.

In other words, start with the basic functionality you will need for any test you are about to execute. In most systems that is setting up a simple sequence: validating the home or landing page, going to a login screen, validating that it loads properly and then logging in.
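A minimal sketch of that starting sequence, assuming a hypothetical driver object; in practice this would be Selenium, Sikuli or whatever your tool offers, and all element names here are invented:

```python
class FakeDriver:
    """Hypothetical stand-in for a real automation driver (Selenium, Sikuli, ...)."""
    def __init__(self):
        self.page = "home"
    def open(self, url):
        self.page = "home"
    def click(self, element):
        if element == "login-link":
            self.page = "login"
    def is_visible(self, element):
        return True  # a real driver would actually inspect the screen/DOM
    def type(self, element, text):
        pass
    def submit(self):
        self.page = "dashboard"

def smoke_login(driver, url, user, password):
    """The basic sequence: validate landing page, go to login, validate, log in."""
    driver.open(url)
    assert driver.is_visible("logo"), "landing page did not load"
    driver.click("login-link")
    assert driver.is_visible("login-form"), "login screen did not load"
    driver.type("username", user)
    driver.type("password", password)
    driver.submit()
    return driver.page

page = smoke_login(FakeDriver(), "https://example.test", "alice", "secret")
print(page)  # → dashboard
```

Once this skeleton exists, every further script only has to add the steps that come after the login.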

Once you have this start, it often becomes easier and more logical to move ahead and automate other functions and functionalities. In this particular case, once logged in there is a landing page consisting of a dashboard, so we scripted the verification of all required items on the dashboard. Once this was done, my colleague again asked: what to do now?

Again, we kept it simple and straightforward: I asked him to name another functionality he often uses, one which would immediately add value to the test and make life easier in testing in general. He came up with logging off.

Over the course of this hands-on coaching session we ended up writing several scripts which, when put together in a scenario, form a very simple and fast sanity check on the application under test, one that can immediately be reused in Continuous Integration and on every new code drop they will be doing on any environment.

Keep it simple and don’t repeat yourself.

Once we got through the initial start, we went on to two other very important things you need to keep in mind when automating (or writing any code, for that matter): KISS and DRY.

KISS

Keep It Simple

KISS is well known to mean "Keep It Simple, Stupid"; the main thing I take from it is the "Keep It Simple" part. For test automation it is important to keep things simple: you need to be able to maintain the suite, and others need to be able to understand what you are doing and why you did things a certain way. Therefore I rigorously stick to keeping things as simple as the circumstances allow. I generally try to follow the KISS rule in all aspects of test automation: naming, coding, hierarchy, refactoring, running, etc. Some of the ways this is visible: I try to keep scripts short and concise, I try to ensure the names of scripts clearly state what the script (supposedly) does, and I try to keep the variables I use clearly named (e.g. I stay away from statements like i = 1; instead I give i a meaningful name).
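A trivial illustration of the naming point, with invented data: both functions below do exactly the same thing, but only one of them tells you what it does.

```python
# Hard to follow: single-letter names hide the intent.
def c(r):
    n = 0
    for i in r:
        if i["status"] == "FAIL":
            n += 1
    return n

# KISS applied: identical logic, but the names state what the script does.
def count_failed_tests(test_results):
    failed = 0
    for result in test_results:
        if result["status"] == "FAIL":
            failed += 1
    return failed

results = [{"status": "PASS"}, {"status": "FAIL"}, {"status": "FAIL"}]
print(count_failed_tests(results))  # → 2
```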

DRY

DRY, or Don't Repeat Yourself, is a well-known practice in software development and should be used within test automation as well. I apply DRY both in the scripts and in my daily work. When starting with automation on any project, some of the first things I generally put on my list of things to automate sooner rather than later are all functions and functionalities I have to use more than occasionally. In other words, starting an application and logging in I generally automate as quickly as possible, even on a manual testing project!

Don't Repeat Yourself

One of the reasons I am a strong advocate of test automation is that I am lazy. Why would I want to start an application and log in to it manually, and thus have to execute a whole bunch of steps myself, several times (sometimes tens or hundreds of times a day), when I can automate that same flow in a few minutes and not have to deal with it anymore? Quite often starting and logging in to an application is, when automated, faster than doing it manually as well, so it not only saves me from repetitive work, it also speeds up my work!

In other words, the DRY principle also works outside of the test automation code, in your daily work. If a task is repetitive, chances are you can automate it!

Difference between performance testing and functional automated testing

For the last few months I have been working on performance testing quite a lot, and when discussing it with colleagues I started to notice that it can easily be confused with test automation. Based on discussions I have had with customers and sales people, I ran into the question: "what is the exact difference between the two? Both are a form of automated testing in the end".

Performance testing == automated testing… ?

Both performance testing and automated testing are indeed a form of executing simple checks with a tool, the most obvious difference being the objective of running the test and analysing the outcomes. If they are indeed so similar, does that mean you can use your automated tests to also run performance tests, and vice versa?

What is the difference?

I believe the answer is both easy and challenging to explain. The main difference is in the verifications and assertions done in the two test types. In functional test automation (let's at least call it that for now), the verifications and assertions are all oriented towards validating that the actual, full functionality as described in the specification is delivered. In performance testing, the verifications and assertions focus more or less on validating that all data, and especially the expected data, is loaded.
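A small, invented illustration of that difference in assertions: the functional check validates the actual values against the specification, while the performance check only validates that the expected data arrived, and how fast. The response object and its values are made up for the example.

```python
# A hypothetical response from the system under test.
response = {
    "status": 200,
    "elapsed_ms": 140,
    "body": {"account": "NL01BANK0123456789", "balance": "1.250,00"},
}

def functional_assertions(resp):
    """Functional automation: verify the behaviour matches the specification."""
    assert resp["body"]["account"] == "NL01BANK0123456789"
    assert resp["body"]["balance"] == "1.250,00"

def performance_assertions(resp, max_ms=500):
    """Performance test: verify the expected data arrived, and fast enough."""
    assert resp["status"] == 200
    assert "balance" in resp["body"]   # data is loaded; the value is not checked
    assert resp["elapsed_ms"] <= max_ms

functional_assertions(response)
performance_assertions(response)
```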

A lot of the performance tests I have executed over the past year or so have not used the graphical user interface. Instead, the tests use the communication underneath the GUI, such as XML, JSON or whatever else passes between server and client. In these performance tests the functionality of the application under test is still run through by the tests, so a functional walkthrough/test does get executed; my assertions, however, do not necessarily validate that, and definitely not at a level that would be acceptable for normal functional test automation. In other words, most of the performance tests cannot (easily or blindly) be reused as functional test automation.

Now you might think: "So can we put functional test automation to work as a performance test? If the other way around cannot easily be done, maybe it will work this way?"

In my experience the answer is similar to that for using performance tests as functional test automation. It can be done, but it will not really give you the leverage in performance testing you quite likely would like to have. Running functional test automation generally requires the application to run. If the application is a web application, you might get away with running the browser headless (i.e. just the rendering engine, not the full GUI version of the browser) to avoid needing a load of virtual machines to generate even a little bit of load. When the SUT is a client/server application, however, the functional test automation generally requires the actual client to run, making any kind of load really expensive.

How can we utilize the functional test automation for performance testing?

performance and test automation combined

One of the wonderful possibilities is combining functional testing, performance testing and load testing. By adjusting the functional test automation to record not only pass/fail but also the render times of screens and objects, the functional test automation suite turns into a performance monitor. You start your load generator to test the server response times under load; once the target load is reached, you start the functional test automation suite to walk through a solid test set and measure the actual times it takes, on a warm or hot system, to run everything through a fully rendered environment. This gives wonderful insight into what end users may experience during heavy load on the server.
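The idea of recording pass/fail together with timings can be sketched as follows. The step itself is a placeholder: in a real suite the action would open a screen and the check would verify it rendered correctly.

```python
import time

def timed_step(name, action, check):
    """Run one functional step, record pass/fail AND how long it took."""
    start = time.perf_counter()
    result = action()
    elapsed = time.perf_counter() - start
    return {"step": name, "passed": check(result), "seconds": elapsed}

# Hypothetical step: a real action would drive the GUI, a real check
# would verify the screen rendered with the expected content.
measurement = timed_step(
    "open dashboard",
    action=lambda: "dashboard",
    check=lambda result: result == "dashboard",
)
print(measurement["passed"])  # → True
```

Feeding these measurements into a report alongside the load generator's numbers gives the end-user view of performance under load.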

Selecting performance test tooling – Part 4

Decision making time

Decision making process

Considering that I am not too fond of Sikuli, and that SilkTest is disqualified because it cannot deal with the remote application, the decision was tough and yet simple. I have an immediate need that needs fulfilling and, besides that, a customer wish to look ahead, assuming we will be working on test automation in the (near) future for regression testing and end-to-end testing.

The choice was made not to go for the very affordable commercial tool at this point in time, but rather to go down the open source road. Sikuli it is.

Experiences with Sikuli Script

As stated above, Sikuli was not my preferred tool, since it depends heavily on screen captures; however, once I was finally working with it I started to take a liking to it. It has grown on me by now. Scripting in it can be both difficult and extremely easy.

I decided to approach the functional measuring with Sikuli as a pure test automation project, albeit a bit less reusable since it depends on screenshots. Starting where I generally start (starting the application and logging in to the system) was simple enough, although still not exactly intuitive. The startup code looks something like this:

from time import time

# Path to the RDP file that opens the remote application
cmd = 'mstsc.exe "C:\\Program Files\\RemotePackages\\AppAccNew.rdp"'

def startApp():
    startTime = time()  # remember when we started, for duration measurements
    Log.elog('Starting Acceptatie RDP Session')  # Log: our project logging helper
    App.open(cmd)

On top of a separate, reusable login (and logoff and shutdown) routine, I also built up a nice set of helpful methods for measuring the time between an action and its result, verifying that the expected area is indeed selected, and quite a few others. These look a bit more odd in my eyes due to the screen captures inline in the code, as you can see here.

The moment the basic functions were there (click on something and expect some result, with a timer in between), the rest was fairly straightforward. We now have a bunch of functional tests which, instead of doing a functional verification, focus on the duration of the calls; for the rest it is not very far from actual functional automation through Sikuli.
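The "act, wait for the result, keep a timer in between" pattern can be sketched generically like this; in the actual Sikuli scripts the condition would be an on-screen image match rather than a plain function.

```python
import time

def measure_until(condition, timeout=10.0, interval=0.05):
    """Poll `condition` and return how long it took to become true.
    Returns None when the timeout expires first. In the real Sikuli
    scripts the condition is an on-screen image match."""
    start = time.perf_counter()
    deadline = start + timeout
    while time.perf_counter() < deadline:
        if condition():
            return time.perf_counter() - start
        time.sleep(interval)
    return None

# Example: a condition that becomes true roughly 0.1 s after we start polling.
ready_at = time.perf_counter() + 0.1
elapsed = measure_until(lambda: time.perf_counter() >= ready_at)
```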

Conclusion

All in all it took some getting used to the fact that script is combined with screenshots, but now that it is fully up and running, scripting is fast and easy to do. I am quite impressed with what Sikuli can do.

The cost of test automation

Over the past few posts I have written a lot about test automation; one very important subject, however, I have left out thus far: what is the actual cost of test automation? How do you calculate the cost of test automation, and how do you compare it to the overall cost of testing? In other words, how do you get to the return on investment people are looking for? The first thing that needs to be covered, if you want to know and understand the costs of test automation, is a CLEAR understanding of the goal (cost, quality or time to market?). Typically there are three possible goals:

  1. reduce the cost of testing
  2. reduce the time spent on testing
  3. improve the quality of the software

These three have a direct relation with each other, and thus each of them also has a direct impact on the other two depending on which one you take as your main focus. In the next few paragraphs I will discuss the impact of picking one of the three as a goal.

Reduce the cost of testing

When putting the focus of your test automation on reducing the overall cost of testing, you set yourself up for a long road ahead. There generally is an initial investment needed before test automation becomes cost reducing. Put simply, you need to go through the initial implementation of a tool or framework, get to know the tool well, and ensure that the testers who need to work with it all know and understand how to work with it as well. If you go for a big commercial tool there is of course the investment in purchasing the license, which often ranges between 5,000 and 10,000 euros per seat (floating or dedicated). Assuming more than one test engineer will need to use the tool concurrently, you will always need more than one license. This license cost needs to be earned back by an overall reduction in testing costs (since that is your goal). When working with a commercial tool, an average investment graph will look something like the one shown here (investment of test automation with a commercial tool).

The initial investment is high: this is the cost of the licenses. At that point there is nothing yet, no tests have been automated, and you have already spent a small fortune. The costs will not drop immediately, since after the purchase the tool needs to be installed, people need to be trained, and so on. All the while no tests have been automated, so the cost is high but the return is zero. Once the training process has been finalised and the implementation of the automated tests has started, the cost line will slowly drop to a stable flatline. The flatline is the running cost of test automation, which includes the cost of maintenance of the tool and the test scripts and, of course, the cost of running tests and reviewing and interpreting the reports.
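A back-of-the-envelope way to find the break-even point of that curve; all figures below are invented for illustration, not benchmarks.

```python
def break_even_runs(license_cost, setup_cost, manual_cost_per_run, automated_cost_per_run):
    """Number of test runs after which automation becomes cheaper than
    manual execution. Returns None when automation never pays itself back."""
    saving_per_run = manual_cost_per_run - automated_cost_per_run
    if saving_per_run <= 0:
        return None
    runs = (license_cost + setup_cost) / saving_per_run
    return int(runs) + (0 if runs == int(runs) else 1)  # round up to whole runs

# Example: 7,500 euro license, 10,000 euro setup and training,
# 400 euro per manual regression run versus 50 euro automated.
print(break_even_runs(7500, 10000, 400, 50))  # → 50
```

With these invented numbers, the fiftieth regression run is the point where the cost lines cross; run regressions weekly and that is about a year.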

In a particular post in a LinkedIn group with the ominous question "Who's afraid of test automation?", one of the more disturbing responses was the following (I am quoting from the group literally, with a changed name however):

Q: Who’s afraid of test automation?

A: Anyone with headcount. What would it look like if all of the testing is done by machine and there is only one person left in the organization?
Respectfully, Louise, PhD

The idea that test automation will take away testers' work completely, and as such will reduce the running costs of a test team drastically, is a common misconception that would take too much time and effort to address right now. In short: test automation may reduce some of the workload, but it will not reduce your cost of testing by removing all of the testers, nor should that ever be your objective.

In my next blog post I will continue this story, then with the focus on "Reduce the time spent on testing".

FitNesse – Test automation in a wiki

the assignment
When I started working at my current assignment I was told that the tools for automation had already been chosen, one of them being FitNesse. The way the tools were chosen is not my preferred way of picking a tool, and the fact that the assignment was to "implement test automation across the organization through the use of the chosen tools" made me slightly worried whether FitNesse and the rest would indeed turn out to be the right choice.

Prior to this assignment I had heard a lot about FitNesse but had never had any hands-on experience with it, nor did I know anyone who had.
Having worked with FitNesse for a few months now, I feel the time has come to share my thoughts on it: what do I like, what do I believe is up for improvement, how is it working out for me so far, etc.

learning curve
Getting started with FitNesse was not all that intuitive. Getting it started is easy enough, but once you have it running it is not clear where to start and where to go from the FrontPage. Since we were not planning to use the standard fixtures, but instead to create our own, we started on the fixture side rather than with FitNesse itself. Having created a generic login functionality in the fixture, translating actions back into FitNesse became a lot more intuitive.

possibilities
The base fixtures, such as the DoFixture, WebFixture etc., are very powerful in themselves; I feel, however, that they somewhat miss the point of automating in clear text: the tests are not easy to read, logical to follow or intuitive to write. We chose to work with SLIM rather than with FIT, since FIT gives too much flexibility in the use of (almost) pseudo-code. Examples as used in the acceptance tests in FitNesse are not clear enough for our purpose at this client. The test team is, to say the least, not very technically inclined, and examples such as the ones below do not really help them very much:

This is still somewhat readable

!|Response Examiner.|
|type  |pattern|matches?|contents?|
|contents|Location: LinkingPage\?properties|true||

A while loop implemented in FitNesse, however, quickly turns into black magic in the hands of the technically less inclined:

|While|i=0|i<5|i++|
|put book in the cart|i|
|confirm selection|i|

With our custom implementation we now have test cases that can be read, and quite well understood, by most people within the organization; for example, the scenario below for transferring money from one account to another:

|Scenario|Transfer Money|amount|From Account|accountFrom|To Account|accountTo|With Description|desc|
|Start               |TransferMoney |
|Go To|
|Select Van Rekening |@accountFrom |From Dropdown|
|Select Naar Rekening|@accountTo|From Dropdown|
|Enter Bedrag        |@amount|In Textbox|
|Enter Omschrijving  |@desc|In Textbox|
|Click On Verwerken|
|Select Content Frame|
|Is Text             |Het effect van deze overboeking op uw vrije bestedingsruimte is|Present|
|Click On Verwerken|
|Start               |CurrentAccount|
|go to|
|check               |Get data from column|Bedrag|3|
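For readers wondering how such a table comes alive: each row ends up calling a method on a fixture class. The sketch below is an invented, Python-flavoured illustration of that row-to-method mapping, not the actual custom SLIM fixtures we built.

```python
class TransferMoney:
    """Illustrative fixture: each wiki-table row calls one method.
    The real project used custom SLIM fixtures; these names are invented."""
    def __init__(self):
        self.fields = {}
    def select_from_dropdown(self, field, value):
        self.fields[field] = value       # e.g. |Select Van Rekening|...|From Dropdown|
    def enter_in_textbox(self, field, value):
        self.fields[field] = value       # e.g. |Enter Bedrag|...|In Textbox|
    def click_on(self, button):
        self.fields["last_click"] = button  # e.g. |Click On Verwerken|

# The first rows of the scenario above would translate roughly to:
fixture = TransferMoney()
fixture.select_from_dropdown("Van Rekening", "NL01BANK0123456789")
fixture.enter_in_textbox("Bedrag", "10,00")
fixture.click_on("Verwerken")
```

Because the table cells become method names and arguments, keeping fixture method names close to the testers' own vocabulary is what makes the tables readable.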

flexibility
Having started with Selenium as the driver below FitNesse, we were able to quickly build out quite a lot of test cases for the web applications. Part of the beauty of FitNesse, in my opinion, is that it is driver agnostic. In other words, it does not really care what the system under test is: a website, a Java applet, a database or a desktop application. We are currently starting to work on TIBCO interfaces and will soon have to move over to Delphi and C# desktop applications. With quite a few traditional test automation frameworks this would force us to start working either with a different tool or at least in quite a different way. The great thing about FitNesse is that it is so flexible that we can not only test desktop applications, we can also test across several parts of the platform: for example, execute some functions on the web application, verify those actions directly in the database, then start a management desktop application and approve the actions initiated from the web application, all within one test case. A test case that big would be fragile, but the great thing is: it is possible if you really want it.

refactoring
Quite a few of the tests currently in FitNesse have been built up based on a functional mapping we initially made of the system, rather than on the flows through the application. This is not quite ideal when running the tests, let alone when trying to sort through them and build up a suite for a particular type of flow or functionality.
Refactoring in FitNesse is probably the area where I believe a lot of improvement can be made. The current functionality, based on regular-expression search, is fairly crude.
FitNesse being a wiki does, however, have a wonderful perk when you need to do some bigger refactoring or move test cases around: all tests are text files within directories on the filesystem of your PC. In other words, if the built-in refactor function is too crude, a tool like Notepad++ or TextPad can be of immense value; these can search through files across directory structures and do a big part of the refactoring work for you. If you need to move whole folder structures, you can again copy them around directly on the file system.

judgement
My feeling about FitNesse so far is that it is a great tool which seems to be underestimated out there in the world of test automation. Even when working directly with the standard fixtures, FitNesse makes for easy-to-use, simple and quick-to-implement test automation. The main hurdles for the newcomer are the initial learning curve, getting started with it, and making the right choice between Fit and Slim.