Teaching test automation & performance testing

I was asked by Polteq to share some knowledge on test automation and on load and performance testing. The company is running a “Special Development Program” in which employees from all levels of the company, ranging from junior engineers to senior consultants, get the opportunity to follow trainings and courses on a range of topics, varying from social skills to hard technical skills. It is the latter I have been asked to help provide some training for, which I gladly do.

There is, however, quite a challenge: as said, the audience ranges from junior to quite senior, but also from technically strong to technically challenged (my apologies to all the colleagues I am insulting with this statement, but I know you can forgive me :) ). So how do I go about preparing two trainings, one on test automation and one on load and performance testing, for such a diverse group? The duration of these sessions is also quite limited, which makes it even more difficult to come up with something sane to do in these evenings.

Oh, and to make my life easier, I have been telling everyone during a bunch of company meetings and updates over the past month that I do not believe sitting back and listening to someone broadcast information will help in learning technical skills.

The trainings need to be interactive, but also guided and somewhat personalized. Thus far I have come up with the idea of preparing a bunch of USB sticks with a set of portable applications I regularly use to automate things. When I say automate in this context, I really mean hack something together which does the job and gets thrown away at the end of the project (or of my involvement with the project). Do I really want to teach the habit of writing throw-away code?

On top of that, do I want to teach some “technical” or basic programming skills based on examples with tools which, in their own right, should not be used to automate these things? Actually, I believe I do! My goal for these evenings will be to get this group excited about using and abusing tools to their own advantage! The tools I have already chosen; now I need to figure out some interesting, useful and enjoyable targets for these people to hack their way around. Tips, anyone?

Test automation – Finding your starting point

The other day I had a job coaching session with a colleague who, for his assignment, needs to start setting up test automation within an agile environment. He is very interested in test automation and the possibilities it gives you; however, his knowledge of the subject, as well as of coding, is very limited.

In preparation for this first coaching session with him I was pondering where to start, and I ended up starting with the, to me, obvious points:

  • just start with a first step
  • KISS
  • DRY

Just start with the first step

My colleague was struggling with how to start and, more importantly, where to start with test automation. This is an issue I have often faced myself in the past as well. You have this huge task of automating a system lying ahead of you; there is so much to do and so little time. Where do you start?

Over the years I have come up with a simple strategy for that: just start with the basics and move on from there.

In other words, start with the basic functionality you will need for any test you are about to execute. In most systems that means setting up a simple sequence for validating the home or landing page, going to a login screen, validating that it loads properly and then logging in.

[Image: Login screen]
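
As a minimal sketch of such a starting sequence, assuming a web application driven with Selenium WebDriver in Python (the URL, locators and credentials are made-up placeholders):

# Validate the landing page, go to the login screen, log in.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://example.com")                    # open the landing page
assert "Example" in driver.title                     # did it load properly?

driver.find_element(By.LINK_TEXT, "Login").click()   # go to the login screen
assert driver.find_element(By.ID, "username").is_displayed()

driver.find_element(By.ID, "username").send_keys("testuser")
driver.find_element(By.ID, "password").send_keys("secret")
driver.find_element(By.ID, "submit").click()
driver.quit()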

Once you have this starting point it often becomes easier and more logical to move ahead and start automating other functions and functionalities. In this particular case, once logged in there is a landing page which consists of a dashboard, so we scripted the verification of all required items on the dashboard. Once this was done my colleague again was questioning: what to do now?

Again, we kept it simple and straightforward. I asked him to name another functionality he often uses and which would immediately add value to the test and make life easier in testing in general. He came up with logging off.

Over the course of this hands-on coaching session we ended up writing several scripts which, when put together in a scenario, form a very simple and fast sanity check on the application under test, one which can immediately be reused in continuous integration and on every new code drop they will be doing on any environment.
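
Such a scenario does not need to be anything fancier than calling the individual scripts in order; a sketch, where startApp, login, verifyDashboard and logoff stand in for the hypothetical helpers we scripted:

# A sanity-check scenario chaining the individual scripts together.
def sanityCheck():
    startApp()
    login("testuser", "secret")
    verifyDashboard()
    logoff()

if __name__ == "__main__":
    sanityCheck()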

Keep it simple and don’t repeat yourself

Once we got through the initial start we moved on to two other very important things you need to keep in mind when automating (or writing any code, for that matter): KISS and DRY.

KISS

[Image: Keep It Simple]

KISS is well known to mean “Keep It Simple, Stupid”; the main thing I take from it is the “Keep It Simple” part. For test automation it is important to keep things simple: you need to be able to maintain the suite, and others need to be able to understand what you are doing and why you did things a certain way. Therefore, I rigorously stick to keeping things as simple as the circumstances allow. I try to follow the KISS rule in all aspects of test automation: naming, coding, hierarchy, refactoring, running and so on. Some of the ways this is visible: I keep scripts short and concise, I make sure the name of a script clearly states what the script (supposedly) does, and I keep the variables I use clearly named (e.g. I stay away from statements like i = 1; instead I give i a meaningful name).
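
As a trivial sketch of that naming rule (the loop itself is meaningless, it is only there to show the difference):

# Unclear: what does i mean, and why 3?
i = 1
while i <= 3:
    i += 1

# Clear at a glance, without needing any context:
loginAttempt = 1
maxLoginAttempts = 3
while loginAttempt <= maxLoginAttempts:
    loginAttempt += 1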

DRY

DRY, or Don’t Repeat Yourself, is a well-known practice in software development and should be used within test automation as well. I apply DRY both in the scripts and in my daily work. When starting with automation on any project, some of the first things I put on my list of things to automate sooner rather than later are all functions and functionalities I have to use more than occasionally. In other words, starting an application and logging in I generally automate as quickly as possible, even on a manual testing project!
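
In the scripts themselves this means pulling repeated steps, such as starting the application and logging in, into a single reusable helper that every test calls; a sketch, with made-up helper names:

# DRY in the test scripts: one helper for the repeated start-and-log-in
# steps, instead of a copy of those steps in every single test.
def startAndLogin(user, password):
    startApp()
    login(user, password)

def testDashboard():
    startAndLogin("testuser", "secret")
    verifyDashboard()

def testLogoff():
    startAndLogin("testuser", "secret")
    logoff()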

[Image: Don’t Repeat Yourself]

One of the reasons I am a strong advocate of test automation is that I am lazy. Why would I want to start an application and log in to it manually, and thus execute a whole bunch of steps myself several times (sometimes tens or hundreds of times a day), when I can automate that same flow in a few minutes and not have to deal with it anymore? Quite often, starting and logging in to an application is also faster when automated than when done manually, so it not only saves me from repetitive work, it also speeds up my work!

In other words, the DRY principle can also work outside of the test automation code, in your daily work. If a task is repetitive, chances are you can automate it!

Difference between performance testing and functional automated testing

For the last few months I have been working on performance testing quite a lot, and when discussing it with colleagues I started to notice that it can easily be confused with test automation. Based on discussions I have had with customers and sales people, I ran into the question: “What is the exact difference between the two? Both are a form of automated testing in the end.”

Performance testing == automated testing… ?

Both performance testing and automated testing are indeed a form of executing simple checks with a tool. The most obvious difference is the objective of running the test and analysing the outcomes. If they are indeed so similar, does that mean you can use your automated tests to also run performance tests, and vice versa?

What is the difference?

I believe the answer is both easy and challenging to explain. The main difference lies in the verifications and assertions done in the two test types. In functional test automation (let’s at least call it that for now), the verifications and assertions are all oriented towards validating that the full functionality, as described in the specification, behaves as expected. In performance testing, the verifications and assertions are instead focused on validating that all data, and especially the expected data, is loaded.
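
As an illustration of that difference, a sketch of the same (made-up) response asserted two ways:

# The same order-overview response, asserted two ways.
response = {"status": "ok", "orders": [{"id": 42, "total": "19.95"}]}

# Functional test automation: validate the full functionality per the spec.
assert response["status"] == "ok"
assert response["orders"][0]["id"] == 42
assert response["orders"][0]["total"] == "19.95"   # exact expected content

# Performance testing: just confirm the expected data actually loaded.
assert response["status"] == "ok"
assert len(response["orders"]) > 0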

A lot of the performance tests I have executed over the past year or so have not used the graphical user interface. Instead the tests use the communications underneath the GUI, such as XML, JSON or whatever else passes between server and client. In these performance tests the functionality of the application under test is still run through by the tests, so a functional walkthrough does get executed; my assertions, however, do not necessarily validate that, and definitely not on a level that would be acceptable for normal functional test automation. In other words, most performance tests cannot (easily or blindly) be reused as functional test automation.
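
A sketch of such a protocol-level check, assuming Python with the requests library and a made-up endpoint:

# Drive the JSON API underneath the GUI instead of the GUI itself.
import requests

resp = requests.get("https://example.com/api/reports/42", timeout=30)
resp.raise_for_status()                  # the call itself succeeded
assert "rows" in resp.json()             # the expected data is loaded
print("response time: %.3fs" % resp.elapsed.total_seconds())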

Now you might think: “So can we put functional test automation to work as a performance test? If the other way around cannot easily be done, maybe it will work this way.”

In my experience the answer is similar to that for using performance tests as functional test automation. It can be done, but it will not really give you the leverage in performance testing you quite likely would like to have. Running functional test automation generally requires the application to run. If the application is a web application you might get away with running the browser headless (i.e. just the rendering engine, not the full GUI version of the browser) in order to avoid needing a load of virtual machines to generate even a little bit of load. When the SUT is a client/server application, however, the functional test automation generally requires the actual client to run, making any kind of load really expensive.
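
For a web application that can look like this sketch, assuming Selenium with Chrome (the headless switch is a standard Chrome argument):

# Run the browser headless so one machine can run more instances.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")    # rendering engine only, no GUI
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
print(driver.title)
driver.quit()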

How can we utilize the functional test automation for performance testing?

[Image: performance and test automation combined]

One of the wonderful possibilities is combining functional testing, performance testing and load testing. By adjusting the functional test automation to record not only pass/fail but also the render times of screens and objects, the functional test automation suite turns into a performance monitor. You start your load generator to test the server response times under load; once the target load is reached, you start the functional test automation suite to walk through a solid test set and measure the actual times it takes, on a warm or hot system, to run everything through a fully rendered environment. This gives wonderful insight into what end-users may experience during heavy load on the server.
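
A sketch of the timing side of that idea, wrapping each functional step so it records how long the screen took to render while the load generator runs separately (the helper names are made up):

# Turn a functional step into a performance probe: time how long it
# takes for the expected screen to appear and log the measurement.
import csv
from time import time

def timedStep(name, action, verify, timeout=30):
    start = time()
    action()            # e.g. click a menu item
    verify(timeout)     # blocks until the screen/object has rendered
    duration = time() - start
    with open("render_times.csv", "a", newline="") as f:
        csv.writer(f).writerow([name, "%.3f" % duration])
    return duration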

Selecting performance test tooling – Part 4

Decision making time

[Image: Decision making process]

Considering that I am not too fond of Sikuli, and that SilkTest is disqualified because it cannot deal with the remote application, the decision was tough and yet simple. I have an immediate need that must be fulfilled, and besides that a customer wish to look ahead and assume we will be working on test automation for regression testing and end-to-end testing in the (near) future.

The choice was made not to go for the, very affordable, commercial tool at this point in time, but rather to go down the open source road. Sikuli it is.

Experiences with Sikuli

As stated above, Sikuli was not my preferred tool, since it depends heavily on screen captures. However, once I was finally working with it I started to take a liking to it, and it has grown on me by now. Scripting in it can be both difficult and extremely easy.

I decided to approach the functional measuring with Sikuli as a pure test automation project, albeit a bit less reusable since it depends on screenshots. Starting where I generally start, starting the application and logging in to the system, was simple enough, although still not exactly intuitive. The startup code looks something like this:

from time import time

# Command that opens the remote application through an RDP session
cmd = 'mstsc.exe "C:\\Program Files\\RemotePackages\\AppAccNew.rdp"'

def startApp():
    startTime = time()                           # note the start time, for timing measurements
    Log.elog('Starting Acceptatie RDP Session')  # Log is our own logging helper
    App.open(cmd)                                # Sikuli API call: launch the application
    return startTime

[Image: Sikuli code snippet]

On top of a separate, reusable login (and logoff and shutdown) routine, I also built up a nice set of helpful methods: for measuring the time between an action and its result, for verifying that the expected area is indeed selected, and quite a few others. These look a bit more odd in my eyes due to the screen captures inline in the code, as you can see here.
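
A simplified sketch of such a measuring method, using Sikuli's built-in click() and wait(); the image file names stand in for the inline screen captures:

# Measure the time between an action and its visible result.
from time import time

def timedClick(buttonImage, resultImage, timeout=30):
    click(buttonImage)                          # Sikuli: click the matched image
    start = time()
    wait(resultImage, timeout)                  # Sikuli: block until the result appears
    duration = time() - start
    Log.elog('Rendered in %.2fs' % duration)    # our own logging helper again
    return duration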

The moment the basic functions were there (e.g. click on something and expect some result, with a timer in between), the rest was fairly straightforward. We now have a bunch of functional tests which, instead of doing a functional verification, focus on the duration of the calls; apart from that, it is not very far from actual functional automation through Sikuli.

Conclusion

All in all it took some getting used to the fact that script is combined with screenshots, but now that it is fully up and running, the scripting is fast and easy to do. I am quite impressed with what Sikuli can do.

Selecting performance test tooling – Part 3

A birthday post… and no, the story has nothing to do with my birthday; it’s just that today is my birthday.

Some challenges in the PoC…

Following up on my previous posts (here and here) on this topic: while executing the Proof of Concept I have run into some interesting challenges.

For starters, I am convinced I started the PoC the wrong way around: I began by implementing some things in Sikuli rather than in SilkTest. SilkTest is, to me, less intuitive than Sikuli, since SilkTest tries to encapsulate a lot of different automation approaches in one tool whereas Sikuli is focused on one methodology only, so I should have started with SilkTest. However, I didn’t, and there is nothing I can do about that now.

Secondly, the application we are about to put to the test is served from a Citrix platform; in other words, it is a remote application. The charter for this project is simple: measure the performance of the application as the user would experience it. In other words, measure its performance via the RDP tunnel and not directly on the Citrix machine.

The setup is basically as shown in this image (simplified, of course):

[Image: Simplified picture of the Citrix application setup]

Sikuli

For those not familiar with Sikuli, here’s what they say about themselves:

Sikuli Script automates anything you see on the screen. It uses image recognition to identify and control GUI components. It is useful when there is no easy access to a GUI’s internal or source code.

In other words, Sikuli is fully based on image recognition and pattern recognition, rather than following the industry-standard Object Model.

The good, the bad and the ugly

Stepping away from the Object Model has some advantages, especially in this application setup, but I will get to that when discussing the Borland setup.

The good

Considering this is a Proof of Concept, I have simply taken Sikuli out of the box, using the Sikuli IDE. The IDE works nicely, simply and intuitively. It was very easy to start the RDP application and log in without using any screenshots. The basic use of Sikuli is very simple and intuitive. Scripting in it is simple and logical, at least if you have a basic understanding of other scripting languages and/or programming.

Functionally stepping through the application was easy; just a few small screenshots were needed to load reports and to verify that a report had indeed loaded successfully. In other words, the ease of use is excellent!

The bad and the ugly

I am mashing the bad and the ugly into one big pile since they are closely connected.

The first thing I disliked a lot is that Sikuli depends 100% on Java 6; try running it on Java 7 and you have a problem (as in, it simply doesn’t work).

Another bad part of Sikuli is that, even if I wanted to, I cannot add object IDs. This means that if I want to verify the existence of something, it needs to be done with screen captures and recognition thereof. Which leads me to the ugly: screen captures are not the nicest way to identify objects. In fact they are ugly and not friendly to use, since objects with a similar look and feel can occur several times on one screen. This every now and again results in the wrong button being clicked; it may look the same to Sikuli, but functionally it is not the same.
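
The ambiguity can be reduced somewhat, for instance by demanding a closer match for a pattern or by restricting the search to a region of the screen; a sketch, where the image file name and coordinates are made up:

# Two ways to make Sikuli's image matching less ambiguous.
okButton = Pattern("ok_button.png").similar(0.95)   # demand a closer match

dialog = Region(600, 400, 400, 200)   # only search inside the dialog area
if dialog.exists(okButton):
    dialog.click(okButton)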

On top of that, I am now saving images in source control (Git), which I am not in favor of. Why would I want binary files in source control? I cannot diff them anyway.

SilkTest

I have known Borland as a company for a long time, yet in the past 10 years I have not really worked with any of their tools. A short summary of how they see themselves:

With Silk Test, there’s no need to understand coding so even non-technical people like your business analysts can build tests and get fully involved. This 13.5 release also breaks new ground by working with all the latest browsers, so a single script is all you need.

Well, there are some issues with that statement of course, because a single script is always doomed to fail in the most horrid ways imaginable, but still, SilkTest is a nice tool to work with.

The good

The reason for looking at SilkTest was that I would like to have a tool which is future-proof for the organisation; in other words, a tool which will support further test automation on the end-to-end chains within this large organisation. One really important qualifier for that is solid SAP support, so my Proof of Concept on SilkTest started off looking into SAP support. The way Silk handles SAP I can summarize with one word: good. Out of the box it managed to select the correct SAP instance from the system selection popup, log in without issues and, after a few attempts, execute a bunch of transactions. In other words, I was happily surprised! Most test automation applications I had on the longlist have serious issues in dealing with SAP.

The bad and the ugly

[Image: UISpy view of the application]

The not-so-nice side of SilkTest, in my opinion, is that the recorded code is somewhat ugly, if not really ugly, and not very friendly to read and, through that, probably also to maintain. This however is just a minor nuisance compared to the next issue.

Since the application under test is served through an RDP tunnel, I have no access to the object IDs. In other words, it is difficult to recognize objects in the application; in SilkTest it is not merely difficult, it is close to impossible. The only workable way I found is to record the tests based on screen coordinates and then manually add assertions all over the place. However, since SilkTest cannot see what it is trying to test, getting the assertions in is really hard. What do you put the assertion on? There is no object to verify.

In other words, this is a disqualifier for SilkTest in this context.