Difference between performance testing and functional automated testing

For the last few months I have been working on performance testing quite a lot, and when discussing it with colleagues I started to notice that it is easily confused with test automation. Based on discussions I have had with customers and sales people, I kept running into the question: "What is the exact difference between the two? Both are a form of automated testing in the end."

Performance testing == automated testing… ?

Both performance testing and automated testing are indeed some form of executing simple checks with a tool. The most obvious difference is the objective of running the test and analysing the outcomes. If they are indeed so similar, does that mean you can use your automated tests to also run performance tests, and vice versa?

What is the difference?

I believe the answer is both easy and challenging to explain. The main difference lies in the verifications and assertions done in the two test types. In functional test automation (let's at least call it that for now), the verifications and assertions are all oriented towards validating that the full functionality, as described in the specification, behaves as expected. In performance testing, on the other hand, these verifications and assertions are more or less limited to validating that all data, and especially the expected data, is loaded.

A lot of the performance tests I have executed over the past year or so have not used the graphical user interface. Instead the tests use the communication underneath the GUI, such as XML, JSON or whatever else passes between server and client. In these performance tests the functionality of the application under test is still exercised, so a functional walkthrough does get executed; my assertions, however, do not necessarily validate that, and definitely not at a level that would be acceptable for normal functional test automation. In other words, most performance tests cannot (easily or blindly) be reused as functional test automation.
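To make that difference concrete, here is a minimal sketch of my own (not taken from an actual project, and assuming the Python requests library); the URL, field names and expected values are purely hypothetical, but the contrast in what each check asserts is the point:

import time
import requests

BASE_URL = "https://example.test/api"   # hypothetical endpoint, for illustration only

def performance_check():
    # Typical performance-test assertion: the call succeeds, the expected data is loaded, fast enough
    start = time.time()
    response = requests.get(BASE_URL + "/orders?customer=42", timeout=30)
    duration = time.time() - start
    assert response.status_code == 200          # the server answered
    assert "orders" in response.json()          # the expected data came back
    assert duration < 2.0                       # and within the response-time budget

def functional_check():
    # Typical functional-automation assertion: the content itself matches the specification
    response = requests.get(BASE_URL + "/orders?customer=42", timeout=30)
    order = response.json()["orders"][0]
    assert order["status"] == "shipped"         # business rule from the spec
    assert order["currency"] == "EUR"           # field-level validation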

Now you might think: "So can we put functional test automation to work as a performance test? If the other way around cannot easily be done, maybe it will work this way."

In my experience the answer to this is similar to the answer for using performance tests as functional test automation. It can be done, but it will not really give you the leverage in performance testing you quite likely would like to have. Running functional test automation generally requires the application to run. If the application is a web application you might get away with running the browser headless (e.g. just the rendering engine, not the full GUI version of the browser) to avoid needing a load of virtual machines to generate even a little bit of load. When the SUT is a client/server application, however, the functional test automation generally requires the actual client to run, making any kind of load really expensive.

How can we utilize the functional test automation for performance testing?

performance and test automation combined

One of the wonderful possibilities is combining functional testing, performance testing and load testing. By adjusting the functional test automation to not only record pass/fail but also record render times of screens and objects, the functional test automation suite turns into a performance monitor. You start your load generator to test the server response times under load and, once the target load is reached, you start the functional test automation suite to walk through a solid test set and measure the actual times it takes, on a warm or hot system, to run everything through a fully rendered environment. This gives wonderful insight into what end users may experience during heavy load on the server. A rough sketch of such a measuring wrapper follows below.
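As a hedged illustration of the idea (the function names and the logging format are my own, not from any specific tool), a functional test step can be wrapped so that every step reports its duration next to its pass/fail verdict:

import time

def measured_step(name, action, verification):
    # Run a functional step, verify the result and record how long it took to appear
    start = time.time()
    action()                      # e.g. a function that clicks a menu item in the GUI
    passed = verification()       # e.g. a function that waits for the target screen to render
    duration = time.time() - start
    print("%s;%s;%.2fs" % (name, "PASS" if passed else "FAIL", duration))
    return passed

Run the same suite once on an idle system and once while the load generator holds the target load, and the difference between the two sets of durations is exactly the end-user impact you are after.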

Selecting performance test tooling – Part 4

Decision making time

Decision making process

Considering that I am not too fond of Sikuli, and that SilkTest is disqualified because it cannot deal with the remote application, the decision was tough and yet simple. I have an immediate need that must be fulfilled and, besides that, a customer wish to look ahead and assume we will be working on test automation in the (near) future for regression testing and end-to-end testing.

The choice was made not to go for the very affordable commercial tool at this point in time, but rather to go down the open source road. Sikuli it is.

Experiences with Sikuli Script

As stated above, Sikuli was not my preferred tool since it depends heavily on screen captures. However, once I was finally working with it I started to get a liking for it, and it has grown on me by now. Scripting in it can be both difficult and extremely easy.

I decided to approach the functional measuring with Sikuli as a pure test automation project, albeit a bit less reusable since it depends on screenshots. Starting where I generally start, starting the application and logging in to the system, was simple enough, although still not exactly intuitive. The startup code looks something like this:

from time import time

# Command that opens the RDP session serving the application under test
cmd = 'mstsc.exe "C:\\Program Files\\RemotePackages\\AppAccNew.rdp"'

def startApp():
    # Record when we kicked off the session so later steps can measure startup time
    startTime = time()
    Log.elog('Starting Acceptatie RDP Session')  # Log is our own logging helper
    App.open(cmd)                                # Sikuli's App class launches the command
    return startTime

On top of a separate, reusable login (and logoff and shutdown) routine, I also built up a nice set of helpful methods: for measuring the time between an action and its result, for verifying that the expected area is indeed selected, and quite a few others. These look a bit more odd in my eyes due to the screen captures inline in the code, as you can see here.
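To give an impression of what such a helper looks like, here is a minimal sketch of my own (not the actual project code); the function name is mine and the image arguments stand in for the inline screen captures:

from time import time

def timedClick(buttonImage, resultImage, timeout=30):
    # Click the captured button and measure how long the expected result takes to appear
    click(buttonImage)
    start = time()
    if exists(resultImage, timeout):   # Sikuli polls the screen for the captured pattern
        return time() - start          # duration in seconds
    return None                        # the expected area never showed up within the timeout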

The moment the basic functions were there, e.g. click on something and expect some result with a timer in between, the rest was fairly straightforward. We now have a bunch of functional tests which, instead of doing a functional verification, focus on the duration of the calls; for the rest it is not very far from actual functional automation through Sikuli.

Conclusion

All in all it took some getting used to having script combined with screenshots, but now that it is fully up and running, the scripting is fast and easy to do. I am quite impressed with what Sikuli can do.

Selecting performance test tooling – Part 3

A birthday post… and no, the story has nothing to do with my birthday, it’s just that today is my birthday.

This post became a bit longer than I initially intended.

Some challenges in the PoC…

Following up on my previous posts (here and here) on this topic: while executing the Proof of Concept I ran into some interesting challenges.

For starters, I am convinced I started the PoC the wrong way around; I began by implementing some things in Sikuli rather than in SilkTest. SilkTest is, to me, less intuitive than Sikuli, since it tries to encapsulate a lot of different automation approaches in one tool, whereas Sikuli focuses on one methodology only, so I should have started with SilkTest. However, I didn't, and there is nothing I can do about that anymore now.

Secondly, the application we are about to put to the test is an application served from a Citrix platform, in other words, it is a remote application. The charter for this project is simple: Measure the performance of the application as the user would experience it. In other words, measure its performance via the RDP tunnel and not directly on the Citrix machine.

The setup is basically as shown in this image (simplified of course)

Simplified picture of the Citrix application setup

Sikuli

For those not familiar with Sikuli, here's what they say about themselves:

Sikuli Script automates anything you see on the screen. It uses image recognition to identify and control GUI components. It is useful when there is no easy access to a GUI’s internal or source code.

In other words, Sikuli is fully based on image recognition and pattern recognition rather than following the industry standard Object Model.

The good, the bad and the ugly.

Stepping away from the Object Model has some advantages, especially in this application setup, but I will get to that when discussing the Borland setup.

The good

Considering this is a Proof of Concept, I have simply taken Sikuli out of the box, using the Sikuli IDE. The IDE works nicely and is simple and intuitive. It was very easy to start the RDP application and log in without using any screenshots. The basic use of Sikuli is very simple and intuitive, and scripting in it is simple and logical, at least if you have a basic understanding of other scripting languages and/or programming.

Functionally stepping through the application was easy; just a few small screenshots were needed to load reports and verify that the report is indeed loaded successfully. In other words, the ease of use is excellent!

The bad and the ugly

I am mashing the bad and the ugly into one big pile since they are closely connected.

The first thing I disliked a lot is that Sikuli is 100% dependent on Java 6; try running it on Java 7 and you have a problem (as in, it simply does not work).

Another bad part of Sikuli is that even if I wanted to, I cannot add object IDs. This means that if I want to verify the existence of something, it needs to be done with screen captures and recognition thereof. Which leads me to the ugly: screen captures are not the nicest way to identify objects. In fact they are ugly and not friendly to use, since objects with a similar look and feel can occur several times on one screen. Every now and again this results in the wrong button being clicked; it may look the same to Sikuli, but functionally it is not the same.

On top of that, I am now saving images in source control (Git), which I am not in favor of. Why would I want binary files in source control? I cannot do a diff on them anyway.

SilkTest

I have known Borland as a company for a long time, yet in the past 10 years I have not really worked with any of their tools. A short summary of how they see themselves:

With Silk Test, there’s no need to understand coding so even non-technical people like your business analysts can build tests and get fully involved. This 13.5 release also breaks new ground by working with all the latest browsers, so a single script is all you need.

Well, there are some issues with that statement of course, because a single script is always doomed to fail in the most horrid ways imaginable, but still, SilkTest is a nice tool to work with.

The good

The reason for looking at SilkTest was that I would like to have a tool now that is future-proof for the organisation. In other words, will this tool support further test automation on the end-to-end chains within this large organisation? One really important qualifier for that is solid SAP support, so my Proof of Concept on SilkTest started off looking into SAP support. The way Silk handles SAP I can summarize with one word: good. Out of the box it managed to select the correct SAP instance from the system selection popup, log in without issues and, after a few attempts, execute a bunch of transactions. In other words, I was happily surprised! Most test automation applications I had on the longlist have serious issues in dealing with SAP.

The bad and the ugly

The not so nice side of SilkTest, in my opinion, is that the recorded code is somewhat ugly, if not really ugly, and not very friendly to read and, through that, probably also to maintain. This, however, is just a minor nuisance compared to the next issue.

Since the application under test is being served through an RDP tunnel, I have no access to the object IDs. In other words, it is difficult to recognize objects in the application. In SilkTest it is not merely difficult, it is close to impossible. The only workable way I found is to record the tests based on screen coordinates and then manually add assertions all over the place. However, since SilkTest does not see what it is trying to test, getting the assertions in is really hard. What do you put the assertion on? There is no object to verify.

In other words, this is a disqualifier for SilkTest in this context.

Selecting performance test tooling – Part 2

In my first installment I wrote about how I got the requirements together and, based on those, wrote a plan for what needs to be done. In my second installment I wrote about the first considerations of what I need the tooling to be able to do. In this part I will discuss a few of the things I have done to come to the shortlist, and what I will do as a Proof of Concept for the tools.

Load generator

First of all, let's get the easy part out of the way. We will need something to generate a (functional) load on the servers. That part I consider relatively easy; no big tools are needed for this since we have an extremely powerful open source tool at our fingertips: Apache JMeter. The load will have to be generated based on both HTTP traffic and client/server traffic, neither of which should pose a problem for JMeter. The most difficult part of load generation is getting the numbers out of the system, e.g. figuring out what the average and peak load on the system is. For this we have thrown some lines out to the application managers.
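To give an idea of how such numbers can be derived once the raw data arrives, here is a minimal sketch of my own (the log file name and the one-timestamp-per-line format are assumptions, not something the application managers actually deliver) that turns request timestamps into an average and a peak requests-per-minute figure:

from collections import Counter
from datetime import datetime

def load_profile(logfile="access.log"):
    # Count requests per minute from one timestamp per line, e.g. "2013-06-18 14:05:31"
    per_minute = Counter()
    with open(logfile) as f:
        for line in f:
            stamp = datetime.strptime(line.strip()[:16], "%Y-%m-%d %H:%M")
            per_minute[stamp] += 1
    counts = list(per_minute.values())
    return sum(counts) / float(len(counts)), max(counts)   # (average, peak) requests per minute

avg, peak = load_profile()
print("average: %.1f req/min, peak: %d req/min" % (avg, peak))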

Long list

The longlist I started out with was not just any list, it was a set of several lists. Out of this initial set I picked a bunch to actually play around with a bit more. Some gave me fun new insights, some disappointed me from the beginning, just by reading the sites or white papers.

The list of tools I initially looked at somewhat seriously was the following:

Telerik, Froglogic, SmartBear, Borland, Original Software, AutoIt, ThoughtWorks and Neotys.

Shortlist

I installed quite a few of the tools, just to see how they work and integrate with developer tools like Eclipse and Microsoft Visual Studio Express. The majority of the more expensive tools barely integrate at all, since I would need a full version of Visual Studio rather than the Express edition. That is a full disqualifier for me in this phase.

Another strong disqualifier is a tool simply refusing to run in a Windows XP Professional environment, as Microsoft Visual Studio Test Professional does. Within this company the majority of machines are still running Windows XP or XP Pro, so the tools need to work perfectly in that environment. Interestingly, the only tool that flat-out refuses to be installed on it is Microsoft's own tool :).

After having considered the needs for the tool in the short term and possibly the longer run, two tools jumped out big time: Borland SilkTest and Sikuli.

These were (some of) the criteria that disqualified the other tools I looked at:

  • availability of a downloadable and fully usable demo version; some tools have no demo version available, or the demo is locked down
  • support of SAP for possible future use; the organisation is looking at a long road of SAP upgrades and patches ahead, so automated test support would be a welcome helping hand
  • possibility to use the tool for more than just performance or load testing, for example for pure functional test automation
  • organisational fit moving forward, e.g.
    • will the less technical people within the organisation be capable of using this tool for future runs of the tests built for this particular project?
    • will this tool be capable of supporting upcoming projects in both functional and non-functional tests?
    • is the learning curve for the internal users not too steep (or: how much programming is actually needed)?
  • price: does the price fit within the project budget, and does it make sense in relation to the project and the capabilities of the tool?

PoC

With this very short list of two tools a Proof of Concept will be made to see how the applications deal with several situations I will be running into during the performance tests.

One of the main parts to test is whether or not the tool is accurate enough in measuring and reading the state of the application under test. The application under test is twofold: a web application and a remote desktop application.

The web application, as stated in the previous post, will not really be the difficult one to test. The remote desktop application, however, is more challenging. The application runs on a Citrix server and thus its object IDs are not visible to the test automation tooling. The second outcome of the PoC should therefore show how well the tooling deals with the lack of object IDs, and thus with navigating the application based on other pointers. For Sikuli the challenge will be different resolutions; for SilkTest I will be focusing on finding a way other than navigating by screen coordinates.

Automating SAP to create load from an RFC port

In my current assignment I am tasked with coordinating the testing of the integration of several retail systems, basically making them work together logically and effectively. Part of the work is oriented towards load and performance testing of these integrated systems.

SAP Retail systems need to communicate with Locus WMS. Since the version of SAP currently running at the customer cannot deal with anything but IDocs, a message broker has been set up between SAP and Locus to translate the IDocs into XML and vice versa. The IDocs are served to the message broker via SAP's default RFC port; the broker pulls the documents out of SAP, translates them and sends them off to Locus to be picked up and processed. This is a simplification of how it truly works, since it is only meant to help set the scene.

Generating IDoc load

In order to build up load from SAP in a structured, guided way there were a few ideas of what could be done. My initial hope was to push IDocs from a load generator to the message broker. This would be the easiest way to control the flow of data towards the broker and thus the easiest way to make sure we are fully in control of how busy the broker is. Alas, when talking to the guys behind the broker interfaces it turned out that this method would not work for the setup used. The only way the broker would actually do something with the IDocs was if it could pull them from the SAP RFC port; pushing to the broker would not work, since the RFC receiving end of the broker is not listening, it is pulling.

Alternatively, sending data off into the message queue would fill up the MQ, but not help with getting the messages pushed through the broker, again due to the specific setup of the Enterprise Service Bus which contains the broker interfaces.

So alternatives needed to be found. One obvious alternative is to set up a transaction in SAP which generates a boatload of IDocs and sends them to the RFC port in one big bulk. This would generate a spike, such as shown in this image, rather than a guided load. In other words, this is not what we want for this test either. It might be a useful step during a spike test, but the first tests to be completed are the normal, expected-load tests.

The search for alternatives continued. At my customer, not a lot of automation tools were available, especially not for SAP systems. One tool, however, was purchased a while ago and apparently is actively used: Winshuttle.

Winshuttle seems to be able to generate the required load based on Excel input. The main issue with Winshuttle, however, was the lack of available licenses. There are two licenses available and both are single-use licenses. This meant I would have to find a way to hijack one of the PCs it was installed on, script my way through it and run the tests in a very timeboxed manner. In other words, not really a solution to the problem.

I then decided to look at this from a whole different point of view: what can I use to make SAP execute a bunch of transactions, that is freely available and flexible enough to also monitor what is happening on several sides of the message broker? The answer that came to me was not quite what I had expected: AutoIt.

Starting SAP from AutoIt was simple; running through the application and manipulating SAP, however, was a bit less intuitive.
In the screenshot two SAP screens are put side by side: the left-hand side is what the user interface in SAP looks like to the end user, the right-hand side is how AutoIt sees the screen, e.g. a big blob of nothingness.

To be a bit more specific, here’s what AutoIt can tell us about the SAP toolbar:

In other words, AutoIt sees the entire toolbar as one object, with one exception: the edit box for transactions. This box has a very easy and intuitive name, Class: Edit, Instance: 1, making it easy to set the focus on this box and thus start the transaction used to upload files.

Since the main screen of SAP is a blind box to AutoIt, we had to resort to a very sloppy way of working, using the TAB key to navigate through the screen, resulting in code roughly looking like this:

Send("ZWBESTUPL{ENTER}")
 If WinWaitActive("Bestellingen (winkel) aanmaken vanuit CSV-bestand") Then
    _log('Successfully selected the Bestellingen aanmaken vanuit CSV-bestand transactie')
    Send("{TAB 3}")
    Send("{SPACE}")
    Send("+{TAB 3}")
 Else
    _log('Something went wrong. Could not get to the Bestellingen aanmaken vanuit CSV transactie')
    Exit
 EndIf

The resulting load ramp-up was a linear ramp-up of IDocs being generated and sent to the SAP RFC port, where they were picked up by the message broker and subsequently transformed and sent on to the Locus system, where the load turned out to be quite on the high side.

All in all this was a fun exercise in automating SAP to do something it is absolutely not meant to do, with a tool neither built nor designed to do what it did. In other words, it was wonderful to be able to abuse a bunch of tools and achieve a very clear and convincing result!

Test automation on SAP, is it really that much different?

This year I got to know SAP fairly intimately, looking at it and into it from a test automation perspective, taking stock of the possibilities and opportunities of automated testing of a (huge) SAP implementation. During this time I ran into a fair number of SAP-related people, ranging from SAP consultants and sales people to ABAP developers, HP sales people and SAP preferred suppliers. They all make it seem as though SAP development and testing is a different world, nothing to do with the "normal" software development world. In my view this is wrong: SAP is just software. Yes, it has a bunch of particularities which you do not get in many other packages, but in terms of the actual functionality it is fairly comparable to Siebel and Oracle (no, I am NOT saying it is the same, I am merely saying it is comparable). With neither Oracle nor Siebel does this almost religious separatism exist, yet they too are bound by the laws of business process models, transaction codes and what not. So how come SAP is seen as so special and the others are not?

Is SAP special?

When you start talking about test automation and SAP, the first things that pop up are some SAP proprietary names such as CATT, eCATT and SAP TAO. Fortunately SAP themselves recommend against the use of either CATT or eCATT, so let's dismiss these right here and now; they are tools that once were somewhat helpful but should now be considered redundant for most SAP implementations. SAP TAO, however, is of a different breed. SAP TAO is pushed by SAP as the solution to use when trying to automate your testing. One minor issue with SAP TAO is that it does not really automate anything on its own; you invariably need HP Quality Center (HPQC) and QuickTest Professional (QTP) with it. HP tooling has some tailor-made solutions to integrate well with SAP TAO and, more specifically, with SAP Solution Manager. The setup as proposed in this picture is the ideal picture of how SAP would like to envision and implement an SAP testing solution. However, not all organisations have Solution Manager up and running for anything other than transports and low-level reporting, nor do all organisations have the budget for the HP tool set.

To work with SAP TAO effectively and efficiently, the Business Blueprint, the description of all business processes as used by the organisation with the SAP systems, should reside in SAP Solution Manager. This blueprint should be maintained carefully and always be up to date. When changes to the system are made, either by updates to the system or by customizations in ABAP, these changes should be visible in Solution Manager, ensuring the Solution Manager Business Process Change Analyzer can identify which processes have changed and, based on this impact analysis, propose tests within HP Quality Center to be executed. With SAP TAO the testers can "automate" the tests, which effectively means recording the steps. SAP TAO then adds some secret sauce by cutting longer scripts up into maintainable and reusable chunks. These scripts are then sent from SAP TAO into HPQC, where they can be associated with functional test descriptions. When a tester now wants to run one of the automated tests, or for that matter the entire automated suite, HPQC is used again to trigger the scripts, which get executed by QTP. In other words, the actual test driver is QTP, not SAP TAO.
When starting up a SAP GUI instance and analyzing it with something like UISpy, or some other tool that can show the objects on a screen, the fields and buttons are barely visible and not really open to test automation. Yet it is possible. If SAP is configured to enable scripting, the UI objects become accessible and thus the GUI is scriptable with any tool of your choice. The moment this little flag has been set, a whole new world opens up in the GUI: it is all of a sudden open, the fields, screens and buttons all have an ID and can be hooked into by a driver of your choice. Effectively what the enable-scripting setting does is ensure that none of the huge, expensive tools mentioned above are needed; it is possible to run through the application with any driver you want. The main thing needed in order to properly and solidly automate testing in SAP now is a well-grounded knowledge of the business processes the implementation is supporting (or driving). This is no different from what is needed when automating SAP with SAP TAO.

The benefits of having the option to choose your own drivers, your own programming language and your own reporting framework are huge. If SAP is merely in the organisation to support the business processes, and software developers within the organisation are writing their own code in Erlang, C++, C#, Java, Ruby, Python or whatever else you can imagine, the test suite for SAP can be in that same language. Having the automated test suite in a well-supported language, rather than just in QTP's own VBScript, ensures a larger possible support base for the automated tests. It enables easy integration of home-built software with the SAP systems, since all tests can be built in one language and in an end-to-end setup, again supported by the organisation's own development group.

The SAP TAO and HPQC setup does have some benefits, of course. First of all, there is huge corporate support for both HP and SAP software products. But more importantly, there are some technical benefits to using SAP TAO, if the environment is set up properly. As mentioned above, there is this tool called the Business Process Change Analyzer, or BPCA, which can help extract transaction-based changes from a transport and help the tester decide, based on these changes, which test scenarios need to be run to effectively cover the business processes (or mainly the transactions associated both directly and indirectly with the transport). Next to that there is the benefit of using HPQC. I can hardly believe that I am saying this, since I am personally not a big fan of the HPQC suite, but the reporting possibilities and capabilities within HPQC are close to limitless. This means that it is possible to generate excellent reports, automatically, for both management-level execs and for the business analysts and ABAP specialists, on each test run, without having to think about it.

Having the full benefits of this setup, however, comes at a cost, a fairly sizable cost. The licensing for HPQC, QTP and SAP TAO is not to be ignored, for starters. A hidden cost lies within the organisation: as stated, for the BPCA to do anything, Solution Manager needs to be utilized fully, the Blueprint needs to be ready and up to date and, moreover, it needs to be well maintained to ensure it remains the "Single Source of Truth" (as SAP coined it).

So, to answer the initial question: is SAP special? It is, as a business process tool, definitely special, strong and extremely versatile.
When looking at SAP as a system that requires testing and test automation, however, I am not convinced it is special. It is just software, which is open to test automation with a range of drivers, and one of those drivers might be QTP. If you do indeed choose to go for QTP with an SAP system, have a look into SAP TAO. However, do not feel that it is the only option out there which can effectively and efficiently be used for SAP test automation. All the others claiming they can probably can, just as well as SAP TAO with QTP. In the end it is all about how you use and abuse a tool, and whether you use QTP, White or Panaya, they all merely function as a driver; it is the code the testers build which matters!
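As a hedged illustration of the "enable scripting" route described above, here is a minimal sketch (not production code): it assumes an SAP GUI session is already open, scripting has been enabled on the server, and the pywin32 package is available; the transaction code is just an example.

import win32com.client

# Attach to the running SAP GUI and grab the first session
sap_gui = win32com.client.GetObject("SAPGUI")
application = sap_gui.GetScriptingEngine
connection = application.Children(0)
session = connection.Children(0)

# With scripting enabled every control has an ID: start a transaction via the command field
session.findById("wnd[0]/tbar[0]/okcd").text = "/nVA03"   # example transaction code
session.findById("wnd[0]").sendVKey(0)                    # virtual key 0 = Enter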

The cost of test automation

Over the past few posts I have written a lot about test automation; however, there is one very important subject I have left out thus far. What is the actual cost of test automation? How do you calculate the cost of test automation, and how do you compare this cost to the overall cost of testing? In other words, how do you get to the return on investment people are looking for? The first thing that needs to be covered, if you want to know and understand the costs of test automation, is a CLEAR understanding of the goal. Typically there are three possible goals (cost, quality or time to market):

  1. reduce the cost of testing
  2. reduce the time spent on testing
  3. improve the quality of the software

These three have a direct relation with each other, and thus each of them also has a direct impact on the other two depending on which one you take as your main focus. In the next few paragraphs I will discuss the impact of picking one of the three as a goal.

Reduce the cost of testing

When putting the focus of your test automation on reducing the overall cost of testing, you set yourself up for a long road ahead. There generally is an initial investment needed before test automation becomes cost reducing. Put simply, you need to go through the initial implementation of a tool or framework, get to know this tool well and ensure that the testers who need to work with the tool all know and understand how to work with it as well. If you go for a big commercial tool there is of course the investment in purchasing the license, which often ranges between 5,000 and 10,000 euros per seat (floating or dedicated). Assuming more than one test engineer will need to use the tool concurrently, you will always need more than one license. This license cost needs to be earned back by an overall reduction in testing costs (since that is your goal). An average investment graph will look something like this when working with a commercial tool:

Investment of test automation with a commercial tool

The initial investment is high: this is the cost of the licenses. At that point there is nothing yet, no tests have been automated and you have already spent a small fortune. The costs will not drop immediately, since after the purchase the tool needs to be installed, people need to be trained, etc. All the while no tests have been automated, so the cost is high but the return is zero. Once the training process has been finalised and the implementation of the automated tests has started, the cost line will slowly drop to a stable flatline. The flatline is the running cost of test automation, which includes the cost of maintaining the tool and the test scripts and, of course, the cost of running tests and reviewing and interpreting the reports.
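To make the earn-back reasoning tangible, here is a minimal sketch with made-up numbers; the license price, setup cost, running cost and savings per release are purely illustrative assumptions, not figures from a real project:

# All figures are hypothetical, purely to illustrate the break-even reasoning
licenses = 2 * 7500.0               # two seats at 7,500 euros each
setup_cost = 20000.0                # installation, training and building the first scripts
running_cost_per_release = 2000.0   # script maintenance, running and reviewing results
manual_saving_per_release = 5000.0  # manual test effort no longer spent per release

investment = licenses + setup_cost
net_saving_per_release = manual_saving_per_release - running_cost_per_release
releases_to_break_even = investment / net_saving_per_release
print("Break-even after %.1f releases" % releases_to_break_even)   # ~11.7 releases here

The exact numbers will differ per organisation, but the shape of the calculation is what produces the curve described above: a high up-front cost that only flattens out once enough releases have benefited from the automated suite.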

In a LinkedIn group, under the ominous question "Who's afraid of test automation?", one of the more disturbing responses was the following (I am quoting the group literally, with only the name changed):

Q: Who’s afraid of test automation?

A: Anyone with headcount. What would it look like if all of the testing is done by machine and there is only one person left in the organization?
Respectfully, Louise, PhD

The idea that test automation will completely take away testers' work, and as such will drastically reduce the running costs of a test team, is a common misconception that would take too much time and effort to address right now. In short: test automation may reduce some of the workload, but it will not reduce your cost of testing by removing all of the testers, nor should that ever be your objective.

In my next blog post I will continue this story, focusing on "reducing the time spent on testing".