Selecting performance test tooling – Part 2

In my first installment I wrote about how I gathered the requirements and, based on those, wrote a plan for what needs to be done. In my second installment I covered the first considerations of what I need the tooling to be able to do. In this part I will discuss some of the steps that led to the shortlist and what I will do as a Proof of Concept for the tools.

Load generator

First of all, let’s get the easy part out of the way. We will need something to generate a (functional) load on the servers. That part I consider relatively easy; no big tools are needed for this, since we have an extremely powerful open-source tool at our fingertips: Apache JMeter. The load will have to be generated from both HTTP traffic and client/server traffic, neither of which should pose a problem for JMeter. The most difficult part of load generation is getting the numbers out of the system, e.g. figuring out what the average and peak load on the system is. For this we have thrown some lines out to the application managers.
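Once those numbers come back from the application managers (or straight from server access logs), turning them into an average and a peak load is straightforward. A minimal sketch, assuming the request timestamps have already been parsed out of whatever log format is available (the function name and log values here are illustrative, not from the real system):

```python
from collections import Counter
from datetime import datetime

def load_profile(timestamps):
    """Given a list of request datetimes, return (average, peak)
    requests per second over the observed time span."""
    # Bucket requests into whole seconds.
    per_second = Counter(ts.replace(microsecond=0) for ts in timestamps)
    if not per_second:
        return 0.0, 0
    span = (max(per_second) - min(per_second)).total_seconds() + 1
    avg = sum(per_second.values()) / span
    return avg, max(per_second.values())

# Hypothetical sample: 6 requests spread over 3 seconds.
stamps = ([datetime(2013, 4, 2, 12, 0, 0)] * 3
          + [datetime(2013, 4, 2, 12, 0, 1)]
          + [datetime(2013, 4, 2, 12, 0, 2)] * 2)
avg, peak = load_profile(stamps)   # avg 2.0 req/s, peak 3 req/s
```

Feeding those two numbers into a JMeter thread group then gives a load shape that at least resembles production.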

Long list

The longlist I started out with was not just any list, it was a set of several lists. Out of this initial set I picked a bunch to actually play around with a bit more. Some gave me fun new insights, some disappointed me from the beginning, just by reading the sites or white papers.

The list of tools I initially looked at somewhat seriously was the following:

  • Telerik
  • Froglogic
  • SmartBear
  • Borland
  • Original Software
  • AutoIt
  • ThoughtWorks
  • Neotys

Shortlist

I installed quite a few of the tools, just to see how they work and how they integrate with developer tools like Eclipse and Microsoft Visual Studio Express. The majority of the more expensive tools barely integrate at all, since they require a full version of Visual Studio rather than the Express edition. That is a full disqualifier for me in this phase.

Another strong disqualifier is a tool that simply refuses to run in a Windows XP Professional environment, such as Microsoft Visual Studio Test Professional. Within this company the majority of machines still run Windows XP or XP Pro, so the tools need to work perfectly in that environment. Interestingly, the only tool that flat-out refuses to be installed on it is Microsoft’s own tool :).

After considering the needs for the tool in the short term and the possibly longer run, two tools jumped out: Borland SilkTest and Sikuli.

These were (some of) the disqualifiers for the other tools I looked at:

  • availability of a downloadable and fully usable demo version; some tools have no demo version available, or the demo is locked down
  • support of SAP for possible future use; the organisation is looking at a long road of SAP upgrades and patches, so automated test support would be a welcome helping hand
  • possibility to use the tool for more than just performance or load testing, for example for pure functional test automation
  • organizational fit moving forward, e.g.
    • will the less technical people within the organisation be capable of using this tool for future runs of the tests built for this particular project?
    • will this tool be capable of supporting upcoming projects in both functional and non-functional tests?
    • is the learning curve for the internal users not too steep (or: how much programming is actually needed)?
  • price: does the price fit within the project budget, and does it make sense in relation to the project and the capabilities of the tool?

PoC

With this very short list of two tools, a Proof of Concept will be made to see how the tools deal with several situations I will run into during the performance tests.

One of the main things to test is whether or not the tool is accurate enough in measuring and reading the state of the application under test, which is twofold: a web application and a remote desktop application.

The web application, as stated in the previous post, will not really be the difficult one to test. The remote desktop application, however, is more challenging. The application runs on a Citrix server, so the object IDs are not visible to the test automation tooling. The second outcome of the PoC should therefore be to see how well the tooling deals with the lack of object IDs, and thus with navigating the application based on other pointers. For Sikuli the challenge will be different resolutions; for SilkTest I will focus on finding a way other than navigating by screen coordinates.
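To give an idea of what image-based navigation looks like, here is a rough Sikuli-style sketch. It only runs inside the Sikuli runtime (Jython), and the image file names and similarity values are illustrative, not from the actual PoC:

```python
# Sikuli script sketch: navigate the Citrix-published application by
# screenshots instead of object IDs.
login_button = Pattern("login_button.png").similar(0.8)

wait(login_button, 15)            # wait up to 15 s for the button to appear
click(login_button)
type("some_user" + Key.TAB + "some_password" + Key.ENTER)

# Confirm the application actually reached the next screen before measuring.
wait("dashboard_header.png", 30)
```

The similarity threshold is exactly where the resolution question bites: screenshots captured at one resolution may no longer match on another, so the PoC will have to establish how low the threshold can go before false matches appear.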

What’s in a name? The job title paradox

Test consultant

As a test consultant I get hired into lots of different environments and companies, which results in all kinds of interesting job titles. This got me thinking about what a job title actually means and what the differences between them are. Why should I be a QA engineer, test engineer, technical tester, test automation developer, load tester, developing QA engineer, or whatever else people can come up with? And why should I care what they call me, as long as I get to do my job and do it well?

What is the difference between them, why would you call yourself a technical tester or a test automation developer or a test automation engineer for that matter?

Test professional

Within our company we are currently expanding the group of “professionals who are very capable in test automation or load and performance testing”. This raises the question every now and again of what to put on a business card, or in an email signature.

Then today I read this (long) article on Harvard Business Review, which sounds completely logical; I too have often looked at the title on a resume before even reading what the job itself entailed. At one of my past customers a group of people had the title “test manager”, and instead of doing the traditional job of a test manager they were fully dedicated to executing product risk analyses and, based on those, advising other groups within the organisation. At some point during my assignment there, they started a search for a new “test manager” and found they had incredible trouble finding someone who matched the job, since all applicants applied to the title rather than the job description. Those who did apply to the job description seemed to lack the “test manager” type of role in their resume and thus were not interviewed. It took the company a while to recognize this gap and change the name of the role to something more clearly descriptive of what the people in this group did: “test risk managers”.

What the above story illustrates to me is that a job title should be descriptive of what you do, not what you want people to think you do.

Interlude

A former colleague of mine recently received a promotion to something like Senior Vice President of something or other. I am very happy for him that he has a new title, however the job he does is still exactly the same as the one he did 2 years ago: he manages a group of developers and testers. So why not just call yourself that?

Performance tester

In my current assignment I have had several different roles, one being Test Coordinator, which effectively meant I was indeed coordinating the different parties involved in the tests to be executed. However, when I switched to a different project within this organisation I started working fully on load and performance testing. The test manager raised the question of what my title should be, and I proposed to adjust it to state exactly what I am doing for them at the moment: performance tester.

Getting back to the opening of this post: it is never really clear to me what the difference effectively is between a tester, a test engineer and a QA engineer. In the end, in my experience at least, they all do the same thing: they test software. So why not just call yourself a software tester? This makes it abundantly clear to potential colleagues and employers what you do for a living; it is a clear description for recruiters to find a person by and not send some highly inappropriate job offer (which they will no doubt still do, because they generally don’t read the resumes of the people they contact). And most importantly, it clearly states what you are doing, and thus gives you a direct reality check on whether the job you should be doing according to your title is indeed the job you are doing.

My point

So my view on job titles seems clear: just make it state what you do and what your charter within the organisation is, and try to keep it clear, short and simple. This makes your life easier, as well as that of your colleagues!

Selecting performance test tooling – Part 1

How do you come to a logical and effective performance test tool set? Yes, I say tool set since quite often just one tool will not suffice.

As you could read in my previous post, I am currently heavily involved in performance testing for a large Dutch retail organization which is merging two large organisations into one. For the ERP system they both use I have written a concise performance test plan. Now the time has come to execute it.

In order to execute the performance tests on the ERP system, which has both a desktop (WPF) and a web-based client, I will need a solid tool to:

  1. generate load on the servers by emulating functional behaviours
  2. functionally walk through the actual desktop and web applications to measure the true application performance

Generating load is not the biggest challenge; quite a few tools are capable of doing that, especially since there is a website we can push the load through. The functional walk-through is going to be less simple to create, especially since I have to recreate the same scenarios twice: once for the web application and once for the desktop application.

To complicate things even more, I want the functional walk-through, and thus the real measurements, to be done by the same tool. This way I can make sure the measurements report on the same thing in the same way, and can thus be compared to one another to some extent.
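The idea of measuring both variants through one mechanism can be sketched in a few lines. The transaction names and the sleep calls below are placeholders for the real UI-driving code, not part of any actual tool:

```python
import time
from contextlib import contextmanager
from statistics import mean

measurements = {}

@contextmanager
def measured(transaction):
    """Time one functional transaction and store the result under a
    shared name, so web and desktop runs of the same scenario end up
    in one comparable data set."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        measurements.setdefault(transaction, []).append(elapsed)

# Hypothetical usage: the same scenario, driven against both clients.
with measured("open_order_screen/web"):
    time.sleep(0.01)    # stand-in for driving the web UI
with measured("open_order_screen/desktop"):
    time.sleep(0.01)    # stand-in for driving the desktop UI

for name, times in sorted(measurements.items()):
    print(name, round(mean(times), 3))
```

Because both variants go through the same timer, any systematic difference between the two numbers reflects the applications, not the measuring method.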

Long list

First off, in looking for the right tools, I started setting up a long list, or actually, I Googled for a long list…
This search resulted in several useful pages, which I checked out.

Just based on previous experience, tool knowledge and the like, I dismissed some of the tools and ended up with a nice set of possible candidates.
Next step: which of these tools actually fit the organisation and the application landscape? I have specific needs for this tool at the moment, but I also need to keep my customer’s future needs in mind. In other words, I need a tool that is a) good for me now and b) good enough for the foreseeable future to be worth the investment (in license and time, or just time; either way quite an investment).

What will the tool need to support?

  • WPF/Win32/WinForms
  • Java (Swing) UI’s
  • Oracle
  • SAP GUI
  • Web applications of all sizes and shapes
  • RFC communications
  • SOAP
  • Tibco

The search has begun.
In the next installment of this series I will tell more about what the shortlist has become, what I tested on the applications, and how I came to a decision on which tool seems the best fit for this environment.

Requirements for Load & Performance testing

In my current assignment I am now tasked with writing a performance test plan for an ERP system. In this case, by performance I mean the actual user experience, which needs to be measured: how long does it take for a screen to be fully rendered within a desktop application for a certain user type with specific authentication and authorization?

Test data

Considering the organization I am still working in at the moment, the test data will also be an interesting challenge. The requirements we have for the data are fairly simple, at first glance at least:

  • 80,000 active users within the ERP
  • 150,000 inactive users within the ERP

However, when we start looking at the specifics of how this data is to be built, it becomes a bit more complicated. These users are divided over two organisations: one is a relatively simple pyramid structure, while the other consists of a huge set (300+) of separate, smaller organisations with very flat organizational structures.
Generating this data is going to be fun! Especially since these users will need to be actual active users within the system, with an employment history, because the execution of the performance tests requires historical data to be available; not to forget that the users need to log in during the performance tests and actively generate load on the ERP system.

Performance requirements

Next up are the actual requirements we will be testing against. Some of the questions that popped into my head were:

  • How do you come up with proper L&P requirements?
  • What are bad requirements?
  • How do you get your requirements SMART?
  • How do you then measure these requirements?

So we had several sessions with the end users of the system to get a basic understanding of what they required of the application, and some really nice requirements came up, for example:

The application should finish a batch-job for at least 500 analyses within a time frame of 8 hours

Considering what SMART stands for, this requirement leaves some gaping holes. Sure, it is Timely. The other criteria, however, are not quite met yet.

This is merely one of many requirements we had to go through to make them SMART. The main challenge was making clear to the requirement owners what the difference is, from a testing and specifically a performance testing point of view, between the original requirement and the actual SMART version of that same requirement.

The example as stated above ended up as the following requirement:

The batch-job for executing predefined data analyses has to finish processing 500 separate analyses within a nightly run of 8 hours, after which the analysis results are successfully uploaded to the end-user dashboards.
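Expressed as a measurable check, the reworked requirement boils down to three numbers: analyses processed, results uploaded, and elapsed time (which also implies a throughput of at least 500 / 8 ≈ 62.5 analyses per hour). The function below is just an illustration of how the acceptance criterion could be automated, not part of any real test harness:

```python
def batch_requirement_met(analyses_done, uploads_ok, elapsed_hours,
                          required=500, window_hours=8):
    """The SMART version of the requirement: at least `required` analyses
    processed, all of their results uploaded, within the nightly window."""
    return (analyses_done >= required
            and uploads_ok >= analyses_done
            and elapsed_hours <= window_hours)

print(batch_requirement_met(512, 512, 7.5))  # True: all criteria met
print(batch_requirement_met(500, 480, 7.5))  # False: uploads incomplete
print(batch_requirement_met(500, 500, 8.4))  # False: window exceeded
```

The point of making the requirement SMART is exactly this: every clause of the sentence maps onto something the test run can measure and compare.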

Getting requirements clear for batch-jobs, however, is not the most difficult part. The main issue was getting the requirements clear for the user interactions, and separating the desktop client interactions from the web interface.

How do you explain in layman’s terms why a desktop application will, by design, respond faster than an average web application, and thus that you need different specifications for the two? How do you make clear, again in layman’s terms, that setting up the performance tests for a web application will not make the scripts reusable for the desktop application, despite the two having identical functionality, or even look and feel?

Those are some of the questions I have been struggling with over the last few days while writing the performance test plan, and with that, defining and refining the requirements. (Why is it, by the way, that I, as the performance tester, am the one defining the requirements I need to test against?)