In my first installment I wrote about how I gathered the requirements and, based on those, drew up a plan of what needs to be done. In my second installment I covered my first considerations of what I need the tooling to be able to do. In this part I will discuss a few of the things I have done to arrive at the shortlist and what I will do as a Proof of Concept for the tools.
Load generator
First of all, let’s get the easy part out of the way. We will need something to generate a (functional) load on the servers. That part I consider relatively easy; no big tools are needed, since we have an extremely powerful open source tool at our fingertips: Apache JMeter. The load will have to be generated from both HTTP traffic and client/server traffic, neither of which should pose a problem for JMeter. The most difficult part of load generation is getting the numbers out of the system, i.e. figuring out what the average and peak load on the system actually is. For that we have put out some feelers to the application managers.
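To give an idea of the mechanics, here is a minimal sketch of how the load generation could be kicked off in non-GUI mode from a small Python wrapper, assuming a test plan has already been built in the JMeter GUI; the plan file name and the thread-count property are hypothetical placeholders, not part of the actual setup.

```python
# Minimal sketch: start a JMeter test plan in non-GUI mode from Python.
# "webshop_load.jmx" and the "threads" property are hypothetical placeholders;
# the real plan would be recorded/built in the JMeter GUI first.
import subprocess

subprocess.call(
    [
        "jmeter",
        "-n",                      # non-GUI mode
        "-t", "webshop_load.jmx",  # the test plan to execute
        "-l", "results.jtl",       # raw sample results for later analysis
        "-Jthreads=50",            # user-defined property the plan can read
    ]
)
```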
Long list
The long list I started out with was not just one list; it was a set of several lists. Out of this initial set I picked a bunch of tools to actually play around with a bit more. Some gave me fun new insights; some disappointed me from the start, just from reading their sites or white papers.
The list of tools I initially looked at somewhat seriously was the following:
- Telerik TestStudio
- Microsoft Visual Studio Test Professional
- IBM Rational Functional Tester
- SmartBear TestComplete
- Borland SilkTest
- Original Software TestDrive
- Sikuli
- Thoughtworks Twist
- RobotFramework
- Apache JMeter
- AutoIt
- Froglogic Squish
- Neotys NeoLoad
Shortlist
Quite a few of the tools I installed just to see how they work and how they integrate with developer tools like Eclipse and Microsoft Visual Studio Express. The majority of the more expensive tools barely integrate at all, since they require a full version of Visual Studio rather than the Express edition. That is a full disqualifier for me in this phase.
Another strong disqualifier is a tool that simply refuses to run in a Windows XP Professional environment, as Microsoft Visual Studio Test Professional does. Within this company the majority of machines still run Windows XP or XP Professional, so the tools need to work flawlessly in that environment. Interestingly, the only tool that flat-out refuses to be installed on it is Microsoft's own tool :).
After considering the needs for the tool in the short term and the possibly longer run, two tools jumped out big time: Borland SilkTest and Sikuli.
These were (some of) the disqualifiers for the other tools I looked at:
- availability of a downloadable and fully usable demo version; some tools have no demo version available, or the demo is heavily locked down
- support of SAP for possible future use; the organisation is looking at a long road of SAP upgrades and patches ahead, so automated test support would be a welcome helping hand
- the possibility to use the tool for more than just performance or load testing, for example pure functional test automation
- organisational fit moving forward, e.g.
  - will the less technical people within the organisation be capable of using this tool for future runs of the tests built for this particular project?
  - will this tool be capable of supporting upcoming projects in both functional and non-functional tests?
  - is the learning curve for the internal users not too steep (or, how much programming is actually needed)?
- price; does it fit within the project budget, and does it make sense in relation to the project and the capabilities of the tool?
PoC
With this very short list of two tools, a Proof of Concept will be built to see how each tool deals with several situations I will be running into during the performance tests.
One of the main things to test is whether or not the tool is accurate enough in measuring and reading the state of the application under test, which is twofold: a web application and a remote desktop application.
The web application, as stated in the previous post, will not really be the difficult one to test. The remote desktop application is more challenging: it runs on a Citrix server, so the object IDs are not visible to the test automation tooling. The second outcome of the PoC should therefore be to see how well the tooling deals with the lack of object IDs and thus with navigating the application based on other pointers. For Sikuli the challenge will be differing screen resolutions; for SilkTest I will focus on finding a way other than navigating by screen coordinates.
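To give a first feel for that image-based approach, here is a minimal sketch of what such a Sikuli (Jython) script could look like; the .png file names and the region coordinates are hypothetical placeholders, and the real screenshots would come from the Citrix session itself.

```python
# Minimal sketch of image-based navigation in a Sikuli (Jython) script,
# i.e. without relying on object IDs. Image names and region coordinates
# are hypothetical placeholders.
setAutoWaitTimeout(10)                  # wait up to 10 s for images to appear

menu_area = Region(0, 0, 400, 300)      # restrict the search to a small region
menu_area.wait("orders_button.png")     # block until the button is rendered
menu_area.click("orders_button.png")    # click it, just like a user would

if exists("error_popup.png", 2):        # read application state from the screen
    getLastMatch().highlight(1)         # flash the match while debugging
```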
According to our discussion on LinkedIn on this subject – the good thing about Sikuli is that it’s a library, so if I were to approach the subject, I’d actually use it as such and wrap it in a language that provides more detailed statistics – and honestly I’d go with Java because of JMeter. Oh, and as a Sikuli practitioner, here’s probably the best advice I could give you: use the smallest screen sections possible to perform lookups. It can really change the overall performance in incredible ways…
Hi Adam,
The idea is indeed to wrap it somehow into something; it might be Java, it might also become Python. Depends a bit on how I feel in the end, I guess. The Sikuli integration with Python is nice and flawless for fairly obvious reasons; I haven’t played around with it in Java yet. What I understood from a former colleague is that it integrates very nicely with Eclipse though.
As for the tip, thanks, I will definitely keep that in mind! I already ran into one issue with a selection that was too big while playing around with Sikuli in an SAP environment. A single click on an arrow in the main screen opens that section, but selecting the appropriate arrow turned out to be a bit more annoying than I thought 🙂
I started out with the entire line including the title, but that doesn’t expand the section of course (why would it? It’s SAP, nothing should be logical there).
This is my first trial with Sikuli and so far I quite like it. I am just a bit concerned about the maintainability of the scripts, since I am also looking to see whether this is something we can keep using in this organisation going forward for regression automation of end-to-end chains.
Any tips, suggestions or experiences on that? Keeping in mind I would have to hand this stuff over to a non-technical organisation sooner or later?
For completeness' sake, here is the conversation prior to Adam's comment:
Adam: “Did I get it right, that you want to use JMeter as load generation tool and one of your shortlisted tools for actual usage simulation? If yes – please do post details about Sikuli/JMeter integration! I’m most curious, especially that I’ve been working with Sikuli lately, and I wouldn’t call it a speed daemon… At least not when used incorrectly.”
Me: “Hey Adam, indeed I am going to use JMeter to kick off building up the load and use either Silk or Sikuli for the actual measuring. Sikuli is indeed not a speed daemon, but I am looking for something that actually emulates the user behaviour on both the web application and the remote application. For that, Sikuli itself is good enough for the moment, I believe. My main concern is how to get clear stats and numbers out of Sikuli on a scale smaller than seconds, which is why I am now playing around with logging and that kind of stuff, to see if I can throw log messages the moment something is fully there. (btw, comments on the blog are nicer for posterity 🙂 )”
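To illustrate the logging idea mentioned in that last comment, here is a minimal sketch (inside a Sikuli/Jython script) of timing a single wait on a screen element and writing it out with millisecond resolution; the log file name and the image are hypothetical placeholders.

```python
# Minimal sketch of getting sub-second numbers out of a Sikuli (Jython) script:
# time how long it takes for a screen element to appear and log it in ms.
# The log file name and "order_saved.png" are hypothetical placeholders.
import logging
import time

logging.basicConfig(
    filename="sikuli_timings.log",
    format="%(asctime)s.%(msecs)03d %(message)s",
    datefmt="%H:%M:%S",
    level=logging.INFO,
)

start = time.time()
wait("order_saved.png", 30)               # Sikuli call: block until it shows up
elapsed_ms = (time.time() - start) * 1000
logging.info("order_saved.png visible after %.0f ms", elapsed_ms)
```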