JMeter and WebDriver – 2 ways to combine them effectively

In my previous post I wrote about why it can be useful to combine load tests with functional automated tests (checks). In this post I will go a bit deeper and describe some ways to combine the two effectively and efficiently.

2 ways to combine JMeter and WebDriver effectively

The two ways to combine JMeter and WebDriver I will describe differ quite a bit. Depending on your situation you may want to choose one or the other for your purposes. Keep in mind that these are ways I have used the two together effectively; they are not necessarily the best way for you!

Existing WebDriver tests

Quite a lot of organisations already have a fairly solid base of test automation in place, in various states of success. A relatively easy way to get performance metrics for your application is to add a timer to your existing WebDriver automation scripts. For every (relevant and useful) action in the browser, you record a start and end time, giving you the actual wait time within the browser. This timer can be in an “always on” state within your automation suite. It will generate quite a lot of data, which is not a bad thing. When you keep track of these metrics over the course of a complete cycle, for example throughout a sprint, they will help give you and your team insight into possible performance regression.
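A minimal sketch of what such a timer can look like, assuming an existing Selenium WebDriver suite written with the selenium-webdriver Node bindings; the URL, locators and page names are purely hypothetical placeholders:

```typescript
import { Builder, By, until, WebDriver } from 'selenium-webdriver';

// Wraps a browser action and records how long the browser kept the user waiting.
async function timeAction(name: string, action: () => Promise<void>): Promise<number> {
  const start = Date.now();
  await action();
  const elapsed = Date.now() - start;
  console.log(`${name}: ${elapsed} ms`); // or append to a CSV for trend analysis over the sprint
  return elapsed;
}

async function loginScenario(): Promise<void> {
  const driver: WebDriver = await new Builder().forBrowser('chrome').build();
  try {
    await timeAction('open login page', async () => {
      await driver.get('https://example.com/login');                      // hypothetical URL
      await driver.wait(until.elementLocated(By.id('username')), 10000);
    });
    await timeAction('log in', async () => {
      await driver.findElement(By.id('username')).sendKeys('demo');
      await driver.findElement(By.id('password')).sendKeys('secret');
      await driver.findElement(By.css('button[type="submit"]')).click();
      await driver.wait(until.titleContains('Dashboard'), 10000);          // hypothetical landing page
    });
  } finally {
    await driver.quit();
  }
}

loginScenario();
```

Because the timer wraps existing actions, it can stay switched on in every run without changing what the functional checks themselves do.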

Performance regression over several releases – Image linked from devel.kostdoktorn.se

While this will give you valuable information, it does not yet give you hard data on how the application behaves when put under stress. To get that, you will have to create a separate load suite. The load suite needs to contain functional tests similar to the ones you run during regular (automated) regression; this way you are certain you are putting your business logic under stress, which is what may slow the application down significantly. If your test environment is representative enough, or if you have benchmarked it beforehand, you can add the JMeter script to your automated run at planned intervals, building up load on the server prior to the start of the full browser-automated test run. Alternatively, on a representative environment you can run the JMeter scripts to generate load and, once the target load is reached, start your regular automated functional tests.
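One way to wire this together is to start JMeter in non-GUI mode from the same script that later kicks off the browser tests. A sketch, assuming JMeter is on the PATH and that the test plan name, ramp-up period and functional-suite command are all placeholders:

```typescript
import { spawn } from 'child_process';

// Start the JMeter load profile in non-GUI mode (-n) with a hypothetical test plan.
const jmeter = spawn('jmeter', ['-n', '-t', 'load-profile.jmx', '-l', 'load-results.jtl']);
jmeter.stdout.on('data', (chunk) => process.stdout.write(chunk));

// Give the load generator a (hypothetical) ramp-up period to reach the target load,
// then run the regular functional WebDriver suite against the loaded system.
const RAMP_UP_MS = 5 * 60 * 1000;
setTimeout(() => {
  const suite = spawn('npm', ['run', 'functional-tests'], { stdio: 'inherit' }); // placeholder command
  suite.on('exit', (code) => {
    jmeter.kill();            // stop generating load once the functional run is done
    process.exit(code ?? 1);
  });
}, RAMP_UP_MS);
```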

Advantages

The main advantage is that you are actively encouraged to reuse your existing automated tests. The tests under load are easily compared to the benchmark you make on a daily basis without load.

Challenges

Unfortunately there is no easy way to get unified reporting other than to generate a set of CSV files and merge those.
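A rough sketch of that merge step, assuming both the WebDriver timers and JMeter write simple CSV files with a timestamp, label and elapsed-time column (the file names and column layout are assumptions):

```typescript
import { readFileSync, writeFileSync } from 'fs';

// Reads a CSV, drops its header, and tags every row with the tool that produced it.
function readRows(file: string, source: string): string[] {
  return readFileSync(file, 'utf8')
    .split('\n')
    .slice(1)                                    // skip the header line
    .filter((line) => line.trim().length > 0)
    .map((line) => `${source},${line}`);
}

const merged = [
  'source,timestamp,label,elapsed_ms',
  ...readRows('load-results.csv', 'jmeter'),       // hypothetical JMeter export
  ...readRows('browser-timings.csv', 'webdriver'), // hypothetical WebDriver timer output
];

writeFileSync('combined-report.csv', merged.join('\n'));
```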

Stand-alone performance measurement

As a consultant I also often end up in an organisation where there is no functional automation to reuse for a performance test. In that case I use a fully JMeter-based solution for executing the performance test: both load generation and page-load times (i.e. including browser rendering) are driven from JMeter. In order to run browser-based tests from JMeter I use the excellent work of the people over at JMeter-Plugins. They have created a host of useful plugins, but for this particular purpose I am very fond of their WebDriver Set.

The WebDriver plugins allow you to write JavaScript test scenarios in JMeter that drive an actual browser. Combined with the regular load tests from within JMeter, this makes for a strong performance test.
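As an impression of what such a scenario looks like: the WebDriver Sampler executes plain JavaScript inside JMeter, with the WDS object injected by the plugin (the page URL below is a placeholder):

```typescript
// The WebDriver Sampler injects the WDS object at runtime; it is declared here only for clarity.
declare const WDS: any;

WDS.sampleResult.sampleStart();                       // start timing this sample
WDS.browser.get('https://example.com/');              // hypothetical page under test
WDS.log.info('Loaded page with title: ' + WDS.browser.getTitle());
WDS.sampleResult.sampleEnd();                         // stop timing; the result is reported like any other sampler
```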

Advantages

With this solution you get a very easy and quick way to unified reporting from within one toolset, showing the performance of your application in terms of both server response times and browser response times.

Challenges

If you already have a test automation suite running, taking this road implies setting up a second test suite specifically tailored to performance testing. The lack of reuse of existing tests quite often makes this the more expensive way of performance testing.

The cost of test automation

Over the past few posts I have written a lot about test automation; however, there is one very important subject I have left out thus far: what is the actual cost of test automation? How do you calculate the cost of test automation, and how do you compare this cost to the overall cost of testing? In other words, how do you get to the return on investment people are looking for? The first thing that needs to be covered if you want to know and understand the costs of test automation is a CLEAR understanding of the goal. Typically there are three possible goals:

Cost, quality or time to market?

  1. reduce the cost of testing
  2. reduce the time spent on testing
  3. improve the quality of the software

These three goals are directly related, and each of them has a direct impact on the other two, depending on which one you take as your main focus. In the next few paragraphs I will discuss the impact of picking one of the three as your goal.

Reduce the cost of testing

When putting the focus of your test automation on reducing the overall cost of testing, you set yourself up for a long road ahead. There generally is an initial investment needed before test automation becomes cost reducing. Put simply, you need to go through the initial implementation of a tool or framework, get to know this tool well, and ensure that the testers who need to work with it all know and understand how to use it as well. If you go for a big commercial tool there is of course the investment in purchasing the license, which often ranges between 5,000 and 10,000 euros per seat (floating or dedicated). Assuming more than one test engineer will need to use the tool concurrently, you will always need more than one license. This license cost needs to be earned back by an overall reduction in testing costs (since that is your goal). An average investment graph will look something like this when working with a commercial tool:

Investment of test automation with a commercial tool

The initial investment is high: this is the cost of the licenses. At that point there is nothing yet; no tests have been automated and you have already spent a small fortune. The costs will not drop immediately, since after the purchase the tool needs to be installed, people need to be trained, and so on. Meanwhile, still no tests have been automated, so the cost is high but the return is zero. Once the training has been finalised and the implementation of the automated tests has started, the cost line will slowly drop to a stable flatline. That flatline is the running cost of test automation, which includes the cost of maintaining the tool and the test scripts and, of course, the cost of running the tests and reviewing and interpreting the reports.

In a particular LinkedIn group, a post asked the ominous question “Who’s afraid of test automation?”. One of the more disturbing responses was as follows (quoted literally from the group, with the name changed):

Q: Who’s afraid of test automation?

A: Anyone with headcount. What would it look like if all of the testing is done by machine and there is only one person left in the organization?
Respectfully, Louise, PhD

The idea that test automation will take away testers’ work completely, and as such will drastically reduce the running costs of a test team, is a common misconception that would take too much time and effort to address right now. In short: test automation may reduce some of the workload, but it will not reduce your cost of testing by removing all of the testers, nor should that ever be your objective.

In my next blog post I will continue this story, with the focus on “Reduce the time spent on testing”.

Test automation in Agile and why it fails

It’s fairly safe to say that quite a lot of test automation efforts fail. It is also very safe to say that without test automation an agile team fails. So how can you make sure that while doing agile your test automation will not fail and thus your agile team will not fail? One of the ways to answer this question is by looking at why test automation often fails within agile environments.

When I talk about test automation in this post, I am referring to testing done to reduce the amount of manual regression work: so-called functional test automation or automated regression testing.

Moving target

Test automation quite often does not receive the attention it needs and deserves, and that includes agile teams. Quite a few test automation efforts start off too late and without the appropriate preparation, resulting in organic test automation driven by a moving target. The moving target is the system under test, which, in agile, is constantly in flux. Each sprint new features are added, bugs are fixed, and quite often it is not clear at the start of a project where it is going to end up. Writing automated scripts against such a flexible environment, scripts that will stand the test of time, is difficult. It is even more difficult when the base on which automation is done is weak.

Quite often test automation runs behind on what is being delivered within an iteration. This is somewhat logical, considering it is difficult to test, let alone automate tests for, what has not been built yet. Ideally, while manually testing the new feature(s) as a tester, you are already pondering how to automate them so that you do not have to do the tedious work more than once. Given enough time within your iteration, you might actually be able to automate some of the features, but from what I have seen thus far, not all features will generally be covered by test automation within one iteration. So if these tests are not all automated, what happens to them in the next iteration? Are they omitted? Are they picked up and automated retrospectively?

If you do not keep track of what has been automated, both in your current iteration and in previous iterations, how can you rely on your test automation? You cannot be sure what exactly it covers, so a bug can easily slip through the net of your automated tests.

This moving target you are testing needs to be tracked and tested solidly, repeatedly and in a trustworthy way!

Definition of Done

In the majority of the Definitions of Done (DoDs) I have seen, one of the items refers to something like “tests automated”. What I have thus far not seen, however, is the team attaching as much value to the automation code as they do to the production code. Quite a lot of DoDs refer to certain coding standards, yet these standards often seem not to apply to functional test automation. Isn’t your functional automation code also just code? If so, why should it not be covered in code reviews, be written according to some useful guidelines and standards, and hopefully use a framework to make the code sustainable?

Test automation is just writing code

I have seen several automation efforts within agile teams where test automation was done without proper thought having been put into it. A tool was chosen based on little more than members of the team having heard of it or having had good experiences with it. No base or framework was chosen to keep the code clean. Since you are writing code, you should follow the same rules as the rest of the software developers. Don’t assume your code, just because it is merely tests, should not be hooked up to some form of framework. If you want your tests to survive a few iterations, considering reuse of your code would be logical.

By the way, coding standards do not need to be too complicated. In 2009 “Agile in a flash” came up with a coding standard that could work for all languages and for most environments:

Coding Standards - agileinaflash.blogspot.com

All of the above-mentioned points are “logical” when writing an application that is supposed to go into production. However, when looking at a lot of (agile) projects, these logical “best practices” seem to be totally forgotten when it comes to test automation.

Succeed in test automation

So, how do you succeed in your test automation? How do you make it work? The answer seems clear to me: test automation is not merely like writing code, it is writing code. Since it is the same, treat it the same way!

Do your code reviews, follow some form of standard, use a (simple) framework to make life easier when writing tests, and create reusable modules in your automation code. In other words, treat your functional test automation with the same respect as your production-grade code. Who knows, you might want to run your tests against your production environment some day! When setting up your initial test automation environment and framework, don’t be shy: ask the developers in your team for tips, tricks and suggestions. They quite likely have gone through those setup steps more often than you have, so use their knowledge. Asking them for their insights and ideas not only helps you, it also helps them feel more responsible for adding their five pennies’ worth on the test automation side. They will get a clearer idea of what you intend to achieve, so they might also be more willing to help keep their code testable; they might even enjoy helping you write the test scripts!
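As an illustration of what such a reusable module can look like, here is a minimal page-object-style sketch, assuming a Selenium WebDriver suite in the selenium-webdriver Node bindings; the URL and locators are purely hypothetical:

```typescript
import { By, until, WebDriver } from 'selenium-webdriver';

// A reusable page object: every test that needs to log in goes through this module,
// so a change to the login page means updating one file instead of every test script.
export class LoginPage {
  constructor(private readonly driver: WebDriver) {}

  async open(): Promise<void> {
    await this.driver.get('https://example.com/login');                       // hypothetical URL
    await this.driver.wait(until.elementLocated(By.id('username')), 10000);
  }

  async loginAs(user: string, password: string): Promise<void> {
    await this.driver.findElement(By.id('username')).sendKeys(user);
    await this.driver.findElement(By.id('password')).sendKeys(password);
    await this.driver.findElement(By.css('button[type="submit"]')).click();
  }
}
```

Any test script can now reuse this module instead of repeating the same locators and steps, which is exactly the kind of reuse that keeps automation code maintainable across iterations.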

Resources

Some informational resources where you can find ideas on how to set up a test automation framework: