In my current assignment I am tasked with writing a performance test plan for an ERP system. By performance, I mean the actual user experience, which needs to be measured: how long does it take for a screen to be fully rendered in the desktop application for a certain user type with specific authentication and authorization?
Considering the organization I am currently working in, the test data will also be an interesting challenge. The requirements we have for data are fairly simple, at first glance at least:
- 80,000 active users within the ERP
- 150,000 inactive users within the ERP
However, when we start looking at the specifics of how this data is to be built, it becomes more complicated. The users are divided over two organisations: one is a relatively simple pyramid structure, while the other consists of a huge set (300+) of separate, smaller organisations with very flat organizational structures.
Generating this data is going to be fun! These users need to be actual active users within the system, complete with an employment history, since the performance tests require historical data to be available. On top of that, the users need to log in during the performance tests and actively generate load on the ERP system.
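To make the data challenge a bit more concrete, here is a minimal sketch of how such a user population could be generated. The field names, the roughly even split between the pyramid organisation and the flat ones, and the depth values are all illustrative assumptions on my part, not the actual ERP schema:

```python
import random

def generate_users(active=80_000, inactive=150_000, flat_orgs=300, seed=42):
    """Generate a synthetic user population: `active` active users,
    `inactive` inactive ones, spread over one pyramid organisation
    and `flat_orgs` small flat organisations (all assumptions)."""
    rng = random.Random(seed)
    users = []
    for i in range(active + inactive):
        # Assumption: roughly half the population sits in the pyramid
        # organisation, the rest is spread over the small flat ones.
        if rng.random() < 0.5:
            org, depth = "pyramid", rng.randint(1, 6)  # hierarchy level
        else:
            org, depth = f"flat-{rng.randint(1, flat_orgs):03d}", 1
        users.append({
            "user_id": f"U{i:06d}",
            "active": i < active,
            "organisation": org,
            "org_depth": depth,
            # Employment history, so historical data exists for the tests.
            "employment_years": rng.randint(1, 15),
        })
    return users

users = generate_users(active=800, inactive=1500)  # small run for illustration
print(sum(u["active"] for u in users))  # → 800
```

The real job would of course load this into the ERP via its provisioning interface rather than build dictionaries, but the shape of the problem (two org structures, active/inactive split, history per user) is the same.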
Next up are the actual requirements we will be testing against. Some of the questions that popped into my head were:
- How do you come up with proper L&P requirements?
- What are bad requirements?
- How do you make your requirements SMART?
- How do you then measure these requirements?
So we held several sessions with the end-users of the system to get a basic understanding of what they required of the application, and some really nice requirements came up, for example:
The application should finish a batch-job for at least 500 analyses within a time frame of 8 hours
Considering what SMART stands for (Specific, Measurable, Achievable, Relevant, Time-bound), this requirement leaves some gaping holes. Sure, it is Time-bound; the other criteria, however, are not quite met yet.
This is merely one of many requirements we had to work through to make them SMART. The main challenge was making clear to the requirement owners what the difference is, from a testing and specifically performance testing point of view, between the original requirement and the SMART version of that same requirement.
The example as stated above ended up as the following requirement:
The batch-job for executing predefined data analyses has to finish processing 500 separate analyses within a nightly run of 8 hours, after which the analyses results are successfully uploaded to the end-user dashboards.
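What the SMART version buys you is that it can be checked automatically. As a sketch, the requirement could be encoded as a pass/fail verdict; the function name and the example timings are hypothetical, and in a real run the numbers would come from the batch-job's log:

```python
from datetime import timedelta

# The two hard limits stated in the SMART requirement.
MAX_WINDOW = timedelta(hours=8)
REQUIRED_ANALYSES = 500

def batch_job_meets_requirement(analyses_completed, run_duration, results_uploaded):
    """True only if all three criteria from the requirement hold:
    enough analyses, within the nightly window, and results on the
    end-user dashboards."""
    return (
        analyses_completed >= REQUIRED_ANALYSES
        and run_duration <= MAX_WINDOW
        and results_uploaded
    )

# Example: 512 analyses in 7.5 hours, dashboards updated.
print(batch_job_meets_requirement(512, timedelta(hours=7, minutes=30), True))  # → True
```

The original wording ("at least 500 analyses within 8 hours") would only let you write the first two conditions; the upload criterion is what the SMART rewrite added.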
Getting the requirements clear for batch-jobs, however, is not the most difficult part. The main issue was getting the requirements clear for user interactions, and separating the desktop client interactions from the web interface.
How do you explain in layman's terms why a desktop application will, by design, respond faster than an average web application, and thus that you need different targets for the two? How do you make clear, again in layman's terms, that performance test scripts built for the web application will not be reusable for the desktop application, despite the two having identical functionality, or even look and feel?
Those are some of the questions I have been struggling with over the last few days while writing the performance test plan and, with that, defining and refining the requirements. (Why is it, by the way, that I, as the performance tester, am the one defining the requirements I need to test against?)