Are we afraid of test automation?

After having spent the last 9 months working on a test automation implementation for a team I am not running myself, I have started to wonder what the main success factors for test automation are, besides the obvious one: achieving the goal you set out to achieve in the first place.

One of the main factors that will help test automation be a success in your organisation is to not consider test automation a goal in itself, but to use it as a means to achieve a goal. The goal should not be something like “have x% of regression automated”; your goal should be something that test automation can help you achieve, for example freeing up time for the testers to focus on important things rather than having to spend most of their time on regression testing.

Another very important thing to keep in mind when implementing test automation is that you need to guard against trying to automate everything. Technically it may be possible, but it is hardly ever a good idea to want to automate everything. Quite a few things require human intervention or human interpretation and cannot simply be turned into a Boolean (which is what test automation comes down to in the end: something is either true or false).

Look & feel testing, or validation, for example, should in most cases not be automated, for the simple reason that, although it is possible, it will more often than not raise either false positives or, more likely, false negatives. Since these tests often require some form of image recognition and comparison, a change of screen resolution or a small CSS change (a font difference, for example) will make the test fail, resulting in either maintenance work or the tests being deemed redundant.

For me, however, the main sign of success is that the automated tests are actually used and actively maintained.

Having a test automation framework and relying on it to do its job is not good enough. Just like any other piece of software, automated tests want some TLC, or at least some attention and maintenance.

Something that is still often seen in test automation is that the tests run but with non-deterministic errors, in other words errors that are not really caused by the test cases but are also definitely not a bug in the system under test.

If you show some tender loving care to your automation suite, these errors will be spotted, investigated and fixed. More often, however, these errors will be spotted, someone will “quickly” attempt to fix them and fail, after which the “constantly failing test” will be deemed useless and switched off.
Besides non-deterministic errors there is another thing I have seen happen a lot in the past.

Some automation engineers spend a lot of time and effort building a solid framework with clean and clear reporting. They add a lot of test cases, connect the setup to a continuous integration environment and make sure that they keep all tests working and running.
They then go ahead and build more and more tests, adding all kinds of nice new tests and possibilities. What often gets forgotten, however, is to involve the rest of the department: they do not show and tell to the (manual?) testers, and they do not share with the developers. So the developers keep their own unit tests to themselves, and if they do some functional testing they do it manually and sloppily. The (manual) testers go about their usual business of testing the software as they have always done, not considering that they could use and abuse a lot of the test automation suite to make their life easier. They will spend time on data seeding, manually. They will spend time on look and feel verification in several browsers, manually.

All this time the developers and (manual) testers could have been using the automation framework and the existing tests in it to make their life a lot easier.

While writing this down it starts to sound silly and unlikely to me, yet I have seen it happen time and time again. What is it that makes developers, but especially testers, afraid of or hesitant to use automated tests?
I love using test automation to make my life easier when doing manual testing, despite having a very strong dislike for writing code.

Test automation should be seen as a tool; it is there to make life a hell of a lot easier. It cannot replace manual testing, but it can take away all the repetitiveness and tediousness of manual testing. It can help you get an idea of what state a system is in before you even start testing, and it can also help you, a lot, in getting the system into all kinds of states!

Getting a junior up to speed on test automation with FitNesse

Last week we had the privilege of having a junior test engineer working with us for a few days to see what it would take to get him fully up and running with test automation as we have implemented it at our customer.

Our intern, as I introduced him at our client, has a solid education in Civil Engineering and lacked any kind of knowledge of test automation or writing software. He had just finished his first project as a junior tester: testing a data warehouse migration.

Motivation

What drove us to try this was simple: curiosity, and doubts about my personal coaching and explaining skills. Thus far I had the feeling that I had somewhat failed to show anyone outside the management group how easy our FitNesse setup with the custom fixture is to use. With this engineer I wanted to find out whether this was me not explaining things clearly or people not properly following what I explained (e.g. me explaining things in the wrong wording).

Starting point

Our “intern”, as said, has little or no hard IT knowledge. He is one of the junior test engineers that came out of the group of trainees Polteq Test Services hired at the beginning of the year. With his degree in civil engineering he is certainly a smart guy, but considering he had never been on the construction side of software, he had some way to go.

Considering that he had no prior knowledge of either the FitNesse/WebDriver setup or the environment we are working on, we started with a morning session of explaining and flooding him with information by answering the following questions.

  • What do the applications under test do?
  • What is the service oriented architecture underneath?
  • How does this all tie together into what you see when you start using the applications?
  • What is FitNesse?
  • What is WebDriver?
  • How do these two work together?
  • What is the architecture of the Fixture?
  • What is a fixture and how does that relate to WebDriver and FitNesse?

After this session he was set to work with FitNesse. The main thing that slowed him down somewhat was getting used to the syntax as we enforce it through our custom Slim fixture. At this point he still had only a minor base knowledge of what the application under test does or is supposed to do. Based on the existing functional test cases he managed to fairly rapidly automate a set of test cases, or more specifically, to transform them from basic functional descriptions into Slim tables that run successfully as tests.
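
To give an impression of what such a transformation looks like in practice, here is a hypothetical fragment (the page, element and method names are invented for illustration and are not taken from our actual fixture): a functional step such as “search for a customer by last name” becomes a Slim script table row backed by a small C# fixture method.

    // Hypothetical fixture fragment. In the wiki, a Slim script table row such as
    //   | search for customer with last name | Jansen |
    // is mapped by Slim onto the method SearchForCustomerWithLastName("Jansen").
    using OpenQA.Selenium;

    public class CustomerSearchFixture
    {
        // In our real setup the driver lives in a shared base class; here it is
        // passed in directly to keep the example self-contained.
        private readonly IWebDriver driver;

        public CustomerSearchFixture(IWebDriver driver)
        {
            this.driver = driver;
        }

        // Returning a bool lets FitNesse colour the wiki row green (true) or red (false).
        public bool SearchForCustomerWithLastName(string lastName)
        {
            driver.FindElement(By.Id("lastName")).Clear();
            driver.FindElement(By.Id("lastName")).SendKeys(lastName);
            driver.FindElement(By.Id("searchButton")).Click();
            return driver.FindElements(By.ClassName("search-result")).Count > 0;
        }
    }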

The result

Writing the test cases was easy work for him; he picked up the base syntax really fast and managed to pump out some 15 working tests in a very short period. It was time for a bit of a challenge.

Considering he had never written a line of code in his life, I thought we might as well see how fast he would pick up writing a simple wrapper in C# around an Advanced Search page, which includes a set of dropdowns, text fields, radio buttons and checkboxes that can be manipulated, along with a submit and a reset button.

The first two methods we wrote out together, him typing while I explained what is what: why do you want the method to be public, why does it need to be a bool, what are arguments and how do you deal with them in FitNesse? Where do you find the identifier of the object you are trying to wrap, what do you do when there is no ID, and how do you extract an XPath expression and make that work well? Once we got through the first few methods I set him to work to figure it out for himself.
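
To give an idea of what those first methods looked like, here is a minimal sketch of such a wrapper (the element IDs, the XPath and the method names are made up for this example, not copied from the real Advanced Search page):

    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    public class AdvancedSearchPage
    {
        private readonly IWebDriver driver;

        public AdvancedSearchPage(IWebDriver driver)
        {
            this.driver = driver;
        }

        // Public so FitNesse/Slim can call it, bool so the wiki row shows pass or fail.
        public bool SelectCountry(string country)
        {
            // This dropdown has an ID, so By.Id is the easiest locator.
            var dropdown = new SelectElement(driver.FindElement(By.Id("country")));
            dropdown.SelectByText(country);
            return dropdown.SelectedOption.Text == country;
        }

        public bool CheckIncludeArchivedResults()
        {
            // No ID on this checkbox, so we fall back to an XPath expression.
            var checkbox = driver.FindElement(
                By.XPath("//label[text()='Include archived']/preceding-sibling::input"));
            if (!checkbox.Selected)
            {
                checkbox.Click();
            }
            return checkbox.Selected;
        }

        public bool Submit()
        {
            driver.FindElement(By.Id("submitSearch")).Click();
            return driver.FindElements(By.ClassName("search-result")).Count > 0;
        }
    }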

The first question I received after a while was: ok, so now I’m done writing these things out in code, then what? How can I now see this working in FitNesse? After I made an extremely feeble attempt at explaining what compiling is and decided to just show him where the compile button is, he set to work verifying in FitNesse that his code was indeed capable of reaching and manipulating every element on the search page and getting to a sensible search result.

Takeaway

What did I learn this week? For starters, that with close enough coaching it is very simple to get someone without experience up and running with FitNesse the way we have set it up, which is good to have confirmed again, since that was the aim.

Another thing we have seen proven is that adding new methods to the fixture is dead simple, and changing the IDs of objects on the pages should not lead to too much hassle in maintaining the fixture. For the Base class quite some developer knowledge is still required, but once that part is standing, expanding the testable part can be done with some coaching. So technically we should start handing over maintenance of our Base classes to someone within Development and hand off maintaining the rest of the fixture to someone within the test teams here.

One of the things we might consider to make maintenance easier is to split the leaf nodes, i.e. the page level, off from the base and helper classes so that the two are completely independent of one another. That way the developer can add to and refactor in the base without breaking current functionality, and once the refactoring or additions are done, the page classes simply talk to the new DLL.
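
A rough sketch of what that split could look like, with invented assembly and class names: the base and helper classes live in their own assembly that development owns, while the page-level classes live in a separate assembly maintained by the test team and depend only on the base’s public API.

    using OpenQA.Selenium;

    // Fixture.Base.dll - owned and refactored by development.
    public abstract class PageBase
    {
        protected IWebDriver Driver { get; }

        protected PageBase(IWebDriver driver)
        {
            Driver = driver;
        }

        // Helper that hides locator plumbing from the page classes.
        protected bool ClickById(string id)
        {
            Driver.FindElement(By.Id(id)).Click();
            return true;
        }
    }

    // Fixture.Pages.dll - owned by the test team, references only Fixture.Base.dll,
    // so a refactored base can be dropped in as a new DLL without touching this code.
    public class CustomerOverviewPage : PageBase
    {
        public CustomerOverviewPage(IWebDriver driver) : base(driver) { }

        public bool OpenCustomerDetails()
        {
            return ClickById("customerDetailsLink");
        }
    }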

Maybe I am getting carried away with making things modular now though…

Overall, it is good to see that our idea of making things easy to transfer does indeed seem to work well, although I do not want to claim that this one week was enough to hand over everything, of course!

Based on this week I have started to explain things to the test team internally, which does indeed seem to be an improvement. I do believe this week gave me a chance to play around with the ways in which I explain things, especially on a non-technical level.

Erwin, thanks for being here and listening to my explanations, following instructions and asking questions! It was a joy working with you this week.

Anything can be a test tool

Last night we had a meeting at Polteq where test tooling was the center of attention. Interestingly, most participants immediately consider test tooling to be related to test automation; I think that with a little creative thinking most testers can get a lot more out of tools than mere test automation. I see a tool as just about anything I use in my work as a tester to get my work done. This may be an application like Notepad++ to keep track of what I have done or to quickly find and replace a word or phrase in several places; Selenium WebDriver to remove a lot of repetitive work in testing; or Excel to create data which can be inserted directly into the database from the spreadsheet.

One of the things I realized over the course of the conversation is that not only are there a lot of different ideas of what tools might be, but there is also confusion about how to use them.

I have always tried to be creative in my use of tools, in other words to abuse a tool, but apparently not everyone thinks like that.

An idea was posed that we compile a list of tools and the situations in which these tools can be used. It is an interesting idea, I am just not convinced it is the right approach. A list of tools can of course come in extremely handy, but will we be able to come up with a more useful or complete list than, for example, the one on opensourcetesting.org? Plus, will it stimulate those testers who think tools are directly related to test automation to go look at the list? I am not sure.

I therefore proposed to come up with a list of 10 or so tools that are either installed by default on a Windows PC or easy to find and download, and to give some ideas of how you can make these tools work for you in a slightly unorthodox way.

While writing this post I realized how difficult it is to just think up ideas for how to (mis)use tools. Generally my ideas for how to make life easier while testing come to me naturally. For example, when doing a major refactoring of the test cases in FitNesse, we first tried to use the Refactor functionality within FitNesse.

This is a fairly simple regular-expression find and replace. It works well enough when you do not really care what you are replacing. However, when you do need to know what you are replacing, this refactor function is not good enough: it does not give you any control, since it just goes off and does the replacement.

What we needed was a slightly more sophisticated way of doing the search and replace. That is where Notepad++ came knocking. This basic text editor can search within multiple files across a set of directories and show you the results of that search; it can also replace all occurrences of the keywords in one big bang, while still showing you what it is doing.

When kicking off our current project, we needed some way to quickly build a hierarchical overview of the applications under test. We first thought of using the sitemap XML, importing that into Excel and working from there. This would, however, not give us the opportunity to play with it and use it as inspiration to base the custom fixture on. We ended up using an extremely easy way to build a hierarchical overview in which all nodes can be moved, linked, collapsed and expanded at will: a mind mapping tool. We used FreeMind, a wonderful little tool, easy to use and free to download!

There probably is an unfathomable number of other tools that can be abused in this way. Please share them with me!

Tools are there to do stuff for you, to make life easier. Nobody is stopping you from abusing a tool to your advantage!

Are we building shelf-ware or a useful test automation tool?

Frustration and astonishment inspired this post. There is currently a big regression testing cycle going on within the organization. Over the past 4 months we have worked hard with the testers to establish a sizable base of automated tests, yet the moment regression started everyone seemed to drop the automation tools and revert to what they have always done: open Excel and tick the checkboxes of the scripted tests.

Considering that we have already set up a solid base with a custom fixture that enables the tests, or checks if you will, to do exactly what the tester wants them to do, and to do what a tester would do manually while following the prescribed scripts, and that we have written out a fair share of these prescribed scripts in FitNesse, what is stopping them from using this setup?

Are we automating for the sake of automating?

While working on this extremely flexible setup, with FitNesse and with Selenium WebDriver and White as the drivers, I have started wondering more and more why we are automating in this organization. The people responsible for testing do not seem to be picking up on the concept of test automation; they all state loudly that it is needed and that it is great that we are doing it, but when regression starts they immediately go back to manual checks. I say manual checks on purpose, since the majority of testing here is done fully scripted, and most of these scripts do not leave anything to the tester’s imagination, which makes these tests checks rather than tests. Checks we can execute automatically, repeatedly and consistently with tools such as FitNesse.

How do you make testers aware that a lot of the scripted tests should not be done manually?

Let me be clear on this: I am a firm believer in both manual and automated testing. They both have their value and should be used together; automated testing is not here to take away manual testing, rather it is here to support the testers in their work. Automated testing should be complementary to manual testing. Thus far in this organization I have seen manual testing happening, and I have seen (and experienced) a lot of effort being put into writing out the automated tests in FitNesse. However, there has not been clear cooperation between the two, despite the people writing the automated tests being the same individuals who are responsible for executing the manual tests (which they have rewritten in FitNesse in order to build automated tests).

We have tried coaching on the job, we have tried dojos, but alas, I still see a hell of a lot of manual checks happening instead of FitNesse doing these checks for them. What is it that makes people not realize the potential of an automation tool? Thus far I have come up with several possible causes:

  • In our test dojos we mainly focused on how to write tests in FitNesse rather than on what you can achieve with test automation. This has led me to the idea that we rapidly need to organize another workshop or dojo in which the focus is on the advantages of automated tests.
  • Another reason could be that test automation was not initiated by this team; it was put upon them as a responsibility. The team we are currently creating this fixture for is a typical end-of-the-line, bottom-of-the-testing-chain team: everything they get to test is thrown over a wall and left to them to see whether it works appropriately. Most of them do not seem to have consciously chosen to be testers; instead they have accidentally rolled into the software testing field. Some of them have adapted very well to this and clearly show affinity and aptitude for testing; others, however, would in my opinion be better off choosing a different occupation. It is exactly the latter group that currently needs to be pulling this test automation effort.

There are more reasons I could go into, but I believe these two to be the main issues at hand that can actually be addressed.

So what will make people use automation tools properly?

The moment I can answer this one with a general rule of thumb I will sell it to the highest bidder. Within this organization, however, there does not really seem to be a simple solution just yet. As I have written before, there is not yet one sole ambassador for test automation in this organisation. Even if there were, we would still need to cause a shift in the general mindset of the testers. Rather than just walking through their predefined set of instructions in Excel, they need to consider for themselves what is already covered by the automated tests and how they can supplement those tests with manual testing.

We will need to find a way to get the testers to step out of their comfort zone and learn how to utilize tools other than Excel and MS Word. Maybe organizing a testing competition will work: see who can cover the most tests in the shortest time and with the highest accuracy?

I am not a great believer in measuring things in testing, but maybe inventing some nice measurements will help the testers see the light. For example “How often can you test the same flow with different input in a certain timeframe?”.

Did we build shelf-ware or did we add value to the testing chain?

At the moment I often ask myself whether I am building shelf-ware or actually building a useful automation tool (trying to stay away from terms like framework, since that might only increase the distance between the tool and the testers). Whenever I play around with the FitNesse/WebDriver/White setup we currently have running, I see an incredibly versatile test automation tool which can be used to make life a lot easier for those who have to test the software regularly and repeatedly (not just testers; developers, product owners etc. can easily use this setup too).

It is completely environment agnostic: if needed we can (and have in the past) run the same tests we run in a test environment in production as well. It is easy to build new test cases/scripts or scenarios (I seem to have lost track of which would be the safe term to choose, they all have their own subconscious connotations) since it is a wiki. All tests are human readable; if you can read an Excel sheet, reading the tests in FitNesse with Slim the way we built it should be child’s play.
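
As an illustration of how that environment independence can work (a sketch with invented names, not our actual implementation): the environment-specific part boils down to a single base URL that the wiki hands to the fixture, for instance through a FitNesse variable, so the very same test pages can be pointed at a test environment or at production.

    using OpenQA.Selenium;

    public class Browser
    {
        private readonly IWebDriver driver;
        private readonly string baseUrl;

        // The wiki supplies the base URL once per environment, so no individual
        // test case ever contains an environment-specific address.
        public Browser(IWebDriver driver, string baseUrl)
        {
            this.driver = driver;
            this.baseUrl = baseUrl.TrimEnd('/');
        }

        public bool OpenPage(string relativePath)
        {
            driver.Navigate().GoToUrl(baseUrl + "/" + relativePath.TrimStart('/'));
            return driver.Title.Length > 0;
        }
    }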

Despite all these great advantages, the people that should be using it are not.

Reading all this back makes me consider one more thing: we started building this setup with these tools based on a request from higher management. The tool selection was done by the managers (or team leads if you will) and not by the team themselves. Did we miss out on the one thing the IT industry has taught us? Did we build something we all want, but not what our customer wants and needs? I hope not; for one thing, I am quite sure this is what they need: an easy-to-use tool to automate all the tedious, repetitive check work.

The question that remains: is this what our customer, or to be more exact, our customer’s end user, the tester, wants?

Test automation ambassador needed

I need to define the role of a test automation ambassador.

What would a true, organization-wide test automation ambassador look like? Should it be a person with lots of experience, especially in the automation field, or can it also be a rookie, fresh out of college?

I guess it all depends on the organisation, right?

Imagine a sizable technology organization housing several IT departments, each with its own product owners, its own business owners, its own development teams and thus, for a large part, its own codebase.

Where does this all come from? We work with separate teams within these IT departments, and these teams are completely separated from one another.

From the test automation side we have a unique position within the organisation: we have a helicopter view. We are separate from all these other departments and have our own thing to do: open up all the pieces of the platform to enable test automation with FitNesse and whatever drivers we need. This position comes with a lot of perks, such as the freedom to work independently of any of the teams, and we get to work with new tools and toys first, since we are seen as a bit of a playground.

It of course also poses a potential problem. We are implementing test automation as externals; we are there to help, not to own. The ownership of test automation, and of the FitNesse part in particular, needs to be placed somewhere. In my view test automation ownership should not lie within any of these separate teams; it should sit outside of them.

Why does the ownership need to lie outside of the teams?

For one, the teams are competing for resources. Another problem is that they all have different agendas and deem their own work the most important contribution to the company’s IT landscape; in other words, their own piece of the automation pie will be well maintained and the rest will be neglected. Most importantly, however, none of these teams has a clear overview of what the entire platform, i.e. all components together, looks like from a functional point of view.

The owner of the FitNesse side of test automation needs to have this overview. This same owner, however, also clearly needs some weight to put in the scales to ensure that all teams:

  • use test automation effectively
  • maintain their part of the test suite (FitNesse test cases)
  • maintain and add their part of the custom fixture
  • do not break the overall regression, smoke or end-to-end tests

Logically the owner of test automation would sit within the group responsible for regression testing. This, however, is exactly where the challenge lies. The team responsible for regression (let’s call them the regression team) is understaffed and fairly inexperienced, and completely inexperienced when it comes to test automation and how to successfully roll it out across an entire tech organization. The individual team members all have their own strengths and as a team do a very solid job of manually executing regression and smoke tests, but there does not seem to be one person strong enough to pull the automation effort beyond writing the test cases.

Ideally the ambassador of test automation (in this organization at least), and thus the owner of both the FitNesse fixture and the FitNesse test cases, resides outside of all the teams that have a use for test automation. This opens up the road to continuous development and maintenance of the automation suite and ensures independence from other teams, so the ambassador can make clear decisions based on what is good for the automation program first and think about the teams second. The individual teams contribute to both the fixture and the regression suite anyway, so their needs will be covered within the sprints or on the roadmap.

I fear, however, that we will have to apply the polder model here (Dutch-style consensus decision making) and find a way to make it work. What we have built thus far, both as a custom fixture and in terms of test cases, can already be of huge added value to the organization.

However, I still hope we can find the prospective ambassador and coach, shape and train this person to have the knowledge, skills and mental strength to take the next steps needed to get this organization closer to continuous testing.

A lack of awareness of what test automation can do, however, is holding people back from using it.

– Edit –

So what skills should this ambassador at least possess?

  • Affinity with testing and test automation in particular
  • Solid understanding of what can be tested automatically and more importantly what should and should not be tested automatically
  • Capability of explaining to all levels within the organization what we can achieve with test automation
  • Presence and charisma to not just sell test automation within the organization to sceptics, but convince them and show them the added value of it and make them want to use it
  • Insight into how to maintain test scripts across the teams, how to deal with the inheritance from the several teams into regression, and how to organize this all into a solid, robust, trusted automated regression set
  • At least a base knowledge of programming in order to help maintain the FitNesse fixture and to be able to help new test engineers get started with the inner workings of the fixture
  • Knowledge and understanding of how the organization works together and of how to get the several teams to contribute effectively to the test automation effort