All my automated tests are passing, what’s wrong?

The other day I had an interesting discussion with some developers within a scrum team I am part of. We were discussing the use of test automation within the team and how to deal with the changing requirements, code and environments, which will inevitably lead to failing tests.

They gave the clear impression they were very worried about tests failing. I asked them what would be wrong with tests failing in a regression set during sprints, which made them look at me with questioning faces: why would a tester want tests to fail?
If anything, I would expect automated tests to fail, at least partially.

While automating in sprint, I assume things to be in a certain state. For example, I assume that when I hit the search button nothing happens; in the next sprint I really hope this test will break, meaning the search button now leads me to some form of result page. That way all tests are, just like the rest of the code, continuously in flux and get constantly updated and refactored.

This of course fully applies to automating tests within a sprint. When automating for regression or end-to-end testing, however, I would rather expect my tests to pass, or at least the majority of regression tests should keep passing consistently.

When is automation useful?

Recently I helped write a proposal aimed at investigating whether automating the regression testing of a system is worth the investment. Looking at the time currently spent on regression testing, I started wondering when it is worth automating the hell out of everything and when it is just burning money to be hip and have tests automated.

I guess it all depends on a combination of things, such as:

  • time spent on regression testing
  • effort and investment needed to automate things
  • in-house knowledge about test automation (e.g. does an external party need to come in and do it for you?)
  • how often you need to run through regression on the platform
  • whether all of regression can be automated or just a part of it
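
As a rough back-of-the-envelope rule (my own simplification, not a formal model), automation starts to pay off when:

    cost to build + (number of runs × cost to maintain and run) < number of runs × cost of a manual regression pass

In other words: the more often you run regression and the longer a manual pass takes, the sooner the investment earns itself back.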

So, it seems like you have to be a bit of a mathematician to see whether test automation is worth your while, doesn’t it?

Let’s get realistic then instead.

I truly believe there is one very effective way to see whether test automation is your piece of cake: set a clear goal for what you want to achieve with test automation and, based on that, do a Proof of Concept. In other words, try it out. This assumes you have the in-house knowledge to write the code for test automation.

Start off with a little bit of research to figure out which tool will quite likely work best for you; there are plenty of forums out there, or simply ask some fellow testers. Take the tool, or for all I care the demo version of some tool, that you hope will work for you and try it out. Automate a few tests, just to see how fast it can be done. Do not try to make it all pretty, perfect and reusable immediately. First focus on what is important: will it work? Does it help you reach your goal?

If it does, cool: now you can start for real, and either refactor the PoC code or simply throw it out and start from scratch.

This of course becomes a slightly different story if you do not have the knowledge in-house to write the code yourself, or if you simply have no clue where to start with test automation.

If this is the case, please do not adjust the approach all that much. In my view it is still best to let someone prove to you that their approach and their choice of tooling is indeed the best for your application and organisation. Keep in mind that at the end of the line you still need to be able to use and maintain the tool and the automated tests yourself, in-house. Having a contractor running around constantly for that makes the test automation effort an extremely tedious and expensive one.

Automating a legacy system

In the past I have worked a lot with fairly new, web-based applications. Here and there I would run into some “old” desktop applications, but the majority of my professional career has been spent working with web-based technologies.

In my current assignment, however, I have hit a nice new technology, at least new to me: COBOL over TN3270.

When I started on this assignment I had one big fear: being forced to use one of the bloated commercial tools out there which claim they support TN3270 off the shelf. Oh, did I say bloated? That sounds like I am biased against the so-called big commercial tools.

This sense of bias is partially true: I am in favor of using small drivers like WebDriver or White where possible over using a full environment such as QTP or Rational Functional Tester. The big tools quite probably have their own unique selling points; however, for a small IT organisation they are a huge investment and quite often overkill compared to what is actually needed.

When we started looking into tools we almost instantly dismissed the “big ones”, not so much due to their cost, but mainly due to the amount of customization still needed to make them work “out of the box” with TN3270 and the rest of the environment we are automating. When talking to the vendors, they all claimed that their tool is excellent for an environment like this; when asked whether the tool would be pluggable out of the box, however, the answer was consistently no, you will need to do some customization (or better yet, one of their consultants would need to do some customization). The same amount of customization will need to be done with a small driver as well, so why spend a fortune on something that is no better than a small, cheaper or even free tool?

For the Proof of Concept we decided to take two different drivers: Jagacy and s3270.

Jagacy is, in their own words:

“… our award winning 3270 screen-scraping library written entirely in Java. It supports SSL, TN3270E, and over thirty languages. It also includes a 3270 emulator designed to help create screen-scraping applications. Developers can also develop their own custom terminal emulators (with automated logon and logoff). Jagacy 3270 screen-scraping is faster, easier to use, and more intuitive than HLLAPI. It excels in creating applications reliably and quickly.”

I cannot but agree with the latter part in particular: it is indeed faster and easier than HLLAPI. As for it being more intuitive, I am not quite convinced (yet).

s3270 is a lot more minimalistic than Jagacy:

“… opens a telnet connection to an IBM host, then allows a script to control the host login session. It is derived from x3270(1), an X-windows IBM 3270 emulator. It implements RFCs 2355 (TN3270E), 1576 (TN3270) and 1646 (LU name selection), and supports IND$FILE file transfer.”

In other words, s3270 can be used as a layer between an application and a legacy TN3270 system.

Talking directly to s3270 is not the most intuitive thing: it requires the code to start a process, keep track of the state of that process and throw input into the process stream. All nice and doable; however, it has been invented before by others, and they quite likely did an adequate job of making it work, considering TN3270 is a fairly old protocol.
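
To give an idea of what driving s3270 directly involves, here is a minimal sketch in C#. It assumes s3270 is on the PATH and relies on s3270’s documented scripting mode (actions on stdin; responses consisting of optional “data:” lines, a status line and a final “ok” or “error”); the class and host names are made up.

    using System;
    using System.Diagnostics;
    using System.Text;

    // Minimal sketch of talking to an s3270 process over stdin/stdout.
    class S3270Session
    {
        private readonly Process _proc;

        public S3270Session()
        {
            _proc = Process.Start(new ProcessStartInfo
            {
                FileName = "s3270",
                RedirectStandardInput = true,
                RedirectStandardOutput = true,
                UseShellExecute = false
            });
            _proc.StandardInput.AutoFlush = true;
        }

        // Send one action; read back "data:" lines, a status line and "ok"/"error".
        public string Execute(string action)
        {
            _proc.StandardInput.WriteLine(action);
            var screen = new StringBuilder();
            string line;
            while ((line = _proc.StandardOutput.ReadLine()) != null)
            {
                if (line.StartsWith("data:")) screen.AppendLine(line.Substring(5).Trim());
                else if (line == "ok") break;
                else if (line == "error") throw new InvalidOperationException("s3270 action failed: " + action);
                // any other line is the status line; this sketch ignores it
            }
            return screen.ToString();
        }
    }

    // Usage: connect, type into the current field, press Enter, dump the screen.
    // var s = new S3270Session();
    // s.Execute("Connect(mainframe.example.org)");  // hypothetical host
    // s.Execute("String(\"LOGON TESTER01\")");
    // s.Execute("Enter");
    // Console.WriteLine(s.Execute("Ascii"));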

In order to make life easier for us while using s3270 for the PoC, we searched around a bit and found a wonderful open source tool: h3270. Put simply, h3270 is a program that allows you to communicate with TN3270 hosts from within your browser. This open source tool helped us quickly hook up our test code to s3270. We did not need this browser functionality, however; instead we are thankfully using the object-oriented abstractions within h3270 for our own purposes.

One of the wonders of working with TN3270 is that a whole new world of states opens up. In a web browser, for example, something is either there and visible or it is hidden from the user. Keyboards are never locked, and when opening a new page the browser will tell your driver when it is done loading. With TN3270 it is not all that simple. First of all, there are features like a locked keyboard, it works with function keys that disappeared from our physical keyboards years ago, the state of a screen is unknown most of the time, etc.
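
For what it is worth, s3270 does expose this state: the status line it prints with every response starts with the keyboard state (U for unlocked, L for locked, E for error), and it ships with a built-in action to wait for the host. A small sketch, reusing the hypothetical S3270Session from above:

    // Block until the keyboard is unlocked and the cursor sits in an input
    // field; Wait(InputField) is a standard s3270/x3270 action.
    public void WaitUntilScreenReady()
    {
        Execute("Wait(InputField)");
    }

    // The long-lost function keys are plain actions too:
    public void PressPf(int n)
    {
        Execute("PF(" + n + ")"); // e.g. PF(3) presses PF3
    }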

On the other hand, a terminal emulator is still nothing other than a form of sorts, just a bit more difficult to read and write.

Basically, one of the things we once more saw proven is that in test automation the basics are always similar. It is just the way you need to talk to the system that may be different.

Are we afraid of test automation?

After having spent the last 9 months working on a test automation implementation for a team I am not running myself, I have started to wonder what the main success factors for test automation are, besides the obvious one: achieving the goal you set out to achieve in the first place.

One of the main factors that will help make test automation a success in your organisation is to not consider test automation a goal, but to use it as a means to achieve a goal. The goal should not be something like “have x% of regression automated”; your goal should be something for which test automation can be the means to achieve it, for example freeing up time for the testers to focus on important things rather than having to spend most of their time on regression testing.

Another very important thing to keep in mind when implementing test automation is that you need to guard against trying to automate everything. Technically it may be possible, but it is hardly ever a good idea to want to automate everything. Quite a few things require human intervention or human interpretation and cannot simply be reduced to a Boolean (which is what test automation is in the end: something is either true or false).

Look & feel testing, or validation, for example, should in most cases not be automated, for the simple reason that, despite being possible, it will more often than not raise either false positives or, more likely, false negatives. Since these tests often require some form of image recognition and comparison, a change of screen resolution or a small CSS change (a font difference, for example) will make the test fail, resulting in either maintenance or tests being made redundant.

For me, however, the main sign of success is that the automated tests are actually used and actively maintained.

Having a test automation framework and relying on it to do its job is not good enough. Just like all other software, automated tests want some TLC, or at least some attention and maintenance.

Something that is still often seen in test automation is that the tests run but fail with non-deterministic errors, in other words errors that are not really due to the test cases but are also definitely not a bug in the system under test.

If you show some tender loving care to your automation suite, these errors will be spotted, investigated and fixed. More often, however, these errors will be spotted, someone will “quickly” attempt to fix them and fail, after which the “constantly failing test” will be deemed useless and switched off.
Besides non-deterministic errors there is another thing I have seen happen a lot in the past.

Some automation engineers spend a lot of time and effort building a solid framework with clean and clear reporting. They add a lot of test cases, connect the setup to a continuous integration environment and make sure that they keep all tests working and running.
They then go ahead building more and more tests, adding all kinds of nice new tests and possibilities. What often gets forgotten, however, is to involve the rest of the department: they do not show and tell to the (manual?) testers, they do not share with the developers. So the developers keep their unit tests to themselves, and if they do some functional testing they do it manually and sloppily. The (manual) testers go about their usual business of testing the software as they have always done, not realizing that they could use and abuse a lot of the test automation suite to make their lives easier. They will spend time on data seeding, manually. They will spend time on look and feel verification on several browsers, manually.

All this time the developers and (manual) testers could have been using the automation framework and the existing tests in it to make their lives a lot easier.

While writing this down it starts sounding silly to me and unlikely to happen, yet I have seen it happen time and time again. What is it that makes developers, but especially testers, afraid of or hesitant to use automated tests?
I love using test automation to make my life easier when doing manual testing, despite having a very strong dislike of writing code.

Test automation should be seen as a tool; it is there to make life a hell of a lot easier. It cannot replace manual testing, but it can take all the repetitiveness and tediousness of manual testing away. It can help you get an idea of what kind of state a system is in before you even start testing, but it can also help you, a lot, in getting the system into all kinds of states!
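
To make that last point concrete, here is an illustrative sketch of reusing plain WebDriver code as a setup tool before manual testing; every name in it (URL, element IDs, credentials) is hypothetical.

    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;

    // Drive the application into an interesting state, then explore by hand.
    class StateSetup
    {
        static void Main()
        {
            IWebDriver driver = new FirefoxDriver();

            // Log in with a known test account.
            driver.Navigate().GoToUrl("https://test.example.org/login");
            driver.FindElement(By.Id("username")).SendKeys("tester01");
            driver.FindElement(By.Id("password")).SendKeys("secret");
            driver.FindElement(By.Id("submit")).Click();

            // Seed the data you would otherwise click together manually,
            // e.g. an order that is ready for approval.
            driver.Navigate().GoToUrl("https://test.example.org/orders/new");
            driver.FindElement(By.Id("product")).SendKeys("Widget");
            driver.FindElement(By.Id("save")).Click();

            // Leave the browser open and take over manually from here.
        }
    }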

Getting a junior up to speed on test automation with FitNesse

Last week we had the privilege of having a junior test engineer working with us for a few days to see what it would take to get him fully up and running with test automation as we have implemented it at our customer.

Our intern, as I introduced him at our client, has a solid education in Civil Engineering and lacked any kind of knowledge of test automation or writing software. He had just finished his first project as a junior tester, which was testing a data warehouse migration.

Motivation

What drove us to try this was simple: curiosity, and questioning my personal coaching and explaining skills. Thus far I had the feeling that I somewhat failed in showing how easy the setup of FitNesse with our custom fixture is to anyone who is not part of a management group. With this engineer I wanted to find out whether this was me not explaining things clearly or people not properly following what I explained (e.g. me explaining things in the wrong wording).

Starting point

Our “intern”, as said, has little to no hard IT knowledge. He is one of the junior test engineers that came out of the group of trainees Polteq Test Services hired at the beginning of the year. With his degree in civil engineering he is for sure a smart guy, but considering he has never been on the construction side of software, he had some way to go.

Considering that he had no prior knowledge of either the FitNesse/WebDriver setup or the environment we are working on, we started with a morning session of explaining, overwhelming him with information by answering the following questions.

  • What do the applications under test do?
  • What is the service oriented architecture underneath?
  • How does this all tie together into what you see when you start using the applications?
  • What is FitNesse?
  • What is WebDriver?
  • How do these two work together?
  • What is the architecture of the Fixture?
  • What is a fixture and how does that relate to WebDriver and FitNesse?

After this session he was set to work with FitNesse. The main thing that slowed him down somewhat was getting used to the syntax as we enforce it through our custom Slim fixture. At this point he still had only a minor base knowledge of what the application under test does or is supposed to do. Based on the existing functional test cases he managed to fairly rapidly automate a set of test cases, or more specifically, transform them from base functional descriptions into Slim tables that run successfully as tests.
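
To give an idea of what that transformation produces (the fixture and method names below are made up for illustration, not our actual fixture), a functional step like “search for a laptop and verify 10 results” ends up as a Slim script table along these lines:

    !|script|search page|
    |ensure|enter search term|laptop|
    |ensure|click search button|
    |check|number of results|10|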

The result

Writing the test cases was easy work for him; he picked up the base syntax really fast and managed to pump out some 15 working tests in a very short period. It was time for a bit of a challenge.

Considering he had never written a line of code in his life, I thought we might as well check how fast he would pick up writing a simple wrapper in C# around an Advanced Search page, which includes a set of dropdowns, text fields, radio buttons and checkboxes that can be manipulated, along with a submit and a reset button.

The first two methods we wrote out together, him typing while I explained what is what. Why do you want the method to be public? Why does it need to return a bool? What are arguments, and how do you deal with them in FitNesse? Where do you find the identifier of the element you are trying to wrap? What do you do when there is no ID, and how do you extract an XPath and make that work well? Once we got through the first few methods I set him to work to figure it out for himself.
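
For illustration, a wrapper method of the kind we wrote together might look like the sketch below; the element IDs and names are made up, and our real fixture is somewhat more elaborate.

    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    public class AdvancedSearchPage
    {
        private readonly IWebDriver driver;

        public AdvancedSearchPage(IWebDriver driver)
        {
            this.driver = driver;
        }

        // Public so Slim can call it; returns a bool so FitNesse can color
        // the step green (true) or red (false).
        public bool SelectCategory(string category)
        {
            // Prefer the element's ID; fall back to XPath only when there is none.
            var dropdown = new SelectElement(driver.FindElement(By.Id("category")));
            dropdown.SelectByText(category);
            return dropdown.SelectedOption.Text == category;
        }

        public bool ClickSearchButton()
        {
            driver.FindElement(By.Id("search")).Click();
            return true;
        }
    }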

The first question I received after a while was: OK, so now I am done writing these things out in code, then what? How can I see this working in FitNesse? After I made an extremely feeble attempt at explaining what compiling is and decided to just show him where the compile button is, he set to work to verify in FitNesse that his code was indeed capable of reaching and manipulating every element on the search page and getting to a sensible search result.

Takeaway

What did I learn this week? For starters, that when coached closely enough it is very simple to get someone without experience up and running with FitNesse the way we set it up, which is good to have confirmed again, since that was the aim.

Another thing we have seen proven is that adding new methods to the fixture is dead simple, and changing the IDs of objects on the pages should not lead to too much hassle in maintaining the fixture. For the base class quite some developer knowledge is still required, but once that part is standing, expanding the testable part can be done with a bit of coaching. So technically we would need to start handing over maintenance of our base classes to someone within development and hand off maintenance of the rest of the fixture to someone within the test teams here.

One of the things we might consider to make maintenance easier could be to split the leaf nodes, i.e. the page level, off from the base and helper classes, so that these two are completely independent of one another. That way the developer can add and refactor in the base without breaking the current functionality; once done refactoring or adding things, the new DLL can be dropped in and talked to.
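
A hypothetical sketch of what that split could look like (assembly and class names invented for the example):

    // Lives in its own assembly, say Fixture.Base.dll, and knows nothing
    // about concrete pages; developers refactor here freely.
    namespace Fixture.Base
    {
        using OpenQA.Selenium;

        public abstract class BasePage
        {
            protected readonly IWebDriver Driver;

            protected BasePage(IWebDriver driver)
            {
                Driver = driver;
            }

            // Shared plumbing used by every page class.
            protected bool TypeInto(By locator, string text)
            {
                var element = Driver.FindElement(locator);
                element.Clear();
                element.SendKeys(text);
                return true;
            }
        }
    }

    // The leaf nodes live in a separate assembly that references only the
    // published Fixture.Base.dll, so the page level can grow without
    // touching (or breaking) the base.
    namespace Fixture.Pages
    {
        using Fixture.Base;
        using OpenQA.Selenium;

        public class AdvancedSearchPage : BasePage
        {
            public AdvancedSearchPage(IWebDriver driver) : base(driver) { }

            public bool EnterSearchTerm(string term)
            {
                return TypeInto(By.Id("searchTerm"), term);
            }
        }
    }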

Maybe I am getting carried away with making things modular now though…

Overall, it is good to see that our idea of making things easy to transfer does indeed seem to work well, although I do not want to say that this one week was enough to hand over everything, of course!

Based on this week I have started to explain things to the test team internally, which does indeed seem to be an improvement. I do believe that having this week gave me a chance to play around with the ways in which I explain things, especially on a non-technical level.

Erwin, thanks for being here and listening to my explanations, following instructions and asking questions! It was a joy working with you this week.

Anything can be a test tool

Last night we had a meeting at Polteq where test tooling was at the center of attention. Interestingly, most participants immediately consider test tooling to be related to test automation; I think that with a little creative thinking most testers can get a lot more out of tools than mere test automation. I see a tool as just about anything I use in my work as a tester to get my work done. This may be an application like Notepad++ to keep track of what I have done or to quickly find and replace a word or phrase in several places; Selenium WebDriver to remove a lot of repetitive work in testing; or Excel to create data which can be inserted into the database directly from the spreadsheet.

One of the things I realized over the course of the conversation is that not only are there a lot of different ideas on what tools might be, but there is also confusion about how to use them.

I have always tried to be creative in my use of tools, in other words, to abuse a tool; apparently not everyone thinks like that.

An idea was posed that we compile a list of tools and the situations in which these tools can be used. This is an interesting idea, but I am not convinced it is the right approach. A list of tools can of course come in extremely handy, but will we be able to come up with a more useful or complete list than, for example, the one on opensourcetesting.org? Plus, will this stimulate those testers who think tools are directly related to test automation to go look at the list? I am not sure.

I therefore proposed to come up with a list of 10 or so tools that are either installed by default on a Windows PC or are easy to find and download, and to give some ideas of how you can make these tools work for you in a slightly unorthodox way.

While writing this post I realized how difficult it is to just think up ideas for how to (mis)use tools. Generally, my ideas for making life easier while testing come to me naturally. For example, when doing a major refactoring of the test cases in FitNesse, we at first tried to use the Refactor functionality within FitNesse.

This is a fairly simple regular-expression find and replace. It works well enough when you do not really care what you are replacing. However, when you need to know what you are replacing, this refactor function is not good enough; it does not give you any control, since it just goes off and does the replace.

What we needed was a slightly more sophisticated way of doing the search and replace. That is where Notepad++ came knocking. This basic text editor is capable of searching within multiple files in a set of directories and showing you the results of this search; it is also capable of replacing all occurrences of these keywords in one big bang, while still showing you what it is doing.

When kicking off our current project, we needed some way to quickly build a hierarchical overview of the applications under test. We first thought of using the sitemap XML, importing that into Excel and working from there. That would, however, not give us the opportunity to play with it and use it as inspiration to base the custom fixture on. We ended up using an extremely easy way to build a hierarchical overview, where all nodes can be moved, linked, collapsed and expanded at will: a mind mapping tool. We used FreeMind; it is a wonderful little tool, easy to use and free to download!

There probably is an unfathomable number of other tools that can be abused in this way. Please share them with me!

Tools are there to do stuff for you, to make life easier. Nobody is stopping you from abusing a tool to your advantage!

Are we building shelf-ware or a useful test automation tool?

Frustration and astonishment inspired this post. There is currently a big regression testing cycle going on within the organization. Over the past 4 months we have worked hard with the testers to establish a sizable base of automated tests; however, the moment regression started, everyone seemed to drop the automation tools and revert to what they have always done: open Excel and tick the checkboxes of the scripted tests.

Consider that we have already set up a solid base with a custom fixture that enables the tests, or checks if you will, to do exactly what the tester wants them to do, and to do what a tester would do manually whilst following the prescribed scripts. Consider also that we have written out a fair share of these prescribed scripts in FitNesse. What, then, is stopping them from using this setup?

Are we automating for the sake of automating?

While working on this extremely flexible setup, with FitNesse and with Selenium WebDriver and White as the drivers, I have started wondering more and more why we are automating in this organization. The people responsible for testing do not seem to be picking up on the concept of test automation: they all state loudly that it is needed and that it is great that we are doing it, but when regression starts they immediately go back to manual checks. I say manual checks on purpose, since the majority of testing here is done fully scripted; most of these scripts do not leave anything to the tester’s imagination, resulting in these tests being checks rather than tests. Checks we could execute automatically, repeatedly and consistently with tools such as FitNesse.

How do you make testers aware that a lot of the scripted tests should not be done manually?

Let me be clear on this: I am a firm believer in both manual and automated testing. They both have their value and should be used together; automated testing is not here to take manual testing away, rather it is here to support the testers in their work. Automated testing should be complementary to manual testing. Thus far in this organization I have seen manual testing happening, and I have seen (and experienced) a lot of effort being put into writing out the automated tests in FitNesse. However, there has not been clear cooperation between the two, despite the people writing the automated tests being the same individuals who are also responsible for executing the manual tests (which they have rewritten in FitNesse in order to build automated tests).

We have tried coaching on the job, we have tried dojos, but alas, I still see a hell of a lot of manual checks happening instead of FitNesse doing these checks for them. What is it that makes people not realize the potential of an automation tool? Thus far I have come up with several possible causes:

  • In our test dojos we mainly focused on how to write tests in FitNesse rather than on what you can achieve with test automation. This has led me to the idea that we urgently need to organize another workshop or dojo in which the focus is on what the advantages of automated tests are.
  • Another reason could be that test automation was not initiated by this team; it was put upon the team as a responsibility. The team we are currently creating this fixture for is a typical end-of-the-line, bottom-of-the-testing-chain team: everything they get to test is thrown over a wall and left to them to see if it works appropriately. Most of them do not seem to have consciously chosen to be testers; instead they accidentally rolled into the software testing field. Some of them have adapted very well to this and clearly show affinity and aptitude for testing; others, however, would in my opinion be better off choosing a different occupation. It is exactly the latter group that currently needs to be pulling the test automation effort.

There are more reasons I could go into here, but I believe these two to be the main issues at hand that can actually be addressed.

So what will make people use automation tools properly?

The moment I can answer this one with a general rule of thumb, I will sell it to the highest bidder. Within this organization, however, there does not really seem to be a simple solution just yet. As I have written before, there is not yet one sole ambassador for test automation in this organisation. Even if there were, we would need to cause a shift in the general mindset of the testers. Rather than just walking through their predefined set of instructions in Excel, they need to ask themselves: what is already covered by the automated tests, and how can I supplement those tests with manual testing?

We will need to find a way to get the testers to step out of their comfort zone and learn how to utilize tools other than Excel and MS Word. Maybe organizing a testing competition will work: see who can cover the most tests in the shortest time and with the highest accuracy?

I am not a great believer in measuring things in testing, but maybe inventing some nice measurements will help the testers see the light. For example: how often can you test the same flow with different input in a certain timeframe?

Did we build shelf-ware or did we add value to the testing chain?

At the moment I often ask myself whether I am building shelf-ware or actually building a useful automation tool (I try to stay away from terms like framework, since that might only increase the distance between the tool and the testers). Whenever I play around with the FitNesse/WebDriver/White setup we currently have running, I see an incredibly versatile test automation tool which can be used to make life a lot easier for those who have to test the software regularly and repeatedly (not just testers; developers, product owners etc. can easily use this setup too).

It is completely environment agnostic; if needed we can (and in the past have) run the same tests we run in a test environment in production as well. It is easy to build new test cases, scripts or scenarios (I seem to have lost track of which would be the safe term to choose, they all have their own subconscious connotations) since it is a wiki. All tests are human readable; if you can read an Excel sheet, reading the tests in FitNesse with Slim, the way we built it, should be child’s play.
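
One common way to achieve that kind of environment independence in FitNesse (shown here in simplified, made-up form, not necessarily exactly how our setup does it) is a wiki variable that the fixture picks up, so the same test page can run against any environment:

    !define BASE_URL {https://test.example.org}

    !|script|search page|${BASE_URL}|
    |ensure|enter search term|laptop|
    |ensure|click search button|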

Despite all these great advantages, the people that should be using it are not.

Reading all this back makes me consider one more thing: we started off building this setup with these tools based on a request from higher management. The tool selection was done by the managers (or team leads if you will) and not by the team itself. Did we miss out on the one thing the IT industry has taught us? Did we build something we all want, but not what our customer wants and needs? I hope not; for one thing, I am quite sure this is what they need: an easy to use tool to automate all the tedious, repetitive check work.

The question that remains: is this what our customer, or to be more exact, our customer’s end user, the tester, wants?