All my automated tests are passing, what’s wrong?

The other day I had an interesting discussion with some developers in a scrum team I am part of. We were discussing the use of test automation within the team and how to deal with changing requirements, code and environments, all of which lead to failing tests.

They gave the clear impression they were very worried about tests failing. I asked them what would be wrong with tests failing in a regression set during sprints, which earned me a questioning look: why would a tester want tests to fail??
If anything, I would expect automated tests to fail, at least partially.

While automating in-sprint, I'm assuming things to be in a certain state. For example, I may assume that when I hit the search button, nothing happens. In the next sprint I really hope this test will break, meaning the search button now leads me to some form of results page. That way all tests, just like the rest of the code, are continuously in flux and get constantly updated and refactored.
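A minimal sketch of that idea, assuming a hypothetical page object (the class, method and URL here are invented for illustration; real code would drive a browser):

```python
class SearchPage:
    """Stand-in for a page object; a real one would wrap a WebDriver."""

    def __init__(self):
        self.url = "/search"

    def press_search(self):
        # Current sprint: the button is wired up but does nothing yet,
        # so pressing it leaves us on the same page.
        return self.url


def test_search_button_does_nothing_yet():
    # This pins down today's behaviour. Next sprint, when the button
    # starts navigating to a results page, this assertion should fail
    # and force the test to be updated along with the code.
    page = SearchPage()
    assert page.press_search() == "/search"
```

Once the change is actually planned, a framework like pytest even lets you mark such a test as an expected failure (`pytest.mark.xfail`) until the new behaviour lands.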

This of course applies to automating tests within a sprint; when automating for regression or end-to-end testing, however, I would rather expect my tests to pass, or at least the majority of regression tests to keep passing consistently.

5 thoughts on “All my automated tests are passing, what’s wrong?”

  1. Pingback: Five Blogs – 2 April 2012 « 5blogs
  2. The answer depends on whether the automated tests are expected to pass or not.

    If the AT is a throwaway or a work-in-progress test, then it is OK and even normal for it to fail.

    If the AT is a smoke/sanity/regression test or some test expected to be stable, the answer could be:
    – a) TRUE NEGATIVE: the test failed, AND the script is valid -> Found a defect (that’s good, ain’t it?)
    – b) FALSE NEGATIVE: the test failed, BECAUSE the script was not valid -> Reminder to update your AT scripts and rerun the test; until you do this, you don’t know whether the code is OK or not, but you’re working on it.
    – c) TRUE POSITIVE: the test passed AND the script is valid -> Rejoice!
    – c1) TRUE POSITIVE’: yes, the script passes, but you’re not really looking into the places (functional areas) where you could be catching regression defects… (you need to write more efficient tests!!!)
    – d) FALSE POSITIVE: the test passed BUT the script is invalid -> You should avoid this situation as much as possible… (and it probably won’t be easy)

    So, as long as we are in a), b) or c), it should be part of the normal workflow…
    Being in c1), if detected, might put Test Automation cost/benefits under debate, although it can redirect the efforts in a very positive way.
    However, d) is something that should be avoided with careful design of the test cases, a deep knowledge of both the scripts and the application under test, regular AT script reviews (at least on a “what’s it doing” non-technical level), or any other means you can come up with.

  3. Pingback: Test automation in Agile | Testing Experiences
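The four outcomes listed in the comment above can be sketched as a small decision table. This is an illustrative helper, not code from the post; the wording of the labels is my own paraphrase of cases a) through d):

```python
def classify(test_passed: bool, script_valid: bool) -> str:
    """Map a test result and script validity onto the a)-d) outcomes."""
    if not test_passed and script_valid:
        # a) the test failed and the script is valid: a real defect
        return "true negative: found a defect"
    if not test_passed and not script_valid:
        # b) the failure comes from a stale script, not the code
        return "false negative: update the script and rerun"
    if test_passed and script_valid:
        # c) all good (though c1 warns to check you test the right areas)
        return "true positive: rejoice, but review coverage"
    # d) passing on an invalid script is the dangerous case
    return "false positive: review the script carefully"


print(classify(False, True))   # true negative: found a defect
```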
