Why test “by the book”

The other day I was reading a blog post on agile and why agile will fail in many instances. One of the comments caught my attention in particular; it stated the following:

“a process with little Agility due to the remains of the ‘old process’.”

This is why by-the-book scrum is so powerful. Too many agile consultants try to fit agile into the existing org structure and processes, thereby allowing existing dysfunction to remain, or worse, covering it up. They try to modify everything right out of the gate, instead of just choosing scrum.

This got me thinking about why so many methodologies seem to attract followers who treat “their” methodology as a religion. I have been pondering the different development and testing methods: XP, Scrum and Lean for the development life cycle, and ISTQB, TMap and the like for testing in particular.

In religion it is generally considered bad to be extreme in following the rules, hence the term extremist; whether Orthodox, Catholic, Jewish or Islamic, extremists are always considered dangerous to society. Isn’t it the same in software development and testing? Aren’t the people who go to extremes in following the rules dreamed up by some author also extremists, losing sight of the context they work in?

Thus far the best implementation of any methodology I have seen is some form of hybrid, or as the Dutch would call it a “polder model”: a compromise between “the book” and “what actually works for us as a team or organization”.

Are methodologies best practices, then? Aren’t methodologies meant to give people a frame of reference, which they fill in for themselves by thinking about it, criticizing it and adjusting it to their needs? You shape the method so that it works optimally for you, in this situation, in this particular context. When moving on to a new task or assignment, you take these learnings with you, see which of them work within the new context, and adjust whatever doesn’t.

Maybe it is time for a new methodology, which ties in well with a solid development method by Zed A. Shaw. A possible working name could be “Testing, fuckwit!”.

Throw-away test automation

I quite often tell clients that their approach to test automation is not sustainable. This got me thinking: does test automation always have to be sustainable and reusable?

This all depends, I guess, on the goal you are trying to meet. If your goal is long-term cost efficiency, shortening the timelines of regression testing and, through that, getting to a more rapid release cycle, then yes, you will need to focus on the reusability of your automation suite.

However, there are plenty of instances where you want to automate something to make life easier right here, right now. Most testers, I hope, know the feeling of having to go through tedious, repetitive work: setting up data for a test, going through the login flow of an application to get to the feature you want to test, and so on. For actions like that, automation works very well. In fact, you can quite often use the simplest form of automation, record/playback, without having to adjust your scripts for maintainability or reusability.

Tools like Selenium IDE and AutoIt are excellent when you need to quickly automate something to make life easier right here, right now. Funnily enough, a lot of testers do not seem to realize these tools can be used that way. When talking with colleagues about test automation, they quite often think of big test automation packages such as QTP or Rational Robot, and sometimes they ask how much you need to know about software development and writing code to automate things. In most of these conversations I let myself get sucked into the tool talk and indeed discuss the difficulties of setting up a test automation framework.

In future conversations I am going to try to explain to my colleagues and fellow testers that automation does not need to be a big operation; it doesn’t need to be reusable and maintainable, depending, at least, on your goals. As long as your goal is to make life easy here and now, there is no need to build something awesome.

For a lot of things, a simple script, either hand-written or simply recorded, can be more than enough to reach your goal. When you are done with your task you can throw it away, but preferably be a bit smart about it and dump it somewhere in a repository: you might have to do this task again.
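As a minimal sketch of what such a throw-away script could look like (assuming Python with Selenium WebDriver; the URL, credentials and selectors are made up for the example), something like this is enough to skip the login flow and land on the feature you actually want to poke at:

# throwaway_login.py - a disposable script that gets me past the login
# flow and onto the page I actually want to test by hand.
# The URL, credentials and selectors below are hypothetical examples.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://app.example.com/login")

# Fill in the login form and submit.
driver.find_element(By.NAME, "username").send_keys("test.user")
driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

# Jump straight to the feature under test; from here I continue manually.
driver.get("https://app.example.com/reports/new")

# No page objects, no waits framework, no assertions: this script only
# needs to save me a few minutes, today.
input("Done testing? Press Enter to close the browser...")
driver.quit()

If the login form changes next month, I will simply record or write it again; that is the whole point of a throw-away script.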

Is testing the dumping grounds of IT?

The other day I was talking to a few developers on an assignment about getting testers added to their scrum team, and their response disturbed me. They told me that in their experience most testers do not work together with the team; they work against development, trying to get everything fully tested, even though they know that is not feasible, and thereby delay projects. On top of that, they told me most of the testers they have worked with are part of the dumping grounds of the IT industry. By that they meant that in their view most testers are not good enough to be developers, so they decided to become testers instead (<sarcasm>because, come on, testing is not that difficult, anyone can do that!</sarcasm>).

I was shocked to hear there are still a lot of developers out there who believe that testers are the dumping grounds of the IT industry, but I was even more shocked by their experiences with testers working in an “us versus them” modus operandi instead of working in a team, as part of a joint effort with a shared focus and goal.

What is it that still makes testers often work against developers instead of with them?

Most testers I have worked with over the past years agree that working side by side with development is the most effective and efficient way of working. This way you both keep track of your joint goal: getting the software out on time, on budget and according to what your customer (or end user, for that matter) wants and needs. Together you try to add value to the software.

So is it true that there are still a lot of testers out in the field who do not see the big picture and try to prove their worth by working against dev, looking for bugs that are not relevant, i.e. looking for bugs just for the sake of finding one, no matter what the value of that bug is to the end user or customer, so they can triumphantly point out to a developer: “See! There are bugs in your code, you did it wrong!”? Unfortunately I fear there are still too many testers out there who think and work this way, not to mention all the developers who do not seem to understand the added value a good tester brings to the team and to the developers’ work.

Fortunately there is a wonderful contrast out there as well, in the form of this blog post by Nathan Lusher, who shows that there are indeed good testers out there who weigh in on a project, prove the value of testing and with that show that testers are not (or at least not everywhere) the dumping grounds of IT.

In my experience there are a lot of very good, inspired and knowledgeable testers out there who see the added value of working together in a team with a shared goal, a shared approach and shared respect. If testers want to earn the respect of developers, I believe it is up to quite a lot of testers to start by showing respect to the developers and, where needed, increasing their technical knowledge so they can counterbalance a developer’s viewpoint. You get what you give!

One year as a test consultant – a retrospective

Throughout my career I have mostly worked in-house for a “software house”, by which I mean an organisation that builds software as the core of its business model. Even during my time at Finalist IT Group and Quantiq X-Media, when working for external customers, and in Finalist’s case on site at the customer, I always worked for organisations that create software to sell, or that sell the services of developers.

When I left Spil Games I decided I wanted a change in my career, or better said, I wanted to try a different side of the business. I had spent the better part of the previous four years managing people and processes, and enjoyed it a lot, but now it was time to take my skills to a different level and make sure that enjoying myself as a software tester was just as much in scope as the managing bit. I really wanted to get back to what I like most: software testing, setting up a testing process, showing developers how things can be better when continuous testing is going on; in short, “finding solutions by executing and not just managing”.

For the last year I have worked as a test consultant at Polteq Test Services. In this year I have touched a range of things I am extremely passionate about: setting up test automation, helping testers improve themselves and the product they work on, reviewing a book, using my network to help companies deliver better products by having them tested, helping write commercial offers for potential Polteq customers, and probably more I don’t even remember.

So far I have quite enjoyed the variation in the work, and I enjoy being on site with customers. The one thing I truly miss, though, is the direct interaction with my own colleagues. When working fully in-house there are always steady colleagues who share your thoughts and worries about the employer, the atmosphere and so on. In consultancy, however, you quite often do not have your own colleagues on site; you are mostly working with the customer. So now and again, when you want to vent frustrations, whether about work, traffic or your customer, it is not always easy to do that on site.

A side effect of working for a company specialized in software testing is that I am a lot more involved in the “community” and the development of the trade. My Twitter stream is a lot more active, I have started blogging about my work, and I try to stay in touch with communities and groups on LinkedIn and, of course, on Software Testing Club.

An extra personal effect of no longer managing people is that I am generally a lot more relaxed at home; I have learned to leave my work behind me and not (well, OK, hardly) take it home with me.

Overall, looking back on this year, I can say that I have enjoyed my new position as a consultant. To be honest my expectations were quite high, and I enjoyed it even more than expected. So far it has turned out to be beneficial for both my professional and personal life. I have used my skills and capabilities in a totally out-of-the-box way, discovered new talents and potential, and tried out quite a few new activities (a book, big presentations, creating a whole new concept/theory, etc.). At the moment I consider this a very good step for my career, as this kind of job keeps me motivated and inspired.

All my automated tests are passing, what’s wrong?

The other day I had an interesting discussion with some developers in a scrum team I am part of. We were discussing the use of test automation within the team and how to deal with changing requirements, code and environments, which would lead to failing tests.

They gave the clear impression they were very worried about tests failing. I asked them what would be wrong with tests failing in a regression set during sprints, which made them look at me with a question on their faces: why would a tester want tests to fail? If anything, I would expect automated tests to fail, at least partially.

While automating in-sprint, I assume things to be in a certain state. For example, I assume that when I hit the search button nothing happens yet; in the next sprint I really hope that test will break, meaning the search button now leads me to some form of results page. That way all tests are, just like the rest of the code, continuously in flux and are constantly updated and refactored.
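As a rough sketch of that idea (assuming pytest and Selenium; the URL and element id are hypothetical), an in-sprint check can pin down today’s behaviour and carry its own reminder that it is supposed to break:

# test_search_button.py - in-sprint check, written to be rewritten.
# The URL and element id below are hypothetical examples.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

def test_search_button_does_nothing_yet(driver):
    """Sprint N: clicking search is expected to leave you on the same page.

    In sprint N+1, when search starts returning results, this test should
    fail, which is the signal to update it to assert on the results page.
    """
    driver.get("https://app.example.com/")
    before = driver.current_url

    driver.find_element(By.ID, "search-button").click()

    # Today's (temporary) expectation: nothing happens yet.
    assert driver.current_url == before

When the search feature lands and this test goes red, that is not a defect report; it is the prompt to refactor the test along with the code.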

This of course fully applies to automating tests within a sprint. When automating for regression or end-to-end testing, however, I would rather expect my tests to pass, or at least the majority of the regression tests should keep passing consistently.