Automating SAP to create load from an RFC port

In my current assignment I am tasked with coordinating the testing of the integration of several retail systems, basically making them work together logically and effectively. Part of the work is oriented towards load and performance testing of these integrated systems.

The setup is as follows: the SAP Retail systems need to communicate with Locus WMS. Since the version of SAP currently running at the customer cannot deal with anything but IDocs, a message broker has been set up between SAP and Locus to translate the IDocs into XML and vice versa. The IDocs are served to the message broker via SAP’s default RFC port: the broker pulls the documents out of SAP, translates them and sends them off to Locus to be picked up and processed. This is a simplification of how it truly works, but it is only meant to help set the scene.

Generating IDoc load

In order to build up load from SAP in a structured, guided way, there are a few ideas of what can be done. My initial hope was to push IDocs from a load generator to the message broker. This would be the easiest way to control the flow of data towards the broker and thus the easiest way to make sure we are fully in control of how busy the broker is. Alas, when talking to the guys behind the broker interfaces it turned out that this method would not work for the setup used. The only way the broker would actually do something with the IDocs was if it could pull them from the SAP RFC port; pushing to the broker would not work, since the RFC receiving end of the broker is not listening, it is pulling.

Alternatively, sending data off into the message queue would fill up the MQ, but it would not help with getting the messages pushed through the broker, again due to the specific setup of the Enterprise Service Bus which contains the broker interfaces.

[Image: spike test load graph]
So alternatives needed to be found. One obvious alternative is to set up a transaction in SAP which generates a boat-load of IDocs and sends them to the RFC port in one big bulk. This would generate a spike, such as shown in the image, rather than a guided load. In other words, this is not what we want for this test either. It might be a useful step during a spike test, however the first tests to be completed are the normal, expected load tests.

The search for alternatives continued. At my customer, not a lot of automation tools were available, especially not for SAP systems. One tool, however, had been purchased a while ago and apparently is actively used: Winshuttle.

Winshuttle seems to be able to generate the required load, based on Excel input. The main issue with Winshuttle, however, was the lack of available licenses. There are two licenses available and both are single-use licenses. This meant I would have to find a way to hijack one of the PCs it was installed on, script my way through it and run the tests in a very timeboxed manner. In other words, not really a solution to the problem.

I then decided to look at this from a whole different point of view: what can I use to make SAP execute a bunch of transactions, that is freely available and flexible enough to also monitor what is happening on several sides of the message broker? The answer that came to me was not quite what I had expected: AutoIt.

[Image: SAP main screen, side by side]
Starting SAP from AutoIt was simple; running through the application and manipulating SAP, however, was a bit less intuitive.
In this screenshot two SAP screens are put side by side: the left-hand side is what the user interface in SAP looks like to the end user, the right-hand side is how AutoIt sees the screen, i.e. a big blob of nothingness.

[Image: AutoIt Window Info output for the SAP screen]

To be a bit more specific, here’s what AutoIt can tell us about the SAP toolbar:

In other words, AutoIt sees the entire toolbar as one object, with one exception: the edit box for transactions. This box has a very easy and intuitive name, Class: Edit, Instance: 1, which makes it easy to set focus on this box and thus to start the transaction used to upload files.

Since the main screen of SAP is a blind box to AutoIt, we had to resort to a very sloppy way of working, using the TAB key to navigate through the screen, resulting in code roughly looking like this:

Send("ZWBESTUPL{ENTER}")
 If WinWaitActive("Bestellingen (winkel) aanmaken vanuit CSV-bestand") Then
    _log('Successfully selected the Bestellingen aanmaken vanuit CSV-bestand transactie')
    Send("{TAB 3}")
    Send("{SPACE}")
    Send("+{TAB 3}")
 Else
    _log('Something went wrong. Could not get to the Bestellingen aanmaken vanuit CSV transactie')
    Exit
 EndIf

[Image: load graph]
The resulting load was a linear ramp-up of IDocs being generated and sent to the SAP RFC port, where they were picked up by the message broker and subsequently transformed and sent on to the Locus system, where the load turned out to be quite on the high side.

All in all this was a fun exercise in automating SAP to do something it is absolutely not meant to do, with a tool neither built nor designed to do what it did. In other words, it was wonderful to be able to abuse a bunch of tools and achieve a very clear and convincing result!

JMeter and Oracle WebCenter

I have started working on a project with lots of load & performance testing, which I really enjoy setting up and executing. However, one of the tasks is L&P on a new implementation of Oracle ADF WebCenter, which turned out to be somewhat of a headache. The tools I use for L&P are typically Apache JMeter, Fiddler2, BadBoy and possibly The Grinder, JCrawler or LoadUI.

For the testing of the Oracle WebCenter I picked JMeter. 

The site under test is an intranet with a sizeable community using it. Especially the start of the day has a tendency to give some heavy hits on the most dynamic pages of the site, so these pages were my focus area. Unfortunately these pages are all behind a login. Since the majority of the users use IE, this login is generally done via Single Sign-On, where WebCenter verifies the user based on the Windows credentials and grants access based on that.

Going through the SSO login is not possible with JMeter, first of all because JMeter is not a browser, it merely acts as a browser to a certain extent, and secondly because JMeter has no access to my Windows username, at least not as far as I know. So I got served a login page. Usually getting past a login page with JMeter is fairly easy; the login page I received, however, didn’t like my passing a username and password in the POST call via HTTP (intranet, so no SSL used obviously, because the intranet is “safe”, right?).

Oh well, let’s google how to get through to the WebCenter main pages. It’s Oracle so someone is bound to have written something sensible about it! Googling the terms “Oracle ADF Jmeter” gave me some nice hits. Titles such as “Configuring Apache JMeter specifically for Oracle’s ADF 11g” or even better, a post on the Oracle Blogs “New recordings on using JMeter to test ADF applications” including links to a demo video.

Some excerpts from these posts:

Sometime back I blogged about Stress & load testing web applications (even ADF & Apex) using Apache JMeter. That post dealt with the generic setup of recording a web session and then replaying under load via JMeter.

The demo video presents a somewhat delusional idea of recording and then playing back the load test: “… is a free tool for essentially recording any HTTP session … and then you can replay it …” (this is in the first seconds of the demo, so there is no need to watch the entire thing).

Why do I highlight these two things so much?

In order to answer this I want to first of all thank Chris Muir for his efforts, both on his personal blog and on the Oracle blogs, to show that JMeter can be used with the fuzzy logic that is the login of Oracle ADF. However, I strongly disagree with the approach of recording and playing things back, especially with an application containing a lot of “fuzzy configuration you must get exactly right otherwise ADF gets confused by the messed up HTTP requests it receives from JMeter”, as Chris states himself.

Since there is so much fuzziness, to stick to his terms, going on in ADF, it is a lot more sensible to try and understand what it is ADF is looking for. In order to do this one should not record and play back. When recording and playing something back you rely on the “magic” of software. You are not sure what you’re doing, let alone that you can make any clear assumption on what has been “tested”, if anything at all for that matter.

So, how do you go about learning how the ADF platform works without having developer access to the beast? You start dissecting it bit by bit. This is why I love doing load & performance testing: it is like solving a puzzle.

I started off by walking through the login flow in Firefox with Firebug switched on. Firebug integrates with Firefox to put a wealth of web development tools at your fingertips while you browse. You can edit, debug, and monitor CSS, HTML, and JavaScript live in any web page.
Firebug gave me the URLs I needed, so I started to get an understanding of the calls being made, but it didn’t yet provide me with sufficient input for getting through the full login flow. In order to get on with this I needed to go a bit deeper: I needed Fiddler.

Fiddler is a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect traffic, set breakpoints, and “fiddle” with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language. Fiddler is freeware and can debug traffic from virtually any application that supports a proxy, including Internet Explorer, Google Chrome, Apple Safari, Mozilla Firefox, Opera, and thousands more. You can also debug traffic from popular devices like Windows Phone, iPod/iPad, and others.

Passing all traffic through Fiddler, with a filter on the specific domain I was working on, made things even more insightful than they had been with Firebug. I suddenly noticed the HTTP headers being passed around in my browser differing from the ones in JMeter, so I adjusted JMeter to pass along the same information. The Header Manager now contains something like the example shown here, where one of the most important things to add is the Referer.

The fact that I made the rest of the header act fully as if it is a WebKit or Mozilla browser makes little to no difference; the main thing to do is ensure you add a Referer URL to the HTTP headers. This Referer needs to be set dynamically for every request. If you need to figure out how to do that, Nabble.com has an excellent explanation for it, but in short it comes down to this:

Thread Group
+ HTTP Header Manager (Referer = ${Referer})
  (set the initial value of the variable Referer to blank, or handle the first
  request differently, e.g. if you don't want a Referer at all)
+ Simple Controller
    Request 1
    Request 2
    Request 3
    ...
  + Regular Expression Extractor (as a child of the Simple Controller so that
    it applies to all requests)
      Name = Referer (same as the Header Manager variable)
      Choose URL in the radio buttons
      Expression = (.*)   Template = $1$ (i.e. we get the whole URL)

Get this part right and the rest of building your load test for Oracle ADF is going to be a lot easier. As stated before, the Oracle ADF stuff is not intuitive; getting the HTTP headers right, however, will help a lot in getting on the right path to logging in.
Besides the headers, some more information is, or at least can be, required to get through the login flow. For our configuration I had to deal with a total of 13 redirects which had to be rebuilt from scratch. During the first batch of redirects all kinds of information about the user session and the state of the user is gathered in a set of variables which need to be passed on, so be ready to add some nice RegExp extractors to URLs here and there.

These redirects should be executed by JMeter as explicit next steps, not via the Follow Redirects option, so please make sure you have that switched off. If you do not do this you will get into a state where ADF says the user session has expired, which is just ADF’s way of saying it doesn’t know who the current session belongs to, because the ADF HTTP state parameters JMeter is sending to the ADF server are not what it expected.
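To make explicit what those JMeter steps are doing, here is the same flow sketched in plain Python. This is purely illustrative and not part of the test set; the starting URL is made up and the parameter names _afrLoop and _adf.ctrl-state are just typical examples of ADF state parameters:

import re
import requests
from urllib.parse import urljoin

session = requests.Session()
url = "https://intranet.example.com/portal"   # hypothetical starting URL
state = {}                                    # ADF state parameters harvested along the way

for _ in range(15):                           # our configuration needed roughly 13 hops
    response = session.get(url, allow_redirects=False)
    if response.status_code not in (301, 302, 303, 307):
        break                                 # no more redirects, we have landed on a real page
    url = urljoin(url, response.headers["Location"])
    # pull the state tokens out of the redirect target so later requests can send them back
    for key, value in re.findall(r"[?&](_afrLoop|_adf\.ctrl-state)=([^&]+)", url):
        state[key] = value

In JMeter each of those hops becomes its own HTTP sampler with a Regular Expression Extractor attached, which is exactly why Follow Redirects has to stay off.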

It took some trial and error to get it working, but by building it fully by hand rather than by using record/playback (which is not a good option for any type of reusable automation in my view) I learned what ADF is expecting, and I now have a load & performance test set which I can leave with my customer without it being tied to a particular session, base URL or user. It is fully reusable and can be run by anyone who can fill in a username and password and then hit the Save and Run buttons.

The difficulty of course lies in the tail: interpreting the outcomes of the test. But that is inherent to load and performance testing!

Test automation on SAP, is it really that much different?

[Image: SAP logo]
This year I got to know SAP fairly intimately, looking at it and into it from a test automation perspective, inventorying the possibilities and opportunities of automated testing of a (huge) SAP implementation. During this time I ran into a fair number of SAP-related people, ranging from SAP consultants and sales people to ABAP developers, HP sales people and SAP preferred suppliers. They all make it seem as though SAP development and testing is a different world, with nothing to do with the “normal” software development world. In my view this is wrong: SAP is just software. Yes, it has a bunch of particularities which you do not get in so many other packages, but in terms of the actual functionality it is fairly comparable to Siebel and Oracle (no, I am NOT saying it’s the same, I am merely saying it is comparable). With neither Oracle nor Siebel does this almost religious separatism exist, yet they too are bound by the laws of business process models, transaction codes and what not. So how come SAP is seen as so special and the others are not? Is SAP special?

[Image: SAP TAO & HP Quality Center setup]
When you start talking about test automation and SAP, the first things that pop up are some SAP proprietary names such as CATT, eCATT and SAP TAO. Fortunately SAP themselves recommend against the use of either CATT or eCATT, so let’s dismiss these right here and now; they are tools that once were somewhat helpful but should now be considered redundant for most SAP implementations. SAP TAO however is of a different breed. SAP TAO is pushed by SAP as being the solution to use when trying to automate your testing. One minor issue with SAP TAO however is that it does not really automate anything on its own: you invariably need HP Quality Center (HPQC) and Quick Test Professional (QTP) with it. HP tooling has some tailor-made solutions to integrate well with SAP TAO and, more specifically, with the SAP Solution Manager. The setup as proposed in this picture is the ideal picture of how SAP would like to envision and implement a SAP testing solution. However, not all organisations have Solution Manager up and running for anything other than transports and low-level reporting, nor do all organisations have the budget for the HP tool set.

To work with SAP TAO effectively and efficiently, the Business Blueprint, the description of all business processes as used by the organisation with the SAP systems, should reside in the SAP Solution Manager. This blueprint should be maintained carefully and always be up to date. When changes to the system are made, either by updates to the system or by customizations in ABAP, these changes should be visible in the Solution Manager, ensuring the SAP Solution Manager Business Process Change Analyzer can identify which processes have changed and, based on this impact analysis, propose tests within HP Quality Center to be executed. With SAP TAO the testers can “automate” the tests, which effectively means recording the steps. SAP TAO then adds some secret sauce by cutting longer scripts up into maintainable and reusable chunks. These scripts are then sent from SAP TAO into HPQC, where they can be associated with functional test descriptions. When a tester now wants to run one of the automated tests, or for that matter the entire automated suite, HPQC is used again to trigger the scripts, which get executed with QTP. In other words, the actual test driver is QTP, not SAP TAO.
When starting up a SAP GUI instance and analyzing it with something like UISpy, or some other tool which can show the objects on a screen, the fields and buttons are barely visible and not really open to test automation. Yet it is possible. If SAP is configured to enable scripting, the UI objects become accessible and thus the GUI is scriptable with any tool of your choice. The moment this little flag has been set, a whole new world opens up in the GUI: it is all of a sudden open, the fields, screens and buttons all have an ID and can be hooked into by a driver of your choice. Effectively, what the enable-scripting setting does is ensure that none of the huge, expensive tools mentioned above are needed; it is possible to run through the application with any driver you want. The main thing needed in order to properly and solidly automate testing in SAP now is a well-grounded knowledge of the business processes the implementation is supporting (or driving). This is no different from what is needed when automating SAP with SAP TAO.

The benefits of having the option to choose your own drivers, your own programming language and your own reporting framework are huge. If SAP is merely in the organisation to support the business processes, and software developers within the organisation are writing their own code in Erlang, C++, C#, Java, Ruby, Python or whatever else you can imagine, the test suite for SAP can be in that same language. Having the automated test suite in a well-supported language, rather than just in QTP’s own VBScript, ensures a larger possible support base for the automated tests. It enables easy integration of home-built software with the SAP systems, since all tests can be built in one language and in an end-to-end setup, again supported by the organisation’s own development group.

The SAP TAO and HPQC setup does have some benefits of course. First of all, there is huge corporate support for both HP and SAP software products. But more importantly, there are some technical benefits of using SAP TAO, if the environment is set up properly. As mentioned above, there is this tool called the Business Process Change Analyzer, or BPCA, which can help extract transaction-based changes from a transport and help the tester decide, based on these changes, which test scenarios need to be run to effectively cover the business processes (or mainly the transactions associated both directly and indirectly with the transport). Next to that there is the benefit of using HPQC. I can hardly believe that I am saying this, since I am personally not a big fan of the HPQC suite, however the reporting possibilities and capabilities within HPQC are close to limitless. This means that it is possible to generate excellent reports, automatically, for both management-level execs and for the business analysts and ABAP specialists, on each test run, without having to think about it.

Having the full benefits of this setup however comes at a cost, a fairly sizable cost. The licensing for HPQC, QTP and SAP TAO is not to be ignored, for starters. A hidden cost lies within the organisation: as stated, for the BPCA to do anything, Solution Manager needs to be utilized fully, the Blueprint needs to be ready and up to date, and moreover, it needs to be well maintained to ensure it remains the “Single Source of Truth” (as SAP coined it).

So, to answer the initial question: Is SAP special? As a business process tool it is definitely special, strong and extremely versatile.
When looking at SAP as a system that requires testing and test automation, however, I am not convinced it is special. It’s just software, which is open for test automation with a range of drivers, one of which might be QTP. If you do indeed choose to go for QTP with a SAP system, have a look into SAP TAO. However, do not feel that it is the only tool out there which can effectively and efficiently be used for SAP test automation. All the others claiming they can, probably can, just as well as SAP TAO with QTP. In the end it is all about how you use and abuse a tool, and whether you use QTP, White or Panaya, they all merely function as a driver in the end; it is the code the testers build which matters!
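To give an idea of what “any driver of your choice” looks like in practice once scripting has been enabled (the sapgui/user_scripting profile parameter on the server plus the client-side option), here is a minimal sketch in Python driving the SAP GUI Scripting API over COM. It assumes a SAP Logon session is already open, and VA03 is only used as an example transaction:

import win32com.client

# Attach to the running SAP GUI; this only works when scripting is enabled
# on both the server and the client.
sap_gui = win32com.client.GetObject("SAPGUI")
application = sap_gui.GetScriptingEngine
connection = application.Children(0)      # first open connection
session = connection.Children(0)          # first session on that connection

# Type a transaction code into the OK code field and press Enter
session.findById("wnd[0]/tbar[0]/okcd").text = "VA03"
session.findById("wnd[0]").sendVKey(0)

# With scripting enabled every screen element has an ID, e.g. the main window title:
print(session.findById("wnd[0]").text)

The same handful of lines could just as well be written in C#, Java or VBScript; the point is that the driver is interchangeable.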

The difference for test automation between cutting edge and legacy software

Within one of the LinkedIn groups (sorry, you need to be a member of the “QA Automation Architect” group to be able to read it fully) we started talking about the difference the state of a project or product can make for test automation. In this post I will make a distinction between two states: new, where no code has been written yet, and existing, where application code has been written but no test automation has been implemented.

Cutting edge

So when creating a totally new product, life for the testers can be made easier by design, that at least is the thought. This does imply that testers, and not just the “manual” testers but all testers, including automation testers if these are a separate breed as some people seem to think, need to actively participate in the requirements phase of a product. With actively participating I do not mean to imply that they are normally not participating; I mean they need to look a bit further than just at what to test, whether it is testable, etc.

They should also use their insights and ideas to help both product owners and software developers understand which things might make life easier for testing this new product.

When, for example, building a new web application, they might consider adding a simple REST API to the application, which in production can be closed off based on IP or firewall rules or something like that. A simple REST API will make life a lot easier when creating your automated tests.
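As a rough sketch of why such a seam helps (the base URL and endpoints below are entirely made up), preparing test data then becomes a couple of HTTP calls instead of a long click path through the UI:

import requests

BASE = "https://test-env.example.com/api"   # hypothetical test environment

def create_test_order(sku: str, quantity: int) -> str:
    """Create an order through the (hypothetical) test API and return its id."""
    resp = requests.post(f"{BASE}/orders", json={"sku": sku, "quantity": quantity}, timeout=10)
    resp.raise_for_status()
    return resp.json()["id"]

def order_status(order_id: str) -> str:
    """Read back the order state so a UI test can assert against it."""
    resp = requests.get(f"{BASE}/orders/{order_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()["status"]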

Another thing to make life easier might be ensuring clear and logical naming conventions are used for all page objects, in order for the automation to use the Page Object Model. Not only are solid naming conventions good for automation, they also make maintenance on the application itself easier, since all objects are identifiable by their unique ID.
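A small sketch of what that buys you (the IDs here are invented; the point is the locator strategy): with stable, unique IDs a page object stays trivial and survives cosmetic UI changes.

from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page object that only knows about IDs, not about layout or labels."""

    def __init__(self, driver: webdriver.Chrome):
        self.driver = driver

    def login(self, username: str, password: str) -> None:
        self.driver.find_element(By.ID, "login-username").send_keys(username)
        self.driver.find_element(By.ID, "login-password").send_keys(password)
        self.driver.find_element(By.ID, "login-submit").click()

# usage: LoginPage(webdriver.Chrome()).login("tester", "secret")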

Legacy

How is existing code different from non-existent code, other than that one is already in production and the other has yet to be created? As far as test automation is concerned, especially when talking about legacy software, it may turn out to be a lot more difficult to find proper hooks into the application for solid automation, other than the labels of buttons or fields.

When you have a fairly recent application, be it a website or a desktop app, chances are there are some sorts of IDs for all objects. However, when talking about true legacy software, such as 15-year-old Delphi, it is quite unlikely the developers used WinForms, Win32 or SWT. Not having hooks like that into the application can result in having to scrape the UI for object labels, which is fine when testing one particular language, but if your software was localized things can get even more complicated.
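One way to keep that pain manageable, sketched below with invented labels, is to keep the visible labels out of the test logic and look them up per locale, so a localized build only changes one table instead of every script:

# Locale-keyed label map: the scraping code asks for a logical key,
# never for the literal on-screen text.
BUTTON_LABELS = {
    "en": {"save": "Save", "cancel": "Cancel"},
    "nl": {"save": "Opslaan", "cancel": "Annuleren"},
}

def label(locale: str, key: str) -> str:
    """Return the on-screen label the UI scraper should look for."""
    return BUTTON_LABELS[locale][key]

# e.g. some_ui_driver.click_button(label("nl", "save"))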

Getting consensus within the technology group about new software is one thing; getting a “non-functional”, non-business-related change through for existing software, however, is a whole different thing.

As long as the code is still “alive”, i.e. new features are still being added, bugs are being fixed and in general there are still developers working on the application, there is hope of getting some more “automatability” into the code.

First of all, while fixing bugs, old code is touched, adjusted and retested. This is always an opening to talk to the developers resolving the issue about adding a small bit of extra “sauce” to make it easier to add this particular thing to the automated testing suite, to ensure the chances of recurrence are minimized. Of course, by fixing the bug you hope to completely obliterate this particular issue, but it might cause new damage elsewhere in the application. So while talking to the developer about this function, try to convince him or her to add that bit of extra, not only to test the fix for this issue, but also to verify the surrounding features.

When new features are added, they can be treated as “new code”, as long as you manage to get agreement on adding identifiers or a separate layer in these features to make test automation at least easier. If you achieve this, you are quite close to closing the majority of the gap. Refactoring is an excellent opportunity to again make minor changes in the application enabling test automation at a different level.

How do you get “automatability” in your specs?

Assuming you want your product to be easy to automate, and thus want to make sure this is thought through, how do you get it into the specifications? And more importantly, how do you get it in there without adding things like:

  • unnecessary workload
  • unneeded and unwanted features
  • potential security holes
  • un-maintained code

[Image: enterprise architecture layers with a "hidden test automation layer"]
One of the ways to go about it is by, in collaboration with the developers, enforcing a coding standard which ensures all objects receive an ID. Regardless of whether it is desktop or web based, most automation tools are looking for a hook into the UI, if there is one, and one of the nicest ways of providing that is simply by using the ID.

Alternatively you can have a “layer” put right underneath the UI, ensuring you can bypass the cumbersome UI while automating your tests. One of the issues with this option, however, can be that you add “hidden” code which gets forgotten easily. It is also a potential risk for the security of your application, since you basically enable a man-in-the-middle hole.

If this path is taken, ensure that this “feature” does not end up being an opening for malicious code to reach your data. A relatively safe solution for this would be to put some (extra) form of authentication in the layer.
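Purely as a sketch of that idea (Flask is chosen arbitrarily here, and the endpoint, header and token are invented), such a layer could be a test-only endpoint that refuses to do anything without a shared token:

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
TEST_API_TOKEN = "rotate-me-regularly"   # assumption: injected via configuration, not hard-coded

@app.route("/internal/test/orders", methods=["POST"])
def create_order_for_test():
    # No token, no test hook: the layer stays closed to anyone without it.
    if request.headers.get("X-Test-Token") != TEST_API_TOKEN:
        abort(403)
    payload = request.get_json()
    # ... create the order through the service layer, bypassing the UI ...
    return jsonify({"id": "test-order-1", "sku": payload["sku"]}), 201

In production the route can be removed or blocked at the firewall, which at least partially addresses the forgotten-hidden-code worry.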

There probably are more options you can investigate; the two I mention above are fairly harmless and yet can make life in test automation a lot easier and more predictable.

In the end, no matter which way you go, as long as you get both developers and product owners on board in working towards a higher “automatability” of the code, life for you as a test engineer could become a lot more fun.

I am not very impressed with theological arguments whatever they may be used to support. Such arguments have often been found unsatisfactory in the past.

Alan  Turing

Test automation in Agile and why it fails

It’s fairly safe to say that quite a lot of test automation efforts fail. It is also very safe to say that without test automation an agile team fails. So how can you make sure that while doing agile your test automation will not fail and thus your agile team will not fail? One of the ways to answer this question is by looking at why test automation often fails within agile environments.

When I am talking about test automation within this post, I am referring to testing that is done to reduce the amount of manual regression work, the so-called functional test automation or automated regression testing.

Moving target

Test automation quite often does not receive the attention it needs and deserves, also in agile teams. Quite a few test automation efforts start off too late and without the appropriate preparation, resulting in organic test automation driven by a moving target. The moving target is the system under test, which, in agile, is constantly in flux. Each sprint new features are added, bugs are fixed and quite often it is not clear at the start of a project where it is going to end up. Writing automated scripts against such a flexible environment, scripts which will stand the test of time, is difficult. It is even more difficult when the base on which automation is done is weak.

Quite often test automation runs behind on what is being delivered within an iteration. This is somewhat logical, considering that it is difficult to test, let alone automatically test, what has not been built yet. Ideally, while manually testing the new feature(s) as a tester, you’re already pondering how to automate them so that you do not have to do the tedious work more than once. Given enough time within your iteration you might actually be able to automate some of the features; from what I have seen thus far, generally not all features will be covered by test automation within one iteration. So if these tests are not all automated, what happens to them in the next iteration? Are they omitted? Are they picked up and automated retrospectively?

If you do not keep track of what has been automated during an iteration for both your current iteration and your previous iteration, how can you rely on your test automation? You can’t be sure what exactly it is going through, so a bug can easily get through the net of your automated tests.

This moving target you are testing needs to be traced and tested solidly, repeatedly and in a trustworthy way!

Definition of Done

In the majority of the DoDs I have seen, one of the items is something referring to “tests automated”. The thing I have thus far not seen, however, is the team adding as much value to the automation code as they do to the production code. Quite a lot of DoDs refer to certain coding standards, yet these standards often seem not to apply to functional test automation. Isn’t your functional automation code also just code? If so, why should it not be covered in code reviews, be written according to some useful guidelines and standards and hopefully use a framework to make the code sustainable?

Test automation is just writing code

I have seen several automation efforts within agile teams where test automation was done without proper thought having been put into it. A tool was chosen, based on little more than members of the team having heard of it or having had good experiences with it. No base or framework to keep the code clean was chosen. Since you are writing code, you should follow the same rules as the rest of the software developers. Don’t think that your code, just because it is merely tests, should not be hooked up to some form of framework. If you want your tests to survive a few iterations, considering reuse of your code would be logical.

By the way, coding standards do not need to be too complicated. In 2009 “Agile in a flash” came up with a coding standard that could work for all languages and for most environments:

[Image: Coding Standards card - agileinaflash.blogspot.com]

All of the above-mentioned points are “logical” when writing an application which is supposed to go into production. However, when looking at a lot of (agile) projects, these logical “best practices” seem to be totally forgotten when it comes to test automation.

Succeed in test automation

So, how do you succeed in your test automation? How do you make it work? The answer seems clear to me: test automation is not like writing code, it is equal to writing code. Since it is the same, treat it the same way!

Do your code reviews, follow some form of standard, use a (simple) framework to make life easier when writing tests, and create reusable modules in your automation code. In other words, treat your functional test automation with the same respect as your production-grade code. Who knows, you might want to run your tests against your production environment some day! In setting up your initial test automation environment and framework, don’t be shy and ask the developers in your team for tips, tricks and suggestions. They quite likely have gone through those setup steps more often than you have, so use their knowledge. Asking them for their insights and ideas not only helps you, it also helps them feel more responsible for contributing their five pennies’ worth on the test automation side. They will get a clearer idea of what you intend to achieve, so they might also be more willing to help keep their code testable; they might even enjoy helping you write the test scripts!
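For example, something as small as a shared login fixture already puts the “reusable modules” idea into practice (a sketch with a made-up base URL and credentials, using pytest):

import pytest
import requests

BASE = "https://test-env.example.com"   # hypothetical system under test

@pytest.fixture
def logged_in_session():
    """Reusable login: every test gets an authenticated session, no copy-paste."""
    session = requests.Session()
    session.post(f"{BASE}/login", data={"user": "testuser", "password": "secret"})
    yield session
    session.get(f"{BASE}/logout")

def test_dashboard_loads(logged_in_session):
    assert logged_in_session.get(f"{BASE}/dashboard").status_code == 200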

Resources

Some informational resources where you can find some ideas on how to set up the test automation framework:

How do you test for SEO?

Where I mention SEO and search engine optimization in this post, I am referring to the optimization of a website for natural search, i.e. without paying to show up high in the search result lists.

While on holiday I spent some time talking to a local entrepreneur. He makes his money through his own website, and we got to talking about his website and about the translations of his site in particular. Since this site was translated from Greek into English, French, German, Italian and Russian, I had a quick hunch that his meta keywords would not be in order for all the separate languages (which, by the way, does not apply to all of his translations).

When asked to test a site specifically for SEO, what are the things to look at? As I mentioned above, there are a few tell-tale signs when you start your testing, especially when the site has been translated:

  • lang – this should be set to the actual language of the page you are testing
  • meta-keywords – these should be in the same language as the lang set in the header
  • meta-description – this should be in the same language as the lang set in the header
  • Alt text for images – these should be in the same language as the lang set in the header
  • page-specific URLs – these should be in the same language as the lang set in the header

Please note that this is just a sub-set of what needs to be looked at when testing a site for SEO.

Based on the aforementioned website I will give some examples of what to look for when testing for search engine optimization.

HEADER

Looking at the header of the Russian version I indeed saw exactly what I assumed I would see:

<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="ru-ru" lang="ru-ru" >
<head>
<base href="http://www.corfu-villa.gr/ru.html" />
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<meta name="robots" content="index, follow" />
<meta name="keywords" content="corfu villa, villa corfu, seaside villa corfu, pool villa corfu, villa with sunset view corfu, villa rental corfu, corfu villa rentals" />
<meta name="description" content="Corfu villa. Two elegant seaside pool villas located in Chalikounas Corfu with amazing panoramic sunset view. See photos check 2010 availability and book online with our paypal secure system." />
<meta name="generator" content="" />
<title>Корфу Вилла | бассейн вилла с потрясающим видом на закат | villa.gr Корфу</title>

If you glance at this quickly and are not sure what to look for, it looks fine; from a search engine point of view, however, this header is a bit of a drama.

The language is set to Russian in the first line and the title is in Russian, in the Cyrillic alphabet. The SEO issue, however, sits between the language declaration and the title: the keywords and description are in English. When a Russian is trying to find a “seaside villa in Corfu” he will probably not use the English words for it; instead the keywords used will quite likely be “вилла на Корфу с видом на море“.
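A mismatch like this is also easy to check automatically. A rough sketch of such a check, only covering the Russian case and using nothing but the page source, could look like this:

import re
import requests

def keywords_match_declared_language(url: str) -> bool:
    """If the page declares lang='ru', the meta keywords should contain Cyrillic."""
    html = requests.get(url, timeout=10).text
    lang = re.search(r'lang="([a-z]{2})', html)
    keywords = re.search(r'<meta name="keywords" content="([^"]*)"', html)
    if not lang or not keywords:
        return False
    if lang.group(1) == "ru":
        return bool(re.search(r"[\u0400-\u04FF]", keywords.group(1)))
    return True  # only the Russian case is handled in this sketch

print(keywords_match_declared_language("http://www.corfu-villa.gr/ru.html"))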

Just for fun, here are the result pages for the two searches: the English search on Google.ru and the Russian search on Google.ru. On the Cyrillic search, the first page doesn’t have any links to our test site. On the English search, however, the site is the first to surface, underneath the paid links. The problem I see with that result page, though, is that the URL we get back is the main URL rather than the Russian URL.

The header of the homepage of the site is just the beginning of testing it for search engine optimization.

IMAGES

This being a site aimed at renting out a villa with amazing views in a fairly decadent location, it is quite visually driven. As a tester you might not pay too much attention to the images, however when testing the search engine optimization, the images should be looked at as well.

Sticking to the example of the Russian version of this site I grabbed another piece of the source code:

<div title="Corfu Villa Boxes" id="boxes">
<div title="Corfu Villa Gallery. Click to view more photos" id="left_box">
<div class="module">
<table border="0" cellspacing="0" cellpadding="0" width="100%">
<tbody>
<tr>
<td><a href="/ru/gallery.html"><img src="/templates/corfuvilla/images/gallery.jpg" border="0" alt="Corfu Villa Gallery" /></a></td>
</tr>
</tbody>
</table>
</div>
</div>
</div>

Within this snippet you’ll notice again an issue similar to that in the header meta tags. The page is supposed to be in Russian, yet the title of the div is in English. The alt tag of the image is also in English. Both of these are supposed to be in Russian in order for this page to be properly indexed by the search engines in that language. If you really want to make an effort, the image name should also be in Russian.

The use of the images themselves on the rendered HTML page already gave an indication that SEO and translation were not well thought through, or at least not fully implemented.

[Image: Corfu Villa gallery image with English text]

The text on the image should of course also have been translated. When, as a tester, you see mistakes like this on a website, this should quickly give you an idea that the SEO has not been done properly, nor quite likely will the translation of the site have been done properly. Ideally you would want the text on this image to be configurable in the CMS and be attached to the language the page is in.

Of course there are more things you have to look at when testing for SEO, however I will stop here for now.

Why test “by the book”

The other day I was reading a blog post on agile and why agile will fail in many instances. One of the comments got my specific attention; it states the following:

“a process with little Agility due to the remains of the ‘old process’.”

This is why by-the-book scrum is so powerful. Too many agile consultants try to fit agile into the existing org structure and processes, thereby allowing existing dysfunction to remain, or worse, covering it up. They try to modify everything right out of the gate, instead of just choosing scrum.

This got me thinking about why so many methodologies seem to get followers who treat “their” methodology as a religion. I have been pondering about the different development and testing methods, such as XP, Scrum, Lean for development life cycles and ISTQB, TMap and such for testing in particular.

In religions it is generally considered bad to be extreme in following the rules, hence the term extremist; whether it is an Orthodox, Catholic, Jewish or Islamic extremist, they are always considered to be dangerous to society. Isn’t it the same in software development and testing? Aren’t the people who go to extremes to follow the rules as dreamed up by some author also extremists who seem to lose sight of the context?

Thus far the best implementation of any methodology I have seen, is a form of hybrid, or as the Dutch would call it a “polder model”, where you make a compromise between “the book” and “what actually works for us as a team or organization”.

Are methodologies best practices, then? Aren’t methodologies meant to help people get a frame of reference and fill that in for themselves, by thinking about the frame of reference, criticizing it and adjusting it to their needs? Shaping the method in such a way that it works optimally for you, in this situation, in this particular context. When moving on to a new task or assignment, you can take these learnings with you, see which of them work for you within the new context and adjust whatever doesn’t.

Maybe it is time for a new methodology, which ties in well with a solid development method by Zed A. Shaw. A possible working name could be “Testing, fuckwit!”.