How I, unintentionally, social engineered information out of a bank employee

I had a wonderful banking experience this week.
I was trying to move some money from a savings account to a current account; the savings account had plenty of money in it and the current account was sitting at zero.

On my first attempt, over the weekend, there was no direct feedback from the system, so I guessed it would take till Monday for the transaction to be processed, because banks are closed over the weekend (yes, that is how things work in Holland more often than you’d expect).
On Monday I checked the account: no money there. Looking into the transaction, I found out it had been refused by the system without any indication why.
So I decided to try it again, without waiting for the feedback from the system, because it was now after office hours (yes, I make really odd assumptions when banking; unfortunately experience has taught me that more often than not these assumptions are correct).

The next morning I checked the account: still no money in it. That’s odd, I thought… So I checked the transaction and yet again it had been refused without a reason, just some inexplicable error code, which, as it turned out later on, meant nothing to the helpdesk either.

Being a persistent person I decided to try it once more, this time during office hours so that I could see the immediate feedback from the bank and of course could call them.

No surprise: the transaction was once again refused.

At this point I called the bank’s helpdesk.

I explained what was going on, namely that I was trying to transfer money from a savings account to my current account, both within this bank. To my big surprise, the only identification I needed to give the helpdesk employee was the bank account number I was transferring from; there was no extra verification of who I was or whether I could prove that I should indeed have access to these details. Instead she asked me which of the current accounts I was trying to transfer to, reading the account numbers out to me and telling me what name each was registered under, so that she could look up the transaction and see what was going on.

Ever since reading Kevin Mitnick’s first book, “The Art of Deception”, I have thought that social engineering could not be as easy as he makes it seem. Turns out it truly is. I unintentionally social engineered the account numbers and attached names out of a helpdesk employee of a bank. With these I could quite easily have started playing around on the internet and cleaned out the accounts with purchases based on direct debit transactions.

The other odd thing struck me when I was told why the transactions were being refused: we had apparently never returned the original, signed contract for this savings account. I of course requested that a new copy of the contract be sent to me.

When it arrived I decided, for a change, to read the contract and the small print on it. The small print contains a short section about possibly incorrect information on the contract, such as counter-account numbers or names, and how you can correct these. All you have to do is write the correct details on the contract, sign it and send it back.

So it turns out you can open a savings account, attach it to a current account and put money into the savings account from one of the attached accounts, all without ever signing the contract. And when you finally do sign the contract, you can change the details of the attached accounts to something else simply by writing the new information on the contract and sending it back to the bank. All by snail mail.

On receiving the new details the bank will make the appropriate adjustments and send a confirmation about this to the account holder, again by snail mail.

This sounds like a very slippery slope for a bank to me. I was first of all shocked to see that my details were given away so easily, without verification, and secondly that if someone wants to, they can hijack an account relatively easily…

I did tell the bank that their verification of callers should be changed; I am curious to see whether they actually change it…

SSL and other frustrations in test automation lead me to new insights

For the last two days we have been preparing some automated tests which will be used in production during maintenance.

Effectively these tests will verify whether all the web and application servers are back up and connected to all the services in the backend systems. During this maintenance, servers will be pulled out of the overall production pool and upgraded, one by one or in batches, to a newer version of something or other; what exactly is not relevant in the context of our tests.
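
The real tests go further than a ping, logging in and touching backend services through the UI, but the basic shape of the per-server check is simple. A minimal sketch in plain Java; the hostname and health path are invented for the example:

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch of a per-server availability check: request a page directly
// from one web server and verify it answers with HTTP 200. The hostname and
// path are invented for the example.
public class ServerUpCheck {

    public static boolean isUp(String serverUrl) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(serverUrl).openConnection();
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false; // unreachable counts as "not back up yet"
        }
    }

    public static void main(String[] args) {
        // Hypothetical individual web server, addressed behind the load balancer.
        System.out.println(isUp("http://webserver-01.internal.example/health"));
    }
}
```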

Our client, being a bank, has all parts of its sites reachable only via HTTPS and blocked by firewalls. In order to reach all the individual web servers directly, we have to circumvent the load balancers.

In our test automation setup we had made the assumption that all URLs are served over SSL. In preparing these production tests we found out that this is not always true, for example when bypassing the load balancers and firewalls and running the automated tests directly against the individual web servers. So we had to come up with a new feature in our automation suite: toggling the use of SSL. This is all nice, but it does create a bit of an issue while testing the new suite. Normally, when I am creating something to be run on production, I try to wait as long as possible before actually running it there; for example, I debug my tests on a test environment rather than on production. In this case, however, I have had to debug the tests on production, behind the firewalls and SSL layers, since the test environments all require SSL in order to be reached.
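
To give an idea of what that toggle amounts to: a minimal sketch, assuming a small Java helper behind our FitNesse fixtures; the class and method names are invented for illustration, not taken from the actual suite.

```java
// Minimal sketch of the SSL toggle: one place where test URLs are built, with
// the scheme switchable per run. Class and method names are invented.
public class TargetUrlBuilder {

    private boolean useSsl = true; // default: everything over HTTPS

    // FitNesse can flip this, e.g. when running directly against an
    // individual web server that is not reachable over SSL.
    public void setUseSsl(boolean useSsl) {
        this.useSsl = useSsl;
    }

    // Build the URL for a given host and path with the chosen scheme.
    public String urlFor(String host, String path) {
        return (useSsl ? "https" : "http") + "://" + host + path;
    }
}
```

With something like this in place, the same test pages can target the public HTTPS entry point or a single web server over plain HTTP, without duplicating any tests.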

Another typical headache I ran into is that some people deem it necessary to use hard (absolute) links rather than relative links. The objective of one of the tests was to log in to a certain account and navigate through the site, like a user would, to a different part of it. The actions would guide us from a new part to an older part of the site, in other words crossing from the newer ASPX pages into the older ASP ones. While trying this we found out that the new ASPX parts use hard links rather than relative links, kicking us out from between firewall and web server and back onto the load balancer, which killed our session and broke the tests.

We were asked to build the tests specifically so that the navigation flow moves from the new code into the old code, since end users seem to have issues with this transition every now and again, and there were hopes that this test could surface the inconsistent issues users see in production, since it has not been possible to reproduce them in the test environments.

All of this made me think more about the different strategies one needs when testing the full chain of a product as it moves from a development environment to a test environment and then on into production. How valid are the automated tests we have run in test, and can we compare them to results coming out of production? Can we run the same tests in production as we do in test? Should the test environment be updated to represent production even more closely? Should the binaries deployed in test be 100% identical to the ones in production?

I would like to think that the tests we have done in the test environment are indeed valid, and judging by the fact that most of these tests run successfully on production outside the firewall, i.e. the way our end users reach all functionality, I do believe they are. It does show clearly, however, that while testing we do miss some things, such as hard links rather than relative links, which may result in the loss of a session, for example. The question is how we can catch these links earlier on, rather than in production.
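
One way we might catch them earlier is a simple WebDriver sweep that walks a rendered page and reports every anchor resolving to a different host than the server under test; when the browser is pointed straight at an individual web server, any such link is hardcoded. A rough sketch, with the starting URL and hostnames invented for the example:

```java
import java.net.URL;
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

// Rough sketch: walk a rendered page and report every anchor that resolves to
// a different host than the server under test. Relative links resolve to the
// current host, so anything pointing elsewhere is hardcoded and would throw
// the session back onto the load balancer.
public class HardLinkChecker {

    public static void main(String[] args) throws Exception {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://webserver-07.internal.example/start.aspx"); // hypothetical
            String expectedHost = new URL(driver.getCurrentUrl()).getHost();

            List<WebElement> anchors = driver.findElements(By.tagName("a"));
            for (WebElement anchor : anchors) {
                String href = anchor.getAttribute("href"); // already resolved to absolute form
                if (href == null || !href.startsWith("http")) {
                    continue; // skip empty, javascript: and mailto: links
                }
                if (!expectedHost.equals(new URL(href).getHost())) {
                    System.out.println("Hardcoded link leaving our server: " + href);
                }
            }
        } finally {
            driver.quit();
        }
    }
}
```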

There is still quite some work to be done before we can safely say we have it all covered. Building this fairly simple test suite in FitNesse and WebDriver has shown me some new things to consider while testing applications: where is the SSL layer placed? How can I verify whether a URL is built relative or hard, and should we always want to verify this? How do I play around with hostnames, and what can that bring me? In this case it brought me the exciting world of hardcoded URLs to parts of the site, and the insight that the binaries used in the test environment are built differently from the ones going into production.

All in all it was just another day in a tester’s paradise!

What did I get out of today’s testing dojo

It’s funny to see how difficult it is to get a group of people who work with one another daily to talk freely and share their ideas, even when their manager is not present and they are amongst their peers.

During today’s testing dojo, which again was supposed to last an entire day focusing fully on working with FitNesse, we started off with a talk about what we aim to achieve with test automation at our customer’s. I tried to enthuse the group by pushing them to think about the possible difference between “test automation” and “computer aided testing”: if there is a difference, what does the one mean and what does the other? From there I hoped to gain insight into what they think we should aim to achieve, and of course into whether or not their ideas make sense to us, as the leads on implementing test automation.

A real discussion on this unfortunately never took flight; what is more, the two people we have been working with most closely on the implementation remained the most silent of all. I am still not sure what caused their silence: natural shyness, cultural pressure, or something else. Instead I ended up pulling some keywords out of the group and discussing my thoughts on them. Not too bad either, but I do not believe I should have been the one talking this much about the subject.

The second part where I hoped to create a bit of discussion was on what the group believes to be good practices in test automation. This also took some pains on my side, along with some poking, probing and planting of the occasional seed, but some discussion did arise. After a while one of them remarked that, in the end, everything that can be considered a good or best practice in test automation also holds for manual functional testing.

This insight led me nicely back to clarifying the first point, what we are aiming to do: are we trying to remove manual testing altogether, or trying to create more free space and time to enable them to do more and different manual testing? I do believe I got the picture across that we are not trying to take away manual testing, but rather trying to help them remove repetitive work, since repetitive testing of the same items and the same or similar functionality is quite likely to create a form of feature blindness.

The term feature blindness seemed to be a new concept for a big part of the group; however, I managed to explain the concept fairly easily by example.

In the end the morning session was not exactly what I had hoped it would be, but it clearly did get the points I wanted to make across, which were: think of what you want to test; try to describe for yourself why you want to automate something and then read it back in order to figure out whether it indeed still makes sense to automate it. Try to keep your tests small, self-contained and reusable. Refactor your FitNesse tests into reusable scenarios, but also keep an eye out for over-complicating things by making everything a scenario, i.e. do not make a scenario for the sake of making it; only create one if you indeed have several identical tests which need different input data. And, most important of all as far as I am concerned in functional test automation: Keep It Simple, Stupid. Even fancy stuff you should be able to keep simple, readable and brief. If you fail at that on a first attempt, don’t worry; move on and come back at a later stage to refactor your test.
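
To make the scenario point a bit more tangible: a minimal sketch, assuming a Selenium WebDriver based SLIM fixture in Java, of the kind of reusable login piece a FitNesse scenario could drive. The class name, element ids and URL are invented for illustration, not taken from our actual suite.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Minimal sketch of the reusable piece behind a login scenario: one small,
// self-contained action that every test needing a logged-in user can call
// with its own data, instead of repeating the same steps everywhere.
public class LoginFixture {

    private final WebDriver driver;

    public LoginFixture(WebDriver driver) {
        this.driver = driver;
    }

    public boolean loginAs(String username, String password) {
        driver.get("https://test.example.org/login"); // hypothetical URL
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
        // Logged in if no error message appears; a real check would be stricter.
        return driver.findElements(By.id("login-error")).isEmpty();
    }
}
```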

One not so nice thing about today’s dojo was that, for the second time in a row, the second part of the day was rudely disturbed by some very unexpected downtime of our test environments. We had been told in advance that one of the environments would be taken down for urgent maintenance and patching; unfortunately both environments went down during this change, which resulted in us sending the group off earlier than anticipated.

Main takeaway for me: I really enjoy doing these knowledge sharing and coaching sessions. I like it a lot and see it as a great bonus to my work as a consultant, especially since it makes me (and hopefully my colleagues) think about why I am doing things the way I am doing them.

Can we convince the team there is no silver bullet?

So how do you go about explaining the difference between full-scale test automation (as in the silver bullet concept) and useful, sensible computer aided testing?

Over the past few months I have been trying to get the message across to the entire test team at my customer’s that test automation is not a goal; rather, it is a tool which can help the team spend less time on regression testing and thus more time focusing on testing new functionality, and new combinations which were not covered before.

In our next testing dojo we will attempt to reach a breakthrough in that. The question is, however: will the method with which I hope to achieve this breakthrough actually work?

I hope to make the team see the light by opening up a discussion; I have everything prepared for a discussion on test automation best practices and on why we do test automation. Now that the actual day is getting closer, though, I am starting to fear that the group may not be strong enough to actually go into a discussion about the subject. Which means I need a backup plan, which at the moment I do not have.

The reason I am hoping for a discussion is that thus far the team has shown they learn best by experience and example. Whenever in the past I ran into people who learn best by experience, I would let them make their mistake and then go over it with them to see what went wrong and why. In this case, however, taking that approach might be a costly mistake.

Wanting to minimize the time spent on regression testing is a good goal, I believe, and especially a good place to start with test automation. The one problem I see is that the testers here seem to want to go overboard in their coverage. Through “youthful enthusiasm” on the team’s side, we have already spent quite some time explaining why certain things should not be covered in an automation suite, for example the export-to-Excel functionality or the print functionality.

What if conversation simply cannot convince the team?

Is there a way to make them experience the senselessness of automating some things, without it costing a lot of time? Is there a silver bullet which will help prove that test automation is not a silver bullet?

I still have until tomorrow morning to come up with the backup plan; otherwise it will be a lot of improvising during the dojo.

Thoughts while preparing the testing dojo follow-up

In the first testing dojo at my current client we familiarized the team members with one another and of course with Selenium and FitNesse for test automation.

As I wrote earlier, this was quite a success: all participants seemed to have really enjoyed the dojo and learned quite a few new tricks on how to use FitNesse to their own advantage as well as the company’s. During this first session we stuck to the basics of what can be done with FitNesse, such as simple test cases, the use of variables and some basic reusability like login.

All the fancy stuff we have not even touched upon yet, so my first thought was to cover that during the coming, second, testing dojo.

I am, however, now doubting that idea. Isn’t it a lot more sensible to explain the basics and have them work, as a team, on what best practice is for test automation and on how they can best implement it for their organization?

Considering the general lack of test automation knowledge within this organization, I truly believe this would be the best way to go. The question now is, of course: how do you keep a session like that interesting? It could easily turn into a very boring, theoretical discussion rather than a properly interactive dojo.

A thought that occurred to me is that we might start off by describing the best practices, then pair people up and have them search for ways to get to these best practices, as well as review existing automated test cases and refactor them to adhere to these practices. In order to do this I would of course need to start by compiling my own list of test automation best practices, specially tailored to test automation within this organization and with FitNesse as a background. So far I have come up with the following list, keeping in mind that my group has limited knowledge of test automation and will work solely in FitNesse:

  • do you know what we are trying to achieve with test automation within the organization? Are we automating for the right reasons?
  • before writing an automated test, describe your objective; does it still make sense to automate this test after reading your objective back?
  • choose wisely what to automate and what not to, and be clear on the reason why you are automating something
  • keep your tests short, readable and simple in order to keep maintenance low and knowledge transfer capability high
  • make your tests data-driven and try to avoid hardcoded values; in a fast-moving environment like my customer’s it is key to ensure you are not facing failing tests due to inconsistent or wrong data hardcoded into your tests (see the sketch after this list)
  • try thinking in reusable functional pieces: keep an eye open for actions you perform more than once and see whether it makes sense, a) to execute this action more than once and b) if so, to make this function(ality) a reusable scenario
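
To illustrate the data-driven point above: a minimal sketch, assuming a Java SLIM fixture behind a FitNesse decision table, where all the inputs and the expected outcome come from the wiki table instead of being hardcoded. The class, the column names and the rule itself are invented for the example.

```java
// Minimal sketch of a data-driven (decision table) fixture: FitNesse feeds
// each row's inputs into the setters and compares the expected column with
// the query method's result, so the test data lives in the wiki table, not
// in the code. All names and the rule itself are invented.
public class TransferBetweenAccounts {

    private String fromAccount;
    private String toAccount;
    private double amount;

    // Called by FitNesse for each input column of a table row.
    public void setFromAccount(String fromAccount) { this.fromAccount = fromAccount; }
    public void setToAccount(String toAccount) { this.toAccount = toAccount; }
    public void setAmount(double amount) { this.amount = amount; }

    // Query method, called once per row; the table holds the expected value.
    public String transferResult() {
        // A real fixture would drive the application under test here.
        return amount > 0 ? "accepted" : "refused";
    }
}
```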

I am fully confident that I have left out quite a few (possibly even more important) points; however, the team itself will need to come up with a list of what they believe is going to be key in making test automation a success within the entire organization. That is what a testing dojo is about: working together and learning from one another.