As experienced consultants working in the field of business application development, in particular the testing of business applications, we are sad to report that we cannot recall a single project where we think we got involved on time.
We, and most of all the projects, could have been more effective if testing (and other quality issues) had been considered earlier, much earlier. Meanwhile, we have given up the illusion that we could start on time. We have learned to live with the fact that we start late. We do it by building on strategies that take this into account and try to be as efficient as possible in the given situation. Please don't get us wrong: we are still among the biggest proponents of the "early start". Taught by experience, we simply take the worst-case scenario as the one to start with.
So what are the main concerns when you get called in to automate testing at a point where practically no testing can help anyway?
At this stage you are likely to find a development team that seems to have been chasing deadlines since before the project even started, a hopelessly understaffed test "team" doing "the best it can", an application in pieces changing unpredictably (from the testing perspective), and management that seems to prefer discussing anything but quality issues. There is no time to preach about inspections, reviews, requirements management, modelling techniques, unit testing as a task for developers, and all the other nice things. The natural reaction seems to be crying. However, as that does not look too professional, we try to follow different principles:
start from scratch
use what you can
do it fast
Luckily, these principles work even better in less chaotic situations.
Clearly, the more organised the project, the less re-work is needed. One must quickly identify whether the existing "stuff" is of much value or not. Test automation is first and foremost a programming problem. There seem to be as many "solutions" as there are people involved in it. On the other hand, the evolution path seems to be the same for most of them. It is characterised by the errors that people made along the way and the workarounds they built to overcome the problems. A typical scenario seems to be:
1. capture & replay (C&R)
2. C&R enhanced with scripting
3. simple data-driven techniques
4. advanced data-driven techniques
5. exotic techniques (test modelling/generation)
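To make stage 3 concrete: in a data-driven setup the test cases live in a data table while one small script drives the application for every row. The sketch below is our own tool-neutral illustration in Python; `do_login` merely stands in for the script that would drive the real login dialog through a GUI test tool.

```python
import csv
import io

# Test data: one row per test case. In practice this would be an
# external file maintained by the testers, not by the programmers.
DATA = """login,password,expected
alice,secret1,ok
bob,wrong,denied
"""

def do_login(login, password):
    """Stand-in for the single script that drives the application's
    login dialog. Here it is simulated: only one password is valid."""
    return "ok" if password == "secret1" else "denied"

# The same small script is reused for every data row.
for row in csv.DictReader(io.StringIO(DATA)):
    result = do_login(row["login"], row["password"])
    assert result == row["expected"], row
```

Adding a new test case means adding a data row; the code base does not change.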
Each of the stages has some (at least technical) advantages over the previous one(s). It is desirable to apply techniques from the higher stages (4 or 5) because they are simply better: they solve problems encountered in the earlier stages. We are not going to discuss the criteria for each stage because there are good books out there. Those who have tried test automation with GUI-based tools and have not given up quickly have probably found this out without books (at least up to level 3). We assume you know something about it.
The point we are making is that one does not have to go through all the stages to reach a particular level. One can start at any level, provided one has the knowledge and the tools.
When we say "start from scratch" we do not mean you should start at level 1 and go through all the pain. Test scripts from the higher levels have very little in common with scripts from level 1 or 2. Starting from scratch means that one should start with everything that is necessary for the chosen (higher) level. This often means that one will have to re-implement something for the sake of easier maintenance, extensibility, understandability, etc.
EMOS FRM utilises programming techniques that belong to levels 4 and 5. This document will show you how to apply these techniques.
We mentioned earlier that you are likely to find some sort of a testing team at this late stage. They have probably created and executed lots of manual tests already. They are hopefully sick and tired of executing them again and again.
EMOS FRM is a technique capable of capturing most of those manual tests and executing them many times over. The important part is that the test capturing can be done by those manual testers with absolutely no knowledge (previous or future) of the testing tool. Moreover, the scripts necessary to execute these tests are well structured, small, and easy to maintain.
Another side-effect of starting late is that you are likely to find "working" bits of the application. As in any other project, chaotic or beautifully managed, the ultimate truth lies in the code. This is what the automated tests are going to be executing anyway. And since these "working bits" are available, why not use them to generate the tests?
EMOS FRM can generate most of the test scripts and test data templates simply by clicking through the application. One does not have to write a line of code. If the application is not yet available at a particular point in time, then one has to type the code in oneself.
In this document we are not going to discuss what to test and why. The purpose of this document is to show how to test something that was, for whatever reason, selected to be tested. We will try to demonstrate that knowing how to test something can influence the decision of what to test. This is based on the fact of life that we cannot test everything we would like to, because there is no time for it. However, if we can implement something "easily", we are going to be tempted to do it, simply because we can.
EMOS FRM is, in our opinion, amazing in its capability of expressing numerous unrelated test cases without requiring a single change to the code base. We often talk of playing LEGO® when we perform testing with EMOS FRM. Indeed, if there is a metaphor that expresses the idea of EMOS FRM, it is LEGO: the game of building bigger objects by combining basic bricks. When we talk about "use what you can", the thing we use most is our own code. EMOS FRM is built on the idea of re-usability. Based on only a few rules, we reuse the same test code for many different test cases.
Test automation has to pay for itself. To find the break-even point of a particular test case, one typically adds the development time of an automated test case to its execution time and compares this to the time required for the equivalent manual procedure. Regardless of what this calculation really expresses (it can easily be misunderstood and/or misused, so we have our doubts), the simple rule is: the less time spent on development, the higher the profitability.
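As a rough illustration of that calculation (the function, the one-to-one replacement model, and the numbers are our own simplifications, not an EMOS FRM formula):

```python
import math

def break_even_runs(dev_hours, auto_run_hours, manual_run_hours):
    """Number of executions after which an automated test case has paid
    for its development, assuming each automated run replaces exactly
    one manual run (a deliberately naive model)."""
    saving_per_run = manual_run_hours - auto_run_hours
    if saving_per_run <= 0:
        return None  # under this model, automation never pays off
    # the development cost must be recovered by the per-run saving
    return math.ceil(dev_hours / saving_per_run)

# e.g. 8 hours of development, 0.1 h automated vs 0.6 h manual per run:
print(break_even_runs(8, 0.1, 0.6))  # -> 16 runs to break even
```

The model ignores maintenance time, which the next paragraphs argue is the real cost driver; it only illustrates the rule "less development time, earlier pay-off".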
We have mentioned that with EMOS FRM one can generate most of the test code, provided the application is available. The calculation is: the later we start with testing, the less time we have for testing; however, the later we start, the more we can generate. It might be worth a try. There is definitely a point in time when it is too late for anything. Until that point we can be quite productive.
Another aspect of being fast is the time that is needed for maintenance of the test code. We mentioned that we are capable of reusing a lot of test code in many ways. The consequence is that there is not much test code. The smaller the code base the shorter the maintenance time.
In addition to this, if done well, EMOS FRM produces code with very little redundancy. For example, accessing a particular edit field is typically done by one single line of code in the whole test suite. Regardless of the number of test cases that might affect this field, there is only one place in the code where it is accessed. Even regardless of the type of operation performed (read or write), it is still this one single line of code.
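EMOS FRM itself lives in a GUI test tool's scripting language; purely as a language-neutral sketch of this "one line per field" idea, assuming hypothetical `gui_set`/`gui_get` primitives (here simulated with a dictionary so the sketch runs):

```python
# Hypothetical GUI layer, simulated with a dictionary for the sketch.
_screen = {}

def gui_set(window, name, value):
    _screen[(window, name)] = value

def gui_get(window, name):
    return _screen[(window, name)]

def field(window, name):
    """Create the single accessor through which every test case
    reads or writes one edit field."""
    def access(value=None):
        if value is None:                 # no argument: read
            return gui_get(window, name)
        gui_set(window, name, value)      # argument given: write
    return access

# The one line of code for the "Amount" field in the whole suite:
amount = field("Payment", "Amount")

amount("100.00")              # every test case writes through it...
assert amount() == "100.00"   # ...and reads through it
```

If the field's name or location changes, only the one `field(...)` line needs maintenance, however many test cases touch it.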
Our goal is simple:
Automate as many tests as possible with reasonable effort.
Our approach to achieving this goal is to make automated tests understandable to non-programmers. More than that: we want them to write the automated tests.
"So what?", you may ask, "With capture&replay (record&playback) anybody can write automated tests." Well, yes. Anybody can write them. Unfortunately, nobody can maintain them. Our aim is to develop automated test environment to be used over the long period of time. How do we do this?
The test scripts we create are extremely compact and well structured. They can and should be maintained by experienced programmers as this is a programming task with all that comes with it such as design, testing, debugging, documentation, versioning, etc. Think of our techniques as a pattern language (a series of related patterns) to guide you as you develop and organise your automated test environment.
The other, more important part of the testware we produce is the test data. Sure, we also cook with water. However, the way we structure the test data and link it to the test scripts is the key to our concept. Again, think of our techniques as patterns for structuring test data. This is not an easy task, as our test data includes the navigation as well as the data itself. Indeed, our aim is to design test data that is capable of fully exercising the application via its GUI. Note that we said "test data", not "test scripts", as the majority of other test automation approaches would.
For the sceptics, let it just be said at this point that besides test scripts and test data we also deal with concepts such as test suites, test sets and test cases. Just to keep the motivation high.
As we automate tests we usually perform several important tasks such as:
Find out what needs to be tested, and why and how it needs to be tested.
Integrate automated testing into the (existing!)
Teach & preach about test automation and its possible application.
We do not want to elaborate on these activities any further since they are out of scope for this text.