The TEST task type
In the USoft Approach, TEST is a task type in the DEFINE phase.
You need to test both specifications and implementations.
Testing specifications means having them validated by stakeholders such as domain experts, and, if you maintain both specifications and implementations with a structured way of linking them (as USoft recommends), comparing the specifications against the implementations.
Testing implementations is a different matter. There are different types of implementation tests. Not all of them are equally suitable in all situations.
Below is a short discussion of the different types of implementation test and test settings. ALL of these forms of testing, including acceptance testing, are part of the DEFINE phase. Most significantly, in the USoft Approach, an implementation is NOT considered complete until future users or their representatives have performed an acceptance test and confirmed that it was successful.
As part of the DEFINE phase, then, you need to organise the transfer (delivery) of all implementations from the Development environment where they were created to an Acceptance Test environment. That environment must have the same technical characteristics as the prospective Production machine. It should also, preferably, contain realistic end user data; in many cases a viable strategy is simply to COPY all the data currently in Production. This approach is the only way to test the delivery process ahead of time.
In a USoft context, which is often data-intensive, experience shows that if you do not test the delivery process ahead of time, data conversion routines necessary at Release time are likely to be a liability.
Generally, tasks of the TEST task type should be performed all the way through a DEFINE phase. As soon as possible after you have completed a task of the IMPLEMENT task type, it is good practice to have a colleague look at your work. USoft refers to this as a cross-check. The colleague does a quick test to see if the feature basically works in a Development environment, and will also have a look at HOW you chose to implement the feature and whether that is the best way, given team standards.
Types of implementation test
Unit tests check that a very specific, often technical, aspect of an implementation works correctly. Unit tests are often automated, for example by recording the test in the USoft Benchmark test framework. This makes it easy to rerun them when the application changes, to see whether behaviour that should have remained unchanged actually still works: regression testing. Unit tests are also important for proving exhaustively that a large number of predefined conditions are met. An example of this in a text-based feature is proving that it works properly with the entire range of special characters that it should support. Another example, in the case of browser-based UIs, is proving that the interface works properly on a range of browsers and browser versions, perhaps in combination with a range of small devices of different makes.
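To illustrate the idea of exhaustively checking a predefined set of conditions, here is a minimal sketch in plain Python. It is not USoft Benchmark: the function normalize_text and the character set are hypothetical stand-ins for a text-based feature and its supported special characters.

```python
import unicodedata

# Hypothetical feature under test (assumption, not a USoft function):
# a text normalizer that must preserve every supported special character.
def normalize_text(s: str) -> str:
    # Trim surrounding whitespace and compose to canonical Unicode form.
    return unicodedata.normalize("NFC", s.strip())

# The predefined conditions: every character in this supported set must
# survive normalization unchanged (up to canonical Unicode equivalence).
SPECIAL_CHARACTERS = ["é", "ß", "€", "Ω", "½", "a\u0301", "„quoted“"]

def test_special_characters_survive():
    for s in SPECIAL_CHARACTERS:
        out = normalize_text(s)
        expected = unicodedata.normalize("NFC", s)
        assert out == expected, f"special character lost: {s!r} -> {out!r}"

test_special_characters_survive()
print("all special-character checks passed")
```

Because the test enumerates its conditions in data rather than in code, extending coverage to a new character is a one-line change, which keeps the test cheap to rerun as a regression test.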
System test is a generic term for tests that do not look at features in isolation but that check whether new or changed implementations have any unwanted impact on interrelated other behaviour. USoft rule implementations have the advantage of immediately activating each other, so that unwanted impact is highly likely to be noticed even without specific system tests being defined and run. The same is true of many other kinds of dependency that USoft manages automatically. For example, most forms of impact of a structure change on USoft UIs are carried through or signalled automatically. If a change is made to a Logical View that is used by a REST service to expose data, the view is automatically flagged as invalid until it is next checked for correctness, and so on.
Use case tests simulate typical end user behaviour. The end user could be a human user but also a machine that calls a service or a batch job in a particular way.
Acceptance testing is performed by future end users or their representatives. It is especially relevant in a contract situation where the commissioning party is itself made responsible, or partly responsible, for formal acceptance of the new or changed software. Acceptance testing should be done as the last step in the production of a set of features that together constitute an iteration of the PLAN, DEFINE and DELIVER phases – one release version, that is. Acceptance testing covers all features ready for Release. It must be done on a different machine from the one where the implementations were created, so that the delivery path to that machine is part of what is tested. Most teams who have organised acceptance testing properly have also organised similar testing by the development team itself before the work is submitted to the receiving party.
Tests discussed so far are functionality tests. Performance tests are in a different category: they are used to find out whether specific functionality runs quickly enough. USoft Benchmark has a number of special features for this, such as the grouping and reporting of elapsed times. Stress testing answers the question of whether a solution still works, at an acceptable speed, when many concurrent requests occur or when they involve large volumes of data. Security tests prove that the solution has the required level of anti-hacking defence.
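The difference between a performance check on a single request and a stress check under concurrency can be sketched generically. This example is plain Python, not USoft Benchmark; handle_request is a hypothetical stand-in for a service call, and the latency budget is an assumed figure.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical operation under test (assumption: stands in for a service call).
def handle_request(payload: int) -> int:
    time.sleep(0.001)  # simulate a small amount of work
    return payload * 2

def timed(payload: int) -> float:
    # Measure the elapsed time of one request.
    start = time.perf_counter()
    handle_request(payload)
    return time.perf_counter() - start

# Performance test: is a single request fast enough?
single = timed(1)

# Stress test: do many concurrent requests still complete at acceptable speed?
with ThreadPoolExecutor(max_workers=50) as pool:
    elapsed = list(pool.map(timed, range(500)))

# Group and report elapsed times: single-request latency vs. 95th percentile
# under load, checked against an assumed latency budget of 0.5 s.
print(f"single request: {single * 1000:.2f} ms")
print(f"concurrent p95: {sorted(elapsed)[int(0.95 * len(elapsed))] * 1000:.2f} ms")
assert statistics.mean(elapsed) < 0.5, "mean latency exceeds budget"
```

The point of the sketch is that the same operation is measured twice: once in isolation (performance) and once under many concurrent requests (stress), with the results aggregated before being compared to a budget.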
In the USoft Approach, the only types of test that are not part of the DEFINE phase are the confidence test and the release test carried out as part of a release (in the Release subphase of the DELIVER phase).