General notes
Information on issues relating to test planning
and execution and to tool support during testing is
available from the Test Support Center
of PSE.
Why do we perform testing?
- To get a system which is as free of
errors as necessary (!)
- To inform the developers how they can avoid
errors!
- To obtain a "testable design"
- a design which is fundamental to
identifying errors!
- To ascertain deviations
from explicitly or implicitly anticipated system
features ("breaking the software")
- To provide management with information
on the quality of a system - for
deciding on the use of the system and its further
development
- To validate a system,
i.e. to show that it satisfies a specification
[Based on B.Beizer, "Black Box Testing",
p.7]
Test coverage as end-of-test criterion
Coverage is a good approximation of the quality
of the test, but must be treated with great
care. What is important is the accumulation of
regression-capable test cases of all types, such as:
- user interface tests (formal)
- branch coverage in the code via instrumentation
- functional tests (as per specification, marked)
- use cases, etc.
According to Kaner (Cem Kaner: 'Testing Computer
Software', Int. Thomson Computer Press, 1993), there are
around 100 different types of coverage. The various test
case types provide a feeling for the types of error that
are likely to be identified. Consequently, coverage can
be taken as an end-of-test criterion.
The various views that testing takes of the product
are important; however, there is no point in calculating
every possible coverage measure!
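The branch coverage mentioned above can be made concrete with a small sketch. The example below is illustrative only (the function and counter names are invented): instrumentation means that each branch of the code under test increments a counter, and coverage is the fraction of branches hit at least once.

```python
# Minimal sketch of branch-coverage instrumentation (illustrative only):
# each branch increments a counter; coverage is the fraction of
# branches executed at least once by the test cases.

branch_hits = {"abs_pos": 0, "abs_neg": 0}

def instrumented_abs(x):
    """Example function under test, instrumented by hand."""
    if x >= 0:
        branch_hits["abs_pos"] += 1   # branch 1
        return x
    else:
        branch_hits["abs_neg"] += 1   # branch 2
        return -x

def branch_coverage():
    covered = sum(1 for n in branch_hits.values() if n > 0)
    return covered / len(branch_hits)

# A test suite with only positive inputs reaches 50% branch coverage:
instrumented_abs(3)
print(branch_coverage())  # 0.5
instrumented_abs(-2)
print(branch_coverage())  # 1.0
```

In practice this bookkeeping is done by a coverage tool rather than by hand, but the end-of-test criterion is the same: the measured fraction of covered branches.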
Testability
Testability - the possibility of
verifying the available functionality via an interface
using a test strategy - should be taken into
consideration in all phases of software
development. Developers should pay due
regard to testability and to ensuring sufficient
documentation of the code. Guidelines are available from
the Test Support Center.
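One common way for developers to keep functionality verifiable via an interface is to pass dependencies in as parameters instead of hard-wiring them. The following sketch is a hypothetical example (the function name and signature are invented, not from the guidelines): a report builder takes its clock as a parameter, so a test can fix the time and check the output deterministically.

```python
# Hypothetical testability example: the system clock is injectable,
# so the behaviour can be verified through the interface alone.

from datetime import datetime

def build_report(data, now=None):
    """Render a dated report header; 'now' is injectable for testing."""
    timestamp = now if now is not None else datetime.utcnow()
    return f"Report {timestamp:%Y-%m-%d}: {len(data)} records"

# In production the clock defaults to the real time; in a test it is
# fixed, making the output fully checkable:
fixed = datetime(2000, 1, 1)
print(build_report([1, 2, 3], now=fixed))  # Report 2000-01-01: 3 records
```

A design that only ever reads the real clock internally would force the test to guess at the output; the injectable parameter is what makes the interface testable.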
Technology management
To ensure a successful project, a
mechanism must exist that enables continuous
improvement during testing (a timetable for
a Test Improvement Road Map):
- New technologies are found
- New findings in conferences /
workshops?
- What experience is available ->
Test Support Center?
- Needs are ascertained
- Which model is used for the test,
e.g. function detection model: 'Each
error-free test case increases the
quality of the product!'
- Project practices are checked
- Are guidelines used?
- Can procedures be improved and new
tools used?
- Alternatives are uncovered
- Which guidelines should be used for
development / testing?
- What suitable tools are available on
the market?
- Improvements are made possible
- Does the deadline situation allow the
practices which have been used to be
improved?
- Which subprojects are the new
methodologies and tools used in?
- Can the effectiveness and efficiency
of the test be increased?
- and then back to point 1
Organizational and strategic aspects during
testing
The factors of reliability, number of features,
project costs and delivery dates play a significant role
in decisions relating to the test. To ensure that
sufficient funds are assigned to the test, the work
performed by the testers must result in increased customer
satisfaction and help increase the
company's profits. Companies are looking to reduce the
cost of quality. Testing therefore has the task of
highlighting its benefits in terms of
reducing other quality costs.
An example: if it can
be demonstrated that automation pays for itself
after no more than 3-5 avoided manual test runs, and
approximately 50 regression test runs are to be
anticipated during the software life cycle, the effort
of constructing the infrastructure for automated tests
is sufficiently well justified.
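The break-even argument above reduces to simple arithmetic. The sketch below uses the figures from the text (automation costing the equivalent of 5 manual runs, the upper end of the stated 3-5, and 50 expected regression runs); the cost unit is arbitrary, and the near-zero execution cost of an automated run is an assumption.

```python
# Back-of-the-envelope check of the automation break-even argument.
# Assumed figures from the text: one-off automation cost equals 5
# manual runs; ~50 regression runs over the software life cycle.

MANUAL_COST = 1.0        # cost of one manual run (arbitrary unit)
AUTOMATION_COST = 5.0    # one-off cost to automate (3-5 manual runs)
RUNS = 50                # expected regression runs

manual_total = RUNS * MANUAL_COST   # cost of doing it all by hand
automated_total = AUTOMATION_COST   # automated execution cost ~0 assumed

break_even_runs = AUTOMATION_COST / MANUAL_COST  # pays off after 5 runs
savings = manual_total - automated_total         # 45 manual-run units

print(break_even_runs, savings)  # 5.0 45.0
```

With 50 runs against a break-even point of 5, the infrastructure cost is recovered roughly tenfold, which is the "sufficiently well justified" claim in the text.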
Quality check of testing
Is the test effective, i.e.
are the errors found that should be found? One of the
measures is error seeding, i.e.
the intentional inclusion of typical errors in selected
modules in order to test the error-detection
capability of the test cases.
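The evaluation of error seeding can be sketched with the usual simple model (an assumption, not stated in the text): if seeded and real errors are about equally hard to find, the fraction of seeded errors caught estimates the detection capability of the test cases, and from it the number of real errors still latent.

```python
# Sketch of evaluating an error-seeding experiment (simple model:
# seeded and real errors are assumed equally detectable).

def seeding_estimate(seeded, seeded_found, real_found):
    """Return (detection rate, estimated real errors still latent)."""
    detection_rate = seeded_found / seeded
    est_total_real = real_found / detection_rate
    est_remaining = est_total_real - real_found
    return detection_rate, est_remaining

# Example: 20 typical errors seeded, 16 of them found by the test
# cases, plus 12 real errors found along the way.
rate, remaining = seeding_estimate(20, 16, 12)
print(rate, remaining)  # 0.8 3.0
```

Here the test cases catch 80% of the seeded errors, so roughly 3 real errors are estimated to remain undetected; a low detection rate is the signal that the test cases need strengthening.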
Workflow during the development process
A workflow diagram must be
created in order to ensure the efficient deployment of
test methodologies and tools in the development process.
This workflow diagram contains the data (e.g. detailed
design specification), activities (e.g. design of test
cases), and users (designers, programmers, testers, etc.).
Beginning with the Definition phase and extending to the
Operations phase, the diagram defines all the
relationships - e.g. 'generated', 'required',
'processed', etc. - between the objects.
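Such a diagram can be represented very simply as a set of labelled edges between the objects. The sketch below is one possible representation (the object and relationship names are illustrative, chosen to echo the examples in the text):

```python
# Illustrative representation of a workflow diagram: each edge is
# (subject, relationship, object), as in 'tester requires spec'.

workflow = [
    ("designer",   "generates", "detailed design specification"),
    ("tester",     "requires",  "detailed design specification"),
    ("tester",     "generates", "test case design"),
    ("programmer", "processes", "test case design"),
]

def relations_of(obj):
    """All edges in which a given object participates."""
    return [(s, rel, t) for (s, rel, t) in workflow if obj in (s, t)]

# Who produces and who consumes the design specification?
print(relations_of("detailed design specification"))
```

Querying the edges this way makes the dependencies between phases explicit, which is exactly what the workflow diagram is meant to ensure.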
Integration of testing with other aspects of
quality assurance
The following measures change the type of errors which
occur:
- Preparation and conducting of reviews (including
test case design with the development section)
- Basic and further training of testers
- Use of suitable methods and tools (design tools,
code generators, etc.)
- Error cause analysis (why are certain errors only
discovered at the customer's?)
- Definition of metrics
"Tester thinking"
To process test tasks successfully, "tester
thinking" is needed (in the same way as
"programming thinking" in the case of a
developer) with the following goals:
- to identify errors as efficiently
as possible
- to identify errors as systematically
as possible
- to identify as many errors
as possible.
Kaner: 'As a final note, I hope that
you'll take a moment to appreciate the richness,
multidimensionality, and complexity of what we do as
testers'.