TestFramework
=============

This testsuite is a general framework for testing the core GNUstep
libraries. Since, in part, we are testing the most basic workings of
an Objective-C runtime, we cannot use a more complete unit testing
framework such as OCUnit <http://www.sente.ch/software/ocunit/>.
The aim of this framework is to provide a simple, yet reasonably
comprehensive, regression test mechanism for Objective-C development.

Please run the GNUstep testsuite (using this framework) often: when
adding new features, when fixing bugs, and when running on new
platforms.

When working on features common to both Apple's Cocoa/iOS APIs and
GNUstep, please try creating and running test cases in the Apple
environment before implementing/changing GNUstep code, so that you
are sure the behavior is the same in both cases.


License
-------

The testing framework and many of the test cases in the testsuite are
copyrighted by the FSF and distributed under the GPL. However, some
tests may not be copyrighted by the FSF, but retain the copyright of
the original owner (e.g. tests submitted as bug reports). You should
feel free to add tests that are not copyrighted by the FSF. The
copyright of such tests should be clearly stated, however, and they
should still be distributed under the GPL version 3 or later.


Running Tests
-------------

To run a testsuite, use the gnustep-tests script along with the name
of the project testsuite (directory) you wish to test:

  gnustep-tests base

or, where a group of tests within a project is to be run:

  gnustep-tests base/NSArray

You may run individual test files by using the gnustep-tests script
with the name(s) of the Objective-C test source file(s):

  gnustep-tests base/NSDate/general.m
  gnustep-tests ./mytest.m base/NSDate/general.m

Alternatively, you may run tests from within a project/directory. eg.

  cd base
  gnustep-tests .

If you supply no arguments, the gnustep-tests script will examine all
the subdirectories of the current directory, and attempt to run tests
in each.

During testing, temporary files are created and removed for each test,
but any temporary files from the most recent test are left, as are the
log files. You can run 'gnustep-tests --clean' to remove left-over
temporary files and all log files.


Interpreting the output
-----------------------

The summary output lists all test failures ... there should not be
any. If a test fails then either there is a problem in the software
being tested, or a problem in the test itself. Either way, you should
try to fix the problem and provide a patch, or at least report it at:
https://savannah.gnu.org/bugs/?group=gnustep

After the listing of any failures is a summary of counts of events,
as follows.

Passed tests:
  The number of individual tests which passed.

Failed tests:
  The number of individual tests which failed ... this should really
  not appear.

Failed builds:
  The number of separate test files which did not even build/compile.

Failed files:
  The number of separate test files which failed while running.

Dashed hopes:
  The number of hopeful tests which did not pass, but which were not
  expected to pass (new code being worked on etc).

Failed sets:
  The number of sets of tests which were abandoned part way through
  because of some individual test failure or an exception in support
  code between tests.

Skipped sets:
  The number of sets of tests which were skipped entirely ...
  eg. those for features which work on some platforms, but not on
  yours.

The binary executable of the most recently executed test file is left
in the obj subdirectory, so you can easily debug a failed test by:
1. running gnustep-tests with the single test file as its argument,
2. running gdb using obj/filename as an argument,
3. setting a breakpoint at the exact test which failed and running to
   there,
4. and then stepping through slowly to see exactly what is going
   wrong.

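For example, a debugging session for a hypothetical failure at line 42
of a test file might look like this:

  gnustep-tests base/NSDate/general.m
  gdb obj/general
  (gdb) break general.m:42
  (gdb) run
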
You can use the --failfast option with gnustep-tests to tell it to
abandon testing after the first failure ... in which case you know
that the executable of the failed test will be available (unless the
test file failed to even compile of course). In this case, any core
dump file will also be left available.


Writing Tests
-------------

A minimal test should be a file importing the header "Testing.h"
(which defines global variables, functions, and standard test macros)
and containing a main() function implementation which executes the
actual test code.
Groups of tests should be placed between calls to the START_SET() and
END_SET() macros, as in the sketch below.

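As an illustration, a complete minimal test file might look like the
following sketch. (The argument conventions for START_SET() and
END_SET() have varied between framework revisions ... Testing.h is
the authoritative reference for the exact signatures.)

  #import "Testing.h"

  int
  main()
  {
    NSAutoreleasePool *arp = [NSAutoreleasePool new];

    START_SET(YES)
      PASS(1 + 1 == 2, "integer addition works")
    END_SET("minimal example")

    [arp release];
    return 0;
  }
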
You should look at the example test files in the
$GNUSTEP_MAKEFILES/TestFramework directory to see how to write test
cases, and you should examine Testing.h in the same directory for
full documentation of the range of macros provided.

The main workhorse of the test framework is the pass() function, which
has a variable number of arguments ... first an integer expression,
and second a printf style format string describing what is being
tested.
If you are calling the function directly you should use "%s,%d" at the
start of the format string and pass __FILE__ and __LINE__ as the next
two parameters (for consistency with the test macros).
The function uses the global variable 'testHopeful' to decide whether
a test which did not pass is a 'FAIL' (when testHopeful==NO) or a
'DASHED' hope (when testHopeful==YES).
The function sets the global variable 'testPassed' to a BOOL
reflecting the result of the test (YES if the test passed, NO
otherwise).
The only other functions are for occasional use to report sections of
the testsuite as not having run for some reason.

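For example, a direct call (rather than one of the macros) might look
like this minimal sketch:

  int result = 2 + 2;

  pass(result == 4, "%s,%d arithmetic gives the expected result",
    __FILE__, __LINE__);
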
There are just four test macros.
All have uppercase names beginning with 'PASS'.
All wrap test code and a call to the pass() function in exception
handlers.
All provide file name and line number information in the description
string.
All are code blocks and do not need a semicolon terminator.
Code fragments must be enclosed in round brackets if they contain
commas.

PASS            passes if an expression resulting in an integer value
                is non-zero
PASS_EQUAL      passes if an expression resulting in an object is
                identical to, or -isEqual: to, another object
PASS_EXCEPTION  passes if a code fragment raises an exception
PASS_RUNS       passes if a code fragment runs without raising an
                exception

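A sketch showing all four macros in use (the specific expressions and
the expected exception here are merely examples):

  NSString *s = [NSString stringWithFormat: @"%d", 42];

  PASS([s length] == 2, "the formatted string has two characters")
  PASS_EQUAL(s, @"42", "formatting an int gives the expected string")
  PASS_EXCEPTION([[NSArray array] objectAtIndex: 1];,
    NSRangeException, "out of range access raises an exception")
  PASS_RUNS([s description];, "sending -description does not raise")
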
Tests are grouped together, along with any associated non-test code,
between paired calls to the START_SET and END_SET macros.

You can skip an entire set by supplying NO as the argument to the
START_SET macro, in which case the entire set will be reported as
UNSUPPORTED.

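For instance (a contrived sketch ... a real set would test a feature
which genuinely varies between platforms):

  /* Supplying NO means the set is not run at all, and is
   * reported as UNSUPPORTED.
   */
  START_SET(NO)
    PASS(1 == 1, "this test is skipped on this platform")
  END_SET("unsupported feature example")
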
Any uncaught exception (ie one which occurs outside one of the four
test macros and is not caught by an exception handler you write
yourself) will cause the remaining tests in a set to be skipped. In
this case the set will be reported as UNRESOLVED.

You may also arrange to jump to the end of the set if a test fails, by
wrapping the test in a NEED macro (see the sketch below). Doing this
also causes the set to be reported as UNRESOLVED.

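A sketch of NEED usage (the bundle path is hypothetical, and the exact
form of the NEED macro should be checked against Testing.h):

  START_SET(YES)
    NSBundle *bundle = [NSBundle bundleWithPath: @"Resources"];

    /* Without the bundle the remaining tests are pointless, so
     * wrap the test in NEED ... on failure we jump to END_SET and
     * the set is reported as UNRESOLVED.
     */
    NEED(PASS(bundle != nil, "the resource bundle is available"))

    PASS([bundle load], "the resource bundle loads")
  END_SET("bundle tests")

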
Advanced building
-----------------

In most cases, all you need to do is write an Objective-C file as
described above, and the test framework will build and run it for you
automatically, but occasionally you may need to use your own build
process.

Where tests must make use of external resources, or ensure that other
tests have already been run before they are run, you can make use of
the gnustep-make package facilities to control dependencies etc.

Normally each test is built and run by generating a makefile in the
directory containing the test. This makefile uses the standard
conventions of including GNUmakefile.preamble before test-tool.make
and including GNUmakefile.postamble after test-tool.make, which gives
you a high degree of control over how the tests in the directory are
built.

In addition to the preamble/postamble mechanism, the file
../GNUmakefile.super is included at the start of the generated
makefile (if it exists). This allows all the tests in a suite to use
a common makefile fragment which can (for instance) build common
resources before any tests are run.

For total control, the runtest.sh script checks to see if a Custom.mk
file exists in the directory, and if it does, it uses that file as the
template to build the tests rather than using its own makefile. The
custom makefile should use @TESTNAME@ where it wants the name of the
test to be built/run, @INCLUDEDIR@ where it wants the name of the
include directory for test framework headers, @FILENAME@ where it
wants the name of the source file, and @FAILFAST@ where it wants
'-DFAILFAST=1' to be substituted (if gnustep-tests was invoked with
the --failfast option).
The Custom.mk file should build the test named @TESTNAME@ when it is
invoked without a target, but it should also implement a 'test' target
to run the most recently built test and a 'clean' target to clean up
after all tests.

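A minimal sketch of such a Custom.mk (illustrative only ... the use of
gnustep-config and the compiler invocation are assumptions, not what
runtest.sh itself does):

# Default target builds the test; runtest.sh replaces the @...@
# tokens before invoking make on this file.
all:
	mkdir -p obj
	$(CC) $(shell gnustep-config --objc-flags) -I@INCLUDEDIR@ \
	  @FAILFAST@ -o obj/@TESTNAME@ @FILENAME@ \
	  $(shell gnustep-config --base-libs)

# Run the most recently built test.
test:
	./obj/@TESTNAME@

# Clean up after all tests.
clean:
	rm -rf obj
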
You may also specify a GNUmakefile.tests in a project level directory
(ie one containing subdirectories of tests), and that makefile will be
executed when you run tests on the project.

NB. When you supply custom makefile fragments which build helper
programs etc, please remember to add an 'after-clean::' target in the
makefile (see the fragment below) to clean up your custom files when
gnustep-tests is run with the --clean option.

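For example (MyHelperTool is a hypothetical product of such a
fragment):

after-clean::
	rm -f MyHelperTool

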
Ignoring failed test files
--------------------------

When a test file crashes while running, or terminates with some sort
of failure status (eg the main() function returns a non-zero value),
the framework treats the test file as having failed ... it assumes
that the program crashed during the tests and the tests did not
complete.

On rare occasions you might actually want a test program to abort this
way and have it treated as normal completion. To do this, you simply
create an additional file with the same name as the test program and a
file extension of '.abort'.
eg. If myTest.m is expected to crash, you would create myTest.abort to
have that crash treated as a normal test completion.


Ignoring directories
--------------------

If, when given the name of a test to be run, the runtest.sh script
finds a file named 'IGNORE' in the same directory as the named file,
it skips running of the test. The effect of this is that the presence
of an IGNORE file causes a directory to be ignored. This is useful in
conjunction with ../GNUmakefile.super, so that projects to build
resources for other tests can be ignored by the scripts running the
tests, and just built as required by ../GNUmakefile.super.

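For instance, to have the test scripts skip a hypothetical
resource-building subdirectory:

  touch base/Helpers/IGNORE

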
Providing extra information
---------------------------

If a README file is present in a test directory, it will be added to
the logs of the test framework at the point when tests in that
directory are run. It will therefore be clearly noticeable to anyone
examining the log after a testrun, and could contain useful
information for debugging.

If an URGENT file is present, its contents will be added to the logs
like those of a README, but it will also be displayed to the person
running the tests. As this is very intrusive, you should only use it
if it is really important that the person running the testsuite should
have the information.

In both cases, you must make sure that the file does not contain
anything which would confuse the test framework at the point when it
analyses the log ... so you need to avoid starting a line with any of
the special phrases generated to mark a passed test or a particular
type of failure.

If a Summary.sh file is present in a test directory and gnustep-tests
is used to run just those tests in that directory, the shell script
will be executed in order to provide the summary of the test results.
In all other cases the summary is done by the Summary.sh script
provided in the test framework.