More consistency tweaks and improvements

git-svn-id: svn+ssh://svn.gna.org/svn/gnustep/tools/make/trunk@32223 72102866-910b-0410-8b05-ffd578937521
This commit is contained in:
Richard Frith-MacDonald 2011-02-19 16:46:10 +00:00
parent 1cb87d2dc2
commit 9bb6961816
5 changed files with 133 additions and 111 deletions

View file

@@ -1,3 +1,12 @@
2011-01-19 Richard Frith-Macdonald <rfm@gnu.org>
* TestFramework/gnustep-tests:
* TestFramework/runtest.sh:
* TestFramework/README:
* TestFramework/Testing.h:
Add support for start and end scripts in each test directory.
Fix bracketing in macro as suggested by Fred
2011-01-19 Richard Frith-Macdonald <rfm@gnu.org>
* TestFramework/gnustep-tests:

View file

@@ -95,7 +95,7 @@ Failed sets:
between tests.
Skipped sets:
The number of sets of tests whch were skipped entirely ...
The number of sets of tests which were skipped entirely ...
eg. those for features which work on some platforms, but not on yours.
The binary executable of the most recently executed test file is left in
@@ -171,6 +171,22 @@ the test in a NEED macro. Doing this also causes the set to be reported as
UNRESOLVED.
Ignoring failed test files
--------------------------
When a test file crashes during running, or terminates with some sort of
failure status (eg the main() function returns a non-zero value) the framework
treats the test file as having failed ... it assumes that the program
crashed during the tests and the tests did not complete.
On rare occasions you might actually want a test program to abort this way
and have it treated as normal completion. In order to do this you simply
create an additional file with the same name as the test program and a
file extension of '.abort'.
eg. If myTest.m is expected to crash, you would create myTest.abort to have
that crash treated as a normal test completion.
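For instance (a minimal sketch, reusing the myTest.m example above), the
marker can be created from the shell before the tests are run:
  touch myTest.abort
An empty file should be sufficient, since it is only the presence of the
'.abort' file which changes how the crash is reported.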
Advanced building
-----------------
@@ -193,17 +209,17 @@ is included at the start of the generated makefile (if it exists). This allows
all the tests in a suite to use a common makefile fragment which can (for
instance) build common resources before any tests are run.
For total control, the runtest.sh script checks to see if a 'Custom.mk' file
For total control, the framework checks to see if a 'GNUmakefile.template' file
exists in the directory, and if it does it uses that file as the template to
build the tests rather than using its own make file. The custom makefile
build the tests rather than using its own make file. This template makefile
should use @TESTNAME@ where it wants the name of the test to be built/run,
@INCLUDEDIR@ where it wants the name of the include directory for test
framework headers, @FILENAME@ where it wants the name of the source file,
and @FAILFAST@ where it wants the '-DFAILFAST=1' to be substituted (if
gnustep-tests was invoked with the --failfast option).
The Custom.mk script should build the test named @TESTNAME@ when it is
invoked without a target, but it should also implement a 'test' target to
run the most recently built test and a 'clean' target to clean up after
The GNUmakefile.template script should build the test named @TESTNAME@ when
it is invoked without a target, but it should also implement a 'test' target
to run the most recently built test and a 'clean' target to clean up after
all tests.
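As an illustration (a rough sketch only, not the code actually used by the
framework), filling in the placeholders and driving the resulting makefile
might look something like this, where myTest and myTest.m are example names
and GNUmakefile.tmp stands for the temporary makefile which is generated:
  sed -e "s/@TESTNAME@/myTest/g" \
      -e "s;@INCLUDEDIR@;$GNUSTEP_MAKEFILES/TestFramework;g" \
      -e "s/@FILENAME@/myTest.m/g" \
      -e "s/@FAILFAST@/-DFAILFAST=1/g" \
      < GNUmakefile.template > GNUmakefile.tmp
  make -f GNUmakefile.tmp          # build the test named by @TESTNAME@
  make -f GNUmakefile.tmp test     # run the most recently built test
  make -f GNUmakefile.tmp clean    # clean up after all tests
In a real run @FAILFAST@ would presumably be replaced by an empty string
unless the --failfast option was given.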
You may also specify a GNUmakefile.tests in a project level directory (ie one
@@ -215,51 +231,33 @@ please remember to add an 'after-clean::' target in the makefile to clean up
your custom files when gnustep-tests is run with the --clean option.
Ignoring failed test files
--------------------------
When a test file crashes during running, or terminates with some sort of
failure status (eg the main() function returns a non-zero value) the framework
treats the test file as having failed ... it assumes that the program
crashed during the tests and the tests did not complete.
On rare occasions you might actually want a test program to abort this way
and have it treated as normal completion. In order to do this you simply
create an additional file with the same name as the test program and a
file extension of '.abort'.
eg. If myTest.m is expected to crash, you would create myTest.abort to have
that crash treated as a normal test completion.
Ignoring directories
--------------------
If, when given the name of a test to be run, the runtest.sh script finds a
file named 'IGNORE' in the same directory as the named file, it skips
running of the test. The effect of this is that the presence of an IGNORE
file causes a directory to be ignored. This is useful in conjunction
with ../GNUmakefile.super so that projects to build resources for other tests
can be ignored by the scripts running the tests, and just built as required
by ../GNUmakefile.super
The presence of an IGNORE file in a directory causes the directory to be
ignored while running tests. This is useful in conjunction with the
various makefile options listed above ... the makefiles may be used to
build resources for tests in subdirectories which are ignored by the
test framework itself.
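For example (an illustrative layout only), a subdirectory used purely to
build resources shared by other tests could be marked like this:
  touch Resources/IGNORE
gnustep-tests will then skip that directory when looking for tests, while
the makefiles of the tests which need the resources remain free to build it.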
Providing extra information
---------------------------
Providing extra control and information
---------------------------------------
If a README file is present in a test directory, it will be added to the
logs of the test framework at the point when tests in that directory are
run. It will therefore be clearly noticeable to anyone examining the log
after a test run, and could contain useful information for debugging.
If a Start.sh script is present in a test directory, it will be run
immediately before tests are performed in that directory. It is able
to append information to the log of the test run using the GSTESTLOG
variable.
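A minimal Start.sh might look like the following (the scratch directory is
purely illustrative):
  #!/bin/sh
  # Prepare anything the tests in this directory need before they run.
  mkdir -p scratch
  echo "Start.sh: created scratch directory for temporary files" >> $GSTESTLOG
Remember to make the script executable (chmod +x Start.sh); the framework
only runs it when it is both readable and executable.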
If an URGENT file is present, its contents will be added to the logs like
those of a README, but it will also be displayed to the person running the
tests. As this is very intrusive, you should only use it if it is really
important that the person running the testsuite should have the information.
If an End.sh file is present in a test directory, it will be run immediately
after the tests in that directory are performed. It is able to append
information to the log of the test run using the GSTESTLOG variable.
In both cases, you must make sure that the file does not contain anything
In both cases, you must make sure that the file does not do anything
which would confuse the test framework at the point when it analyses the
log ... so you need to avoid starting a line with any of the special
phrases generated to mark a passed test or a particular type of failure.
log ... so you need to avoid starting a line in the log with any of the
special phrases generated to mark a passed test or a particular type of
failure.
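For example, a note appended from Start.sh or End.sh is best written so
that no line begins with one of those phrases (such as 'Passed test:',
'Failed test:' or 'Failed set:'):
  # Risky: beginning a line like this could confuse analysis of the log.
  echo "Failed test: (just a comment, not a real test)" >> $GSTESTLOG
  # Safe: start the line with something neutral instead.
  echo "note: these tests assume a writable /tmp directory" >> $GSTESTLOG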
If a Summary.sh file is present in a test directory and gnustep-tests is
used to run just those tests in that directory, the shell script will be

View file

@@ -154,7 +154,7 @@ static void unsupported(const char *format, ...)
int _cond; \
id _tmp = testRaised; testRaised = nil; [_tmp release]; \
[[NSGarbageCollector defaultCollector] collectExhaustively]; \
_cond = (int) expression; \
_cond = (int)(expression); \
[[NSGarbageCollector defaultCollector] collectExhaustively]; \
pass(_cond, "%s:%d ... " format, __FILE__, __LINE__, ## __VA_ARGS__); \
} \
@@ -186,7 +186,7 @@ static void unsupported(const char *format, ...)
id _obj; \
id _tmp = testRaised; testRaised = nil; [_tmp release]; \
[[NSGarbageCollector defaultCollector] collectExhaustively]; \
_obj = ( expression );\
_obj = (id)(expression);\
_cond = _obj == expect || [_obj isEqual: expect]; \
[[NSGarbageCollector defaultCollector] collectExhaustively]; \
pass(_cond, "%s:%d ... " format, __FILE__, __LINE__, ## __VA_ARGS__); \

View file

@@ -95,6 +95,12 @@ do
done
export GSTESTMODE
GSTESTDIR=`pwd`
export GSTESTDIR
GSTESTLOG=$GSTESTDIR/tests.log
export GSTESTLOG
GSTESTSUM=$GSTESTDIR/tests.sum
export GSTESTSUM
if test ! "$MAKE_CMD"
then
@@ -139,8 +145,6 @@ then
fi
fi
CWD=`pwd`
OWD=
RUNCMD=$GNUSTEP_MAKEFILES/TestFramework/runtest.sh
RUNEXIT=0
@@ -175,60 +179,33 @@ present()
run_test_file ()
{
sub=`dirname $TESTFILE`
if test "x$OWD" != "x$sub"
then
OWD=$sub
if test "$GSTESTMODE" = "clean"
then
echo "--- Cleaning tests in $sub ---"
rm -rf $sub/GNUmakefile.tmp $sub/obj $sub/core
rm -rf $sub/tests.tmp $sub/tests.sum.tmp
rm -rf $sub/tests.log $sub/tests.sum
rm -rf $sub/oldtests.log $sub/oldtests.sum
else
echo "--- Running tests in $sub ---"
echo "--- Running tests in $sub ---" >> $CWD/tests.log
if test -r $dir/URGENT
then
cat $dir/URGENT
cat $dir/URGENT >> $CWD/tests.log
fi
if test -r $dir/README
then
cat $dir/README >> $CWD/tests.log
fi
fi
fi
if test "$GSTESTMODE" != "clean"
then
echo >> $CWD/tests.log
echo Testing $TESTFILE... >> $CWD/tests.log
echo >> $CWD/tests.sum
echo >> $GSTESTLOG
echo Testing $TESTFILE... >> $GSTESTLOG
echo >> $GSTESTSUM
# Run the test. Log everything to a temporary file.
export GSTESTMODE
$RUNCMD $run_args $TESTFILE > $CWD/tests.tmp 2>&1
$RUNCMD $run_args $TESTFILE > $GSTESTLOG.tmp 2>&1
RUNEXIT=$?
if test "$RUNEXIT" != "0" -a "$RUNEXIT" != "99"
then
echo "Failed script: $TESTFILE" >> $CWD/tests.tmp
echo "Failed script: $TESTFILE" >> $GWSTESTLOG.tmp
fi
# Add the information to the detailed log.
cat $CWD/tests.tmp >> $CWD/tests.log
cat $GSTESTLOG.tmp >> $GSTESTLOG
# Extract the summary information and add it to the summary file.
extract $CWD/tests.tmp "^Passed test:" "^Failed test:" "^Failed build:" "^Completed file:" "^Failed file:" "^Failed script:" "^Dashed hope:" "^Failed set:" "^Skipped set:" > $CWD/tests.sum.tmp
cat $CWD/tests.sum.tmp >> $CWD/tests.sum
extract $GSTESTLOG.tmp "^Passed test:" "^Failed test:" "^Failed build:" "^Completed file:" "^Failed file:" "^Failed script:" "^Dashed hope:" "^Failed set:" "^Skipped set:" > $GSTESTSUM.tmp
cat $GSTESTSUM.tmp >> $GSTESTSUM
# If there were failures or unresolved tests then report them...
if present $CWD/tests.sum.tmp "^Failed script:" "^Failed build:" "^Failed file:" "^Failed set:" "^Failed test:"
if present $GSTESTSUM.tmp "^Failed script:" "^Failed build:" "^Failed file:" "^Failed set:" "^Failed test:"
then
echo
echo $TESTFILE:
extract $CWD/tests.sum.tmp "^Failed script:" "^Failed build:" "^Failed file:" "^Failed set:" "^Failed test:"
extract $GSTESTSUM.tmp "^Failed script:" "^Failed build:" "^Failed file:" "^Failed set:" "^Failed test:"
fi
fi
}
@@ -257,56 +234,90 @@ then
fi
done
else
for dir in $TESTDIRS
for TESTDIR in $TESTDIRS
do
SUMD=$dir
TESTS=`find $dir -name \*.m | sort | sed -e 's/\(^\| \)X[^ ]*//g'`
# If there are no test files found, we need to print out a message
# at this level to let people know we processed the directory.
if test "x$TESTS" = "x"
found=no
# Get the names of all subdirectories containing source files.
SRCDIRS=`find $TESTDIR -name \*.m | sed -e 's;/[^/]*$;;' | sort -u | sed -e 's/\(^\| \)X[^ ]*//g'`
if test x"$SRCDIRS" = x
then
continue
fi
SUMD=$TESTDIR
for dir in $SRCDIRS
do
if test -f $dir/IGNORE
then
continue
fi
found=yes
cd $dir
if test "$GSTESTMODE" = "clean"
then
echo "--- Cleaning tests in $dir ---"
rm -rf core obj GNUmakefile.tmp tests.tmp tests.sum.tmp
rm -f tests.log tests.sum
rm -f oldtests.log oldtests.sum
else
echo "--- Running tests in $dir ---"
echo "--- Running tests in $dir ---" >> $CWD/tests.log
if test -r $dir/URGENT
if test -r ./Start.sh -a -x ./Start.sh
then
cat $dir/URGENT
cat $dir/URGENT >> $CWD/tests.log
./Start.sh
fi
if test -r $dir/README
fi
TESTS=`echo *.m | sort | sed -e 's/\(^\| \)X[^ ]*//g'`
# If there is a GNUmakefile.tests in the directory, run it first.
if test -f GNUmakefile.tests
then
if test "$GSTESTMODE" = "clean"
then
cat $dir/README >> $CWD/tests.log
$MAKE_CMD -f GNUmakefile.tests $MAKEFLAGS clean 2>&1
else
$MAKE_CMD -f GNUmakefile.tests $MAKEFLAGS debug=yes 2>&1
fi
fi
fi
# If there is a GNUmakefile.tests in the directory, run it first.
cd $dir
if test -f GNUmakefile.tests
then
$MAKE_CMD -f GNUmakefile.tests $MAKEFLAGS debug=yes 2>&1
fi
# Now we process each test file in turn.
for TESTFILE in $TESTS
do
run_test_file
if test "$RUNEXIT" != "0"
then
break
fi
done
# Now we process each test file in turn.
cd $CWD
for TESTFILE in $TESTS
do
run_test_file
if test "$GSTESTMODE" != "clean"
then
# And perform the cleanup script.
if test -r End.sh -a -x End.sh
then
./End.sh
fi
fi
cd $GSTESTDIR
if test "$RUNEXIT" != "0"
then
break
fi
done
if test $found = no
then
echo "No tests found in $TESTDIR"
fi
if test "$RUNEXIT" != "0"
then
break
fi
done
fi
if test "$GSTESTMODE" = "clean"
then
rm -f GNUmakefile.tmp tests.tmp tests.sum.tmp
rm -rf core obj GNUmakefile.tmp tests.tmp tests.sum.tmp
rm -f tests.log tests.sum
rm -f oldtests.log oldtests.sum
else

View file

@@ -48,7 +48,7 @@ done
if test x"$BASH_VERSION" = x
then
# In some shells the builtin test command actually only implements a subset
# In some shells the built in test command actually only implements a subset
# of the normally expected functionality (or is partially broken), so we
# define a function to call a real program to do the job.
test()
@@ -141,7 +141,11 @@ then
TESTNAME=`echo $NAME | sed -e"s/^\(test[^.]*\)$/\1.obj./;s/\.[^.]*//g"`
# Check for a custom makefile template, if it exists use it.
if test -r Custom.mk
# Custom.mk is deprecated ... for backward compatibility only.
if test -r GNUmakefile.template
then
TEMPLATE=GNUmakefile.template
elif test -r Custom.mk
then
TEMPLATE=Custom.mk
else