Further test framework improvements

git-svn-id: svn+ssh://svn.gna.org/svn/gnustep/tools/make/trunk@32181 72102866-910b-0410-8b05-ffd578937521
Richard Frith-MacDonald 2011-02-16 05:44:45 +00:00
parent cb9cf9b16f
commit 2d02dc0f97
14 changed files with 348 additions and 133 deletions

ChangeLog

@ -1,3 +1,26 @@
2011-01-16 Richard Frith-Macdonald <rfm@gnu.org>
* TestFramework/Testing.h:
* TestFramework/gnustep-tests:
* TestFramework/example1.m:
* TestFramework/example2.m:
* TestFramework/example3.m:
* TestFramework/example4.m:
* TestFramework/example5.m:
* TestFramework/GNUmakefile.in:
* TestFramework/example6.m:
* TestFramework/runtest.sh:
* TestFramework/README:
* TestFramework/example7.m:
* TestFramework/example8.m:
Further cleanups, corrections, simplifications, and documentation
improvements.
Added --failfast option to terminate a test run at the first failure.
Added recording of README files to logs, and display of URGENT information.
Added tracking of entry to subdirectories so there's something to see
during execution of a large testsuite.
Added summary at end of tests (needs more work).
2011-01-13 Richard Frith-Macdonald <rfm@gnu.org>
* TestFramework/Testing.h:

TestFramework/GNUmakefile.in

@ -6,7 +6,7 @@ include $(GNUSTEP_MAKEFILES)/common.make
TEST_TOOL_NAME = @TESTNAME@
ADDITIONAL_OBJCFLAGS += -I@INCLUDEDIR@ -Wall
ADDITIONAL_OBJCFLAGS += @FAILFAST@ -I@INCLUDEDIR@ -Wall
ifeq ($(gcov),yes)
ADDITIONAL_OBJCFLAGS += -ftest-coverage -fprofile-arcs

TestFramework/README

@ -71,18 +71,32 @@ tested, or a problem in the test itself. Either way, you should try to
fix the problem and provide a patch, or at least report it at:
https://savannah.gnu.org/bugs/?group=gnustep"
After the listing of any failures is a summary of counts of events:
COMPLETED: The number of separate test files which were run to completion.
COMPILEFAIL: The number of separate test files which did not compile and run.
CRASHED: The number of separate test files which failed while running.
DASHED: The number of hopes dashed ... tests which did not pass, but
which were not expected to pass (new code being worked on etc).
FAIL: The number of individual tests failed
PASS: The number of individual tests passed
UNRESOLVED: The number of unresolved tests ... tests which have
been omitted because of an earlier failure etc.
UNSUPPORTED: The number of unsupported tests ... those for features
which work on some platforms, but not on yours.
After the listing of any failures is a summary of counts of events as follows.
Passed tests:
The number of individual tests which passed.
Failed tests:
The number of individual tests which failed ... this should really not appear.
Failed builds:
The number of separate test files which did not even build/compile.
Failed files:
The number of separate test files which failed while running.
Dashed hopes:
The number of hopeful tests which did not pass, but
which were not expected to pass (new code being worked on etc).
Failed sets:
The number of sets of tests which have been abandoned part way through
because of some individual test failure or an exception in support code
between tests.
Skipped sets:
The number of sets of tests which were skipped entirely ...
eg. those for features which work on some platforms, but not on yours.
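For example, the tail of a run's output might end with a summary such as the
following (the command and the counts are purely illustrative; the real
numbers depend entirely on the testsuite being run):

  $ gnustep-tests
  ...
     34 Passed tests
      2 Dashed hopes
      1 Failed set
      1 Failed test
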
The binary executable of the most recently executed test file is left in
the obj subdirectory. So you can easily debug a failed test by:
@ -91,6 +105,11 @@ the obj subdirectory. So you can easily debug a failed test by:
3. setting a breakpoint at the exact test which failed, running to there,
4. and then stepping through slowly to see exactly what is going wrong.
You can use the --failfast option with gnustep-tests to tell it to abandon
testing after the first failure ... in which case you know that the
executable of the failed test will be available (unless the test file
failed even to compile, of course). In this case, any core dump file will
also be left available.
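As a minimal sketch of that workflow (the directory name, the test name
'mytest.m' and the use of gdb are illustrative assumptions; the leftover
binary in obj is assumed to be named after the source file without its
extension):

  $ gnustep-tests --failfast SomeTestDir
  $ cd SomeTestDir
  $ gdb obj/mytest       # binary left over from the failed run of mytest.m
  (gdb) break main
  (gdb) run
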
Writing Tests
-------------
@ -175,8 +194,17 @@ all the tests in a suite to use a common makefile fragment which can (for
instance) build common resources before any tests are run.
For total control, the runtest.sh script checks to see if a 'Custom.mk' file
exists in the directory, and if it does it uses that file to build the tests
rather than generating its own make file.
exists in the directory, and if it does it uses that file as the template to
build the tests rather than using its own make file. The custom makefile
should use @TESTNAME@ where it wants the name of the test to be built/run,
@INCLUDEDIR@ where it wants the name of the include directory for test
framework headers, @FILENAME@ where it wants the name of the source file,
and @FAILFAST@ where it wants the '-DFAILFAST=1' to be substituted (if
gnustep-tests was invoked with the --failfast option).
The Custom.mk makefile should build the test named @TESTNAME@ when it is
invoked without a target, but it should also implement a 'test' target to
run the most recently built test and a 'clean' target to clean up after
all tests.
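As a rough sketch of how such a template is expanded (the test name 'mytest'
and source file 'mytest.m' are illustrative, and the sed expressions simply
mirror the substitutions performed by runtest.sh), you can fill in a
Custom.mk by hand to check it:

  $ sed -e "s/@TESTNAME@/mytest/" -e "s/@FILENAME@/mytest.m/" \
        -e "s/@FAILFAST@/" \
        -e "s^@INCLUDEDIR@^$GNUSTEP_MAKEFILES/TestFramework^" \
        < Custom.mk > GNUmakefile
  $ make debug=yes       # should build the test named 'mytest'
  $ make test            # should run the most recently built test
  $ make clean           # should clean up after all tests
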
You may also specify a GNUmakefile.tests in a project level directory (ie one
containing subdirectories of tests), and that makefile will be executed when
@ -187,23 +215,22 @@ please remember to add an 'after-clean::' target in the makefile to clean up
your custom files when gnustep-tests is run with the --clean option.
Ignoring aborted test files
---------------------------
Ignoring failed test files
--------------------------
When a test file crashes during running, or terminated with some sort of
When a test file crashes during running, or terminates with some sort of
failure status (eg the main() function returns a non-zero value) the framework
treats the test file as having 'aborted' ... it assumes that the program
treats the test file as having failed ... it assumes that the program
crashed during the tests and the tests did not complete.
On rare occasions you might actually want a test program to abort this way
and have it treated as normal completion. In order to do this you simply
create an additional file with the same name as the test program and a
file extension of '.abort'.
eg. If myTest.m is expected to crash, you would create myTest.m.abort to have
eg. If myTest.m is expected to crash, you would create myTest.abort to have
that crash treated as a normal test completion.
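For instance, assuming your test source file is myTest.m in the current
test directory:

  $ touch myTest.abort   # a crash of myTest.m now counts as normal completion
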
Ignoring directories
--------------------
@ -215,3 +242,22 @@ with ../GNUmakefile.super so that projects to build resources for other tests
can be ignored by the scripts running the tests, and just built as required
by ../GNUmakefile.super
Providing extra information
---------------------------
If a README file is present in a test directory, it will be added to the
logs of the test framework at the point when tests in that directory are
run. It will therefore be clearly noticeable to anyone examining the log
after a test run, and could contain useful information for debugging.
If an URGENT file is present, its contents will be added to the logs like
those of a README, but it will also be displayed to the person running the
tests. As this is very intrusive, you should only use it if it is really
important that the person running the testsuite should have the information.
In both cases, you must make sure that the file does not contain anything
which would confuse the test framework at the point when it analyses the
log ... so you need to avoid starting a line with any of the special
phrases generated to mark a passed test or a particular type of failure.
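One way to check a README or URGENT file for such clashes is to grep it for
the report prefixes used by the current scripts (the list below just mirrors
the phrases the framework currently emits, and would need updating if those
phrases change):

  $ grep -n "^\(Passed test\|Failed test\|Failed build\|Failed file\|Completed file\|Dashed hope\|Failed set\|Skipped set\):" README URGENT
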

TestFramework/Testing.h

@ -95,6 +95,12 @@ static void pass(int testPassed, const char *format, ...)
vfprintf(stderr, format, args);
fprintf(stderr, "\n");
va_end(args);
#if defined(FAILFAST)
if (NO == testPassed && NO == testHopeful)
{
exit(1); // Abandon testing now.
}
#endif
}
/* The unresolved() function is called with a single string argument to
@ -109,10 +115,13 @@ static void unresolved(const char *format, ...)
{
va_list args;
va_start(args, format);
fprintf(stderr, "Unresolved set: ");
fprintf(stderr, "Failed set: ");
vfprintf(stderr, format, args);
fprintf(stderr, "\n");
va_end(args);
#if defined(FAILFAST)
exit(1); // Abandon testing now.
#endif
}
/* The unsupported() function is called with a single string argument to
@ -124,7 +133,7 @@ static void unsupported(const char *format, ...)
{
va_list args;
va_start(args, format);
fprintf(stderr, "Unsupported set: ");
fprintf(stderr, "Skipped set: ");
vfprintf(stderr, format, args);
fprintf(stderr, "\n");
va_end(args);
@ -304,7 +313,7 @@ static void unsupported(const char *format, ...)
}
/* The NEED macro takes a test macro as an argument and breaks out of a set
* and reports it as unresolved if test does not pass.
* and reports it as failed if the test does not pass.
*/
#define NEED(testToTry) \
testToTry \

TestFramework/example1.m

@ -4,7 +4,7 @@
* a single test case involving plain C and no Objective-C code.
*
* If you run the test with 'gnustep-tests example1.m' it should
* report a single test file completed and a single test pass
* report a single test pass
*/
int
main()

TestFramework/example2.m

@ -3,7 +3,7 @@
/* A second test ... your first go at testing with ObjectiveC
*
* If you run the test with 'gnustep-tests example2.m' it should
* report a single test file completed and two test passes.
* report two test passes.
*/
int
main()

TestFramework/example3.m

@ -3,7 +3,7 @@
/* A third test ... using test macros.
*
* If you run the test with 'gnustep-tests example3.m' it should
* report a single test file completed, two test passes, and a test fail.
* report two test passes, and a test fail.
*/
/* Import a header because we want to use a method from it.

TestFramework/example4.m

@ -3,7 +3,7 @@
/* A fourth test ... testing for an exception.
*
* If you run the test with 'gnustep-tests example4.m' it should
* report a single test file completed, three test passes.
* report three test passes.
*/
/* Import a header because we want to use a method from it.

TestFramework/example5.m

@ -3,8 +3,7 @@
/* A fifth test ... hope.
*
* If you run the test with 'gnustep-tests example5.m' it should
* report a single test file completed, one hope dashed, two test passes,
* and one set unresolved.
* report one hope dashed and two test passes.
*/
int
main()

TestFramework/example6.m

@ -3,8 +3,7 @@
/* A sixth test ... need.
*
* If you run the test with 'gnustep-tests example6.m' it should
* report a single test file completed, one hope dashed, one test pass,
* and one set unresolved.
* report one hope dashed, one test pass, and one set failed.
*/
int
main()

TestFramework/example7.m

@ -3,8 +3,7 @@
/* A seventh test ... nesting sets.
*
* If you run the test with 'gnustep-tests example7.m' it should
* report a single test file completed, one test fail, two test passes,
* and one set unresolved.
* report one test fail, two test passes, and one set failed.
*/
int
main()

TestFramework/example8.m

@ -4,7 +4,7 @@
/* An eighth test ... complex code fragments
*
* If you run the test with 'gnustep-tests example8.m' it should
* report a single test file completed, one test pass.
* report one test pass.
*/
int
main()

TestFramework/gnustep-tests

@ -41,7 +41,7 @@ if test -z "$GNUSTEP_MAKEFILES"; then
fi
fi
GSCLEAN=NO
GSTESTMODE=normal
# Argument checking
while test $# != 0
@ -49,7 +49,7 @@ do
gs_option=
case $1 in
--clean)
GSCLEAN=YES
GSTESTMODE=clean
;;
--documentation)
echo
@ -62,6 +62,9 @@ do
cat $GNUSTEP_MAKEFILES/TestFramework/README
exit 0
;;
--failfast)
GSTESTMODE=failfast
;;
--help | -h)
echo
echo "$0: Script to run the GNUstep testsuite"
@ -69,6 +72,7 @@ do
echo "Runs the specified tests, or any in subdirectories of the"
echo "current directory if no arguments are given."
echo "Use 'gnustep-tests --documentation' for full details."
echo "Use 'gnustep-tests --failfast' to stop at the first failure."
echo "Use 'gnustep-tests --clean' to remove old logs and leftover files."
echo
echo "Interpreting the output"
@ -79,20 +83,6 @@ do
echo "way, you should try to fix the problem and provide a patch, or"
echo "at least report it at: https://savannah.gnu.org/bugs/?group=gnustep"
echo
echo "After the listing of any failures is a summary of counts of events:"
echo "COMPLETED: The number of separate test files which were run."
echo "COMPILEFAIL: The number of separate test files which did not start."
echo "ABORTED: The number of separate test files which failed to run."
echo "DASHED: The number of hopes dashed ... tests which did not"
echo " pass, but which were not expected to pass (new code"
echo " beign worked on etc)."
echo "FAIL: The number of individual tests failed"
echo "PASS: The number of individual tests passed"
echo "UNRESOLVED: The number of unresolved tests ... tests which have"
echo " been omitted because of an earlier failure etc."
echo "UNSUPPORTED: The number of unsupported tests ... those for features"
echo " which work on some platforms, but not on yours."
echo
exit 0
;;
--debug | -d) # ignore for backward compatibility.
@ -144,32 +134,62 @@ then
fi
CWD=`pwd`
TOP=$GNUSTEP_MAKEFILES/TestFramework
export TOP
RUNCMD=$TOP/runtest.sh
cd $CWD
OWD=
RUNCMD=$GNUSTEP_MAKEFILES/TestFramework/runtest.sh
RUNEXIT=0
run_test_file ()
{
echo >> $CWD/tests.log
echo Testing $TESTFILE... >> $CWD/tests.log
echo >> $CWD/tests.sum
sub=`dirname $TESTFILE`
if [ "x$OWD" != "x$sub" ]
then
OWD=$sub
if [ "$GSTESTMODE" = "clean" ]
then
echo "--- Cleaning tests in $sub ---"
rm -rf $sub/GNUmakefile.tmp $sub/obj $sub/core
rm -rf $sub/tests.tmp $sub/tests.sum.tmp
rm -rf $sub/tests.log $sub/tests.sum
rm -rf $sub/oldtests.log $sub/oldtests.sum
else
echo "--- Running tests in $sub ---"
echo "--- Running tests in $sub ---" >> $CWD/tests.log
if [ -r $sub/URGENT ]
then
cat $sub/URGENT
cat $sub/URGENT >> $CWD/tests.log
fi
if [ -r $sub/README ]
then
cat $sub/README >> $CWD/tests.log
fi
fi
fi
# Run the test. Log everything to a temporary file.
$RUNCMD $run_args $TESTFILE > $CWD/tests.tmp 2>&1
if [ "$GSTESTMODE" != "clean" ]
then
echo >> $CWD/tests.log
echo Testing $TESTFILE... >> $CWD/tests.log
echo >> $CWD/tests.sum
# Add the information to the detailed log.
cat $CWD/tests.tmp >> $CWD/tests.log
# Run the test. Log everything to a temporary file.
export GSTESTMODE
$RUNCMD $run_args $TESTFILE > $CWD/tests.tmp 2>&1
RUNEXIT=$?
# Extract the summary information and add it to the summary file.
grep "^\(Passed test\|Failed test\|Uncompiled file\|Completed file\|Aborted file\|Dashed hope\|Unresolved set\|Unsupported set\):" $CWD/tests.tmp > $CWD/tests.sum.tmp
cat $CWD/tests.sum.tmp >> $CWD/tests.sum
# Add the information to the detailed log.
cat $CWD/tests.tmp >> $CWD/tests.log
# If there were failures or unresolved tests then report them...
if grep -L "^\(Uncompiled file\|Aborted file\|Unresolved set\|Failed test\):" $CWD/tests.sum.tmp > /dev/null; then
echo
echo $TESTFILE:
grep "^\(Uncompiled file\|Aborted file\|Unresolved set\|Failed test\):" $CWD/tests.sum.tmp
# Extract the summary information and add it to the summary file.
grep "^\(Passed test\|Failed test\|Failed build\|Completed file\|Failed file\|Dashed hope\|Failed set\|Skipped set\):" $CWD/tests.tmp > $CWD/tests.sum.tmp
cat $CWD/tests.sum.tmp >> $CWD/tests.sum
# If there were failures or unresolved tests then report them...
if grep -L "^\(Failed build\|Failed file\|Failed set\|Failed test\):" $CWD/tests.sum.tmp > /dev/null; then
echo
echo $TESTFILE:
grep "^\(Failed build\|Failed file\|Failed set\|Failed test\):" $CWD/tests.sum.tmp
fi
fi
}
@ -184,67 +204,69 @@ then
mv tests.sum oldtests.sum
fi
if [ x"$GSCLEAN" = xYES ]
then
rm -f oldtests.log
rm -f oldtests.sum
fi
if [ x"$TESTDIRS" = x ]
then
if [ x"$GSCLEAN" = xYES ]
then
rm -rf obj
else
# Run specific individual test files.
for TESTFILE in $TESTS
do
run_test_file
done
fi
# Run specific individual test files.
for TESTFILE in $TESTS
do
run_test_file
if [ "$RUNEXIT" != "0" ]
then
break
fi
done
else
for dir in $TESTDIRS
do
if [ x"$GSCLEAN" = xYES ]
TESTS=`find $dir -name \*.m | sort | sed -e 's/\(^\| \)X[^ ]*//g'`
# If there are no test files found, we need to print out a message
# at this level to let people know we processed the directory.
if [ "x$TESTS" = "x" ]
then
echo "--- Cleaning tests in $dir ---"
cd $dir
if [ $TOP != `pwd` ]
if [ "$GSTESTMODE" = "clean" ]
then
if [ -f GNUmakefile.tests ]
echo "--- Cleaning tests in $dir ---"
else
echo "--- Running tests in $dir ---"
echo "--- Running tests in $dir ---" >> $CWD/tests.log
if [ -r $dir/URGENT ]
then
cat $dir/URGENT
cat $dir/URGENT >> $CWD/tests.log
fi
if [ -r $dir/README ]
then
$MAKE_CMD -f GNUmakefile.tests $MAKEFLAGS clean 2>&1 >> $CWD/tests.log
cat $dir/README >> $CWD/tests.log
fi
fi
rm -rf obj
cd $CWD
else
echo "--- Running tests in $dir ---"
TESTS=`find $dir -name \*.m | sort | sed -e 's/\(^\| \)X[^ ]*//g'`
# If there is a GNUmakefile.tests in the directory, run it first.
# Unless ... we are at the top level, in which case that file is
# our template.
cd $dir
if [ $TOP != `pwd` ]
then
if [ -f GNUmakefile.tests ]
then
$MAKE_CMD -f GNUmakefile.tests $MAKEFLAGS debug=yes 2>&1
fi
fi
cd $CWD
for TESTFILE in $TESTS
do
run_test_file
done
fi
# If there is a GNUmakefile.tests in the directory, run it first.
cd $dir
if [ -f GNUmakefile.tests ]
then
$MAKE_CMD -f GNUmakefile.tests $MAKEFLAGS debug=yes 2>&1
fi
# Now we process each test file in turn.
cd $CWD
for TESTFILE in $TESTS
do
run_test_file
if [ "$RUNEXIT" != "0" ]
then
break
fi
done
done
fi
if [ x"$GSCLEAN" = xYES ]
if [ "$GSTESTMODE" = "clean" ]
then
rm -f tests.log
rm -f tests.sum
rm -f tests.tmp tests.sum.tmp
rm -f tests.log tests.sum
rm -f oldtests.log oldtests.sum
else
# Make some stats.
if [ -r tests.sum ]
@ -255,7 +277,9 @@ else
# any summary with only a single result so the output is pretty.
# Sort the resulting lines by number of each status with the most
# common (hopefully passes) output first.
grep "^\(Passed test\|Failed test\|Uncompiled file\|Completed file\|Aborted file\|Dashed hope\|Unresolved set\|Unsupported set\):" tests.sum | cut -d: -f1 | sort | uniq -c | sed -e 's/.*/&s/' | sed -e 's/^\([^0-9]*1[^0-9].*\)s$/\1/' | sort -n -b -r > tests.tmp
# NB. we omit the 'Completed file' tests as uninteresting ... users
# generally only want to see the total pass count and any problems.
grep "^\(Passed test\|Failed test\|Failed build\|Failed file\|Dashed hope\|Failed set\|Skipped set\):" tests.sum | cut -d: -f1 | sort | uniq -c | sed -e 's/.*/&s/' | sed -e 's/^\([^0-9]*1[^0-9].*\)s$/\1/' | sort -n -b -r > tests.tmp
else
echo "No tests found." > tests.tmp
fi
@ -265,6 +289,84 @@ else
echo
cat tests.tmp
echo
grep -q "\(Failed set\|Failed sets\|Failed test\|Failed tests\|Failed build\|Failed build\|Failed file\|Failed files\)$" tests.tmp
if [ $? = 1 ]
then
echo "All OK!"
grep -q "\(Dashed hope\|Dashed hopes\)$" tests.tmp
if [ $? = 0 ]
then
echo
echo "But we were hoping that even more tests might have passed if"
echo "someone had added support for them to the package. If you"
echo "would like to help, please contact the package maintainer."
fi
grep -q "\(Skipped set\|Skipped sets\)$" tests.tmp
if [ $? = 0 ]
then
echo
echo "Even though no tests failed, we had to skip some testing"
echo "due to lack of support on your system. This might be because"
echo "some required software library was just not available when the"
echo "software was built (in which case you can install that library"
echo "and rebuild, then re-run the tests), or the required functions"
echo "may not be available on your operating system at all."
echo "If you would like to contribute code to add the missing"
echo "functionality, please contact the package maintainer."
fi
else
if [ "$GSTESTMODE" = "failfast" ]
then
exit 0
fi
grep -q "\(Failed build\|Failed build\)$" tests.tmp
if [ $? = 0 ]
then
echo
echo "Unfortunately we could not even compile all the test programs."
echo "This means that the test could not be run properly, and you need"
echo "to try to figure out why and fix it or ask for help."
fi
grep -q "\(Failed file\|Failed files\)$" tests.tmp
if [ $? = 0 ]
then
echo
echo "Some testing was abandoned when a test program aborted. This is"
echo "generally a severe problem and may nean that the package is"
echo "completely unusuable. You need to try to fix this and, if it's"
echo "not due to some problem on your system, please help by submitting"
echo "a patch (or at least a bug report) to the package maintainer."
fi
grep -q "\(Failed set\|Failed sets\)$" tests.tmp
if [ $? = 0 ]
then
echo
echo "Some set of tests failed. This could well mean that a large"
echo "number of individual tests dis not pass and that there are"
echo "severe problems in the software."
echo "Please submit a patch to fix the problem or send a bug report to"
echo "the package maintainer."
fi
grep -q "\(Failed test\|Failed tests\)$" tests.tmp
if [ $? = 0 ]
then
echo
echo "One or more tests failed. None of them should have."
echo "Please submit a patch to fix the problem or send a bug report to"
echo "the package maintainer."
fi
fi
echo
fi
# Delete the temporary file.

TestFramework/runtest.sh

@ -88,6 +88,8 @@ if test -z "$GNUSTEP_MAKEFILES"; then
fi
fi
TOP=$GNUSTEP_MAKEFILES/TestFramework
# Move to the test's directory.
DIR=`dirname $1`
if [ ! -d $DIR ]; then
@ -111,34 +113,58 @@ NAME=`basename $1`
if [ ! -f IGNORE ]
then
# remove any leftover makefile from a previous test
rm -f GNUmakefile.tmp
# Remove the extension, if there is one. If there is no extension, add
# .obj .
TESTNAME=`echo $NAME | sed -e"s/^\([^.]*\)$/\1.obj./;s/\.[^.]*//g"`
# Check for a custom makefile, if it exists use it.
# Check for a custom makefile template, if it exists use it.
if [ -r Custom.mk ]
then
if [ $NAME = "Custom.mk" ]
then
echo "include Custom.mk" >>GNUmakefile
else
exit 0
fi
TEMPLATE=Custom.mk
else
# Create the GNUmakefile by filling in the name of the test.
sed -e "s/@TESTNAME@/$TESTNAME/;s/@FILENAME@/$NAME/;s^@INCLUDEDIR@^$TOP^" < $TOP/GNUmakefile.in > GNUmakefile
TEMPLATE=$TOP/GNUmakefile.in
fi
# Create the GNUmakefile by filling in the name of the test,
# the name of the file, the include directory, and the failfast
# option if needed.
if [ "$GSTESTMODE" = "failfast" ]
then
sed -e "s/@TESTNAME@/$TESTNAME/;s/@FILENAME@/$NAME/;s/@FAILFAST@/-DFAILFAST=1/;s^@INCLUDEDIR@^$TOP^" < $TOP/GNUmakefile.in > GNUmakefile.tmp
else
sed -e "s/@TESTNAME@/$TESTNAME/;s/@FILENAME@/$NAME/;s/@FAILFAST@//;s^@INCLUDEDIR@^$TOP^" < $TOP/GNUmakefile.in > GNUmakefile.tmp
fi
rm -f GNUmakefile.bck
if [ -e GNUmakefile ]
then
mv GNUmakefile GNUmakefile.bck
fi
mv GNUmakefile.tmp GNUmakefile
# Clean up to avoid contamination by previous tests. (Optimistically) assume
# that this will never fail in any interesting way.
$MAKE_CMD clean >/dev/null 2>&1
# Compile it. Redirect errors to stdout so it shows up in the log, but not
# in the summary.
$MAKE_CMD $MAKEFLAGS messages=yes debug=yes 2>&1
$MAKE_CMD $MAKEFLAGS debug=yes 2>&1
if [ $? != 0 ]
then
echo "Uncompiled file: $1" >&2
echo "Failed build: $1" >&2
if [ "$GSTESTMODE" = "failfast" ]
then
mv GNUmakefile GNUmakefile.tmp
if [ -e GNUmakefile.bck ]
then
mv GNUmakefile.bck GNUmakefile
fi
exit 1
fi
else
# We want aggressive memory checking.
@ -163,17 +189,29 @@ then
then
echo "Completed file: $1" >&2
else
echo "Aborted file: $1 aborted without running all tests!" >&2
echo "Failed file: $1 aborted without running all tests!" >&2
if [ "$GSTESTMODE" = "failfast" ]
then
mv GNUmakefile GNUmakefile.tmp
if [ -e GNUmakefile.bck ]
then
mv GNUmakefile.bck GNUmakefile
fi
exit 1
fi
fi
else
echo "Completed file: $1" >&2
fi
fi
rm -f GNUmakefile
# Restore any old makefile
mv GNUmakefile GNUmakefile.tmp
if [ -e GNUmakefile.bck ]
then
mv GNUmakefile.bck GNUmakefile
fi
# Clean up to avoid contaminating later tests. (Optimistically) assume that
# this will never fail in any interesting way.
# Clean up any core dump.
rm -f core
#$MAKE_CMD clean >/dev/null 2>&1
fi