20.9 Debugging configure scripts
While in general, configure scripts generated by Autoconf
strive to be fairly portable to various systems, compilers, shells, and
other tools, it may still be necessary to debug a failing test, broken
script or makefile, or fix or override an incomplete, faulty, or erroneous
test, especially during macro development. Failures can occur at all levels,
in M4 syntax or semantics, shell script issues, or due to bugs in the
test or the tools invoked by configure. Together with the
rather arcane error messages that m4 and make may
produce when their input contains syntax errors, this can make debugging
rather painful.
Nevertheless, here is a list of hints and strategies that may help:
- When autoconf fails, common causes include M4 quoting problems and
undefined or misspelled macros. Typically, it helps to go back to the
last working version of the input and compare the differences. Another
possibility is to sprinkle pairs of m4_traceon and m4_traceoff
judiciously into the code, either without a parameter or listing some
macro names, and watch m4 expand its input verbosely
(see Debugging via autom4te); an example appears after this list.
- Sometimes autoconf succeeds but the generated
configure script has invalid shell syntax. You can detect this
case by running ‘bash -n configure’ or ‘sh -n configure’.
If this command fails, the same tips apply as if autoconf had failed.
- Debugging configure script execution may be done by sprinkling
pairs of
set -x
and set +x
into the shell script before
and after the region that contains a bug. Running the whole script with
‘shell -vx ./configure 2>&1 | tee log-file’ with a decent
shell may work, but produces lots of output. Here, it can help to
search the log-file for markers like ‘checking for’ a particular test;
see the corresponding example after this list.
- Alternatively, you might use a shell with debugging capabilities like
bashdb.
- When configure tests produce invalid results for your system,
it may be necessary to override them; examples of the approaches below
appear after the list:
- For variables naming programs, tools, or libraries, and for
preprocessor, compiler, or linker flags, it is often sufficient to
override them at make run time with some care (see Macros and Submakes). Since this
normally won't cause configure to be run again with these
changed settings, it may fail if the changed variable would have caused
different test results from configure, so this may work only
for simple differences.
- Most tests which produce their result in a substituted variable allow
the test to be overridden by setting the variable on the configure
command line (see Compilers and Options, see Defining Variables,
see Particular Systems).
- Many tests store their result in a cache variable (see Caching Results). This lets you override them either on the
configure command line as above, or through a primed cache or
site file (see Cache Files, see Site Defaults). The name of a
cache variable is documented with a test macro or may be inferred from
Cache Variable Names; the precise semantics of undocumented
variables are often internal details, subject to change.
- Alternatively, configure may produce invalid results because
of uncaught programming errors, in your package or in an upstream
library package. For example, when
AC_CHECK_LIB
fails to find a
library with a specified function, always check config.log. This
will reveal the exact error that produced the failing result: the
library linked by AC_CHECK_LIB
probably has a fatal bug (see the example after this list).
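
As an example of tracing macro expansion, you might temporarily wrap the
region of configure.ac under suspicion in trace calls and watch m4's
trace output when rerunning autoconf. This is only a sketch;
MY_CHECK_FEATURE stands for a hypothetical macro under development:

     dnl Trace every macro expanded in this region.
     m4_traceon
     MY_CHECK_FEATURE
     m4_traceoff

     dnl Or limit the (often voluminous) output to specific macros:
     m4_traceon([AC_CHECK_LIB], [AC_CHECK_HEADERS])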
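
To capture a verbose trace of a whole configure run, as suggested above,
something along these lines may work with bash; the log file name is
arbitrary:

     bash -vx ./configure 2>&1 | tee configure.log
     grep -n 'checking for' configure.log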
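
Overriding tool and flag variables at make run time, as described above,
might look like the following; the particular compiler and flags are
only illustrative:

     make CC=clang CFLAGS='-g -O0'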
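
Setting output variables directly on the configure command line could
look like this; the /opt/foo paths are hypothetical:

     ./configure CC=clang CPPFLAGS='-I/opt/foo/include' LDFLAGS='-L/opt/foo/lib'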
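
Cache variables can be primed in a similar way, either on the command
line or from a site file; the variable shown here belongs to a check for
stdlib.h, but always confirm the exact name in config.log or in the
documentation of the macro concerned:

     ./configure ac_cv_header_stdlib_h=yes

or, in a site file read by configure (see Site Defaults):

     ac_cv_header_stdlib_h=yes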
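
When a library check such as AC_CHECK_LIB fails unexpectedly, config.log
records the exact compiler or linker invocation together with its error
output. In this sketch, the library foo and the function foo_init are
hypothetical. The configure.ac fragment:

     AC_CHECK_LIB([foo], [foo_init])

After the failing run, locating the recorded test in config.log and
reading the surrounding lines usually shows the real cause:

     grep -n 'foo_init' config.log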
Conversely, as macro author, you can make it easier for users of your
macro:
- by minimizing dependencies between tests and between test results as far
as possible,
- by using make variables to factor out settings and allow overriding
them at make run time,
- by honoring the GNU Coding Standards and not overriding flags
reserved for the user except temporarily during configure
tests,
- by not requiring users of your macro to use the cache variables.
Instead, expose the result of the test via run-if-true and
run-if-false parameters. If the result is not a boolean,
then provide it through documented shell variables. A sketch of such a
macro appears below.
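
As a sketch of this last point, a macro can run its (cached) test and
hand the result to the caller through run-if-true and run-if-false
arguments, rather than requiring callers to inspect a cache variable.
The names MY_CHECK_FOO, my_cv_have_foo, libfoo, foo.h, and foo_init are
all hypothetical:

     # MY_CHECK_FOO([run-if-true], [run-if-false])
     # Check whether libfoo is usable and cache the result.
     AC_DEFUN([MY_CHECK_FOO],
       [AC_CACHE_CHECK([for usable libfoo], [my_cv_have_foo],
          [my_save_LIBS=$LIBS
           LIBS="$LIBS -lfoo"
           AC_LINK_IFELSE(
             [AC_LANG_PROGRAM([[#include <foo.h>]], [[foo_init ();]])],
             [my_cv_have_foo=yes],
             [my_cv_have_foo=no])
           LIBS=$my_save_LIBS])
        AS_IF([test "x$my_cv_have_foo" = xyes], [$1], [$2])])

A caller can then act on the result without knowing the cache variable
name, for example:

     MY_CHECK_FOO([AC_DEFINE([HAVE_FOO], [1], [Define if libfoo is usable.])],
                  [AC_MSG_WARN([libfoo not found])])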