On 09.08.2012 23:17, Joseph Wright wrote:
> On 09/08/2012 18:30, Bruno Le Floch wrote:
>> We run our test suite using a Makefile (or make.bat on Windows), which
>> - calls the appropriate TeX engine the appropriate number of times, then
>> - calls a Perl script to remove paths and other parts of the log file
>> specific to a given installation,
>> - calls diff to compare the result with a saved result.
>> The drawbacks are that it is OS-dependent and that it uses Perl, which
>> may not be installed everywhere.
I think the big question is what your goal is here. My main goal
initially (for 2e and later for l3 code) was to have a robust test suite
that would enable us to identify issues up front when making changes or
additions. And that, on the whole, has been very successful in the past
(provided, as Joseph said, people wrote test files :-)
The important non-functional requirement is that it works, and
reasonably fast at that. Obviously you have to run TeX on the test
files, but doing the comparison using TeX itself is not very
time-efficient.
Now, is it important that it is OS-independent? Only if your goal is to
have *many* people use the mechanism, and so far that wasn't really part
of the spec.
Perl is easily available both for Unix and Windows, so effectively for
developers the current system is not too difficult to install or use.
The idea of using Lua is interesting, but as Joseph said, it is not that
this would then work out of the box for most people either (not now anyway).
Mid-term, or if we think that this should become a package outside the
expl3 or 2e core development, it would perhaps be a good option, but for
now my feeling is that it would mean putting resources into something
that doesn't actually bring any new value.
> The current LaTeX3 test system works pretty well, provided the tests are
> written to actually test behaviour correctly :-) Checking log file info
> seems to work well both for 'programmatic' information (writing the
> result to the log), and for 'typesetting' (by using \box_show:N to again
> write to the log). So it does not seem like a bad model. It might be
> worth looking at the TeX part of the process (the test .tex file) to see
> if it needs any tidying up to be more generally usable.
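To make the idea concrete, a test in the style Joseph describes might look roughly like the sketch below. The kernel functions shown (`\tl_show:N`, `\box_show:N`, etc.) are standard expl3, but the specific test content is purely illustrative, not taken from the actual test suite:

```latex
\documentclass{article}
\usepackage{expl3}
\begin{document}
\ExplSyntaxOn
% 'Programmatic' check: write the result of an operation to the log
\tl_set:Nn \l_tmpa_tl { hello }
\tl_show:N \l_tmpa_tl
% 'Typesetting' check: build a box and dump its node list to the log
\hbox_set:Nn \l_tmpa_box { test~material }
\box_show:N \l_tmpa_box
\ExplSyntaxOff
\end{document}
```

Both kinds of check end up as lines in the log file, which is exactly what makes a single log-comparison mechanism cover functionality tests and typesetting tests alike.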
I would even claim that it works better than "pretty well", as it allows
us to write the right kind of tests for typesetting results as well as
for testing the functionality of code and interface specs, all in an
automated way; in fact it has caught quite a number of errors in the past.
> Of course, if you do want to look at this then it would also be worth
> looking at the alternatives (e.g. qstest).
Not sure they are alternatives, but there may be approaches there that
are worth incorporating into the current setup.