LATEX-L Archives

Mailing list for the LaTeX3 project

LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

From: Paulo Roberto Massa Cereda <[log in to unmask]>
Date: Fri, 10 Aug 2012 09:47:22 -0300
Sender: Mailing list for the LaTeX3 project <[log in to unmask]>
Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]>

Dear friends,

First of all, apologies for this messy reply (changing from Digest to 
Regular Mail). Thanks to Joseph for pointing me to the current 
discussion. :)

 > I think the big question is what is your goal here. My main goal
 > initially (for 2e and later for l3 code) was to have a robust test
 > suite that would enable us to identify issues upfront when making
 > changes or additions. And that on the whole has been very successful
 > in the past (provided, as Joseph said, people wrote test files :-)

I can relate to that. :) Writing tests is one of the major concerns of 
the software industry. There are many levels at which to test code, 
from unit to integration, and many approaches, from the bowels to the 
interface - the Software Engineering wolves love these concepts. :P Of 
course, each project has its own needs and requirements. For LaTeX3, 
IMHO, it's critical to have every bit of code - from a lovely 
high-level interface to an obscure undocumented feature - exhaustively 
tested.

Fair point: people don't write test suites as they should. /me is 
included :P

 > The important non-functional requirement is that it works
 >
 >     - automatically
 >     - and reasonably fast

Agreed. Even though running a whole batch of tests may take a fair 
amount of time, each individual test should be as simple as possible, 
so that checking any single one of them stays fast.

 > Obviously you have to run TeX on the test files, but doing the
 > comparison using TeX is not very time efficient.

I trust you guys. :) I'm still in the middle of the TeXbook, fighting 
with some chapters there. :P

 > Now is it important that it is OS independent? Only if your goal is to
 > have *many* people use the mechanism and so far that wasn't really
 > part of the spec.

I believe that an OS-independent test suite might help us catch things 
that are OS-specific, or even detect possible issues with a particular 
vendor's implementation. Again, I speak as an outsider; I have no idea 
how the code actually works.

 > Perl is easily available both for unix and windows so effectively for
 > developers the current system is not too difficult to install or use.
 > The idea of using Lua is interesting, but as Joseph said, it is not
 > that this would then work out of the box for most people either (not
 > now anyway).

I share the same opinion. :) One idea is to ship the test system as a 
batteries-included file (.sh for Unix, .exe for Windows) that bundles 
the interpreter itself. I've seen this done with several languages - 
Ruby, Python, Perl, Java - and it's doable. Of course, the drawback is 
carrying the payload of the virtual machine / interpreter along with 
the script, but at least it would be very easy to deploy. Or we could 
write the system in another language - say C - and ship a compiled 
binary.

I like the idea of using Lua. Enrico and I wrote a Lua script and the 
deployment in TeX Live was very, very easy. It runs under texlua, which 
is already available in modern TeX distributions. If someone doesn't 
have a TeX distribution, why on Earth would they want to run a LaTeX3 
test suite? :P
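
Just to make this concrete, here is a rough sketch of what a texlua 
runner for a single test could look like. The file extensions 
(.lvt/.tlg), the engine call and the normalisation rules below are 
only placeholders for illustration, not the actual LaTeX3 setup:

  -- sketch of a texlua test runner; extensions and filters are placeholders
  local function normalise(logname)
    local lines = {}
    for line in io.lines(logname) do
      line = line:gsub("%(/[^%s()]*", "(...")   -- hide installation paths
      if not line:match("^This is") then        -- drop the engine banner
        lines[#lines + 1] = line
      end
    end
    return (table.concat(lines, "\n"):gsub("%s+$", ""))
  end

  local function run_test(name)
    -- compile the test file, then compare the cleaned log with the reference
    os.execute("pdflatex -interaction=batchmode " .. name .. ".lvt")
    local ref = io.open(name .. ".tlg", "r")
    if not ref then
      print("MISSING reference for " .. name)
      return false
    end
    local expected = ref:read("*a"):gsub("%s+$", "")
    ref:close()
    local ok = normalise(name .. ".log") == expected
    print((ok and "PASS " or "FAIL ") .. name)
    return ok
  end

  run_test(arg[1] or "example001")

Since texlua ships with the distribution, something along these lines 
would run wherever TeX Live or MiKTeX is installed, with nothing extra 
to deploy.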

 > I would even claim that it works better than "pretty well" as it
 > allows one to write the right kind of tests for typesetting results as
 > well as for testing functionality of code and interface specs and all
 > that in an automated way, and in fact it caught quite a number of
 > errors in the past.

I'm really impressed with the current test infrastructure. It looks 
awesome. :)

 > not sure they are alternatives, but there could be approaches that are
 > worth incorporating into the current setup.

One possibility is to define a test spec, so we could have a "generic" 
test infrastructure that reads the spec and "knows" how to perform a 
given analysis.
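
As a purely hypothetical illustration (every field name below is 
invented, not an existing format), the spec for one test could be a 
small declarative table that the generic runner loads and interprets:

  -- hypothetical test spec; all field names are made up for the example
  return {
    name    = "expansion-001",
    engine  = "pdflatex",            -- which engine to invoke
    runs    = 2,                     -- how many compilation passes
    source  = "expansion-001.lvt",   -- test input file
    expects = "expansion-001.tlg",   -- saved, normalised log to diff against
    filters = {                      -- normalisation applied before the diff
      "strip-paths",
      "strip-version-banner",
    },
  }

The runner would then only need to know how to carry out each declared 
step, so adding a new kind of analysis would mean extending the spec 
rather than rewriting the scripts for every platform.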

I volunteer to help. :) You guys know that TeX is still a monster to 
me, but at least I think I can help on other battle fronts. :)

Paulo

On 10/08/2012 09:00, LATEX-L automatic digest system wrote:
> There is 1 message totaling 69 lines in this issue.
>
> Topics collected thus far:
>
>    1. Examples of l3doc & unit testing?
>
> ----------------------------------------------------------------------
>
> Date:    Fri, 10 Aug 2012 12:22:56 +0200
> From:    Frank Mittelbach <[log in to unmask]>
> Subject: Re: Examples of l3doc & unit testing?
>
> On 09.08.2012 23:17, Joseph Wright wrote:
>> On 09/08/2012 18:30, Bruno Le Floch wrote:
>>> We run our test suite using a Makefile (or make.bat on Windows), which
>>> - calls the appropriate TeX engine the appropriate number of times, then
>>> - calls a Perl script to remove paths and other parts of the log file
>>> specific to a given installation,
>>> - calls diff to compare the result with a saved result.
>>> The drawbacks are that it is OS-dependent and that it uses Perl, which
>>> may not be installed everywhere.
>
> I think the big question is what is your goal here. My main goal
> initially (for 2e and later for l3 code) was to have a robust test suite
> that would enable us to identify issues upfront when making changes or
> additions. And that on the whole has been very successful in the past
> (provided, as Joseph said, people wrote test files :-)
>
> The important non-functional requirement is that it works
>
>     - automatically
>     - and reasonably fast
>
> Obviously you have to run TeX on the test files, but doing the
> comparison using TeX is not very time efficient.
>
> Now is it important that it is OS independent? Only if your goal is to
> have *many* people use the mechanism and so far that wasn't really part
> of the spec.
>
> Perl is easily available both for unix and windows so effectively for
> developers the current system is not too difficult to install or use.
> The idea of using Lua is interesting, but as Joseph said, it is not that
> this would then work out of the box for most people either (not now anyway).
>
> Midterm, or if we think that this should be a package outside expl3 or
> 2e core development, it would perhaps be a good option though, but for
> now my feeling is it would mean putting resources onto something that
> doesn't actually bring any new value.
>
>
>> The current LaTeX3 test system works pretty well, provided the tests are
>> written to actually test behaviour correctly :-) Checking log file info
>> seems to work well both for 'programmatic' information (writing the
>> result to the log), and for 'typesetting' (by using \box_show:N to again
>> write to the log). So it does not seem like a bad model. It might be
>> worth looking at the TeX part of the process (the test .tex file) to see
>> if it needs any tidying up to be more generally usable.
>
> I would even claim that it works better than "pretty well" as it allows
> one to write the right kind of tests for typesetting results as well as for
> testing functionality of code and interface specs and all that in an
> automated way, and in fact it caught quite a number of errors in the past.
>
>> Of course, if you do want to look at this then it would also be worth
>> looking at the alternatives (e.g. qstest).
>
> not sure they are alternatives, but there could be approaches that are
> worth incorporating into the current setup.
>
> frank
>
> ------------------------------
>
> End of LATEX-L Digest - 9 Aug 2012 to 10 Aug 2012 - Unfinished
> **************************************************************
>
