LATEX-L Archives

Mailing list for the LaTeX3 project

LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

Subject:
From: Lars Hellström <[log in to unmask]>
Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]>
Date: Tue, 11 Aug 2009 23:10:51 +0200
Content-Type: text/plain

Manuel Pégourié-Gonnard wrote:
> Joseph Wright wrote:
>> I'd wondered about something similar. Looking at xdoc2l3, I think there
>> are a couple of obvious post-processors. One is \detokenize given a
>> "design" name (\Detokenize or \MakeString?), and the other is
>> \MakeHarmless. Perhaps also a de-babel function to deal with active
>> characters: \PunctuationOther? (Perhaps all of the functions should make
>> it clear they are argument post-processors? \ArgDetokenize,
>> \ArgMakeHarmless, etc.)
>>
> I agree with this suggestion. IMO, xparse should provide general-use
> post-processing functions, for the convenience of the user and as examples, and
> let the user define more such functions, with the provided functions named
> \ArgFooBar and a recommendation in the doc that the user-defined preprocessing
> functions use a similar naming convention.

I don't think imposing a specific naming convention for processors is
necessarily a good thing; half the time, a suitable function might
already exist, with no intended connection to document command argument
parsing.

Sticking with xdoc2l3-style processors, one could imagine ?{<code>} as
a generic "call out to a function with :Nn syntax", meaning that
xparse will first execute

   <code>\some_tl{<argument to process>}

and then take the contents of \some_tl as the processing result. This 
way, an xdoc2l3 @{h} or @{ho} argument could be specified to xparse as

   @{ ?{\MakeHarmless} }
and
   @{ ?{\MakeHarmless} o }

respectively.
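
To make the calling convention concrete, here is a minimal sketch
(purely illustrative: it assumes the ?{<code>} hook existed and that
xparse used \l_tmpa_tl as the scratch token list; both are my own
placeholders, not settled syntax):

   \DeclareDocumentCommand \ShowCode { ?{\MakeHarmless} m }
     { \texttt{#1} }
   % At grab time xparse would, per the convention above, execute
   %    \MakeHarmless \l_tmpa_tl {<grabbed argument>}
   % and then hand the contents of \l_tmpa_tl to the command body as #1.

Note also that any existing function with :Nn syntax, say \tl_set:Nn,
already fits this calling convention as a (trivial) processor, which is
part of why I'd rather not force an \Arg... naming scheme on processors.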


>> I'd stick to post-processing if given the option. Pre-processing can
>> lead to things which only work if the defined function is not nested
>> (back with % again), and I'd prefer not to imply that can work. e-TeX
>> makes post-processing catcodes possible, so saying "first the argument
>> is grabbed, then it is modified" seems okay.
>>
> Maybe I confused pre/post in my previous message. I always meant post-grabbing
> and pre-actual-function-call processing :-)

Me too. When I've written about "pre-processing", it has always been 
with respect to the document command body, not with respect to when the 
document command seizes control over that part of the document text.


>> (2) + for \long, upper-case for defaults on optional arguments:
>>
>> \DeclareDocumentCommand \foo { o           >\MakeHarmless +m }
>> \DeclareDocumentCommand \foo { O{default}  >\MakeHarmless +m }
>> \DeclareDocumentCommand \foo { +o          >\MakeHarmless +m }
>> \DeclareDocumentCommand \foo { +O{default} >\MakeHarmless +m }
>>
>> Both seem to look okay with the post-processing idea added in, I think.
>> Based on Lars' comments about having a very clear syntax, can we agree
>> on option (2)? If so, I can implement some of this and then see how it
>> looks "in practice".
>>
> I'm in favour of option 2. It looks more "extensible", if extensions are ever
> needed in the future.

I'm not so sure it is more extensible. The tricky thing about being
\long is that you have to pay attention to it throughout the
implementation, often to the point of making separate definitions for
long and short cases. Having \long-ness as a flag means you can't add
just a short variant of a specifier; you always have to add short AND
long versions.

But maybe new specifiers weren't the kind of extension you were thinking
about?
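
As a concrete (made-up) illustration of why \long-ness has to be
tracked everywhere: a grabber that is not \long stops with a "Runaway
argument" error as soon as the argument contains \par, so each internal
helper tends to need two variants,

   \long\def\GrabLong#1{\ProcessArg{#1}}   % accepts \par in #1
         \def\GrabShort#1{\ProcessArg{#1}} % errors if #1 contains \par

(\GrabLong, \GrabShort and \ProcessArg are names invented for the
example), and a new specifier would have to supply both.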

>>> In terms of offering expandable optional arguments, I think Joseph's
>>> "wright" that this can't possibly work with more complex argument types;
>>> I might be wrong, however. Perhaps this can be something to revisit with
>>> LuaTeX.
>> I'm very reluctant to introduce something where we end up with a complex
>> warning about what it is for. You can imagine using something like "e =
>> expandable test for optional argument", but explaining what it is for
>> and how to use it then becomes very complicated. If we really do want
>> something like this, I'd think a separate function
>> (\DeclareExpandableDocumentCommand) might be better.
> 
> I think I agree. As you pointed out, the capabilities of a purely expandable
> parser would probably be quite different (no post-processing, less precise
> peeking at least), so using a different function for defining it makes sense.

Good! The ability to define expandable macros in LaTeX3 is one thing
about \DeclareDocumentCommand that I've worried about, as I suspect
there are people using \newcommand to define shorthand macros that have
to be expandable. A separate \DeclareExpandableDocumentCommand command
solves this nicely.
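
The kind of shorthand I have in mind (example mine) is something like

   \newcommand{\pkgversion}{1.2}
   \edef\banner{Package version \pkgversion}  % relies on \pkgversion expanding

which silently breaks if the definition becomes \protected.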

> OTOH, as you mentioned in another message, it would be very interesting to
> detect trivial cases like only 'm' arguments and use the trivial (hence purely
> expandable) parser in this case with DeclareDocumentCommand.

It doesn't help much that the parser is fully expandable if the command
itself is \protected and therefore doesn't expand in the first place.
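
For instance (a sketch, relying only on the standard e-TeX behaviour of
\protected; the names are made up):

   \protected\long\def\foo#1{[#1]}
   \edef\test{\foo{x}}   % \test holds `\foo{x}', not `[x]'

so the expandability of the grabbing code never gets a chance to matter.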

I find the claim (made in some previous posting) that e-TeX shouldn't
have gotten \protected right for alignments curious, however; isn't
that just about the only case of protection that _couldn't_ be handled
satisfactorily in the LaTeX2e way?

Lars Hellström
