LATEX-L Archives

Mailing list for the LaTeX3 project

LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

Subject:
From: Joseph Wright <[log in to unmask]>
Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]>
Date: Wed, 12 Aug 2009 20:33:15 +0100
Content-Type: text/plain
Lars Hellström wrote:
> Since it was a while ago I'm not so sure, but I think I arrived at the
> unified processor model only after I started coding. The seemingly less
> complex idea of a separate processing stage turned out to be more
> complex once you got down to doing it.

This partly depends on your point of view!  In most cases,
post-processing is not needed, so under any of the xparse-like
implementations you end up with an arg spec that doesn't look too
intimidating:

{ O{default} m o m }

or similar. I'd say that something like:

{ >{ \preprocessora \preprocessorb }O{default} m o m }

is not too bad in comparison to:

{ @{ \preprocessora \preprocessorb O{default} }  @{} @{o} @{} }

My feeling is that for most people, most of the time the xparse method
edges it. (We seem agreed on post-processing, so it is just a question
of the syntax.)
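
For concreteness, a rough sketch of the xparse-style syntax in use (the
processor name here is entirely hypothetical, and I'm assuming that
processors hand their result back in some agreed return variable):

    \ExplSyntaxOn
    \DeclareDocumentCommand \mycmd
      { >{ \my_normalise:n } O{default} m }
      { Option~`#1'~applies~to~`#2'. }
    \ExplSyntaxOff

Here \my_normalise:n would receive the (possibly defaulted) optional
argument before the code body ever sees it as #1.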

>> First of all, I think we should drop the idea of using more letters
>> for shorthands. As we've discussed, each package will use different
>> letters and clashes will occur very quickly.
> 
> Agreed. 

Also agreed. Whatever is decided on, the "set" that xparse provides will
*not* be designed to change.

> (The functionality of the above
> \constrain_to_intrange:nnNn could be a candidate for inclusion into
> xparse, but \MakeHarmless probably isn't.)

I'm not sure \constrain_to_intrange:nnNn quite fits my idea of
post-processing: I'm thinking of it mainly for standardising input
rather than error-checking. However, that is a question for further
down the line. If we can get the "basics" sorted, this type of thing
can be finalised later.
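
As a sketch of what I mean by "standardising input" (names
hypothetical, and again assuming a fixed return variable for
processors):

    \ExplSyntaxOn
    \tl_new:N \l_my_return_tl
    % Strip surrounding spaces so that later code always sees a
    % canonical form: standardisation, not error-checking.
    \cs_new_protected:Npn \my_standardise:n #1
      { \tl_set:Nx \l_my_return_tl { \tl_trim_spaces:n {#1} } }
    \ExplSyntaxOff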

>> Secondly, for simplicity let's drop the idea of toks-grabbing as a
>> proxy for argument grabbing.
> 
> Not sure what you refer to here.

Same here, and I've read xparse really carefully! (I should add that the
only reason xparse-alt exists is so that I could understand what was
going on without breaking xparse and thus all of the xpackages. The
current xparse implementation needs re-writing one way or another.)

>> Thirdly, use actual function names rather than character strings to
>> represent argument processors.
>>
>> From xdoc2l3, there seem to be two broad cases for argument processing:
>>     \argumentprocess{#1}
>> and
>>     \argumentprocess#1<delimiter>
>>
>> These can be referred to in the argument definitions as something like
>>     >\CS
>> and
>>     @\CS{DELIMITER}
>> where \CS is the processing function and DELIMITER is what's inserted
>> after the argument, in the second case.
> 
> Two points here:
> 
> 1. One may well want the argument processor to receive extra data (as
> \constrain_to_intrange:nnNn does), so don't assume \CS is just one token.

This partly depends on how much "pre-processing" has to cover!

> 2. The <delimiter> kind of construction usually arises when one wants to
> do some low-level TeX hackery; the \TD@convert@colons macro that
> occurred in some examples is the kind of thing that would be at the core
> of \tl_replace_in:Nnn or similar. It is not unreasonable that such a :w
> thing is hidden under a :n macro, so don't waste effort on supporting it
> directly.

Totally agree. Even if we allow for extra arguments, the functions need
to be "Nn" (or perhaps "n" if we use a fixed return variable) at the end
of their arg spec.
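
Something like this is what I have in mind (a sketch only, with
hypothetical names; note that inside ExplSyntaxOn a literal ":" has
letter catcode, so a real version would have to match the "other"
colons found in document input, a detail glossed over here):

    \ExplSyntaxOn
    % Public face: a plain Nn function. The delimited :w scanning
    % stays strictly internal, as Lars suggests.
    \cs_new_protected:Npn \my_convert_colons:Nn #1#2
      { \tl_set:Nx #1 { \my_convert_colons:w #2 :: \q_nil } }
    % Expandable worker: replaces each "::" with ";" by delimited
    % scanning, recursing until the remainder is empty.
    \cs_new:Npn \my_convert_colons:w #1 :: #2 \q_nil
      {
        \tl_if_empty:nTF {#2}
          { \exp_not:n {#1} }
          { \exp_not:n {#1} ; \my_convert_colons:w #2 \q_nil }
      }
    \ExplSyntaxOff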

>> Finally, allow these to be chained together if necessary, as in
>> something like this: (\foo takes one argument)
>>
>> \DeclareDocumentCommand \foo {
>>   @\foo_split_colons:n{::\q_nil} >\foo_range_check:nnn m
>> }{...}
> 
> I don't understand what you're saying here.

I'm lost too! This seems to be a mix of the two possible ideas we've
discussed, and not quite right even then!

> I'd say an expandable argument processor will necessarily have to be
> more restricted than one that can make assignments. Is it even possible
> to distinguish between
> 
>   \foo[bar]{!}
> and
>   \foo{[}bar]!
> 
> at expand-time?

My feeling is that \DeclareExpandableDocumentCommand, if implemented,
would be explicitly marked up:

"In general, document commands should be created using
\DeclareDocumentCommand. \DeclareExpandableDocumentCommand is intended
to be used only in *exceptional* circumstances where a fully-expandable
function is *essential*. It has very restricted processing
possibilities, when compared to the standard version."

I'd then expect us to basically support s, o, O and m type arguments,
with any one \long forcing all to be long (probably by insisting that
any + has to come before the first arg. spec.). I need to look at how s
and o grabbing is implemented in such cases, so the exact restrictions
are "yet to be determined". I would imagine that there would be no post
processing in this case. (Of course, if there are good examples of where
this needs thinking about, it can be looked at again.)
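
Purely for illustration, such a declaration might then look like this
(taking \IfBooleanTF as the test for the s argument; the exact
interface is of course still open):

    \DeclareExpandableDocumentCommand \myquote { s O{author} m }
      { \IfBooleanTF {#1} { `#3' } { `#3' (#2) } }

with everything in the code body required to be expandable.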

****

By the way, has anyone else got any comments on the very radical
approach proposed by Jonathan Fine?
-- 
Joseph Wright
