LATEX-L Archives

Mailing list for the LaTeX3 project

LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

Date: Thu, 6 Aug 2009 01:25:57 +0200
Subject: Re: xparse
From: Lars Hellström <[log in to unmask]>
Joseph Wright wrote:
> Hello all,
> 
> As promised yesterday, I'd like to discuss finalising xparse. 
[snip]
> Several questions have come up about xparse.  At the most basic level,
> the idea of using something like xparse rather than \def 

Well, it's more a replacement for \newcommand and friends than for \def...

> (with things
> like \@ifstar, \@ifnextchar, etc.) is a "good thing". So it is a
> question of picking the best form for xparse.
> 
> The idea of using one letter per argument keeps the input simple but
> does cost in power. However, I think that, for reasons I'll outline in a
> bit, it works. 

For roughly the same reason as single letters work in tabular/array 
column specifiers, I'd say.
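
To make the analogy concrete -- the tabular preamble below is standard 
LaTeX, and the xparse line reuses the `s o m' specifier quoted later in 
this message:

```latex
% One letter per column in a tabular preamble...
\begin{tabular}{lcr}   % l = left, c = centred, r = right
  a & b & c \\
\end{tabular}
% ...parallels one letter per argument in an xparse specifier:
\DeclareDocumentCommand \foo { s o m }
  { ... }  % s = star test, o = optional, m = mandatory
```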

> The current system uses modifiers for \long arguments.
> In xparse-alt, 

I'll have to take a look at that (but not tonight).

I should perhaps point out that I've also explored some variations on 
the xparse theme (Jan 2008, in a package I called xdoc2l3 -- see 
http://abel.math.umu.se/~lars/xdoc/), but back then I didn't get much 
response to the questions that required *decisions* (and then I got 
distracted by other projects). I will mention the issues I recall at 
suitable locations below.

> I've explored the alternative idea that lower-case
> letters are always used for short arguments, and upper-case for \long ones:
> 
> \DeclareDocumentCommand \foo { o m } % Both short
> \DeclareDocumentCommand \foo { O M } % Both \long
> \DeclareDocumentCommand \foo { o M } % Optional short, mandatory \long

Seems like a good idea -- a modifier for long/short was one thing I 
didn't get around to doing, as it seemed rather messy.

> If you compare xparse and xparse-alt, you'll also find I've tried to
> simplify the input somewhat. I've allowed optional modifiers to some
> letters, to include a default input:
> 
> \DeclareDocumentCommand \foo { o m } % Optional argument with no default
> \DeclareDocumentCommand \foo { o[default] m } % now has a default!
> 
> which keeps the number of letters down and, I hope, is clear enough.

Optional arguments of argument specifiers?!? My feeling is that 
optional, star-type, etc. arguments should be considered user-level 
syntactic sugar, and that the programming APIs should rather stick with 
mandatory arguments and signals like \NoValue. \DeclareDocumentCommand 
could be user-level enough that optional arguments make sense, but 
there is the alternative of having it be

\DeclareDocumentCommand \foo { o{\NoValue} m }
     % Optional argument with no default (returns \NoValue)
\DeclareDocumentCommand \foo { o{default} m }
     % now has a default!

Recall that \newcommand and friends require you to specify a default 
value for the optional argument whenever there is one.
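
For comparison, the LaTeX2e form, where the default is part of the 
declaration whenever an optional argument is present:

```latex
% \newcommand syntax: [2] = two arguments, [default] = default value
% of #1; there is no way to declare an optional argument without one.
\newcommand{\foo}[2][default]{%
  code using #1 (optional, "default" if omitted) and #2 (mandatory)%
}
```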


> The current xparse implementation lets you create your own specifier
> letters. This is probably very risky:  what if two packages use the same
> letter?  I'd drop this idea.

Agreed. An analogy with array package definitions of new column 
specifiers fails because there it is usually the document author who 
would make the definition, thus confining the definition to code under 
the control of that author, whereas with xparse it would be arbitrary 
package authors that don't have such confinement.

I believe the ability to compose more basic "particles" into an 
argument specifier -- a key feature in the system I devised -- would 
furthermore reduce the demand for naming new specifiers; why invent a 
name for it if you can write down the steps that define it?

> Related, the current xparse also allows
> comparison using different methods (by charcode, by meaning, etc.). This
> is important for things like testing for a star:
> 
> \DeclareDocumentCommand \foo { s o m } % test for * as first arg.
> 
> Will has pointed out it would be best to only test by charcode: looking
> for a star should do exactly that. By dropping the "different test
> methods" idea, things are also simplified.

No objection from me.

> Thinking about catcode-sensitive input, most cases can be handled by the
> e-TeX tools we have available. For true verbatim input, that obviously
> does not work, but I feel that trying to "crowbar" verbatim input into
> xparse is a mistake.  Almost all user functions are not verbatim, and the
> particular issues for those that are, I think, are better handled by hand.
> (The way that xparse works means that it absorbs arguments before
> passing them to the code you program.)  I also suspect that xparse
> handling of verbatim will never be reliable enough for general use.

Is that a yes or no to my "harmless character sequences"? These can 
handle data like URLs which might prompt people to go verbatim, but 
they aren't implemented using \catcode changes.

> Will has suggested that we should insist optional arguments follow
> directly on, with no spaces, to mean that:
> 
> \foo*{arg}  % #1 = TRUE, #2 = "arg"
>  and
> 
> \foo   *{arg} % #1 = FALSE, #2 = "*"
> 
> are treated differently.

Bad example. You can't treat these differently, since they tokenise to 
the same thing. The motivating example should rather be

   \foo{arg}[opt]  % #1 = "arg", #2 = "opt"
   [Optional arguments] are sometimes seen where there are none.

versus

   \foo{arg}       % #1 = "arg", #2 = \NoValue
   [Optional arguments] are sometimes seen where there are none.

With the LaTeX2e \@ifnextchar that looks past spaces, the latter 
example comes out as #1 = "arg", #2 = "Optional arguments".
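
A minimal sketch of that 2e idiom, with hypothetical helper names 
\foo@opt and \foo@noopt chosen for illustration:

```latex
\makeatletter
\newcommand{\foo}[1]{%
  % \@ifnextchar skips spaces and ends-of-line before peeking, so a
  % "[" opening the next line is mistaken for an optional argument.
  \@ifnextchar[{\foo@opt{#1}}{\foo@noopt{#1}}}
\def\foo@opt#1[#2]{...}         % reached by \foo{arg}[opt]
\newcommand{\foo@noopt}[1]{...} % reached when no "[" follows
\makeatother
```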

> The idea here is that it makes LaTeX syntax
> more "regular", useful for translation to other formats.  I think he's
> right, but wonder how others see it.  The current xparse allows both
> space-skipping and non-space-skipping tests.  I'd certainly say we
> should go with only one: either spaces are allowed or they are not.

Since TeX always skips spaces (i) after control sequences whose names 
consist of letters and (ii) in front of a mandatory argument, the only 
possibility if you want to make it more "regular" for non-TeX parsers 
is to skip spaces in front of all kinds of argument. I think that is 
the wrong way to go however, as it increases the risk of LaTeX grabbing 
text that isn't arguments.

> Finally, the question about expandability of \IfNoValue has been asked
> before.  As \DeclareDocumentCommand only ever makes \protected
> functions, I don't think we need to worry.  The idea is that the NoValue
> test should be done "early":
> 
> \DeclareDocumentCommand \foo { o m } {
>   \IfNoValueTF {#1} {
>     \int_function_one:n {#2}
>   }{
>     \int_function_two:nn {#1} {#2}
>   }
> }
> 
> and not in the internal functions.  So the test can never be expanded,
> and there is not an issue (I hope).

There's quite a lot which is wrong with that. First, you presume that 
optional arguments are mostly a method of combining two 
implementation-level commands into one user-level command; in that 
particular case, it would probably be easier to just expose 
\int_function_one:n and \int_function_two:nn separately. Second, you 
assume that the \IfNoValueTF test will happen close to the command 
which takes an optional argument (otherwise protection status becomes 
irrelevant), but a great advantage of \NoValue is that it can be passed 
around without syntactic difficulties, so the actual test might happen 
in a distant helper macro or even completely separate from the \foo 
command (if the argument is stored away in a token list). Finally, 
you're thinking too imperatively about the whole thing, thus discarding 
the possibility of performing such tests at expand-only time.

The following is a short extract from some code I've been using for 
years, and which occurs entirely within a \[log in to unmask]. Here, I'd 
like to switch the idiom "\ifx\NoValue#1\@empty" to "\IfNoValueTF{#1}":

       \LevelSorted{%
          #4 \ifx \NoValue#1\@empty
             \TD@namespace
          \else
             \TD@convert@colons #1::\relax
          \fi
       }{%
          \texttt{#3}%
          \ifx \NoValue#5\@empty \else\space(#5)\fi, %
          \ifx \NoValue#1\@empty
             \texttt{\TD@namespace} %
          \else\ifx $#1$%
             global %
          \else
             \texttt{\TD@convert@colons #1::\relax} %
          \fi\fi \namespacephrase
       }%

Good luck rewriting that in imperative style, as a non-expandable 
\IfNoValueTF would require (and you seem to prefer!!). Mind you, this 
is maybe a third of the full thing being \edef'ed...
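
For concreteness, the \ifx idiom is itself expandable, so an expandable 
\IfNoValueTF could be built along the same lines. This is my sketch, 
not the xparse implementation:

```latex
% Sketch only: assumes #1 is either the single token \NoValue or
% ordinary text, as in the code above.  Being built from \ifx alone,
% it is fully expandable and thus safe inside an \edef.
\newcommand{\IfNoValueTF}[3]{\ifx\NoValue#1\@empty#2\else#3\fi}
```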

> 
> ================================================================
> 
> That is a long e-mail, I'm afraid. 

Nowhere near as long as those I've written on the subject in the past ;-)

> There will probably need to be more
> explanation so that everyone can follow things: ask away.  I'd like to
> finalise xparse soon (I need it for siunitx, and I want people to be
> able to test that!).

Other possible xparse features I'd like to see discussed (and recall 
right now) are:

  * The ability to preprocess (e.g. evaluate as a calc expression)
    an argument before handing it to the command body.

  * The ability to have arguments assigned to token lists rather
    than handed as macro parameters (lets one get around the 9
    parameter limit).
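
Purely as a hypothetical illustration of the first bullet -- the 
>{...} prefix syntax here is invented for this sketch and is not a 
feature of any released xparse:

```latex
% Hypothetical: ">{\ProcessWithCalc}" would run the grabbed argument
% through a calc-style evaluation before the body sees it, so the
% body would receive "5pt" rather than the literal "2pt + 3pt".
\DeclareDocumentCommand \movedown { >{\ProcessWithCalc} m }
  { \vspace{#1} }
% Usage: \movedown{2pt + 3pt}
```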

Lars Hellström
