Lars Hellström wrote:
>> I agree with this suggestion. IMO, xparse should provide general-use
>> post-processing functions, for the convenience of the user and as an
>> example, and let the user define more such functions, with the
>> provided functions named \ArgFooBar and a recommendation in the doc
>> that the user-defined preprocessing functions use a similar naming
>> convention.
>
> I don't think imposing a specific naming convention for processors is
> necessarily a good thing; half the time, a suitable function might
> already exist, with no intended connection to document command
> argument parsing.

A convention would apply to what the team supply "out of the box". That
doesn't imply in any way that you can't then use something else.

> Sticking with xdoc2l3-style processors, one could imagine ?{<code>} as
> a generic "call out to function with :Nn syntax", with the meaning
> that xparse will first execute
>
>    <code>\some_tl{<argument to process>}
>
> and then take the contents of \some_tl as the processing result. This
> way, an xdoc2l3 @{h} or @{ho} argument could be specified to xparse as
>
>    @{ ?{\MakeHarmless} }
>
> and
>
>    @{ ?{\MakeHarmless} o }
>
> respectively.

Very much how I see things (although as I've said in another post, I
think the return has to be a toks).

>>> (2) + for \long, upper-case for defaults on optional arguments:
>>>
>>>   \DeclareDocumentCommand \foo { o >\MakeHarmless +m }
>>>   \DeclareDocumentCommand \foo { O{default} >\MakeHarmless +m }
>>>   \DeclareDocumentCommand \foo { +o >\MakeHarmless +m }
>>>   \DeclareDocumentCommand \foo { +O{default} >\MakeHarmless +m }
>>>
>>> Both seem to look okay with the post-processing idea added in, I
>>> think. Based on Lars' comments about having a very clear syntax, can
>>> we agree on option (2)? If so, I can implement some of this and then
>>> see how it looks "in practice".
>
>> I'm in favour of option 2. It looks more "extensible", if extensions
>> are ever needed in the future.
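For concreteness, a minimal sketch of how option (2) might look in a document preamble, assuming the proposed syntax (`+` for \long, upper-case `O{...}` for an optional argument with a default, `>` introducing a processor). Note that `\MakeHarmless` is the hypothetical processor name used throughout this thread, not an existing command:

```latex
% Sketch only: the specifier syntax here is the proposal under
% discussion in this thread, not necessarily final xparse syntax.
\DeclareDocumentCommand \foo { O{default} >\MakeHarmless +m }
  {
    % #1 = optional argument (falls back to "default" if absent),
    %      post-processed by \MakeHarmless before we see it
    % #2 = mandatory argument, grabbed \long so it may contain \par
    Optional:~#1;~mandatory:~#2
  }
```

The attraction of this layout is that each concern (default value, processing, \long-ness) is carried by a separate, visually distinct piece of the specifier.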
> I'm not so sure it is more extensible. The tricky thing about being
> \long is that you have to pay attention to it throughout the
> implementation, often to the point of making separate definitions for
> long and short cases. Having \long-ness as a flag means you can't add
> just a short variant of a specifier; you always have to add short AND
> long versions.

Okay, the point here is that \long-ness applies to the grabber.
Everything else should be \long in any case: I've tried to indicate
this in expl3.pdf, I hope. The general idea is that there are a very
limited number of cases where \cs_set_nopar: (\def) may be used in
place of \cs_set: (\long\def):

- For grabbing user input which should not contain \par (and is
  handled by xparse anyway)
- For functions the programmer knows can never be \long (those with no
  arguments, and those which process something internal that you know,
  say \tl_map_function:nN { a b c } \my_short_function:N).

Even then, I'm not sure the rest of the team even feel that the second
set of functions should not be \long :-)
--
Joseph Wright
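To illustrate the distinction being drawn (function names here are made up for the example; only the two expl3 setting functions are real):

```latex
% \cs_set:Npn corresponds to \long\def: the argument may contain \par.
\cs_set:Npn \my_long_function:n #1
  { Processed:~#1 }

% \cs_set_nopar:Npn corresponds to plain \def: TeX raises a runaway
% argument error if #1 turns out to contain \par. Per the reasoning
% above, this is only safe for the limited cases listed.
\cs_set_nopar:Npn \my_short_function:n #1
  { Processed:~#1 }
```

The argument in the post is that the short form should be the rare, deliberate exception rather than something every specifier has to offer in both variants.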