LATEX-L Archives

Mailing list for the LaTeX3 project


Lars Hellström <[log in to unmask]>
Thu, 27 Oct 2011 16:47:48 +0200
Bruno Le Floch wrote 2011-10-27 00.50:
>> True -- merely that S and K form a basis for the combinatory logic doesn't
>> mean it makes sense to express everything in terms of these. It can however,
>> at times, be more convenient to express some act of argument juggling
>> in-line using combinators than it is to introduce an ad-hoc helper macro. At
>> least IMHO.
> I'd need some examples in use in an actual package to be convinced
> that argument juggling is a common requirement. Most packages in TeX
> have to do with typesetting things, which is inherently
> non-expandable, and as soon as you give up expandability, you have
> access to assignments,

Assignments break kerning. For the kind of task I'm currently contemplating, 
it would be important to stay fully expandable. This tends to suggest 
employing some "exotic" (read: non-imperative) programming techniques.
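To make the kerning point concrete (a minimal sketch; \l_tmpa_tl is just a 
scratch variable), dropping an assignment between two letters suppresses the 
kern TeX would otherwise insert:

    AV                              % kern inserted between A and V
    A \tl_set:Nn \l_tmpa_tl { } V   % the assignment interrupts the
                                    % ligature/kern program: no kern

So anything meant to be usable in mid-text has to stay assignment-free, 
i.e. fully expandable.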

> leading to
>      \cs_new:Npn \mypkg_foo:nnnnn #1#2#3#4#5
>         {
>            \tl_set:Nn \l_mypkg_alpha_tl {#1}
>            \int_gset:Nn \g_mypkg_beta_int {#2}
>            \dim_set:Nn \l_mypkg_gamma_dim {#3}
>            \tl_set:Nn \l_mypkg_delta_tl {#4}
>            \tl_set:Nn \l_mypkg_epsilon_tl {#5}
>            %... do something with the various variables

This also breaks reentering the same command, which can be a no-go. (And 
groups are at times non-transparent, so those won't save you in general.)
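A sketch of the re-entrancy hazard (all names hypothetical): if a function 
stashes its argument in a local variable and then recurses, the inner call 
overwrites that variable before the outer call is done with it.

    \cs_new_protected:Npn \mypkg_walk:n #1
      {
        \tl_set:Nn \l_mypkg_tmp_tl {#1}
        % ... a recursive call to \mypkg_walk:n here clobbers
        % \l_mypkg_tmp_tl, so code after the call sees the inner
        % value rather than #1
      }

A group around the body would protect the local, but only by also hiding 
every other local assignment made inside it.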

>         }
> if you don't want key-value input, which I think is better suited
> there. Then adding a new argument is just a matter of shifting the
> numbers in one place, a minor annoyance.

I'm sorry if my remark about named arguments led you to believe I want this 
for document commands; I'm mostly thinking about very low-level stuff at the 
moment. Chalk the named arguments thing up to brainstorming (another idea 
that didn't quite make it).

> Besides, I think good style includes having shorter macros: then
> adding a macro parameter is rather cheap.

This is not a universal good. Many short macros can cause one's package to 
start resembling a finite state machine, which in a sense is the ultimate 
form of spaghetti code. While not the traditional goto-inflicted form of 
spaghetti, it can get just as tangled and unmaintainable.

> Defining one auxiliary macro just next to the main macro also keeps
> code all in one place,

One would think so, yes, but sometimes it turns out to be more disruptive 
than constructive when one looks at the source.

> more so than having some obscurely named
> \use_i_biii_biibiii:nnn macros, which you would have to look up in the
> doc.
> Perhaps a useful feature would be the ability to define and use an
> auxiliary on the fly, although again, I'm missing use cases to think
> about. Roughly defined as "\afterassignment\next \long\def\next ".

No, I think /that/ is even provably useless: \use_i:n achieves the same end, 
without all the side-effects. I would even go as far as saying most uses of 
"define \next, which will then be executed once" are suboptimal. Maybe not 
all, but certainly a lot of them.
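For the record, the pattern and its expandable replacement, side by side 
(sketch; <code> stands for arbitrary balanced material):

    \def \next { <code> } \next    % define, then execute once
    \use_i:n { <code> }            % same net effect

The latter performs no assignment, so it survives \edef and friends, leaves 
no stale \next behind, and cannot collide with another use of the same 
scratch name.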

>> Argument juggling is often necessary in situations where one needs to fit
>> one utility function to the API for callback functions expected by some
>> other function.
> Do you have practical examples of cases where using \Twiddle or
> \Compose would be clearer than defining a \mypkg_macro_aux:nnNnN ?

Well, I've had a couple of uses for

   \use_ii_bi:nnn{<some data>}

recently. This happened in a context where the top-level command takes two 
"continuations" as arguments -- these being some code that should be 
executed after the main command, and possibly being supplied some data by 
it. The above then discards the second continuation, chooses the first, and 
hands it <some data> as additional argument.
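A definition matching that description would be (my reading of the name; 
this is not an existing kernel function):

    \cs_new:Npn \use_ii_bi:nnn #1#2#3 { #2 {#1} }

so that \use_ii_bi:nnn {<some data>} {<first>} {<second>} expands to 
<first> {<some data>}: argument ii is used, argument i is passed along 
re-braced, and the second continuation is silently discarded.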

>>     \Lift{\Compose{\Compose{\Compose}}{\Twiddle{\use_i:n}}}
>> as an expression for the S combinator. (No, I don't quite see it either. But
>> it becomes readily apparent that this does the right thing when one tries
>> applying it to some arguments.)
> Doubtful programming technique if no one can see what it does. LaTeX3
> is already criticized by some as being quite unreadable (that's
> somewhat forced by the inability to properly enforce typing),

*Don't* get me started on typing.

> that's just bringing things to an entirely new level.
>> Be careful what you wish for, ;-) because the following turned out to be
>> very elaborate indeed. Basically the idea was (i) that introducing a named
>> parameter in a replacement text is somehow analogous to doing a
>> lambda-abstraction and (ii) since combinators can do everything that
>> lambda-abstraction can, but without explicit representation of any
>> variables, one could in principle get rid of the named parameters in the
>> same way, without using up any of the precious #1-#9.
> You do realize that
>    (1) Very few well-designed real world macros end up with more than 9
> parameters.

Star-type arguments can eat up parameter numbers surprisingly fast. I don't 
think I've broken the barrier yet with a document command, but I've 
certainly had one that went up to #8 (of which 3 were star-type, 3 optional, 
and 2 mandatory, I think). Not all of them were there to begin with, of 
course; rather, they were added over the course of several years, in 
response to specific needs to make it just a bit more expressive...

Face it: a lot of commands aren't designed much at all, but evolve over 
time. They'll still be part of the LaTeX ecosystem.

If instead you're talking about low-level macros, then I've just hit one 
which needs to juggle ten pieces of data. The naive definition would be 
something like the following (except, of course, that TeX stops at nine 
macro parameters, so the #10 below doesn't actually exist):

\cs_new:Npn \ttt_do_infimum_t:w #1 #2#3 #4 #5#6 #7 #8 #9#10 {
    \ttt_key_cmp:nNnTF {#8}<{#5} {
       \ttt_key_cmp:nNnTF {#8}<{#2} {
          \ttt_action_do:nn{infimum}#1 {#8} {#9}{#10}
          \ttt_action_do:nn{infimum}#4 {#8} {#9}{ #9{#2}{#3} }
       \ttt_action_do:nn{infimum}#7 {#8} {#9}{ #9{#5}{#6} }

The combinator alternative here is

\cs_new:Npn \ttt_do_infimum_t:w #1 #2#3 #4 #5#6 #7 #8 #9 {
    \ttt_key_cmp:nNnTF {#8}<{#5} {
       \ttt_key_cmp:nNnTF {#8}<{#2} {
          \ttt_action_do:nn{infimum}#1 {#8} {#9}
          \use_i:nn{ \ttt_action_do:nn{infimum}#4 {#8} {#9}{ #9{#2}{#3} } }
       \use_i:nn{ \ttt_action_do:nn{infimum}#7 {#8} {#9}{ #9{#5}{#6} } }

but had #9 and #10 been in the opposite order then the required combinator 
would have been something else, and something like \Lift could have been 
useful. As it turned out, I've settled for a third approach in this 
particular case.
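Spelling out the trick in the second version: the macro grabs only nine 
arguments, so the tenth braced group is still sitting in the input stream 
right after the replacement text. The first \ttt_action_do:nn line simply 
absorbs it in place of the former {#10}; the branches that want to 
substitute their own final argument instead wrap themselves in \use_i:nn, 
which keeps the wanted code and swallows the leftover group:

    \use_i:nn { <action ending in { #9{#2}{#3} } > } { <tenth item> }

expands to the action alone, the stray tenth group having been discarded.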

>    (2) Replacing some tokens by others in a token list is _hard_,
> particularly within braces. Hell, even counting the number of tokens
> in a token list is tricky business.

Yes. This difficulty kills the named-parameters-combinated-away idea.

>> undelimited parameter and whose replacement text is obtained by replacing
>> all[*] occurrences of the token supplied in the N argument by #1. Thus,
>>     \lambda:Nn{y}{syzygy}{oe}
>> expands (in two steps) to
>>     soezoegoe
> So this lambda should be called \lambda:nnn, since it takes three
> arguments,

No, expanding \lambda:Nn{y}{syzygy}{oe} *once* produces

   <inaccessible> {oe}

It is expansion of the <inaccessible> token that eats the {oe}.

> and the first one is a braced group.

I wrote it that way, true, but isn't it supposed to be N when only single 
tokens are valid for that argument? (But this is really quibbling over the 
number of angels that can dance on 1sp, since that lambda doesn't exist.)

> I'm curious to know by what magic you are going to implement such a
> lambda.

I'm not going to -- that's why I put it forth as a hypothetical /primitive/. 
It was only used for demonstrating that anything \lambda:Nn might achieve 
(had it existed), combinators can do too.

> Some of my earliest code to LaTeX-L was considering similar
> questions of going through a token list, even within groups (with the
> extra requirement of expandability, this ends up being much too slow).
> I don't think that this is a direction I want to pursue, but if you do
> have robust code, it could be interesting as an
> \tl_replace_all_nested:Nnn if anything.

OK, maybe combinators can't do precisely that, whereas \lambda:Nn would. I'm 
not inclined to pursue such a general tool either (I might even argue that 
code that would seek to use it would be poorly designed to begin with), but 
I might in passing end up doing something that amounts to special cases of 
it. If nothing else, it could provide experience.

>>     \cs_new:Npn #1 { \foo{ \bar{ \baz{#1} } } }
>> would. This is otherwise a situation that "ordinary" tricks for rewriting
>> token sequences tend to find problematic.
> I suspect that your technique is presuming that functions have only
> one argument. Maybe I missed something.

Yes and no. A partially applied function is also a function, so conversely 
it is sufficient to eliminate one argument at a time. What makes this 
unfeasible for TeX to do on its own is that the argument one needs to start 
with is the /last/ one.

But automatic lambda elimination for named parameters is a dead horse. No 
need to beat it further.

>> I had some vague notion at the beginning that one could implement that T in
>> TeX. In principle one can, but the fact that T starts by picking off the
>> /last/ argument of a command makes it a lot harder.
> Not the main issue IMO.
> How do you go from
>>                 T( \lambda:Nn{w}{
>>                    y{x}{z{w}}
>>                 })
> to
>>                 \Compose{ y{x} } {
>>                    T( \lambda:Nn{w}{ z{w} } )
>>                 }
> ?

By /manually/ applying the stated rules defining T. Teaching TeX to do it 
would require inventing a representation of the expression that TeX can 
reliably manipulate, and the raw TeX code obviously isn't that.
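For reference, the standard bracket-abstraction rules (which I assume are 
essentially the ones stated earlier; K and I are the usual constant and 
identity combinators):

    T( \lambda:Nn{w}{ E } )      = K{E}    if w does not occur in E
    T( \lambda:Nn{w}{ w } )      = I
    T( \lambda:Nn{w}{ E1 E2 } )  = \Compose{ E1 }{ T( \lambda:Nn{w}{ E2 } ) }
                                           if w does not occur in E1

The quoted step is the third rule with E1 = y{x} and E2 = z{w}; after that, 
T( \lambda:Nn{w}{ z{w} } ) collapses to plain z by eta-reduction.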

>>> I can write a fully robust, but entirely impractical, conversion from
>>> named parameters to numbered parameters: pass the definition through
>>> ted.sty (or some adaptation thereof). Locate all #. Read the
>>> corresponding names. Convert to digits. Build the token list back
>>> (that piece is easy, see l3trial/cs-input/cs-input.dtx). For more than
>>> 9 arguments, things are harder, but also feasible.
>>> I'd argue, though, that it is useless. If you want named parameters,
>>> key-value input is much more powerful.
>> A lot of the time: yes; and I can certainly live with numbered parameters.
>> It does however become a bit awkward when you add another optional argument
>> to an xparse-defined command that already has a lot of arguments, since you
>> will then find yourself having to renumber most #n in the replacement text.
>> Trivially doable, but something of a maintenance problem.
> Joseph already answered that with "use keyval". I second him.

Not all arguments are nice to express in keyval form. Many are indeed well 
suited for it, and I have argued for using it in some cases, but others are 
best given positionally. (Although I suppose that people with an extensive 
HTML or XML background might not be that troubled by the increased number 
of characters the keyval form requires the user to write.)

Lars Hellström