> I wouldn't be surprised if they are hopeless for arithmetic; we're not too
> far from the representations used in formal logic, where 1 is S(0), 2 is
> S(S(0)), 3 is S(S(S(0))), etc.

Indeed, hopeless for arithmetic.

> True -- merely that S and K form a basis for the combinatory logic doesn't
> mean it makes sense to express everything in terms of these. It can however,
> at times, be more convenient to express some act of argument juggling
> in-line using combinators than it is to introduce an ad-hoc helper macro. At
> least IMHO.

I'd need some examples in use in an actual package to be convinced
that argument juggling is a common requirement. Most packages in TeX
have to do with typesetting things, which is inherently
non-expandable, and as soon as you give up expandability, you have
access to assignments, leading to

    \cs_new:Npn \mypkg_foo:nnnnn #1#2#3#4#5
       {
          \tl_set:Nn \l_mypkg_alpha_tl {#1}
          \int_gset:Nn \g_mypkg_beta_int {#2}
          \dim_set:Nn \l_mypkg_gamma_dim {#3}
          \tl_set:Nn \l_mypkg_delta_tl {#4}
          \tl_set:Nn \l_mypkg_epsilon_tl {#5}
          %... do something with the various variables
       }

if you don't want key-value input, which I think is better suited
there. Then adding a new argument is just a matter of shifting the
numbers in one place, a minor annoyance.
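For comparison, a key-value interface along these lines (a sketch with
l3keys; the key names and the function are hypothetical) sidesteps the
renumbering problem entirely, since adding an argument means adding
one key:

    \keys_define:nn { mypkg }
       {
          alpha .tl_set:N   = \l_mypkg_alpha_tl ,
          beta  .int_gset:N = \g_mypkg_beta_int ,
          gamma .dim_set:N  = \l_mypkg_gamma_dim ,
       }
    \cs_new_protected:Npn \mypkg_foo:n #1
       {
          \keys_set:nn { mypkg } {#1}
          %... do something with the various variables
       }

Used as \mypkg_foo:n { alpha = text , beta = 3 , gamma = 2pt }, with
unmentioned keys simply keeping their previous values.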

Besides, I think good style includes having shorter macros: then
adding a macro parameter is rather cheap.

>> No. Most often, the cases where those kinds of macros are useful in
>> LaTeX are when you try to control expansion, and that's taken care of
>> by the l3expan module. Maybe some of the functions from that module
>> look a little bit like that (not quite, though).
>
> It is indeed mostly within argument juggling that I might dare to employ
> these beasts: when one needs to rearrange some arguments that are in place
> up ahead, and the task seems slightly too minor to justify a specialised
> helper macro. (Trading a bit of speed for keeping all the code for something
> in one place, thus improving maintainability.)

Defining one auxiliary macro right next to the main macro also keeps
the code in one place, more so than using obscurely named macros such
as \use_i_biii_biibiii:nnn, which readers would have to look up in the
documentation.

Perhaps a useful feature would be the ability to define and use an
auxiliary on the fly, roughly "\afterassignment \next \long \def
\next", although again, I'm missing use cases to think about.
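Spelled out (hypothetical names; non-expandable, and the auxiliary is
overwritten at each use), such a define-and-use helper might look like:

    \cs_new_protected:Npn \mypkg_with_aux:w
       { \afterassignment \mypkg_tmp:w \long \def \mypkg_tmp:w }

    % The parameter text and replacement text follow at the point of
    % use, then the arguments; \afterassignment inserts the freshly
    % defined auxiliary as soon as the \def has completed:
    %    \mypkg_with_aux:w #1#2 { #2 {#1} } {a} {b}
    % which expands the auxiliary to:  b {a}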

> Argument juggling is often necessary in situations where one needs to fit
> one utility function to the API for callback functions expected by some
> other function.

Do you have practical examples of cases where using \Twiddle or
\Compose would be clearer than defining a \mypkg_macro_aux:nnNnN?
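For reference, here are plausible expandable definitions matching the
semantics under discussion (the names are from your message; I am
assuming \Twiddle{f}{x}{y} -> f{y}{x} and \Compose{f}{g}{x} ->
f{g{x}}, i.e., the C and B combinators):

    \cs_new:Npn \Twiddle #1#2#3 { #1 {#3} {#2} }   % C: swap the next two arguments
    \cs_new:Npn \Compose #1#2#3 { #1 { #2 {#3} } } % B: composition of two functions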

>    \Lift{\Compose{\Compose{\Compose}}{\Twiddle{\use_i:n}}}
>
> as an expression for the S combinator. (No, I don't quite see it either. But
> it becomes readily apparent that this does the right thing when one tries
> applying it to some arguments.)

A programming technique is doubtful if no one can see what it does.
LaTeX3 is already criticized by some as being quite unreadable (which
is somewhat forced by the inability to properly enforce typing); this
would bring things to an entirely new level.

> Be careful what you wish for, ;-) because the following turned out to be
> very elaborate indeed. Basically the idea was (i) that introducing a named
> parameter in a replacement text is somehow analogous to doing a
> lambda-abstraction and (ii) since combinators can do everything that
> lambda-abstraction can, but without explicit representation of any
> variables, one could in principle get rid of the named parameters in the
> same way, without using up any of the precious #1-#9.

You do realize that
  (1) Very few well-designed real-world macros end up with more than 9
parameters.
  (2) Replacing some tokens by others in a token list is _hard_,
particularly within braces. Hell, even counting the number of tokens
in a token list is tricky business.
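As an aside on the counting point: the closest l3tl offers,
\tl_count:n, counts top-level items rather than tokens, so a braced
group collapses to a single item and spaces are skipped:

    \tl_count:n { ab {cd} e } % 4 items (a, b, {cd}, e); c and d are hidden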


> undelimited parameter and whose replacement text is obtained by replacing
> all[*] occurrences of the token supplied in the N argument by #1. Thus,
>
>    \lambda:Nn{y}{syzygy}{oe}
>
> expands (in two steps) to
>
>    soezoegoe

So this lambda should be called \lambda:nnn, since it takes three
arguments, and its first one is a braced group rather than a single
token.
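Incidentally, for brace-free material the non-expandable counterpart
already exists in l3tl; your syzygy example is just:

    \tl_set:Nn \l_tmpa_tl { syzygy }
    \tl_replace_all:Nnn \l_tmpa_tl { y } { oe }
    % \l_tmpa_tl now holds: soezoegoe

The interesting part of your \lambda is precisely what this cannot do:
expandability, and reaching inside groups.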


>    \Lift{
>       \lambda:Nn{x}{
>          \lambda:Nn{y}{
>             \lambda:Nn{z}{
>                \lambda:Nn{w}{
>                   y{x}{z{w}}
>                }
>             }
>          }
>       }
>    }{#1}{#2}{#3}
>
> will get x=#3, y=#1, z=#2, and w=#3, so the expansion becomes
>
>    #1{#3}{#2{#3}}

I'm curious to know by what magic you are going to implement such a
lambda. Some of my earliest code posted to LaTeX-L considered similar
questions of going through a token list, even within groups (with the
extra requirement of expandability, this ends up being much too slow).
I don't think that this is a direction I want to pursue, but if you do
have robust code, it could be interesting as a
\tl_replace_all_nested:Nnn, if anything.


> First, if <E> is some balanced text not containing the token <x>, then
> Second, if <F> is some balanced text containing the token <x>, then

Both of those can only be tested easily as "if <E/F> is some balanced
text containing (or not containing) the token <x> _at outer level
(i.e., not within braces)_, then [...]". If you want to test deeper
than that, things get very hairy.

> If the intent is to convert more general TeX code, some more cases would be
> needed. In particular, something would be needed for the case that two
> separate commands simply occur in sequence:
>
>    \foo{bar} \baz{foo}

How do you treat the simple case where a begin-group character appears
without a function before it? For instance, \lambda:nnn {r} { {bar} }
{z}.


>    \cs_new:Npn #1 { \foo{ \bar{ \baz{#1} } } }
>
> would. This is otherwise a situation that "ordinary" tricks for rewriting
> token sequences tend to find problematic.

I suspect that your technique presumes that functions take only one
argument. Maybe I missed something.

> I had some vague notion at the beginning that one could implement that T in
> TeX. In principle one can, but the fact that T starts by picking off the
> /last/ argument of a command makes it a lot harder.

Not the main issue IMO.

How do you go from

>                T( \lambda:Nn{w}{
>                   y{x}{z{w}}
>                })

to

>                \Compose{ y{x} } {
>                   T( \lambda:Nn{w}{ z{w} } )
>                }

?

>> I can write a fully robust, but entirely impractical, conversion from
>> named parameters to numbered parameters: pass the definition through
>> ted.sty (or some adaptation thereof). Locate all #. Read the
>> corresponding names. Convert to digits. Build the token list back
>> (that piece is easy, see l3trial/cs-input/cs-input.dtx). For more than
>> 9 arguments, things are harder, but also feasible.
>>
>> I'd argue, though, that it is useless. If you want named parameters,
>> key-value input is much more powerful.
>
> A lot of the time: yes; and I can certainly live with numbered parameters.
> It does however become a bit awkward when you add another optional argument
> to an xparse-defined command that already has a lot of arguments, since you
> will then find yourself having to renumber most #n in the replacement text.
> Trivially doable, but something of a maintenance problem.

Joseph already answered that with "use keyval". I second him.

Regards,
Bruno