Dongen wrote (yes, your mail got through on the list the first time):
> I've been reading this thread for a while and I've used these combinators
> to implement some simple arithmetic. (IIRC it uses Church Numerals and
For the bystanders: Church numerals effectively represent the number N as
Nfold composition. Thus one would have
\cs_new:Npn \three #1 #2 { #1{ #1{ #1{ #2 } } } }
if I recall correctly.
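For those who prefer to experiment outside TeX, the same idea can be sketched in Python (standard lambda-calculus definitions, not the TeX macros; the names succ, plus, and to_int are mine):

```python
# Church numerals represent N as N-fold composition.
zero  = lambda f: lambda x: x
three = lambda f: lambda x: f(f(f(x)))          # same shape as \three above

# Successor and addition, the usual Church definitions:
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Counting applications recovers the ordinary integer:
to_int = lambda n: n(lambda k: k + 1)(0)
```

Then three(lambda s: s + 'a')('') gives 'aaa', i.e. three copies of a.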
> the arithmetic is called Church Arithmetic.) For example, I could write
> something like the following (rewritten for readability):
> <numeral> X to get <numeral> copies of X, so <2> a > aa.
> <numeral> + <numeral> to get <numeral + numeral>, so (<1> + <2>)a > aaa,
> <numeral> * <numeral> to get <numeral * numeral>, and so on. This is
> all well understood.
>
> You can define any lambda expression (and therefore LaTeX) in terms of
> the S and K you mention. See for example [Curry:Feys:Craig:68], which
> also mentions some other names for commonly used combinators.
>
> These combinators are __hopelessly__ inefficient.
I wouldn't be surprised if they are hopeless for arithmetic; we're not too
far from the representations used in formal logic, where 1 is S(0), 2 is
S(S(0)), 3 is S(S(S(0))), etc.
> For example, the
> following definitions are from ``LaTeX and Friends:''
> \newcommand\K[2]{#1}
> \newcommand\S[3]{#1#3{#2#3}}
> \newcommand\I{\S\K\K}
> \newcommand\X{\S{\K{\S\I}}{\S{\K\K}\I}}
> The combinator \X is defined in terms of \S and \K. All it does is swop
> its (two) arguments, so ``\X ab'' gives ``ba.'' Using ``\X ab'' requires
> 17 reductions, which is sad because \X is pretty simple.
True: merely that S and K form a basis for combinatory logic doesn't
mean it makes sense to express everything in terms of them. It can however,
at times, be more convenient to express some act of argument juggling
inline using combinators than it is to introduce an ad hoc helper macro. At
least IMHO.
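The quoted definitions can be mirrored with curried Python functions (a sketch of the combinators themselves, not of TeX's expansion mechanics; the reduction count is of course different):

```python
# Curried counterparts of the quoted \S, \K, \I, \X:
S = lambda x: lambda y: lambda z: x(z)(y(z))   # \S: #1#3{#2#3}
K = lambda x: lambda y: x                      # \K: #1
I = S(K)(K)                                    # \I = \S\K\K, the identity
X = S(K(S(I)))(S(K(K))(I))                     # \X swaps its two arguments

# X a b reduces to b a, so giving X a value and then a function
# applies the function to the value.
```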
Bruno Le Floch wrote 2011-10-26 12.59:
> Hello Lars,
>
> First, sorry for the delay on the other thread about Church booleans.
> Quite a lot of things cropped up in parallel these days. (And I
> personally have more non-LaTeX things to do than expected.) It is not
> forgotten.
>
> Inasmuch as I like the Church boolean idea, my first reaction to
> combinatory logic is "why would we need this in (La)TeX?" I have to
> admit, though, that I never studied lambda calculus carefully enough
> to understand any practical use for it (the only course I took that
> used it mostly mentioned category theory and Yoneda's lemma).
I can imagine that being a horrifying experience; category theory can be a
fantastic obfuscation tool. Although sometimes it actually guides you right.
>> My thought with this mail is mainly to ask whether there are any more of
>> these (or other standard combinators) that are defined in LaTeX3 already.
>
> No. Most often, the cases where those kinds of macros are useful in
> LaTeX are when you try to control expansion, and that's taken care of
> by the l3expan module. Maybe some of the functions from that module
> look a little bit like that (not quite, though).
It is indeed mostly within argument juggling that I might dare to employ
these beasts: when one needs to rearrange some arguments that are in place
up ahead, and the task seems slightly too minor to justify a specialised
helper macro. (Trading a bit of speed for keeping all the code for something
in one place, thus improving maintainability.) It's not primarily about
controlling expansion, as there is nothing like an \expandafter or \noexpand
in sight, but you do have a point that this kind of thing might have ended
up in the l3expan module anyway; I'll need to revisit that.
Argument juggling is often necessary in situations where one needs to fit
one utility function to the API for callback functions expected by some
other function.
>> the primary intent for which is that #1{#4} should behave as a Church
>> boolean, and thus select whether to apply #2 or #3 to the second #4. I
>> suspect \Lift, \Twiddle, \Compose, and maybe some additional macro could be
>> combined into an S combinator, but it is not immediately clear to me how.
During the afternoon, it became clear that the Wikipedia page implicitly
contains an algorithm that will produce such an expression for me, and using
that I could produce an expression. Applying some suitable simplifications
to intermediate results, I managed to end up with
\Lift{\Compose{\Compose{\Compose}}{\Twiddle{\use_i:n}}}
as an expression for the S combinator. (No, I don't quite see it either. But
it becomes readily apparent that this does the right thing when one tries
applying it to some arguments.)
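For those who want to try applying it to arguments without firing up TeX, here is a Python check with stand-ins. The definition of Lift is my reconstruction (from the \Lift example further down in this message, \Lift{F}{a}{b}{c} appears to expand to F{c}{a}{b}{c}); the others are the standard B and C combinators and the identity:

```python
Compose = lambda f: lambda g: lambda x: f(g(x))        # \Compose = B
Twiddle = lambda f: lambda x: lambda y: f(y)(x)        # \Twiddle = C
use_i   = lambda x: x                                  # \use_i:n = I
# Assumed semantics of \Lift (my reconstruction, see above):
Lift    = lambda f: lambda a: lambda b: lambda c: f(c)(a)(b)(c)

# The expression for the S combinator from the message:
S_comb = Lift(Compose(Compose(Compose))(Twiddle(use_i)))
```

S_comb(f)(g)(x) then evaluates to f(x)(g(x)), which is exactly S.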
> I don't have enough lambda background to figure that one out (and I've
> still got a few things to change in l3regex before I leave for two
> weeks).
>
>> three years was a lot of time in TeX history, back then.
>
> It still is. The engines themselves change, nowadays :).
Heresy!!! :)
>> Somewhat relatedly, it occurs to me that the process of converting
> lambda terms to combinator formulae /might/ be the beginning of an approach
> to having "named command parameters" in (high-level) LaTeX without radical
> modifications of the TeX engine -- the idea being that the named parameters
>> are (upon macro definition) removed from the replacement text as a matter of
>> lambda elimination (conversion of a lambda term to an equivalent
>> combinatorial term). Whether that would be practical is of course an
>> entirely different matter. ;)
>
> Care to elaborate??
Be careful what you wish for ;) because the following turned out to be
very elaborate indeed. Basically the idea was (i) that introducing a named
parameter in a replacement text is somehow analogous to doing a
lambda abstraction and (ii) since combinators can do everything that
lambda abstraction can, but without explicit representation of any
variables, one could in principle get rid of the named parameters in the
same way, without using up any of the precious #1-#9.
Suppose first that there was a primitive \lambda:Nn that is an "anonymous
macro constructor" -- technically it should expand to an inaccessible
control sequence which just so happens to be defined as a macro taking one
undelimited parameter and whose replacement text is obtained by replacing
all[*] occurrences of the token supplied in the N argument by #1. Thus,
\lambda:Nn{y}{syzygy}{oe}
expands (in two steps) to
soezoegoe
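Outside TeX, that behaviour can be modelled crudely as textual substitution; a Python sketch (the name lambda_Nn is mine, and nested bindings of the same token are ignored here):

```python
# Rough model of the hypothetical \lambda:Nn primitive: binding a token
# means replacing every occurrence of it by the eventual argument.
def lambda_Nn(token, body):
    return lambda arg: body.replace(token, arg)

# lambda_Nn('y', 'syzygy')('oe') gives 'soezoegoe'
```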
Given that, it is not too hard to see that
\Lift{
\lambda:Nn{x}{
\lambda:Nn{y}{
\lambda:Nn{z}{
\lambda:Nn{w}{
y{x}{z{w}}
}
}
}
}
}{#1}{#2}{#3}
will get x=#3, y=#1, z=#2, and w=#3, so the expansion becomes
#1{#3}{#2{#3}}
In other words, the above combination of \Lift and one argument is
equivalent to the S combinator. In order to derive a pure combinator
expression for the S combinator, one must then eliminate the \lambda:Nn.
Defining the conversion is not so hard; I only have to transcribe some
material from the Combinatory logic Wikipedia page in TeXy notation.
[*] Actually, \lambda:Nn is not supposed to replace all occurrences of the
parameter token, but only those that are not in the body of some inner
\lambda:Nn which also binds the same token; theories with variables are
messy that way. Correspondingly "not containing the token <x>" below is
technically "without free occurrences of the token <x>", for those who care
about the difference.
Lambda elimination is carried out by a recursively defined function T.
First, if <E> is some balanced text not containing the token <x>, then
T( \lambda:Nn{<x>}{ <E> } ) = \use_i:nn{ T(<E>) }
Second, if <F> is some balanced text containing the token <x>, then
T( \lambda:Nn{<x>}{ \lambda:Nn{<y>}{ <F> } } ) =
T( \lambda:Nn{<x>}{ T( \lambda:Nn{<y>}{ <F> } ) } )
Third, there are the cases that the body of a \lambda:Nn{<x>} is a macro
application. If <E> as above is some balanced text not containing the token
<x>, whereas <F> and <F'> are balanced texts containing the token <x>, then
T( \lambda:Nn{<x>}{ <E>{<F>} } ) =
\Compose{ T(<E>) }{ T( \lambda:Nn{<x>}{<F>} ) }
T( \lambda:Nn{<x>}{ <F>{<E>} } ) =
\Twiddle{ T( \lambda:Nn{<x>}{<F>} ) }{ T(<E>) }
T( \lambda:Nn{<x>}{ <F>{<F'>} } ) =
\combinator_S:nnn{ T( \lambda:Nn{<x>}{<F>} ) }
{ T( \lambda:Nn{<x>}{<F'>} ) }
The remaining cases needed for combinatory logic are
T( <F>{<F'>} ) = T(<F>){ T(<F'>) }
T( \lambda:Nn{<x>}{<x>} ) = \use_i:n
T( <other token> ) = <other token>
If the intent is to convert more general TeX code, some more cases would be
needed; in particular one for two separate commands simply occurring in
sequence:
\foo{bar} \baz{foo}
as this is not a kind of expression that can occur in lambda calculus;
sequences there are always function application. Note, on the other hand,
that the machinery is fully in place for substituting something inside a
brace group, the way a definition like
\cs_new:Npn \example:n #1 { \foo{ \bar{ \baz{#1} } } }
(\example:n being merely a placeholder name) would call for. This is
otherwise a situation that "ordinary" tricks for rewriting token
sequences tend to find problematic.
I had some vague notion at the beginning that one could implement that T in
TeX. In principle one can, but the fact that T starts by picking off the
/last/ argument of a command makes it a lot harder. One would probably have
to express code to be rewritten in some special form to make its structure
visible to T, and that spoils most of the idea in the first place.
T can still be used to prove that the S combinator is expressible using
\Lift, \Twiddle, \Compose, and \use_i:n, however. The point is that
\lambda:Nn{x}{
\lambda:Nn{y}{
\lambda:Nn{z}{
\lambda:Nn{w}{
y{x}{z{w}}
}
}
}
}
uses each variable token exactly once, so the \lambda:Nn{<x>}{ <F>{<F'>} }
rule is never invoked (nor is the \lambda:Nn{<x>}{<E>} rule), and thus T
will rewrite that nested composition of lambda expressions using \Compose,
\Twiddle, and \use_i:n only. Initial rewrite steps may feature unsightly
intermediate results such as
T( \lambda:Nn{x}{
T( \lambda:Nn{y}{
T( \lambda:Nn{z}{
T( \lambda:Nn{w}{
y{x}{z{w}}
})
})
})
})
T( \lambda:Nn{x}{
T( \lambda:Nn{y}{
T( \lambda:Nn{z}{
\Compose{ y{x} } {
T( \lambda:Nn{w}{ z{w} } )
}
})
})
})
T( \lambda:Nn{x}{
T( \lambda:Nn{y}{
T( \lambda:Nn{z}{
\Compose{ y{x} } {
\Compose{z}{ \use_i:n }
}
})
})
})
but that can be simplified to
T( \lambda:Nn{x}{
T( \lambda:Nn{y}{
T( \lambda:Nn{z}{
\Compose{ y{x} }{ z }
})
})
})
since \Compose{ <whatever> }{ \use_i:n } is equivalent to just <whatever>.
T( \lambda:Nn{x}{
T( \lambda:Nn{y}{
\Compose{
\Compose{ y{x} }
}{
T( \lambda:Nn{z}{ z } )
}
})
})
T( \lambda:Nn{x}{
T( \lambda:Nn{y}{
\Compose{
\Compose{ y{x} }
}{
\use_i:n
}
})
})
Again using the \Compose{<whatever>}{\use_i:n} rule, that simplifies to
T( \lambda:Nn{x}{
T( \lambda:Nn{y}{
\Compose{ y{x} }
})
})
and so on, until
\Compose{
\Compose{\Compose}
}{
\Twiddle{\use_i:n}
}
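That end result can be double-checked by applying it to arguments, e.g. with Python stand-ins for the three macros (standard B and C combinators and the identity; reading application as currying):

```python
Compose = lambda f: lambda g: lambda x: f(g(x))   # \Compose = B
Twiddle = lambda f: lambda x: lambda y: f(y)(x)   # \Twiddle = C
use_i   = lambda x: x                             # \use_i:n = I

# \Compose{\Compose{\Compose}}{\Twiddle{\use_i:n}} as a Python value.
# Applied to x, y, z, w it should produce y{x}{z{w}}.
E = Compose(Compose(Compose))(Twiddle(use_i))
```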
> I can write a fully robust, but entirely impractical, conversion from
> named parameters to numbered parameters: pass the definition through
> ted.sty (or some adaptation thereof). Locate all #. Read the
> corresponding names. Convert to digits. Build the token list back
> (that piece is easy, see l3trial/csinput/csinput.dtx). For more than
> 9 arguments, things are harder, but also feasible.
>
> I'd argue, though, that it is useless. If you want named parameters,
> key-value input is much more powerful.
A lot of the time: yes; and I can certainly live with numbered parameters.
It does however become a bit awkward when you add another optional argument
to an xparse-defined command that already has a lot of arguments, since you
will then find yourself having to renumber most #n in the replacement text.
Trivially doable, but something of a maintenance problem.
Lars Hellström
